# Turing machine



Turing's Vision: The Birth of Computer Science by Chris Bernhardt

Fortunately, it doesn’t require much more work to obtain some undecidable questions that are much more relevant.

**Does a Turing Machine Diverge on Its Encoding? Is Undecidable**

When we run a Turing machine on an input string there are three possible outcomes: it halts in the accept state; it halts in the reject state; or it never halts. Let us look at three questions related to this. Is it possible to construct a Turing machine that receives encodings of Turing machines as input and halts in the accept state only if the encoded Turing machine accepts its encoding? Is it possible to construct a Turing machine that receives encodings of Turing machines as input and halts in the accept state only if the encoded Turing machine rejects its encoding? Is it possible to construct a Turing machine that receives encodings of Turing machines as input and halts in the accept state only if the encoded Turing machine diverges on its encoding?

However, this gives a contradiction, as we have shown that there is no Turing machine with this property. The conclusion is that there is no Turing machine that receives encodings of Turing machines as input and halts in the accept state only if the encoded Turing machine diverges on its encoding. This shows that the question Does a Turing machine diverge on its encoding? is undecidable. Again, the negation will also be undecidable, so we know that the question Does a Turing machine halt on its encoding? is undecidable.

**The Acceptance, Halting, and Blank Tape Problems**

We now have two undecidable questions: Does a Turing machine accept its encoding? and Does a Turing machine halt on its encoding? We will generalize both questions. The acceptance problem: Given any Turing machine M and any input I, does M accept I?
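The contradiction sketched in this excerpt can be made concrete in Python, with functions standing in for Turing-machine encodings. The names here (`halts_on_self`, `make_diagonal`, `candidate`) are illustrative, not from the book: assume some candidate decider exists, then build a diagonal program that it must misjudge.

```python
# Sketch of the diagonal argument from the excerpt, with Python
# functions standing in for Turing-machine encodings.
# `halts_on_self` is a hypothetical decider: given a program, it
# claims to report whether that program halts when run on itself.

def make_diagonal(halts_on_self):
    """Build a program D that does the opposite of whatever the
    claimed decider predicts about D itself."""
    def d(prog):
        if halts_on_self(prog):
            while True:          # decider said "halts" -> loop forever
                pass
        return "halted"          # decider said "diverges" -> halt
    return d

# Any concrete candidate decider is refuted by its own diagonal.
def candidate(prog):
    return False                 # naively claims every program diverges

d = make_diagonal(candidate)
# candidate says d diverges on itself, but running d on d halts:
result = d(d)
print(result)  # -> halted, contradicting the candidate's claim
```

The same construction defeats every candidate, which is the point of the excerpt's argument: no single Turing machine can decide the question for all inputs.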

We can ask the corresponding question. Could MFA be a Turing machine? The answer is, of course, yes. We have shown that there is an algorithm for deciding whether or not finite automata accept their encodings. Consequently, there is a Turing machine that does this.

**Turing Machines That Do Not Accept Their Encodings**

We now turn our attention to Turing machines and look at whether or not they accept their encodings. Just as with finite automata, there are Turing machines that accept their encodings and there are Turing machines that don’t. One difference from the previous case is that we know that if a finite automaton does not accept its encoding, then it has rejected its encoding, but with Turing machines there are three possible outcomes. Being told that a Turing machine does not accept its encoding does not imply that the Turing machine has rejected its encoding.


Complexity: A Guided Tour by Melanie Mitchell

The idea is that, given a particular problem to solve, you can construct a definite procedure for solving it by designing a Turing machine that solves it. Turing machines were put forth as the definition of “definite procedure,” hitherto a vague and ill-defined notion. When formulating these ideas, Turing didn’t build any actual machines (though he built significant ones later on). Instead, all his thinking about Turing machines was done with pencil and paper alone.

**Universal Turing Machines**

Next, Turing proved an amazing fact about Turing machines: one can design a special universal Turing machine (let’s call it U) that can emulate the workings of any other Turing machine. For U to emulate a Turing machine M running on an input I, U starts with part of its tape containing a sequence of 0s, 1s, and blanks that encodes input I, and part of its tape containing a sequence of 0s, 1s, and blanks that encodes machine M.

In short, he was able to design a cellular automaton rule, in his case with twenty-nine states per cell instead of just two, that would create a perfect reproduction of any initial pattern placed on the cellular automaton lattice. Von Neumann also was able to show that his cellular automaton was equivalent to a universal Turing machine (cf. chapter 4). The cell update rule plays the role of the rules for the Turing machine tape head, and the configuration of states plays the role of the Turing machine tape—that is, it encodes the program and data for the universal machine to run. The step-by-step updates of the cells correspond to the step-by-step iteration of the universal Turing machine. Systems that are equivalent in power to universal Turing machines (i.e., can compute anything that a universal Turing machine can) are more generally called universal computers, or are said to be capable of universal computation or to support universal computation.

**The Game of Life**

Von Neumann’s cellular automaton rule was rather complicated; a much simpler, two-state cellular automaton also capable of universal computation was invented in 1970 by the mathematician John Conway.

Following the intuition of Leibniz of more than two centuries earlier, Turing formulated his definition by thinking about a powerful calculating machine—one that could not only perform arithmetic but also could manipulate symbols in order to prove mathematical statements. By thinking about how humans might calculate, he constructed a mental design of such a machine, which is now called a Turing machine. The Turing machine turned out to be a blueprint for the invention of the electronic programmable computer.

[Photograph: Alan Turing, 1912–1954]

**A QUICK INTRODUCTION TO TURING MACHINES**

As illustrated in figure 4.1, a Turing machine consists of three parts: (1) A tape, divided into squares (or “cells”), on which symbols can be written and from which symbols can be read. The tape is infinitely long in both directions. (2) A movable read/write tape head that reads symbols from the tape and writes symbols to the tape.


Darwin Among the Machines by George Dyson

“The restriction is not one which seriously affects computation, since the use of more complicated states of mind can be avoided by writing more symbols on the tape.”5 The Turing machine thus embodies the relationship between a finite, if arbitrarily large, sequence of symbols in space and a finite, if arbitrarily large, sequence of events in time. Turing was careful to remove all traces of intelligence. The machine can do nothing more complicated or knowledgeable at any given moment than make a mark, erase a mark, and move the tape one square to the right or to the left. Each step in the relationship between tape and Turing machine is determined by an instruction table (now called a program) listing all possible internal states, all possible external symbols, and, for every possible combination, what to do (write or erase a symbol, move right or left, change internal state) in the event that combination comes up. The Turing machine follows instructions and never makes mistakes.

The key was added modulo 2 to the plaintext message (counting by two the way we count hours by twelve, so that 0 + 1 = 1 and 1 + 1 = 0), with 1 and 0 represented by the presence or absence of a hole in the tape. Adding the key to the enciphered text a second time would return the original text. Each Fish was a species of Turing machine, and the process by which the Colossi were used to break the various species of Fish was a textbook example of the process by which the function (or partial function) of one Turing machine could be encoded as a subsidiary function of another Turing machine to produce simulated results. The problem, of course, was that the British didn’t know the constantly changing state of the Fish; they had to guess. Colossus was programmed, in Boolean logic mode, by a plugboard and toggle switches at the back of the machine. “The flexible nature of the programming was probably proposed by Newman and perhaps also Turing, both of whom were familiar with Boolean logic, and this flexibility paid off handsomely,” recalled I.
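The modulo-2 key addition described here is exactly the bitwise XOR operation, and its self-inverting property is what made adding the key twice decipher the text. A minimal sketch (illustrative only, with a made-up key rather than one generated by the actual Lorenz/“Fish” machinery):

```python
# Mod-2 addition of a key to a message, as described above:
# 0 + 1 = 1 and 1 + 1 = 0, i.e. bitwise XOR. Adding the same key
# a second time returns the original text. (Illustrative sketch;
# the real "Fish" machines generated the keystream mechanically.)

def add_key(bits, key):
    return [b ^ k for b, k in zip(bits, key)]

plaintext = [1, 0, 1, 1, 0, 0, 1]
key       = [0, 1, 1, 0, 1, 0, 1]

cipher  = add_key(plaintext, key)   # encipher
decoded = add_key(cipher, key)      # adding the key again deciphers

print(cipher)                 # [1, 1, 0, 1, 1, 0, 0]
print(decoded == plaintext)   # True
```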

A similar principle of distributed intelligence (enforced by need-to-know security rules) led to successful code breaking at Bletchley Park. The Turing machine, as a universal representation of the relations between patterns in space and sequences in time, has given these intuitive models of intelligence a common language that translates freely between concrete and theoretical domains. Turing’s machine has grown progressively more universal for sixty years. From McCulloch and Pitts’s demonstration of the equivalence between Turing machines and neural nets in 1943 to John von Neumann’s statement that “as far as the machine is concerned, let the whole outside world consist of a long paper tape,”51 the Turing machine has established the measure by which all models of computation have been defined. Only in theories of quantum computation—in which quantum superposition allows multiple states to exist at the same time—have the powers of the discrete-state Turing machine been left behind.


The Logician and the Engineer: How George Boole and Claude Shannon Created the Information Age by Paul J. Nahin

At some time after we turn the Turing machine on, it presumably completes its task (whatever that might be), and the finite-state machine enters state 0 (called the halting state) and stops.

[Figure 9.1.1. A Turing machine.]

As far as I know, nobody has ever actually constructed a Turing machine. It is a purely theoretical concept. Described in 1936 by the English mathematician Alan Turing (see Shannon’s mini-biography in Chapter 3)—that is, ten years before the first actual electronic computers began to be constructed—Turing’s machines are nevertheless as powerful as any modern machine in what they can compute. Compared to a modern, electronic-speed computer, a Turing machine is really, really slow (think of a race between a snail and a photon), but time is irrelevant. There is an infinity of it yet to come. Given enough time, a Turing machine can compute anything that can be computed.2 A Turing machine’s power to compute comes not from super technology, but from its tape, for two reasons.

I’ll return to the concept of a computable number in the final section of this chapter.

**9.2 TWO TURING MACHINES**

As my first specific example of a Turing machine, consider the following state-transition table for a Turing machine that adds any two non-negative integers m and n that are placed on the tape, leaves their sum on the tape, and then halts. Our convention for representing m and n is to write m + 1 consecutive 1s for m, then a 0, and then n + 1 consecutive 1s for n. All the rest of the tape squares, initially, have the symbol 0. (This representation for m and n is called unary notation.) The reason for using one additional 1 for each value of m and n is to allow either (or both) of m and n to be zero (which is represented by a single 1). This Turing machine is said to compute the function f(m, n) = m + n. Notice that in this machine the read/write head either doesn’t move during a machine cycle or, if it does move, it does so always to the right.
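The excerpt's actual state-transition table isn't reproduced here, so the sketch below uses a simple rule set of my own under the same unary convention. It is not Nahin's machine (unlike his, this one moves left at the end): it joins the two blocks by overwriting the separating 0 with a 1, then erases two surplus 1s from the right end.

```python
# A small Turing-machine simulator running a unary adder.
# Convention from the text: m is m + 1 ones, then a 0, then n + 1 ones.
# This rule table is an illustrative variant, NOT Nahin's exact
# machine: fill the separator, then erase two 1s from the right,
# leaving (m + n) + 1 ones on the tape.

RULES = {
    # (state, symbol): (write, move, next_state); move +1 right, -1 left, 0 stay
    ("q0", 1): (1, +1, "q0"),   # scan right over the first block
    ("q0", 0): (1, +1, "q1"),   # fill the separator with a 1
    ("q1", 1): (1, +1, "q1"),   # scan right over the second block
    ("q1", 0): (0, -1, "q2"),   # passed the end; turn around
    ("q2", 1): (0, -1, "q3"),   # erase one surplus 1
    ("q3", 1): (0,  0, "halt"), # erase a second surplus 1 and halt
}

def run(tape):
    tape = dict(enumerate(tape))              # sparse tape, blanks are 0
    state, head = "q0", 0
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return sum(tape.values())                 # count of 1s left on the tape

def add(m, n):
    tape = [1] * (m + 1) + [0] + [1] * (n + 1)
    return run(tape) - 1                      # k is represented by k + 1 ones

print(add(3, 4))  # 7
print(add(0, 0))  # 0
```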

But, before we get to that, a couple of final comments on Turing machines. First, Shannon suggested that a reasonable measure of the “complexity” of a Turing machine is the product of the number of different symbols that can appear on the tape, and the number of states for the finite-state machine portion. That is, symbols and states can be traded off against each other. In 1956, in fact, Shannon took this trade-off to the limit and proved the remarkable result that if one uses enough symbols, then, given any computable function, there exists a Turing machine with just two states that can evaluate that function. A companion demonstration shows that if one uses enough states, then, for any computable function, there exists a Turing machine using just two symbols. And second, one of the most astonishing results in Turing’s 1936 paper is that we do not have to build a different Turing machine for every new function we wish to compute.


In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation by William J. Cook

Although extra tapes are convenient, anything we can compute on a multiple-tape machine can also be computed on a machine with a single tape, albeit a bit more slowly. This last point, about simulating a multiple-tape machine with a single tape, is important. We would like to define an algorithm as something that can be carried out on a single-tape Turing machine, but does this capture everything we might want in an algorithm? All we can say is that, so far, Turing machines have been able to handle everything that has been thrown at them. If something is computable on a modern-day computer, then a lightning-fast Turing machine could also carry out the computation. The working assumption that we can equate algorithms and Turing machines is known as the Church-Turing Thesis.3 This thesis is widely accepted and it gives the formal model of an algorithm that is used to make precise P vs. NP and other complexity questions. Perhaps, one day, exotic computing capabilities will come along to cause us to consider an expanded definition of an algorithm, but for over seventy years Turing has served up just what the research community has needed.

**Universal Turing Machines**

There is a fundamental difference between a modern telephone, such as the iPhone, and, say, a pair of shoes. Shoes are designed for the single duty of protecting your feet, whereas the hundreds of thousands of applications available for the iPhone allow it to take on tasks that were not imagined when the hardware was designed. This is something we take for granted, but the creation of programmable machines was an intellectual leap, made by Turing in his original paper. A Turing machine is a great model for describing what we mean by an algorithm, but a Turing machine is designed for a single task, such as adding two numbers. In this sense, Turing machines are closer to a pair of shoes than to an iPhone. A crucial point made by Turing, however, is that one can design a Universal Turing machine capable of simulating every Turing machine.

Formally, P is the class of problems that can be solved in polynomial time on a single-tape Turing machine, that is, if n is the number of symbols on the input tape, then the machine is guaranteed to halt after a number of steps that is at most C times n^k, for some power k and some constant C. This definition of P is certainly tidy: we could replace the single-tape Turing machine by a multiple-tape machine, or even by a powerful modern digital computer, without altering the class. Indeed, the simulation of a modern computer via a Turing machine will slow down computations, but the slow-down factor is only polynomial in n. So if we have a polynomial-time algorithm on a modern computer, then we also have a polynomial-time algorithm on a single-tape Turing machine. Belonging to P is the gold standard for decision problems, but Stephen Cook studied a possibly larger class that arises in a natural way.


What Algorithms Want: Imagination in the Age of Computing by Ed Finn

The argument for computationalism begins with the Universal Turing Machine, mathematician Alan Turing’s breathtaking vision of a computer that can complete any finite calculation simply by reading and writing to an infinite tape marked with 1s and 0s, moving the tape forward or backward based on the current state of the machine. Using just this simple mechanism one could emulate any kind of computer, from a scientific calculator finding the area under a curve to a Nintendo moving Mario across a television screen. In other words, this establishes a computational “ceiling” where any Turing computer can emulate any other: the instructions may proceed more slowly or quickly, but are mathematically equivalent. The Universal Turing Machine is a thought experiment that determines the bounds of what is computable: Turing and his fellow mathematician Alonzo Church were both struggling with the boundary problems of mathematics.

Their responses to Hilbert, now called the Church–Turing thesis, define algorithms for theorists in a way that is widely accepted but ultimately unprovable: a calculation with natural numbers, or what most of us know as whole numbers, is “effectively computable” (that is, given enough time and pencils, a human could do it) only if the Universal Turing Machine can do it. The thesis uses this informal definition to unite three different rigorous mathematical theses about computation (Turing machines, Church’s lambda calculus, and mathematician Kurt Gödel’s concept of recursive functions), translating their specific mathematical claims into a more general boundary statement about the limits of computational abstraction. In another framing, as David Berlinski argues in his mathematical history The Advent of the Algorithm, the computability boundary that Turing, Gödel, and Church were wrestling with was also an investigation into the deep foundations of mathematical logic.20 Gödel proved, to general dismay, that it was impossible for a symbolic logical system to be internally consistent and provable using only statements within the system.

Turing's Cathedral by George Dyson

The machine can do nothing more intelligent at any given moment than make a mark, erase a mark, and move the tape one square to the right or to the left. The tape is not infinite, but if more tape is needed, the supply can be counted on never to run out. Each step in the relationship between tape and Turing machine is determined by an instruction table listing all possible internal states, all possible external symbols, and, for every possible combination, what to do (write or erase a symbol, move right or left, change the internal state) in the event that combination comes up. The Turing machine follows instructions and never makes mistakes. Complicated behavior does not require complicated states of mind. By taking copious notes, the Turing machine can function with as few as two internal states. Behavioral complexity is equivalent whether embodied in complex states of mind (m-configurations) or complex symbols (or strings of simple symbols) encoded on the tape.

“The mode of operation was for a cryptanalyst to sit at Colossus and issue instructions to a Wren [Women’s Royal Navy Service] for revised plugging, depending on what was printed on the automatic typewriter. At this stage there was a close synergy between man, woman, and machine.”33 As a step toward the modern computer, Colossus represented as great a leap as the ENIAC, and was both running and replicated while the one-of-a-kind ENIAC was still being built. Each Fish was a form of Turing machine, and the process by which the Colossi were used to break the various species of Fish demonstrated how the function (or partial function) of one Turing machine could be encoded for execution by another Turing machine. Since the British did not know the constantly changing state of the Fish, they had to guess. Colossus, trained to sense the direction of extremely faint gradients that distinguished enciphered German from random alphabetic noise, was the distant progenitor of the search engine: scanning the Precambrian digital universe for fragments of the missing key, until the pieces fit.

In 1936, logician Alan Turing had formalized the powers (and limitations) of digital computers by giving a precise description of a class of devices (including an obedient human being) that could read, write, remember, and erase marks on an unbounded supply of tape. These “Turing machines” were able to translate, in both directions, between bits embodied as structure (in space) and bits encoded as sequences (in time). Turing then demonstrated the existence of a Universal Computing Machine that, given sufficient time, sufficient tape, and a precise description, could emulate the behavior of any other computing machine. The results are independent of whether the instructions are executed by tennis balls or electrons, and whether the memory is stored in semiconductors or on paper tape. “Being digital should be of more interest than being electronic,” Turing pointed out.6 Von Neumann set out to build a Universal Turing Machine that would operate at electronic speeds. At its core was a 32-by-32-by-40-bit matrix of high-speed random-access memory—the nucleus of all things digital ever since.


Tools for Thought: The History and Future of Mind-Expanding Technology by Howard Rheingold

And digital computers are universal Turing machines. It isn't easy to think of the rules of a game as a kind of machine. The task is somewhat easier if you think about "mechanical processes" that are so clearly and specifically defined that a machine can perform them by referring to an instruction table. All universal Turing machines are functionally identical devices for following the program specified by an instruction table. The instruction tables can differ, and they can turn the universal Turing machine into many different kinds of machine. For this reason, the programs are sometimes called "virtual machines." The distinction between a universal Turing machine and the many different Turing machines it is able to imitate is a direct analogy to digital computers. Like universal Turing machines, all digital computers are functionally identical.

The particular instructions described by the code are what the universal Turing machine operates upon. If you can describe, in similarly codable instructions, a machine for tripling, or extracting square roots, or performing differential equations, then your basic, dumb old universal Turing machine can imitate your tripling machine or square root machine. That ability to imitate other machines is what led to computers. The numbers (or Xs and Os) on the tape aren't that important. They are only symbols for states of a process -- markers in a "doubling game." The list of instructions (the program) is what enables the machine to double the input number. The instructions, not the symbols that keep track of the way they are carried out -- the rules, not the markers -- are what make the Turing machine work. Universal Turing machines are primarily symbol manipulators.
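The “doubling game” above can be written out as an actual instruction table. The table below is an illustrative design of my own, not one from Rheingold or Turing: it copies a block of n 1s, leaving the pattern 1^n 0 1^n on the tape and so doubling the total count of 1s, using a marker symbol “X” to keep track of which 1s have been copied.

```python
# A "doubling game" as an instruction table: copy a block of n 1s,
# leaving 1^n 0 1^n on the tape. Illustrative design (my own).
# Symbols are "0" (blank), "1", and a bookkeeping marker "X".

RULES = {
    # (state, symbol): (write, move, next_state)
    ("mark",  "1"): ("X", +1, "skip"),   # mark the next uncopied 1
    ("mark",  "0"): ("0", -1, "clean"),  # nothing left to copy
    ("skip",  "1"): ("1", +1, "skip"),   # pass the rest of the input
    ("skip",  "0"): ("0", +1, "copy"),   # cross the separator
    ("copy",  "1"): ("1", +1, "copy"),   # pass 1s already copied
    ("copy",  "0"): ("1", -1, "back"),   # append one 1 to the copy
    ("back",  "1"): ("1", -1, "back"),   # walk back over the copy
    ("back",  "0"): ("0", -1, "back2"),  # recross the separator
    ("back2", "1"): ("1", -1, "back2"),  # walk back over the input
    ("back2", "X"): ("X", +1, "mark"),   # found the marker; repeat
    ("clean", "X"): ("1", -1, "clean"),  # restore markers to 1s
    ("clean", "0"): ("0",  0, "halt"),
}

def run(n):
    tape = {i: "1" for i in range(n)}    # input: n consecutive 1s
    state, head = "mark", 0
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, "0"))]
        tape[head] = write
        head += move
    cells = [tape.get(i, "0") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("0")

print(run(3))  # 1110111 (three 1s, the separator, three copied 1s)
```

As the excerpt says, it is the rules, not the symbols, that do the work: the same simulator loop runs any instruction table you feed it.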

Turing proved that his hypothetical machine is an automated version of a formal system specified by the starting position (the pattern of Os and Xs on the tape at the beginning of the computation) and the rules (the instructions given by the instruction tables). The moves of the game are the changing states of the machine that correspond to the specified steps of the computation. Turing then proved that for any formal system, there exists a Turing machine that can be programmed to imitate it. This kind of general formal system with the ability to imitate any other formal system was what Turing was getting at. These systems are now known as "universal Turing machines." The theory was first stated in a paper with the forbidding title "On Computable Numbers, with an application to the Entscheidungsproblem." The Turing Machine was a hypothetical device Turing invented on the way to settling a critical question about the foundations of mathematics as a formalized means of thinking. He showed that his device could solve infinitely many problems, but that there are some problems that cannot be solved because there is no way of predicting in advance whether or when the machine is going to stop.


In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence by George Zarkadakis

He then took Gödel’s theorem and reformulated it by replacing Gödel’s arithmetic-based notation with simple, hypothetical devices that became known as ‘Turing machines’. A Turing machine is like a tape recorder. The tape can move in both directions, back and forth, and the machine can record symbols on the tape by following simple instructions. There is also a ‘state register’, something we would today call ‘memory’, which keeps track of what the Turing machine has been up to. Remember Gödel’s genius idea of substituting logical operations and expressions with numbers? In turn, Turing substituted logical operations and expressions with Turing machines. A Turing machine took a statement as its input, applied a logical expression (or ‘formula’) that was written as a set of instructions, and produced a binary result: the statement was either true or false. The first thing that Turing showed was that a Turing machine capable of performing a mathematical computation was equivalent to an algorithm (i.e. a series of logical steps that processed a statement and arrived at a conclusion).

Could that be a better computer model for how human consciousness works? To build this computer model in practice Turing suggested that classical, algorithmic machines should be augmented with ‘oracles’: these are machines that can decide what is undecidable by a normal Turing machine, for instance the halting problem. But what would an oracle machine look like? And how would it function? In theory, oracle machines are just like Turing machines, with the additional ability to answer ‘yes’ or ‘no’ whenever the normal Turing machine cannot find the answer. This, effectively, transforms an unprovable theorem into an axiom. Nesting Turing machines with oracles produces a hypercomputer. You can go beyond that too: you can start nesting hypercomputers together, all the way to infinity. And that would be a replica of the human mind. The provocative, and very interesting, thing about the concept of oracles is their randomness.

Turing had shown how a machine could code any kind of information – a concept he termed a ‘Universal Machine’. Von Neumann realised that, in essence, this meant the Universal Turing Machine could also code itself. Indeed, modern computers, which are Universal Turing Machines, have exactly this ability. All software stored in your computer can be copied to another computer, by your computer. In fact, copying is what takes place whenever you perform any transaction using a computer. When you ‘send’ an email, for example, nothing actually moves from one place to another: an exact copy of your email is reproduced in the computer of the person you want to communicate with. Von Neumann was fascinated with this self-copying property of the Universal Turing Machine. In true cybernetic fashion, he set off to formulate a general theory of self-reproduction that would include living organisms as well as machines.


The Information: A History, a Theory, a Flood by James Gleick

The device he could not get out of his head was the Turing machine—that impossibly elegant abstraction, marching back and forth along its infinite paper tape, reading and writing symbols. Free from all the real world’s messiness, free from creaking wheel-work and finical electricity, free from any need for speed, the Turing machine was the ideal computer. Von Neumann, too, had kept coming back to Turing machines. They were the ever-handy lab mice of computer theory. Turing’s U had a transcendent power: a universal Turing machine can simulate any other digital computer, so computer scientists can disregard the messy details of any particular make or model. This is liberating. Claude Shannon, having moved from Bell Labs to MIT, reanalyzed the Turing machine in 1956. He stripped it down to the smallest possible skeleton, proving that the universal computer could be constructed with just two internal states, or with just two symbols, 0 and 1, or blank and nonblank.

In 1965 Chaitin was an undergraduate at the City College of New York, writing up a discovery he hoped to submit to a journal; it would be his first publication. He began, “In this paper the Turing machine is regarded as a general purpose computer and some practical questions are asked about programming it.” Chaitin, as a high-school student in the Columbia Science Honors Program, had the opportunity to practice programming in machine language on giant IBM mainframes, using decks of punched cards—one card for each line of a program. He would leave his card deck in the computer center and come back the next day for the program’s output. He could run Turing machines in his head, too: write 0, write 1, write blank, shift tape left, shift tape right.… The universal computer gave him a nice way to distinguish between numbers like Alice and Bob’s A and B. He could write a program to make a Turing machine print out “010101 …” a million times, and he could write down the length of that program—quite short.

Naturally he picks the language of a universal Turing machine. And then what does it mean, how do you name an integer? Well, you name an integer by giving a way to calculate it. A program names an integer if its output is that integer—you know, it outputs that integer, just one, and then it stops. Asking whether a number is interesting is the inverse of asking whether it is random. If the number n can be computed by an algorithm that is relatively short, then n is interesting. If not, it is random. The algorithm PRINT 1 AND THEN PRINT 100 ZEROES generates an interesting number (a googol). Similarly, FIND THE FIRST PRIME NUMBER, ADD THE NEXT PRIME NUMBER, AND REPEAT A MILLION TIMES generates a number that is interesting: the sum of the first million primes. It would take a Turing machine a long time to compute that particular number, but a finite time nonetheless.
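Chaitin's two examples can be written as actual short programs whose output “names” the integer. The sketch below is illustrative, scaled down from a million primes to a thousand so it runs quickly; the function names are mine.

```python
# Two short programs that "name" large integers, in Chaitin's sense:
# the program is the description, its output is the number.
# (Sketch: the excerpt's million primes is scaled down here.)

def googol():
    # "Print 1 and then print 100 zeroes."
    return int("1" + "0" * 100)

def sum_of_first_primes(count):
    # "Find the first prime, add the next prime, and repeat."
    primes, candidate = [], 2
    while len(primes) < count:
        if all(candidate % p for p in primes):  # trial division
            primes.append(candidate)
        candidate += 1
    return sum(primes)

print(len(str(googol())))         # 101 digits: a 1 followed by 100 zeroes
print(sum_of_first_primes(1000))  # the sum of the first 1,000 primes
```

Both programs are far shorter than the decimal expansions of the numbers they produce, which is what makes those numbers “interesting” rather than random in Chaitin's sense.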


Think Complexity by Allen B. Downey

For a given program running on a real computer, it is possible (at least in principle) to construct a Turing machine that performs an equivalent computation. The Turing machine is useful because it is possible to characterize the set of functions that can be computed by a Turing machine, which is what Turing did. Functions in this set are called Turing computable. To say that a Turing machine can compute any Turing-computable function is a tautology: it is true by definition. But Turing computability is more interesting than that. It turns out that just about every reasonable model of computation anyone has come up with is Turing complete; that is, it can compute exactly the same set of functions as the Turing machine. Some of these models, like lambda calculus, are very different from a Turing machine, so their equivalence is surprising. This observation led to the Church-Turing thesis, which is essentially a definition of what it means to be computable.

**Universality**

To understand universality, we have to understand computability theory, which is about models of computation and what they compute. One of the most general models of computation is the Turing machine, which is an abstract computer proposed by Alan Turing in 1936. A Turing machine is a 1-D CA, infinite in both directions, augmented with a read-write head. At any time, the head is positioned over a single cell. It can read the state of that cell (usually there are only two states), and it can write a new value into the cell. In addition, the machine has a register, which records the state of the machine (one of a finite number of states), and a table of rules. For each machine state and cell state, the table specifies an action. Actions include modifying the cell the head is over, and moving one cell to the left or right. A Turing machine is not a practical design for a computer, but it models common computer architectures.
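The machine described here (tape of cells, head, state register, rule table) can be sketched directly. The class below is a minimal illustration; the three-step rule table is made up for the demonstration, not taken from the book.

```python
# A minimal sketch of the machine described above: two-state cells,
# a head, a state register, and a rule table mapping
# (machine state, cell state) to an action.

from collections import defaultdict

class TuringMachine:
    def __init__(self, table, start_state):
        self.tape = defaultdict(int)   # cells default to state 0
        self.head = 0
        self.state = start_state
        self.table = table

    def step(self):
        write, move, next_state = self.table[(self.state, self.tape[self.head])]
        self.tape[self.head] = write   # modify the cell under the head
        self.head += move              # move one cell left (-1) or right (+1)
        self.state = next_state        # update the state register

    def run(self):
        while self.state != "halt":
            self.step()
        return dict(self.tape)

# Made-up example table: write a 1 and move right, three times, then halt.
table = {
    ("w1", 0): (1, +1, "w2"),
    ("w2", 0): (1, +1, "w3"),
    ("w3", 0): (1, +1, "halt"),
}
tm = TuringMachine(table, "w1")
print(sorted(tm.run().items()))  # [(0, 1), (1, 1), (2, 1)]
```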

It may be less obvious that Class 3 is simple, but in a way, perfect randomness is as simple as perfect order; complexity happens in between. So Wolfram’s claim is that Class 4 behavior is common in the natural world and almost all of the systems that manifest it are computationally equivalent.

Example 6-4. The goal of this exercise is to implement a Turing machine. See http://en.wikipedia.org/wiki/Turing_machine. Start with a copy of CA.py named TM.py. Add attributes to represent the location of the head, the action table, and the state register. Override step to implement a Turing machine update. For the action table, use the rules for a three-state busy beaver. Write a class named TMDrawer that generates an image that represents the state of the tape and the position and state of the head. For one example of what that might look like, see http://mathworld.wolfram.com/TuringMachine.html

pages: 372 words: 101,174

How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil

But his central premise—that humans can solve Turing’s and Gödel’s insoluble problems—is unfortunately simply not true. A famous unsolvable problem called the busy beaver problem is stated as follows: Find the maximum number of 1s that a Turing machine with a certain number of states can write on its tape. So to determine the busy beaver of the number n, we build all of the Turing machines that have n states (which will be a finite number if n is finite) and then determine the largest number of 1s that these machines write on their tapes, excluding those Turing machines that get into an infinite loop. This is unsolvable because as we seek to simulate all of these n-state Turing machines, our simulator will get into an infinite loop when it attempts to simulate one of the Turing machines that does get into an infinite loop. However, it turns out that computers have nonetheless been able to determine the busy beaver function for certain ns.
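The construction described above can in fact be carried out by brute force for tiny n. The sketch below (an illustration, not from the book) enumerates every 2-state, 2-symbol machine and substitutes a step budget for the impossible loop test; for n = 2 the budget is safe, since every halting 2-state machine is known to stop within 6 steps:

```python
# Brute-force busy beaver for n = 2: enumerate all 2-state, 2-symbol
# machines, discard those that fail to halt within a step budget, and
# record the largest number of 1s any survivor leaves on its tape.
from itertools import product

STATES, HALT = ("A", "B"), "H"

def run(rules, budget=50):
    tape, head, state, steps = {}, 0, "A", 0
    while state != HALT:
        if steps == budget:
            return None                  # treated as non-halting
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return sum(tape.values())            # 1s on the tape at the halt

actions = list(product((0, 1), ("L", "R"), STATES + (HALT,)))  # 12 options
keys = [(s, b) for s in STATES for b in (0, 1)]                # 4 entries
best = 0
for choice in product(actions, repeat=len(keys)):  # 12**4 = 20,736 machines
    ones = run(dict(zip(keys, choice)))
    if ones is not None:
        best = max(best, ones)
print(best)   # → 4, the busy beaver of 2
```

For larger n the same loop is hopeless: no fixed budget can be justified, which is exactly the undecidability the text describes.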

Note that even though the tape is theoretically infinite in length, any actual program that does not get into an infinite loop will use only a finite portion of the tape, so if we limit ourselves to a finite tape, the machine will still solve a useful set of problems. If the Turing machine sounds simple, it is because that was its inventor’s objective. Turing wanted his machine to be as simple as possible (but no simpler, to paraphrase Einstein). Turing and Alonzo Church (1903–1995), his former professor, went on to develop the Church-Turing thesis, which states that if a problem that can be presented to a Turing machine is not solvable by it, it is also not solvable by any machine, following natural law. Even though the Turing machine has only a handful of commands and processes only one bit at a time, it can compute anything that any computer can compute. Another way to say this is that any machine that is “Turing complete” (that is, that has equivalent capabilities to a Turing machine) can compute any algorithm (any procedure that we can define).

In 1936 Alan Turing described his “Turing machine,” which was not an actual machine but another thought experiment. His theoretical computer consists of an infinitely long memory tape with a 1 or a 0 in each square. Input to the machine is presented on this tape, which the machine can read one square at a time. The machine also contains a table of rules—essentially a stored program—that consists of numbered states. Each rule specifies one action if the square currently being read is a 0, and a different action if the current square is a 1. Possible actions include writing a 0 or 1 on the tape, moving the tape one square to the right or left, or halting. Each state will then specify the number of the next state that the machine should be in. The program runs, and when the machine halts, it has completed its algorithm, and the output of the process is left on the tape.
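As a concrete illustration of this rule format (a toy example, not from the book), here is a two-rule machine that increments a binary number; the head starts on the least-significant bit and carries 1s leftward until it can write the carry:

```python
# Rule table: (state, symbol read) -> (symbol to write, move, next state).
RULES = {
    ("inc", 1): (0, "L", "inc"),   # carry: turn 1 into 0, keep moving left
    ("inc", 0): (1, "L", "halt"),  # absorb the carry: 0 becomes 1, then halt
}

def run(tape, head, state="inc"):
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, 0))]
        tape[head] = write
        head += -1 if move == "L" else 1
    return tape

tape = {0: 1, 1: 0, 2: 1}       # binary 101, most-significant bit in cell 0
run(tape, head=2)               # head starts on the least-significant bit
print([tape.get(i, 0) for i in range(3)])   # → [1, 1, 0]  (binary 110)
```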

pages: 236 words: 50,763

The Golden Ticket: P, NP, and the Search for the Impossible by Lance Fortnow

This Church-Turing thesis, developed before the invention of modern computers, has stood the test of time. Everything computable, now or in the future, is indeed computable by a Turing machine. This means we don’t have to worry about the specifics of the Turing machine; other models of computers give us no more computational power. You don’t really need to understand exactly what a Turing machine is to understand computation. Just think of any programming language that has access to an unlimited amount of memory. All programming languages are functionally equivalent, and all can compute exactly what can be computed by the simple Turing machine. Turing back in 1936 showed that a Turing machine cannot in fact compute everything. The most famous example, the halting problem, says that no computer can look at some code of a program and determine whether that code will run forever or eventually halt.
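The positive half still holds: halting is semi-decidable, because we can always run the code and report success if it stops within a step budget. A sketch with a toy machine model (illustrative only; exceeding the budget means "don't know", never "loops forever"):

```python
def try_halts(rules, state=0, budget=1000):
    """Semi-decide halting for a toy machine whose entire program is a
    state -> next-state table (None means halt)."""
    steps = 0
    while state is not None:
        if steps == budget:
            return None          # gave up: may or may not halt
        state = rules[state]
        steps += 1
    return steps                 # number of steps taken to halt

halting = {0: 1, 1: 2, 2: None}  # 0 -> 1 -> 2 -> halt
looping = {0: 1, 1: 0}           # 0 -> 1 -> 0 -> ... forever
print(try_halts(halting))   # → 3
print(try_halts(looping))   # → None
```

Turing's theorem says this asymmetry is unavoidable: no procedure can turn every "don't know" into a correct yes-or-no answer.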

Turing wondered how mathematicians thought, and came up with a formal mathematical model of that thought process, a model we now call the Turing machine, which has become the standard model of computation. Alan Turing was born in London in 1912. In the early 1930s he attended King’s College, Cambridge University, where he excelled in mathematics. During that time he thought of computation in terms of himself as a mathematician. A mathematician has a process in mind, a limited amount of memory, and a large supply of paper and pencils. The mathematician would scribble some notes on a page and either move on to the next page or go back to the previous one where he could modify what he wrote before. Turing used this intuition to create a formal model of computation we now call the Turing machine.

Figure 5-1. Turing Machine.

Though the machine was a very simple model, Turing stated that everything computable was computable by his machine.

If that civilization has a concept that is the same or similar to one of ours, then that concept should be natural, having been developed through multiple independent sources. Of course, we don’t have a Martian civilization to compare ourselves with, so we have to use our imagination. The Martians would have an Exigius machine of computation that would be different from the Turing machine but have the same computational ability. The Martians would have their own version of the Church-Turing thesis that everything computable is computable by an Exigius machine. So the Turing machine itself is not natural, but computation is. For P versus NP we have some non-Martian evidence. Researchers in Russia and North America with limited communication ended up developing the same P versus NP question and NP-complete problems. They had different motivation, in the East to understand the necessity of perebor and in the West to understand the power of efficient computation.

pages: 696 words: 143,736

The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil

Perhaps the most interesting unsolvable problem is called the Busy Beaver, which may be stated as follows: Each Turing machine has a certain number of commands in its program. Given a positive integer n, we construct all of the Turing machines that have n states (i.e., n commands). Next we eliminate those n-state Turing machines that get into an infinite loop (i.e., never halt). Finally, we select the machine (one that halts) that writes the largest number of 1s on its tape. The number of 1s that this Turing machine writes is called busy beaver of n. Tibor Rado, a mathematician and admirer of Turing, showed that there is no algorithm (that is, no Turing machine) that can compute the busy beaver function for all n’s. The crux of the problem is sorting out those n-state Turing machines that get into infinite loops. If we program a Turing machine to generate and simulate every possible n-state Turing machine, this simulator itself goes into an infinite loop when it attempts to simulate one of the n-state Turing machines that gets into an infinite loop.

Because of their hexagonal and pentagonal shape, the molecules were dubbed “buckyballs” in reference to R. Buckminster Fuller’s building designs.
Busy beaver One example of a class of noncomputational functions; an unsolvable problem in mathematics. Being a “Turing machine unsolvable problem,” the busy beaver function cannot be computed by a Turing machine. To compute busy beaver of n, one creates all the n-state Turing machines that do not write an infinite number of 1s on their tape. The largest number of 1s written by any Turing machine in this set is busy beaver of n.
BWA See Biowarfare Agency
Byte A contraction for “by eight.” A group of eight bits clustered together to store one unit of information on a computer. A byte may correspond, for example, to a letter of the English alphabet.

He created the first theoretical computer in 1936 (first introduced in Alan M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proc. London Math. Soc. 42 [1936]: 230-265) in an eponymous conception called the Turing machine. As with a number of Turing’s breakthroughs, he would have both the first and last word. The Turing machine represented the founding of modern computational theory. It has also persisted as our primary theoretical model of a computer because of its combination of simplicity and power. The Turing machine is one example of the simplicity of the foundations of intelligence. A Turing machine consists of two primary (theoretical) units: a tape drive and a computation unit. The tape drive has a tape of infinite length on which it can write, and (subsequently) read, a series of two symbols: zero and one.

pages: 611 words: 186,716

The Diamond Age by Neal Stephenson

In other words, once Princess Nell had deciphered the messages, her stall functioned like another Turing machine. It would have been easy enough to conclude that this whole castle was, like the others, a Turing machine. But the Primer had taught Nell to be very careful about making unwarranted assumptions. Just because her stall functioned according to Turing rules did not mean that all of the others did. And even if every stall in this castle was, in fact, a Turing machine, she still could not come to any fixed conclusions. She had seen riders carrying books to and from the castle, which meant that cipherers must be at work elsewhere in this kingdom. She could not verify that all of them were Turing machines. It did not take long for Nell to attain prosperity here. After a few months (which in the Primer were summarized in as many sentences) her employers announced that they were getting more work than they could handle.

She had not yet had the opportunity to study it in detail, but after her experiences in all of King Coyote's other castles, she suspected that it, too, was just another Turing machine. Her study of the Cipherers' Market, and particularly of the rulebooks used by the cipherers to respond to messages, had taught her that for all its complexity, it too was nothing more than another Turing machine. She had come here to the Castle of King Coyote to see whether the King answered his messages according to Turing-like rules. For if he did, then the entire system - the entire kingdom - the entire Land Beyond - was nothing more than a vast Turing machine. And as she had established when she'd been locked up in the dungeon at Castle Turing, communicating with the mysterious Duke by sending messages on a chain, a Turing machine, no matter how complex, was not human. It had no soul. It could not do what a human did.

"What was your purpose in coming here?" "To obtain the twelfth key." "Anything else?" "To learn about Wizard 0.2." "Ah." "To discover whether it was, in fact, a Turing machine." "Well, you have your answer. Wizard 0.2 is most certainly a Turing machine - the most powerful ever built." "And the Land Beyond?" "All grown from seeds. Seeds that I invented." "And it is also a Turing machine, then? All controlled by Wizard 0.2?" "No," said King Coyote. "Managed by Wizard. Controlled by me." "But the messages in the Cipherers' Market control all the events in the Land Beyond, do they not?" "You are most perceptive, Princess Nell." "Those messages came to Wizard - just another Turing machine." "Open the altar," said King Coyote, pointing to a large brass plate with a keyhole in the middle. Princess Nell used her key to open the lock, and King Coyote flipped back the lid of the altar.

pages: 268 words: 109,447

The Cultural Logic of Computation by David Golumbia

A term with application in nearly every academic discourse, functionalism has a specific meaning within contemporary analytic philosophy: as proposed by Hilary Putnam and subsequently adopted by other writers, functionalism is a “model of the mind” according to which “psychological states (‘believing that p,’ ‘desiring that p,’ ‘considering whether p,’ etc.) are simply ‘computational states’ of the brain. The proper way to think of the brain is as a digital computer.” This is not simply a metaphor: “according to the version of functionalism that I originally proposed, mental states can be defined in terms of Turing machine states and loadings of the memory (the paper tape of the Turing machine)” (Putnam 1988, 73). Many of its advocates give this view the straightforward name “the computer model of the mind” (e.g., Block 1990; Schank 1973). According to functionalism, the brain just is a digital computer, or something similar enough to one such that if we could discern its physical structure in sufficient detail, we would discover a binary mechanism, probably electrochemical in nature, that constitutes mental representations, exactly as a computer can be said to create representations.

Putnam writes that the view that “seemed so evident to [him] (and still seems evident to many philosophers of mind)” was this: If the whole human body is a physical system obeying the laws of Newtonian physics, and if any such system, up to and including the whole physical universe, is at least metaphorically a machine, then the whole human body is at least metaphorically a machine. And materialists believe that a human body is just a living body. So . . . materialists are committed to the view that the human body is—at least metaphorically—a machine. It is understandable that the notion of a Turing machine might be seen as just a way of making this materialist idea precise. Understandable, but hardly well thought out. The problem is the following: a “machine” in the sense of a physical system obeying the laws of Newtonian physics need not be a Turing machine. (4) Putnam offers some intriguing logical results to support this conclusion, but in a sense it is clearly available to empirical investigation, since many sorts of extremely sophisticated cellular and molecular “machines” are in no sense capable of universal computation in Turing’s sense (instead, they are properly analog machines, devoted to one or several tasks but not generally applicable to others).

Today, philosophers write about computationalism not as a view to be embraced directly, for the most part, but instead as a problematic doctrine which raises as many questions about philosophical practice as it does about its own putative subject matter. In its received (sometimes called its “classical”) form, computationalism is the view that not just human minds are computers but that mind itself must be a computer—that our notion of intellect is, at bottom, identical with abstract computation, and that in discovering the principles of algorithmic computation via the Turing Machine human beings have, in fact, discovered the essence not just of human thought in practice but all thought in principle (see especially Kurzweil 1990, 1999, 2006). Today, few philosophers can accept any of the straightforward versions of computationalism (although there are certainly exceptions to this rule), generally because, as Scheutz writes, “computation, assumed to be defined in abstract syntactic terms, necessarily neglects the real-time, embodied, real-world constraints with which cognitive systems intrinsically cope” (Scheutz 2002, ix).

pages: 434 words: 135,226

The Music of the Primes by Marcus Du Sautoy

Robinson felt sure that there should be a way to exploit the groundwork laid by Turing. She understood that each Turing machine gives rise to a sequence of numbers. For example, one of the Turing machines could be made to produce a list of the square numbers 1, 4, 9, 16, …, whilst another generated the primes. One of the steps in Turing’s solution of Hilbert’s Decision Problem had been to prove that no program existed that could decide whether, given a Turing machine and a number, that number is the output of the machine. Robinson was looking for a connection between equations and Turing machines. Each Turing machine, she believed, would correspond to a particular equation. If there were such a connection, Robinson hoped that asking whether a number was the output of a particular Turing machine would translate into asking whether the equation corresponding to that machine had a solution.
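As a modern stand-in for Robinson's machines-as-sequence-producers, ordinary generators can play the role of the Turing machines that emit the squares and the primes (an illustration only, not her construction):

```python
from itertools import islice

def squares():
    # Emit 1, 4, 9, 16, ... forever, one square at a time.
    n = 1
    while True:
        yield n * n
        n += 1

def primes():
    # Emit the primes by trial division, one at a time.
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

print(list(islice(squares(), 4)))   # → [1, 4, 9, 16]
print(list(islice(primes(), 4)))    # → [2, 3, 5, 7]
```

Robinson's question — is a given number among the outputs of a given machine? — is answerable for these two generators, but Turing showed it is undecidable in general.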

If there existed a program to test equations for solutions, as Hilbert was hoping for when he posed his tenth problem, then the same program could be used, via Robinson’s as yet hypothetical connection between equations and Turing machines, to check which numbers were outputs of Turing machines. But Turing had shown that no such program – one that could decide about the outputs of Turing machines – existed. Therefore there could be no program that could decide whether equations had solutions. The answer to Hilbert’s tenth problem would be ‘no’. Robinson set about understanding why each Turing machine might have its own equation. She wanted an equation whose solutions were connected to the sequence of numbers output by the Turing machine. She was rather amused by the question she had set herself. ‘Usually in mathematics you have an equation and you want to find a solution. Here you were given a solution and you had to find the equation.

pages: 362 words: 97,862

Physics in Mind: A Quantum View of the Brain by Werner Loewenstein

Time spans, evolutionary, 15, 18 (fig.), 33, 35
Tononi, Giulio, 303
Top-down processing, 179
Transducers, 49–51
Transmogrification. See Protein transmogrification
Trellis. See Neuron trellis
Tunnel diodes, 103
Tunneling, electron, 125–126
Turing, Alan, 137, 138, 140, 250
Turing machines. See Universal Turing Machines
Twain, Mark, 107
Ultraviolet rays, 22, 50, 59 (fig.), 109–110, 111
Unconscious thinking, 219–221
Unification theories, 13, 156
Universal computers, 108, 140, 252, 257
Universal Turing Machines, 137–140, 142n, 143, 206, 250
Universe
  developmental time chart of, 17, 18 (fig.)
  expansion of the, 6–7, 17, 20 (fig.), 21, 155
  initial information state of the, 20, 21, 23 (fig.)
  organizational units, 21
  origin, 19
  See also Big Bang
Unwin, Nigel, 37 (fig.)
Updike, John, 79
Urbach-Wiethe disease, 227
van der Waals interactions, 26, 39, 43
Vandersypen, L., 263 (fig.)

.), 67
  why we see colors, 67, 70–71, 72, 75–77
Color blindness, 68
Communications
  cell, 86
  language, 3, 91–92
  one-way versus two-way, 98
  See also Information transmission
Competition, for access to consciousness, 231–232, 233 (fig.)
Complex and simple cells in cortex, 193, 194–196
Computation, parallel. See Parallel computation
Computation process
  classical versus quantum, 256–258
  of Universal Turing Machines, 138–140
Computation time
  for gestalt recognition, 229
  as a measure of complexity, 205, 206
  saving, 273
  See also Logical depth
Computer-generated images, 140–142, 165
Computers
  and consciousness, 217
  operational mode of, versus the brain’s, 184–185, 186, 187
  protein demons as computers, 147–148
  See also Universal Turing Machines
Computing operations, 257
Conditional gate, 260, 261
Cones, 68, 72, 73 (fig.), 165, 166 (fig.)
Conscious experience
  feelings and emotions, 223–226
  gut feelings, 226–227
  holistic aspects of, 221–223
  and memory, 216, 218–219
  multicellular involvement in, 159, 247
  neural synchrony and, 227–228
  and the passing of time, 216, 217–218
  unconscious thinking, 219–221
Consciousness, xiii, 2–3, 95n, 184, 216
  and the brain hemispheres, 184
  competition for access to, 231–232, 233 (fig.)

3 The Second Coming The Demons for Fast Information Transmission The Sensory Demons A Generalized Sensory Scheme The Demon Tandem How the Demon Tandems Censor Incoming Information
4 The Sensors The Sensory Transducer Unit A Lesson in Economics from a Master The Silent Partner How Electrical Sensory Signals Are Generated
5 Quantum Sensing The Quantum World Our Windows to the Quantum World Coherent Quantum Information Transmission The Advantages of Being Stable Why We See the Rainbow The Demons Behind Our Pictures in the Mind: A Darwinistic Physics View Why White Is White The Quantum View Again, Why White Is White Lady Evolution’s Quantum Game Plan Quantum Particles That Don’t Cut the Mustard
6 Quantum into Molecular Information Boosting the Quantum A Consummate Sleight of Hand The Ubiquitous Membrane Demon
7 Molecular Sensing A Direct Line from Nose to Cortex A Thousand Information Channels of Smell Mapping, Coding, and Synonymity Molecular Sensory Synonymity Why Sensory Synonymity Quantum Synonymity Harmless Double Entendres
8 Electronic Transmission of Biological Information Evolution’s Favorite Leptons Electronic Information Transmission: A Development Stumped Two Old Batteries
9 The Random Generators of Biomolecular Complexity Genuine Transmogrification A Quantum Random Generator of Molecular Form The Heuristics of Transmogrification The Random Generator and Our Genetic Heritage An Algorithm Is No Substitute for a Demon The Second Generator of Biomolecular Form Complexity as a Windfall Ikats
10 The Ascent of the Digital Demons Quantum Electron Tunneling The Electronic Cul-de-Sac The Rise of the Digital Demons Do Plants Have Digital Demons, Too?
11 The Second Information Arrow and Its Astonishing Dénouement: Consciousness The Structure of Time The Evolutionary Niche in the Structure of Time: A Hypothesis Forecognition
12 How to Represent the World The Universal Turing Machine Rendering the World by Computer The Neuronal Virtual-Reality Generator Our Biased World Picture Computing by Neurons Correcting Our World Picture Flying the Coop of Our Senses
13 Expanded Reality A Fine Bouquet Mathematics and Reality The Neuron Circuitry of Language The Feathers of the Brain Mathematics and Forecognition The Reluctant Sensory Brain The Limits of Knowledge A Note About Reality
14 Information Processing in the Brain Cell Organization in the Brain Cortical Information-Processing Units Cortical-Cell Topography and Worldview A Last-Minute Change in Worldview Retrieving a Lost Dimension Information Processing in the Brain from the Bottom Up Being of One Mind Two Minds in One Body?

pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil

To see machines' ability to use heuristic methods, consider one of the most interesting of the unsolvable problems, the "busy beaver" problem, formulated by Tibor Rado in 1962.31 Each Turing machine has a certain number of states that its internal program can be in, which correspond to the number of steps in its internal program. There are a number of different 4-state Turing machines that are possible, a certain number of 5-state machines, and so on. In the "busy beaver" problem, given a positive integer n, we construct all the Turing machines that have n states. The number of such machines will always be finite. Next we eliminate those n-state machines that get into an infinite loop (that is, never halt). Finally, we select the machine (one that does halt) that writes the largest number of 1s on its tape. The number of 1s that this Turing machine writes is called the busy beaver of n. Rado showed that there is no algorithm—that is, no Turing machine—that can compute this function for all ns.

The complexity of Babbage's invention stemmed only from the details of its design, which indeed proved too difficult for Babbage to implement using the technology available to him. The Turing machine, Alan Turing's theoretical conception of a universal computer in 1936, provides only seven very basic commands, yet can be organized to perform any possible computation.73 The existence of a "universal Turing machine," which can simulate any possible Turing machine that is described on its tape memory, is a further demonstration of the universality and simplicity of information.74 In The Age of Intelligent Machines, I showed how any computer could be constructed from "a suitable number of [a] very simple device," namely, the "nor" gate.75 This is not exactly the same demonstration as a universal Turing machine, but it does demonstrate that any computation can be performed by a cascade of this very simple device (which is simpler than rule 110), given the right software (which would include the connection description of the nor gates).76 Although we need additional concepts to describe an evolutionary process that creates intelligent solutions to problems, Wolfram's demonstration of the simplicity and ubiquity of computation is an important contribution to our understanding of the fundamental significance of information in the world.

The crux of the problem is sorting out those n-state machines that get into infinite loops. If we program a Turing machine to generate and simulate all possible n-state Turing machines, this simulator itself gets into an infinite loop when it attempts to simulate one of the n-state machines that gets into an infinite loop. Despite its status as an unsolvable problem (and one of the most famous), we can determine the busy-beaver function for some ns. (Interestingly, it is also an unsolvable problem to separate those ns for which we can determine the busy beaver of n from those for which we cannot.) For example, the busy beaver of 6 is easily determined to be 35. With seven states, a Turing machine can multiply, so the busy beaver of 7 is much bigger: 22,961. With eight states, a Turing machine can compute exponentials, so the busy beaver of 8 is even bigger: approximately 10^43.

pages: 429 words: 114,726

The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise by Nathan L. Ensmenger

In order to facilitate his exploration, Turing invented a new tool, an imaginary device capable of performing simple mechanical computations. Each Turing Machine, which consisted of only a long paper tape along with a mechanism for reading from and writing to that tape, contained a table of instructions that allowed it to perform a single computation. As a computing device, the Turing Machine is deceptively simple; as a conceptual abstraction, it is extraordinarily powerful. As it turns out, a table of instructions can be written that simulates the behavior of any other Turing Machine; a machine carrying such a table is a Universal Turing Machine. The implication, as articulated in the Church-Turing thesis, is that every general-purpose computing machine is essentially equivalent. In the real world, the appealingly egalitarian abstractions of the Church-Turing thesis quickly break down in the face of the temporal and spatial constraints of the physical universe.

By abstracting the logical design of the digital computer from any particular physical implementation, von Neumann took a crucial first step in the development of a modern theory of computation.55 His was not the only contribution; in 1937, for example, Turing had described, for the purposes of demonstrating the limits of computation, what would become known as the Universal Turing Machine. Eventually, the Universal Turing Machine would become an even more fundamental construct of modern computer science. According to the Church-Turing thesis, first articulated in 1943 by the mathematician Stephen Kleene, any function that can be physically computed can be computed by a Universal Turing Machine. The abstraction of the technology of computing in the theoretical construct of the Turing Machine mirrored the shift toward software that was occurring in the larger commercial computing industry. Independent of the work of theoretical computer scientists, working programmers—and their corporate employers—were discovering to their chagrin that computer software was even more complicated and expensive to develop than computer hardware.

This means that in theory at least, all computers are functionally equivalent: any given computer is but a specific implementation of a more general abstraction known as a Universal Turing Machine. It is the Platonic ideal of the Universal Turing Machine, and not the messy reality of actual physical computers, that is the true subject of modern theoretical computer science; it is only by treating the computer as an abstraction, a mathematical construct, that theoretical computer scientists lay claim to their field being a legitimate scientific, rather than merely a technical or engineering, discipline. The story of this remarkable self-construction and its consequences is the subject of chapter 5. The idealized Universal Turing Machine is, of course, only a conceptual device, a convenient fiction concocted by the mathematician Alan Turing in the late 1930s as a means of exploring a long-standing puzzle in theoretical mathematics known as the Entscheidungsproblem.

pages: 846 words: 232,630

Darwin's Dangerous Idea: Evolution and the Meanings of Life by Daniel C. Dennett

It doesn't matter what material you make a computer out of; what matters is the algorithm it runs; and since every algorithm is finitely specifiable, it is possible to devise a uniform language for uniquely describing each algorithm and putting all the specifications in "alphabetical order." Turing devised just such a system, and in it every computer — from your laptop to the grandest parallel supercomputer that will ever be built — has a unique description as what we now call a Turing machine. The Turing machines can each be given a unique identification number — its Library of Babel Number, if you like. Gödel's Theorem can then be reinterpreted to say that each of those Turing machines that happens to be a consistent algorithm for proving truths of arithmetic (and, not surprisingly, these are a Vast but Vanishing subset of all the possible Turing machines) has associated with it a Gödel sentence — a truth of arithmetic it cannot prove. So that is what Gödel, anchored by Turing to the world of computers, tells us: every computer that is a consistent truth-of-arithmetic-prover has an Achilles' heel, a truth it can never prove, even if it runs till doomsday.
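The "alphabetical order" Dennett mentions can be made concrete with shortlex enumeration: list every description over a fixed alphabet first by length, then alphabetically, so each one receives a unique index — its "Library of Babel Number." A sketch (illustrative, not Turing's actual encoding):

```python
from itertools import count, islice, product

def shortlex(alphabet="01"):
    # Yield every string over the alphabet: all length-1 strings,
    # then all length-2 strings, and so on, alphabetically within a length.
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

# The position of a description in this stream is its unique number.
print(list(islice(shortlex(), 6)))   # → ['0', '1', '00', '01', '10', '11']
```

Because every machine description appears exactly once in this list, "the Turing machine with number k" is well defined, which is all the diagonal argument needs.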

They wanted something dead simple, easy to visualize and easy to calculate, so they not only dropped from three dimensions to two; they also "digitized" both space and time — all times and distances, as we saw, are in whole numbers of "instants" and "cells." It was von Neumann who had taken Alan Turing's abstract conception of a mechanical computer (now called a "Turing machine") and engineered it into the specification for a general-purpose stored-program serial-processing computer (now called a "von Neumann machine"); in his brilliant explorations of the spatial and structural requirements for such a computer, he had realized — and proved — that a Universal Turing machine (a Turing machine that can compute any computable function at all) could in principle be "built" in a two-dimensional world.6 Conway and his students also set out to confirm this with their own exercise in two-dimensional engineering.7 It was far from easy, but they showed how they could "build" a working computer out of simpler Life forms.

Whenever you {236} saw a glider approaching an eater, you would just predict "consumption in four generations" without bothering with the pixel-level calculations. As a second step, you could move to thinking of the gliders as symbols on the "tape" of a gigantic Turing machine, and then, adopting this higher design stance towards the configuration, predict its future as a Turing machine. At this level you would be "hand-simulating" the "machine language" of a computer program that plays chess, still a tedious way of making predictions, but orders of magnitude more efficient than working out the physics. As a third and still more efficient step, you could ignore the details of the chess-playing program itself and just assume that, whatever they are, they are good! That is, you could assume that the chess-playing program running on the Turing machine made of gliders and eaters played not just legal chess but good legal chess — it had been well designed (perhaps it has designed itself, in the manner of Samuel's checkers program) to find the good moves.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI by John Brockman

The vast majority of computer chips built in the era of Moore’s Law are based on the von Neumann architecture, including those powering our data centers, our laptops, and our smartphones. Von Neumann’s digital-computer architecture is conceptually the same generalization—from early digital computers constructed with electromagnetic relays at both Harvard University and Bletchley Park—that occurs in going from a special-purpose Turing Machine to a Universal Turing Machine. Furthermore, his self-replicating automata share a fundamental similarity with both the construction of a Turing Machine and the mechanism of DNA-based reproducing biological cells. There is to this day scholarly debate over whether von Neumann saw the cross connections between these three pieces of work, Turing’s and his two. Turing’s revision of his paper was done while he and von Neumann were both at Princeton; indeed, after getting his PhD, Turing almost stayed on as von Neumann’s postdoc.

One can imagine a different contingent version of our intellectual and technological history had Alan Turing and John von Neumann, both of whom made major contributions to the foundations of computing, not appeared on the scene. Turing contributed a fundamental model of computation—now known as a Turing Machine—in his paper “On Computable Numbers with an Application to the Entscheidungsproblem,” written and revised in 1936 and published in 1937. In these machines, a linear tape of symbols from a finite alphabet encodes the input for a computational problem and also provides the working space for the computation. A different machine was required for each separate computational problem; later work by others would show that in one particular machine, now known as a Universal Turing Machine, an arbitrary set of computing instructions could be encoded on that same tape. In the 1940s, von Neumann developed an abstract self-reproducing machine called a cellular automaton.
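The machine described here can be sketched in a few lines of Python. This is an illustrative reconstruction, not anything from the book: the rule format, the blank symbol `'_'`, and the function names are our own.

```python
# A minimal Turing machine simulator (illustrative sketch).
# A machine is a transition table: (state, symbol) -> (write, move, next_state).
# The tape is stored sparsely in a dict, so it can grow in either direction,
# standing in for Turing's unbounded tape; unwritten cells read as blank '_'.

def run_tm(rules, tape_str, start, accept, max_steps=10_000):
    tape = dict(enumerate(tape_str))
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = tape.get(head, '_')
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    cells = sorted(tape)
    out = ''.join(tape[i] for i in range(cells[0], cells[-1] + 1))
    return out.strip('_'), state

# Example machine: flip every bit, halt at the first blank.
flip = {
    ('q0', '0'): ('1', 'R', 'q0'),
    ('q0', '1'): ('0', 'R', 'q0'),
    ('q0', '_'): ('_', 'R', 'halt'),
}
print(run_tm(flip, '1011', 'q0', 'halt'))  # ('0100', 'halt')
```

As the passage notes, a different rule table is a different machine; the simulator itself stays fixed.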

When my colleagues and I were building WolframAlpha, one of the ideas we had was to get it to answer all of those reference-library questions from Desk Set. By 2009, it could answer them all. In 1943, Warren McCulloch and Walter Pitts came up with a model for how brains conceptually, formally, might work—an artificial neural network. They saw that their brainlike model would do computations in the same way as Turing Machines. From their work, it emerged that we could make brainlike neural networks that would act as general computers. And in fact, the practical work done by the ENIAC folks and John von Neumann and others on computers came directly not from Turing Machines but through this bypath of neural networks. But simple neural networks didn’t do much. Frank Rosenblatt invented a learning device he called the perceptron, which was a one-layer neural network. In the late sixties, Marvin Minsky and Seymour Papert wrote a book titled Perceptrons, in which they basically proved that perceptrons couldn’t do anything interesting, which is correct.

pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos

Then in 1936 Alan Turing imagined a curious contraption with a tape and a head that read and wrote symbols on it, now known as a Turing machine. Every conceivable problem that can be solved by logical deduction can be solved by a Turing machine. Furthermore, a so-called universal Turing machine can simulate any other by reading its specification from the tape—in other words, it can be programmed to do anything. The Master Algorithm is for induction, the process of learning, what the Turing machine is for deduction. It can learn to simulate any other algorithm by reading examples of its input-output behavior. Just as there are many models of computation equivalent to a Turing machine, there are probably many different equivalent formulations of a universal learner. The point, however, is to find the first such formulation, just as Turing found the first formulation of the general-purpose computer.
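The idea of a universal machine "reading its specification from the tape" can be made concrete with a toy interpreter whose input includes an encoded rule table. The encoding below is invented for this sketch; it is not Turing's actual construction, only an illustration of program-as-data.

```python
# Sketch of the universal-machine idea: one fixed interpreter whose input
# includes a *description* of another machine (encoding invented for the sketch).

def decode(spec):
    # spec: semicolon-separated quintuples "state,read,write,move,next"
    rules = {}
    for line in spec.split(';'):
        s, r, w, m, n = line.split(',')
        rules[(s, r)] = (w, m, n)
    return rules

def universal(spec, tape_str, max_steps=10_000):
    rules, tape = decode(spec), dict(enumerate(tape_str))
    state, head = 'q0', 0
    for _ in range(max_steps):
        key = (state, tape.get(head, '_'))
        if key not in rules:      # no applicable rule: halt
            break
        w, m, state = rules[key]
        tape[head] = w
        head += 1 if m == 'R' else -1
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, '_') for i in range(lo, hi + 1)).strip('_')

# The "program" for a bit-flipper, passed in as data:
flipper = "q0,0,1,R,q0;q0,1,0,R,q0"
print(universal(flipper, "110"))  # "001"
```

Swapping in a different `spec` string reprograms the same interpreter, which is the sense in which one machine "can be programmed to do anything."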


Besides, knowledge is not just a long list of facts. Knowledge is general, and has structure. “All humans are mortal” is much more succinct than seven billion statements of mortality, one for each human. Memorization gives us none of these things. Another candidate Master Algorithm is the microprocessor. After all, the one in your computer can be viewed as a single algorithm whose job is to execute other algorithms, like a universal Turing machine; and it can run any imaginable algorithm, up to its limits of memory and speed. In effect, to a microprocessor an algorithm is just another kind of data. The problem here is that, by itself, the microprocessor doesn’t know how to do anything; it just sits there idle all day. Where do the algorithms it runs come from? If they were coded up by a human programmer, no learning is involved. Nevertheless, there’s a sense in which the microprocessor is a good analog for the Master Algorithm.

pages: 158 words: 49,168

Infinite Ascent: A Short History of Mathematics by David Berlinski

These, too, are finite and correspond in an unspecified way to various internal configurations of the reading head, so that what the Turing machine does is a function not only of what at any moment it sees, but what at any moment it is. There is not much else. A Turing machine is a deterministic device, and like an ordinary machine, it does what it must do. Turing machines can nonetheless do rather an astonishing variety of intellectual tasks. Addition is an example, as with a few notational elaborations the logician may be observed constructing a calculating device out of thin air. Those elaborations: The machine’s single symbol is 1. Any natural number n the machine represents as a string of n + 1 consecutive 1’s, so that 0 is 1, 1 is 11, and n + m is represented by (n + m) + 1 consecutive 1’s. The following ten lines of code suffice to get this machine to add any two natural numbers. Any two (Figure 10.1). A mathematical function is computable, logicians say, if there is a Turing machine by which it can be computed. In the late 1930s, Alonzo Church and Stephen Kleene demonstrated that the recursive functions, the lambda-computable functions, and the Turing computable functions were one and the same, and so provided a single precise explication of the informal notion of an algorithm.
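Under the convention above (n written as n + 1 ones, the two numbers separated by a 0), unary addition takes only a handful of rules. The sketch below is our own reconstruction in Python, not the book's ten lines: it fuses the two blocks by overwriting the separator with a 1, then erases two 1's from the right end, leaving exactly (n + m) + 1 ones.

```python
# Unary addition on a toy Turing machine (our reconstruction, not the book's).
# Tape: 1^(n+1) 0 1^(m+1).  Rules: (state, symbol) -> (write, move, next_state).
RULES = {
    ('q0', '1'): ('1', 'R', 'q0'),   # skip the first block
    ('q0', '0'): ('1', 'R', 'q1'),   # join the blocks: separator becomes 1
    ('q1', '1'): ('1', 'R', 'q1'),   # skip the second block
    ('q1', '_'): ('_', 'L', 'q2'),   # past the right end: turn back
    ('q2', '1'): ('_', 'L', 'q3'),   # erase one 1
    ('q3', '1'): ('_', 'L', 'halt'), # erase a second 1, then stop
}

def add(n, m):
    tape = dict(enumerate('1' * (n + 1) + '0' + '1' * (m + 1)))
    state, head = 'q0', 0
    while state != 'halt':
        sym = tape.get(head, '_')
        write, move, state = RULES[(state, sym)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    ones = sum(1 for v in tape.values() if v == '1')
    return ones - 1   # decode: k + 1 ones represent k

print(add(2, 3))  # 5
```

Joining then erasing two 1's works for every case, including n = 0 or m = 0, because the bookkeeping ((n+1) + (m+1) + 1 − 2 = (n+m) + 1) never depends on the sizes of the blocks.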

An effective calculation is any calculation that could be undertaken, Turing argued, by an exceptionally simple imaginary machine, or even a human computer, someone who has, like a clerk in the department of motor vehicles or a college dean, been stripped of all cognitive powers and can as a result execute only a few primitive acts. A Turing machine consists of a tape divided into squares and a reading head. Although finite, the tape may be extended in both directions. The reading head, as its name might suggest, is designed to recognize and manipulate a finite set of symbols—0 and 1, for most purposes. Beginning at an initial square, the reading head is capable of moving to the left or to the right on the tape, one square at a time, and it is capable of writing symbols on the tape or erasing the symbols that it is scanning. At any given moment, the reading head may occupy one of a number of internal states. These, too, are finite and correspond in an unspecified way to various internal configurations of the reading head, so that what the Turing machine does is a function not only of what at any moment it sees, but what at any moment it is.

The definition of the algorithm, as Gödel observed, has one unexpected property: It is completely stable in the sense that it does not depend on any underlying system of formalization. No matter where logicians begin in defining the algorithm, they always end up in the same place. Gödel regarded this circumstance as a miracle. FIG. 10.1 And so it is. In virtue of its elegance and obviousness, the definition of an algorithm in terms of Turing machines has become the standard. The usual operations of arithmetic are all Turing-computable. So, too, most algorithms. So, too, Church argued, any algorithm, the concept of a Turing machine exhausting completely the idea of an effective calculation. This last claim is known as Church’s thesis, and in the seventy years since it was advanced, it has passed progressively from a conjecture, to a persuasive definition, to a law of nature, to an unbudgeable fixture in contemporary thought. The algorithm is the second of two great ideas in Western science; the first is the calculus.

The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal by M. Mitchell Waldrop

Their own paper, published in 1943 as "A Logical Calculus of the Ideas Immanent in Nervous Activity," was essentially a demonstration that their idealized neural networks were functionally equivalent to Turing machines. That is, any problem that a Turing machine could solve, an appropriately designed network could also solve. And conversely, anything that was beyond a Turing machine's power-such as the decidability problem-was likewise beyond a network's power. As the science historian William Aspray has written, "With the Turing machines providing an abstract characterization of thinking in the machine world and McCulloch and Pitts's neuron nets providing one in the biological world, the equivalence result suggested a unified theory of thought that broke down barriers between the physical and biological worlds."14 Or, as McCulloch himself would put it in his 1965 autobiography Embodiments of Mind, he and Pitts had proved the equivalence of all general Turing machines, whether "man-made or begotten."

And to enable transformations between sentence structures—the sort of thing we do all the time when we go from, say, active voice (John kissed Mary) to passive (Mary was kissed by John) or to a question (Whom did John kiss?)—you need a grammar from the most powerful class of all, the one that is mathematically equivalent to a Turing machine. To put it another way, the very fact that we human beings use language in the way we do is proof that, in some sense, our brains have the computational power of a Turing machine. Or to express it still another way, the pinnacle of all possible mathematical machines—the Turing machine—is also the baseline, the minimum needed for human cognition. Anybody who seriously wants to understand the workings of the mind had better start from there, because nothing less will do. Today, of course, Chomsky's proof that computational machines form a hierarchy is considered a major refinement of Turing's original insight, and one of the foundation stones of modern computer science.

Nonetheless, remarks he made at various times suggested that he intended the word automaton to encompass not just computers but brains, radar systems, the telephone system, homeostatic systems within biology, and anything else that processed information and regulated itself. In at least one instance, moreover, his theory proved to be spectacularly prescient. In the case of, say, a factory machine tool, von Neumann observed, you have an automaton that can turn out very complex parts but not another machine tool. Likewise, a universal Turing machine can output an arbitrarily complex tape but not another Turing machine. However, in almost any biological organism, you have an automaton that can not only reproduce identical copies of itself but also (through evolution) give rise to organisms that are more complex than itself. So von Neumann asked, What are the essential features required for an automaton to reproduce itself and to evolve? To give his readers a feel for the issues involved, von Neumann started out with a thought experiment.

pages: 293 words: 88,490

The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction by Richard Bookstaber


He developed a conceptual framework for a computer that could take in any set of instructions, execute them faithfully, and deliver the result. Turing’s colleague Alonzo Church (who, independently of Turing, had also demonstrated the impossibility of Hilbert’s program) called this the Turing machine. Turing went a step further, and added to the machine a set of instructions that were internal to the machine and were not altered over the course of execution. The instruction set could be for any computation we can envision, and so this became known as the universal computing machine, or the universal Turing machine (UTM). The UTM is a momentous achievement; it is the foundation for modern computers. It allows for the essential elements of computing—reading, writing, a database (in Turing’s conception, held on an infinitely long tape), short-term memory of the state of the computation, and the instructions to execute held internally as part of the database.

pages: 337 words: 93,245

Diaspora by Greg Egan

Don't tell me: our set of twenty thousand polysaccharide Wang Tiles just happens to form the Turing Machine for calculating pi." "No. What they form is a universal Turing Machine. They can calculate anything at all—depending on the data they start with. Every daughter fragment is like a program being fed to a chemical computer. Growth executes the program." "Ah." Paolo's curiosity was roused—but he was having some trouble picturing where the hypothetical Turing Machine put its read/write head. "Are you telling me only one tile changes between any two rows, where the 'machine' leaves its mark on the 'data tape' ...?" The mosaics he'd seen were a riot of complexity, with no two rows remotely the same. Karpal said, "No, no. Wang's original example worked exactly like a standard Turing Machine, to simplify the argument ... but the carpets are more like an arbitrary number of different computers with overlapping data, all working in parallel.

He said, "Hao Wang proved a powerful theorem, twenty-three hundred years ago. Think of a row of Wang Tiles as being like the data tape of a Turing Machine." Paolo had the library grant him knowledge of the term; it was the original conceptual form of a generalized computing device, an imaginary machine which moved back and forth along a limitless one-dimensional data tape, reading and writing symbols according to a given set of rules. "With the right set of tiles, to force the right pattern, the next row of the tiling will look like the data tape after the Turing Machine has performed one step of its computation. And the row after that will be the data tape after two steps, and so on. For any given Turing Machine, there's a set of Wang Tiles that can imitate it." Paolo nodded amiably. He hadn't heard of this particular quaint result, but it was hardly surprising.
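The correspondence Karpal describes (each row of the tiling reads like one configuration of the tape) can be visualized with a short sketch. These are printed configurations of a tiny made-up machine, not actual Wang tiles; the rules and symbols are our own.

```python
# Print each step of a tiny Turing machine as a row, so successive rows read
# like successive configurations of the "data tape" (the state is shown in
# brackets at the head position). Illustrative only; not a Wang tiling.
rules = {('A', '0'): ('1', 'R', 'B'),
         ('B', '0'): ('0', 'L', 'A'),
         ('A', '1'): ('1', 'L', 'H'),
         ('B', '1'): ('1', 'R', 'H')}

tape, state, head = list('0000'), 'A', 1
rows = []
while state != 'H':
    rows.append(''.join(f'[{state}]{c}' if i == head else c
                        for i, c in enumerate(tape)))
    w, m, state = rules[(state, tape[head])]
    tape[head] = w
    head += 1 if m == 'R' else -1
rows.append(''.join(tape))   # final tape, machine halted
for r in rows:
    print(r)
```

Read top to bottom, each row is determined locally by the row above it, which is exactly the property a tile set can enforce.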

pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity by Byron Reese

Consider that simple machine, that thought experiment with just a handful of parts: Everything Apollo 11 needed to do to make it to the moon and back could be programmed on a Turing machine. Everything your smartphone can do can be programmed on a Turing machine, and everything IBM Watson can do can be programmed on a Turing machine. Who could have guessed that such a humble little device could do all that? Well, Turing could, of course. But no one else seems to have had that singular idea. Exit Turing. Enter John von Neumann, whom we call the father of modern computing. In 1945, he developed the von Neumann architecture for computers. While Turing machines are purely theoretical, designed to frame the question of what computers can do, the von Neumann architecture is about how to build actual computers. He suggested an internal processor and computer memory that holds both programs and data.

Turing’s contribution at this point in our tale came in 1936, when he first described what we now call a Turing machine. Turing conceived of a hypothetical machine that could perform complex mathematical problems. The machine is made up of a narrow strip of graph paper, which, in theory, is infinitely long. On the graph paper there is always a single active cell, and above that cell hovers a head. The head can read and write to the paper and move around a bit, based on the instructions it receives or the programs it runs. The point of the Turing machine was not “Here’s how you build a computer” but rather, “This simple imaginary device can solve an enormous range of computational problems. Almost all of them.” In fact, anything a computer can do today, you could theoretically do on a Turing machine. And Turing not only conceived of the machine but figured all this out.

pages: 246 words: 81,625

On Intelligence by Jeff Hawkins, Sandra Blakeslee

The box, which today we call a central processing unit (CPU), follows a set of fixed rules for reading and editing the information on the tape. Turing proved, mathematically, that if you choose the right set of rules for the CPU and give it an indefinitely long tape to work with, it can perform any definable set of operations in the universe. It would be one of many equivalent machines now called Universal Turing Machines. Whether the problem is to compute square roots, calculate ballistic trajectories, play games, edit pictures, or reconcile bank transactions, it is all 1's and 0's underneath, and any Turing Machine can be programmed to handle it. Information processing is information processing is information processing. All digital computers are logically equivalent. Turing's conclusion was indisputably true and phenomenally fruitful. The computer revolution and all its products are built on it. Then Turing turned to the question of how to build an intelligent machine.

He felt computers could be intelligent, but he didn't want to get into arguments about whether this was possible or not. Nor did he think he could define intelligence formally, so he didn't even try. Instead, he proposed an existence proof for intelligence, the famous Turing Test: if a computer can fool a human interrogator into thinking that it too is a person, then by definition the computer must be intelligent. And so, with the Turing Test as his measuring stick and the Turing Machine as his medium, Turing helped launch the field of AI. Its central dogma: the brain is just another kind of computer. It doesn't matter how you design an artificially intelligent system, it just has to produce humanlike behavior. The AI proponents saw parallels between computation and thinking. They said, "Look, the most impressive feats of human intelligence clearly involve the manipulation of abstract symbols and that's what computers do too.

We use mental symbols to represent objects, their positions, their names, and other properties. Sure, people do all this with brains and not with the kinds of computers we build, but Turing has shown that it doesn't matter how you implement or manipulate the symbols. You can do it with an assembly of cogs and gears, with a system of electronic switches, or with the brain's network of neurons whatever, as long as your medium can realize the functional equivalent of a Universal Turing Machine." This assumption was bolstered by an influential scientific paper published in 1943 by the neurophysiologist Warren McCulloch and the mathematician Walter Pitts. They described how neurons could perform digital functions that is, how nerve cells could conceivably replicate the formal logic at the heart of computers. The idea was that neurons could act as what engineers call logic gates. Logic gates implement simple logical operations such as AND, NOT, and OR.

pages: 351 words: 107,966

The Secret Life of Bletchley Park: The WWII Codebreaking Centre and the Men and Women Who Worked There by Sinclair McKay

Certainly, while at Bletchley, Turing was not greatly interested in social interaction. Yet he was a more radical, open, honest soul than the accounts suggest. Turing became a Fellow of King’s College before, in the late 1930s, heading for the United States, to Princeton. Building bridges between the two disciplines of mathematics and applied physics, he threw himself into the construction of a ‘Turing machine’, a machine that could carry out logical binary calculations. Having seen a tide-predicting machine some years back in Liverpool, it occurred to him that the principle of this device could be applied to his own machine, greatly speeding its function. By 1938, when it was increasingly clear that war was coming to the whole of Europe, Turing returned to England, and to King’s, with his electric multiplier machine mounted on a bread-board.

Even in the years that followed the war, with all the technological progress that had been made, Turing’s devices tended to fulfil the stereotype of the mad scientist’s invention: a labyrinth of wires trailing everywhere, held together with sticking plaster. Prior to his premature death in 1954, his home in Manchester was filled with extraordinary and sometimes pungent chemical experiments. Turing had fixed upon the idea of a ‘Universal Turing Machine’ in the 1930s; the inspiration had been provided by a mathematical problem posed in Cambridge, concerning the provability of any given mathematical assertion. Turing had the idea of developing a machine that could carry out this task. When first trying to envisage the form of such a machine, Turing thought of typewriters, how they were built to carry out a certain sort of function. According to his biographer Andrew Hodges, he had in his head an idea of a super-typewriter: a machine that could identify symbols; that could write, but could also erase.

According to his biographer Andrew Hodges, he had in his head an idea of a super-typewriter: a machine that could identify symbols; that could write, but could also erase. A machine that could be configured in many ways to carry out many tasks, and yet would be automatic, requiring little or no intervention from a human operator. His argument was that any calculation that a human could perform, a machine could perform as well. The bombes were not Universal Turing Machines. Far from it. Nor were they an extension of the Polish ‘bomba’ machines, from which their name was taken. The British bombe was quite a different thing. In one sense, it was a philosophical response to the nature of Enigma. Despite the daunting number of combinations thrown up by Enigma, it none the less worked via a mechanical process. Thus, reasoned Turing, Enigma could also be thwarted mechanically.

pages: 238 words: 46

When Things Start to Think by Neil A. Gershenfeld

He did this by introducing the concept of a Universal Turing Machine. This was a simple machine that had a tape (possibly infinitely long), and a head that could move along it, reading and writing marks based on what was already on the tape. Turing showed that this machine could perform any computation that could be done by any other machine, by preparing it first with a program giving the rules for interpreting the instructions for the other machine. With this result he could then prove or disprove the Entscheidungsproblem for his one machine and have it apply to all of the rest. He did this by showing that it was impossible for a program to exist that could determine whether another program would eventually halt or keep running forever. Although a Turing machine was a theoretical construction, in the period after World War II a number of laboratories turned to successfully making electronic computers to replace the human "computers" who followed written instructions to carry out calculations.
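The asymmetry in that argument (a halting computation eventually announces itself, but no finite amount of waiting certifies that a computation runs forever) can be seen with a step-bounded runner. The toy "machines" below are our own and far simpler than Turing machines, but the asymmetry is the same.

```python
# Sketch of why halting is only semi-decidable: running a machine confirms
# halting if it halts, but an exhausted step budget answers nothing either way.
def run_bounded(rules, state, budget):
    """Run a tape-less toy 'machine' (state -> next state, or None to halt)."""
    for step in range(budget):
        if rules[state] is None:
            return step          # halted: a definite answer
        state = rules[state]
    return None                  # budget exhausted: still undecided

halts = {'a': 'b', 'b': None}    # a -> b -> halt
loops = {'a': 'b', 'b': 'a'}     # a <-> b forever

print(run_bounded(halts, 'a', 1000))  # 1: halted after one step
print(run_bounded(loops, 'a', 1000))  # None: no bound settles it
```

For these two-state toys a loop is obvious by inspection; Turing's result is that for machines in general no program can make that inspection reliably.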

Like the checkers on a checkerboard, tokens that each represent a collection of molecules get moved among sites based on how the neighboring sites are occupied. This idea has come to be known as Cellular Automata (CAs). From the 1970s onward, the group of Ed Fredkin, Tomaso Toffoli, and Norm Margolus at MIT started to make special-purpose computers designed for CAs. Because these machines entirely dispense with approximations of continuous functions, they can be much simpler and faster. And because a Turing machine can be described this way, a CA can do anything that can be done with a conventional computer. A cellular automata model of the universe is no less fundamental than one based on calculus. It's a much more natural description if a computer instead of a pencil is used to work with the model. BIT BELIEFS + 133 And the discretization solves another problem: a continuous quantity can represent an infinite amount of information.
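An elementary one-dimensional cellular automaton fits in a few lines. The sketch below is our illustration, not the MIT group's machines; it implements Rule 110, a one-dimensional CA that was later proved capable of universal computation.

```python
# One step of an elementary cellular automaton: each cell's next value is
# looked up from the (left, self, right) neighborhood in the rule number's bits.
def step(cells, rule=110):
    out = []
    for i in range(len(cells)):
        left = cells[i - 1] if i > 0 else 0          # cells outside the
        right = cells[i + 1] if i < len(cells) - 1 else 0  # array read as 0
        idx = (left << 2) | (cells[i] << 1) | right
        out.append((rule >> idx) & 1)
    return out

row = [0, 0, 1, 0, 0]
row = step(row)
print(row)  # [0, 1, 1, 0, 0]
```

The token-on-a-checkerboard picture in the text is the same scheme in two dimensions, with a richer update rule in place of the three-cell lookup.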

Much to our pleasant surprise, we discovered that we could design simple continuous systems that exactly matched the response of the familiar digital ones for digital signals, but if they started in the wrong state they could use their analog freedom to synchronize onto the digital signal. Instead of splitting the job into an analog detector and then a digital search program as is done now, a physical analog part can do everything. The digital world has gone on refining Turing's machine, when even Turing understood its limits. One more area that he made a seminal contribution to was explaining how biology can form patterns by using chemical reactions. This useful behavior is easy to find in nature but hard to model on a digital computer. Now that we're approaching the end of the era of rapid increases in the performance of digital computers, like Turing we have no choice but to pay more attention to information processing in the analog world.

pages: 222 words: 74,587

Paper Machines: About Cards & Catalogs, 1548-1929 by Markus Krajewski, Peter Krapp

Historians lament this, since the usually complex sources of historiography allow for many competing or complementary takes on the genealogy of a concept. For an exemplary treatment of computing myths, see Levy 1998. 15. This comparison is not to claim that the index catalog is already a Turing machine. Comparisons, transfers, and analogies are not that simple. If the elements of a universal discrete machine are present, they still lack the computational logic of an operating system, the development of which constitutes Turing’s foundational achievement. What is described here is merely the fact that the card catalog is literally a paper machine, similar to a nontrivial Turing machine only in having similar components—no more, no less. 16. See for instance note 15 regarding the limits of the Turing machine analogy, note 1 of chapter 2 for the book flood metaphor, and notes 62 of chapter 2 and 42 of chapter 4 for the business card and bank note comparisons, respectively.

This epistemological payoff is contained in the description—in short, the necessary catachresis of historiography activates insights that are called forth by the various connotations of the metaphors chosen. Thus, readers ought not be too surprised by references that may appear peculiar at first. They will find correspondences between index cards and bank notes, house numbers and book shelving, card catalogs and Turing machines, masses of books and their description as waves of a flood. These images are called upon consciously, at times by way of historical quotations. Even if it is clear that a card catalog does not perfectly resemble the digital calculator or computer, I maintain that the card catalog is one precursor of computing.14 On the software side, the components of the catalog and its function correspond to the theoretical concept of a universal discrete machine as developed by Turing in 1936, with a writing/reading head (or scriptor), an infinite paper band partitioned into discrete steps (or slips), and an unambiguous set of instructions for reading and writing data.15 Moreover, on the hardware side there is a line of industrial development from library technology directly to the producers of early computing installations, pointing to the technology transfer from the catalog card to the punch card and on to modern storage media.

In addition, I owe thanks to Marguerite Avery, Wolfram Burckhardt, Anne Mark, Jasmin Meerhoff, W. Boyd Rayward, and Oliver Simons for their assistance and cooperation; last but not least, and particularly for the time in between the two books, thanks to Frau V. Weimar, May 2010 Markus Krajewski Notes Chapter 1 1. From the catalog copy in figure 1.1. 2. Turing 1987, pp. 20, 91 and also in this connection, Dotzler 1996, esp. p. 7. For the structural analogy between card catalog and Turing machine, see above all note 15 in this chapter. 3. “For the duration of a state,” as Müller 1995, p. 45 put it. 4. See Kittler 1986, pp. 244f., and Kittler 1993, pp. 170ff. 5. Translator’s note: The author uses the German word verzetteln in a double sense: putting notes and quotes down on paper slips, and sorting individual notes in an index; but also disseminating, releasing them. See Grimm and Grimm 1956, cols. 2565ff.

pages: 253 words: 80,074

The Man Who Invented the Computer by Jane Smiley

In addition to this, while the machine could operate eternally, there was no way for the machine to check itself, and so there was no way to know whether every answer was “true” or not. The lambda calculus “represented an elegant and powerful symbolism for mathematical processes of abstraction and variation,” but the Turing machine was a thought experiment that posited a mechanical operation, to be done either by a mechanism or by a human mind. Andrew Hodges, Turing’s biographer, points out that Turing’s idea “was not only a matter of abstract mathematics, not only a play of symbols, for it involved thinking about what people did in the physical world … His machines—soon to be called Turing Machines—offered a bridge, a connection between abstract symbols and the physical world. Indeed, his imagery was, for Cambridge, almost shockingly industrial.” In May 1936, Alan Turing submitted his paper, entitled “On Computable Numbers, with an Application to the Entscheidungsproblem,” to the Proceedings of the London Mathematical Society and then applied unsuccessfully for a Procter Fellowship at Princeton.

In spring 1945, right around the time that the order was going out for the ten Colossus machines to be destroyed, Womersley went to the United States and was shown ENIAC (before, in fact, it was unveiled to the general public). When Womersley got back to the UK, he was eager to build a UK version. Since, unlike Mauchly and Eckert, he happened to be quite familiar with “On Computable Numbers” and had even toyed with designing a mechanical version of a Turing machine before the war (his partner, like Mauchly and Eckert’s original partner, was in the horse-racing pari-mutuel totalizer business), he offered Turing £800 per year—£200 more than he had received at Bletchley Park—to come to the NPL. Turing began work on October 1, 1945, and he was ready with plenty of ideas. Many of his new colleagues at the NPL had also been recruited from the war effort, though from the Admiralty Computing Service, not from Bletchley Park.

Turing thought that if the hardware was fast enough and the program detailed and complex enough, roomfuls of processor units could be avoided. However, such a machine would have had difficulties of its own, according to John Gustafson, who maintains, It is clear that what he had in mind building was something very like the theoretical model in his Computability paper, the model we now call a Turing machine. It worked on one bit at a time, but used a huge amount of memory to do anything of consequence. Since he had proved that anything that was computable could be theoretically computed on such a simple device, why not build one? The CPU would only have required a handful of vacuum tubes. But such a machine is horrendously difficult to program, and even at electronic speeds, it would have been painfully slow for many simple things like floating-point arithmetic.2 One of Turing’s difficulties (or Womersley’s, as his director) was that he didn’t mind talking to the press (either the general press or journals of particular groups, such as the Institution of Radio Engineers), but when he did talk, he raised hopes that did not seem realistically capable of fulfillment, and he was often met with skepticism.

The Code Book: The Science of Secrecy From Ancient Egypt to Quantum Cryptography by Simon Singh

Turing envisaged that the numbers to be multiplied could be fed into the machine via a paper tape, rather like the punched tape that is used to feed a tune into a Pianola. The answer to the multiplication would be output via another tape. Turing imagined a whole series of these so-called Turing machines, each specially designed to tackle a particular task, such as dividing, squaring or factoring. Then Turing took a more radical step. He imagined a machine whose internal workings could be altered so that it could perform all the functions of all conceivable Turing machines. The alterations would be made by inserting carefully selected tapes, which transformed the single flexible machine into a dividing machine, a multiplying machine, or any other type of machine. Turing called this hypothetical device a universal Turing machine because it would be capable of answering any question that could logically be answered. Unfortunately, it turned out that it is not always logically possible to answer a question about the undecidability of another question, and so even the universal Turing machine was unable to identify every undecidable question.

Mathematicians who read Turing’s paper were disappointed that Gödel’s monster had not been subdued but, as a consolation prize, Turing had given them the blueprint for the modern programmable computer. Turing knew of Babbage’s work, and the universal Turing machine can be seen as a reincarnation of Difference Engine No. 2. In fact, Turing had gone much further, and provided computing with a solid theoretical basis, imbuing the computer with a hitherto unimaginable potential. It was still the 1930s though, and the technology did not exist to turn the universal Turing machine into a reality. However, Turing was not at all dismayed that his theories were ahead of what was technically feasible.

If the result is tewwer rather than wetter, then it is clear that plugboard cables should be inserted so as to swap w and t. Typing in other bits of ciphertext would reveal other plugboard cablings. The combination of crib, loops and electrically connected machines resulted in a remarkable piece of cryptanalysis, and only Turing, with his unique background in mathematical machines, could ever have come up with it. His musings on the imaginary Turing machines were intended to answer esoteric questions about mathematical undecidability, but this purely academic research had put him in the right frame of mind for designing a practical machine capable of solving very real problems. Bletchley was able to find £100,000 to turn Turing’s idea into working devices, which were dubbed bombes because their mechanical approach bore a passing resemblance to Rejewski’s bombe.

The Deep Learning Revolution (The MIT Press) by Terrence J. Sejnowski

When data sets are small, a single sample left out of the training set can be used to test the performance of the network trained on the remaining examples, and the process repeated for every sample to get an average test performance. This is a special case of cross-validation where n = 1, in which n subsamples are held out. Glossary: Turing machine. Hypothetical computer invented by Alan Turing (1937) as a simple model for mathematical calculation. A Turing machine consists of a “tape” that can be moved back and forth, a “head” that has a “state” that can change the property of the active cell beneath it, and a set of instructions for how the head should modify the active cell and move the tape. At each step, the machine may modify the property of the active cell and change the state of the head.
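The tape/head/state/instruction-table model in the glossary entry above can be sketched in a few lines of code. The simulator and the particular instruction table below (a binary-increment machine) are illustrative assumptions, not drawn from any of the quoted sources:

```python
# Minimal Turing machine simulator: a sparse tape, a head position, a
# state, and an instruction table mapping (state, symbol) to
# (symbol to write, head move, next state).
def run(table, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))      # sparse tape, extensible in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Example table: binary increment. Walk right to the end of the number,
# then add one with carry while moving back left.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run(INCREMENT, "1011"))  # 1011 (eleven) + 1 -> 1100 (twelve)
```

The same `run` function executes any such table, which is the sense in which one fixed mechanism can host many different "machines."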

(The Department of Defense had recently poured $600 million into its Strategic Computing Initiative, a program that ran from 1983 to 1993 but came up short on building a vision system to guide a self-driving tank.)9 “Good luck with that,” was my reply. Gerald Sussman, who made several important applications of AI to real-world problems, including a system for high-precision integration for orbital mechanics, defended the honor of MIT’s approach to AI with an appeal to the classic work of Alan Turing, who had proven that the Turing machine, a thought experiment, could compute any computable function. “And how long would that take?” I asked. “You had better compute quickly or you will be eaten,” I added, then walked across the room to pour myself a cup of coffee. And that was the end of the dialogue with the faculty. “What is wrong with this picture?” is a question that every student in my lab can answer. But the first two rows of my lunchtime audience were stumped.

The fourth clue is that our brains are filled with billions and billions of tiny neurons that are constantly communicating with one another. This suggests that, for solutions to the hard problems in artificial intelligence, we should be looking into computers with massively parallel architectures rather than those with von Neumann digital architectures through which data and instructions are fetched and executed one at a time. Yes, it is true that a Turing machine can compute any computable function given enough memory and enough time, but nature had to solve problems in real time. To do this, it made use of the brain’s neural networks that, like the most powerful computers on the planet, have massively parallel processors. Algorithms that run efficiently on them will eventually win out. Early Pioneers In the 1950s and 1960s, shortly after Norbert Wiener introduced cybernetics, based on communications and control systems in both machines and living creatures,3 there was an explosion of interest in self-organizing systems.

pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All by Robert Elliott Smith

In 1936, British mathematician (and later war hero) Alan Turing designed a theoretical mechanical device, along with a mathematical proof that it could carry out any mechanical procedure that could be implemented on any computer, ever. This device, which we now call a Turing Machine, involved configurations of mechanical ‘states’, and the ability to write to and read from memory (in the form of symbols written on an infinitely long roll of tape). In every way, the Turing Machine is like Babbage’s Engine, if that Engine had infinite memory in its ‘store’. In fact, the whole point of Turing’s proof was that this machine was equivalent to any general-purpose computer, and therefore any general-purpose computer is just like any other. That is to say, Turing proved that regardless of whether it was built on a substrate of brass gears and levers, vacuum tubes, as-yet-uninvented semiconductor chips, or even biological cells, the Turing Machine could implement any algorithm10 that could be implemented on any other computer.

After all, they theorize, everything that can possibly be built must follow the mechanical rules of physics. Thus everything that can be built is a mechanical device. Turing showed that all such devices that compute, regardless of the substrate they are built on, have the same ultimate algorithmic capabilities. The theory concludes that the brain is just another mechanical device, a built ‘machine’ that must follow the rules of physics. Therefore, the brain can’t do anything that a Turing machine, and thus any other computer, can’t do. By this argument, the brain is just a computer. Of course, the ultimate ‘capabilities’ assumed in this line of reasoning are the execution of Hilbert’s ‘mechanical procedures’ (algorithms). Whether this is all brains do is a debatable question that is philosophically central to AI. Regardless, many believe that universal computation indicates that the substrate of computing is irrelevant.

Due to the deeply rooted societal bias against homosexuals, he was, until recently, the greatest unsung hero of the Second World War, perhaps the greatest of all time. He was instrumental in the creation of a computer that deciphered the coded messages of the Germans’ Enigma machine, a feat that is thought to have shortened the war by at least four years and thus saved millions of lives. His invention of the thought-experiment computer the Turing machine literally created the field of computer science, the bedrock field for an immeasurable fraction of today’s global society. And he created another thought experiment that has forever altered the cultural zeitgeist about man and machines: the so-called Turing test. The test was first described in the 1950 paper entitled ‘Computing Machinery and Intelligence’,4 in which Turing acknowledges the difficulty of defining ‘thinking’, such that one could answer the question, ‘Do computers think?’

pages: 626 words: 181,434

I Am a Strange Loop by Douglas R. Hofstadter

For instance, consider the case of John Searle, a philosopher who has spent much of his career heaping scorn on artificial-intelligence research and computational models of thinking, taking special delight in mocking Turing machines. A momentary digression… Turing machines are extremely simple idealized computers whose memory consists of an infinitely long (i.e., arbitrarily extensible) “tape” of so-called “cells”, each of which is just a square that either is blank or has a dot inside it. A Turing machine comes with a movable “head”, which looks at any one square at a time, and can “read” the cell (i.e., tell if it has a dot or not) and “write” on it (i.e., put a dot there, or erase a dot). Lastly, a Turing machine has, stored in its “head”, a fixed list of instructions telling the head under which conditions to move left one cell or right one cell, or to make a new dot or to erase an old dot. Though the basic operations of all Turing machines are supremely trivial, any computation of any sort can be carried out by an appropriate Turing machine (numbers being represented by adjacent dot-filled cells, so that “•••” flanked by blanks would represent the integer 3).
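Hofstadter's dot-and-blank idiom can be made concrete with a tiny sketch. The two-rule "successor" machine below (walk right over the dots, then add one more) is an illustrative assumption in his notation, not a machine from the book:

```python
# Hofstadter-style Turing machine tape: each cell is blank (' ') or
# dotted ('•'), and the integer n is written as n adjacent dotted cells.
def successor(tape):
    """Run a two-rule machine that turns n dots into n+1 dots:
    scan right while the cell is dotted, then dot the first blank."""
    cells = list(tape)
    head = 0
    while head < len(cells) and cells[head] == "•":  # rule 1: move right over dots
        head += 1
    if head == len(cells):
        cells.append(" ")                            # extend the "infinite" tape
    cells[head] = "•"                                # rule 2: write a dot and halt
    return "".join(cells).rstrip()

print(successor("•••"))  # the integer 3 becomes 4: '••••'
```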

Back now to philosopher John Searle. He has gotten a lot of mileage out of the fact that a Turing machine is an abstract machine, and therefore could, in principle, be built out of any materials whatsoever. In a ploy that, in my opinion, should fool only third-graders but that unfortunately takes in great multitudes of his professional colleagues, he pokes merciless fun at the idea that thinking could ever be implemented in a system made of such far-fetched physical substrates as toilet paper and pebbles (the tape would be an infinite roll of toilet paper, and a pebble on a square of paper would act as the dot in a cell), or Tinkertoys, or a vast assemblage of beer cans and ping-pong balls bashing together.

Page 26 abstractions are central…in the study of the brain… See [Treisman], [Minsky 1986], [Schank], [Hofstadter and FARG], [Kanerva], [Fauconnier], [Dawkins], [Blackmore], and [Wheelis] for spellings-out of these abstract ideas. Page 27 Just as the notion of “gene” as an invisible entity that enabled… See [Judson]. Page 27 and just as the notion of “atoms” as the building blocks… See [Pais 1986], [Pais 1991], [Hoffmann], and [Pullman]. Page 28 Turing machines are…idealized computers… See [Hennie] and [Boolos and Jeffrey]. Page 29 In his vivid writings, Searle gives… See Chapter 22 of [Hofstadter and Dennett]. Page 29 one particular can that would “pop up”… In his smugly dismissive review [Searle] of [Hofstadter and Dennett], Searle states: “So let us imagine our thirst-simulating program running on a computer made entirely of old beer cans, millions (or billions) of old beer cans that are rigged up to levers and powered by windmills.

pages: 913 words: 265,787

How the Mind Works by Steven Pinker

The computer scientist Joseph Weizenbaum once showed how to build one out of a die, some rocks, and a roll of toilet paper. In fact, one doesn’t even need a huge warehouse of these machines, one to do sums, another to do square roots, a third to print English sentences, and so on. One kind of Turing machine is called a universal Turing machine. It can take in a description of any other Turing machine printed on its tape and thereafter mimic that machine exactly. A single machine can be programmed to do anything that any set of rules can do. Does this mean that the human brain is a Turing machine? Certainly not. There are no Turing machines in use anywhere, let alone in our heads. They are useless in practice: too clumsy, too hard to program, too big, and too slow. But it does not matter. Turing merely wanted to prove that some arrangement of gadgets could function as an intelligent symbol-processor.

The machine is allowed as much tape as it needs. This design is called a Turing machine. What can this simple machine do? It can take in symbols standing for a number or a set of numbers, and print out symbols standing for new numbers that are the corresponding value for any mathematical function that can be solved by a step-by-step sequence of operations (addition, multiplication, exponentiation, factoring, and so on—I am being imprecise to convey the importance of Turing’s discovery without the technicalities). It can apply the rules of any useful logical system to derive true statements from other true statements. It can apply the rules of any grammar to derive well-formed sentences. The equivalence among Turing machines, calculable mathematical functions, logics, and grammars, led the logician Alonzo Church to conjecture that any well-defined recipe or set of steps that is guaranteed to produce the solution to some problem in a finite amount of time (that is, any algorithm) can be implemented on a Turing machine.

What does this mean? It means that to the extent that the world obeys mathematical equations that can be solved step by step, a machine can be built that simulates the world and makes predictions about it. To the extent that rational thought corresponds to the rules of logic, a machine can be built that carries out rational thought. To the extent that a language can be captured by a set of grammatical rules, a machine can be built that produces grammatical sentences. To the extent that thought consists of applying any set of well-specified rules, a machine can be built that, in some sense, thinks.

Speaking Code: Coding as Aesthetic and Political Expression by Geoff Cox, Alex McLean

To me, Chinese is just so many meaningless squiggles.”50 Given linguistic instruction, Searle imagines that he becomes able to answer questions that are indistinguishable from those of native Chinese speakers, but insists, “I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.”51 Searle’s position is based on the linguistic distinction between syntax and semantics as applied to the digital computer or Turing machine as a “symbol-manipulating device,” where the units have no meaning in themselves (a position that follows from semiotics). Even if it is argued that there is some sense of intentionality in the program or a degree of meaning in the unit, it is not the same as human information processing, and this sense of agency is what Searle calls “as-if intentionality.” In the Chinese Room, Searle becomes an instantiation of a computer program, drawing on a database of symbols and arranging them according to program rules.

The “grain of the voice” is what Barthes calls the individual “voice-magic,” imparted by the “very precise space of the encounter between a language and a voice.” Roland Barthes, “The Grain of the Voice,” in Image, Music, Text, 181. The music industry tries to commodify this “grain.” 54. It is interesting to note that a function to calculate pi can be written; the issue is that it would never return the value. If running indefinitely, a Turing machine would be able to output it all, as it has infinite memory. 55. Groys, The Communist Postscript, xvii. 56. Nick Couldry, Why Voice Matters: Culture and Politics after Neoliberalism (London: Sage, 2010), 3. 57. Echoed in the phrase “words made flesh,” the title of Florian Cramer’s Words Made Flesh: Code, Culture, Imagination (Rotterdam: Piet Zwart Institute, 2005), 125 (available at http:// pzwart.wdka.hro.nl/mdr/research/fcramer/wordsmadeflesh/). 58.
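The note's point about pi, that a computation can be written which never "returns" a finished value yet streams output forever on unbounded memory, can be illustrated with a generator. The spigot algorithm used here (Gibbons' unbounded spigot) is an addition for illustration, not something from the quoted note:

```python
from itertools import islice

def pi_digits():
    """Yield decimal digits of pi forever (Gibbons' unbounded spigot).
    The function never returns a completed value of pi; like a Turing
    machine on an infinite tape, it can only stream the digits out."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:           # next digit is safely determined
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:                                # fold in another term of the series
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 8)))  # [3, 1, 4, 1, 5, 9, 2, 6]
```

Any finite machine running this loop eventually exhausts its memory; only the idealized infinite tape lets the stream continue indefinitely.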

., Views into the Chinese Room: New Essays on Searle and Artificial Intelligence (Oxford: Clarendon Press, 2002), 168. 53. Ibid., 168–169, citing Wittgenstein’s Philosophical Investigations (1953). 54. Proudfoot, “Wittgenstein’s Anticipation of the Chinese Room,” 177–178. 55. John R. Searle, “Twenty-One Years in the Chinese Room,” in Preston and Bishop, Views into the Chinese Room, 56. 56. Ibid. With this statement, Searle is arguing that Turing machines rely on abstract mathematical processes but not on energy transfer like some other machines; and one might extrapolate that the discourse around artificial life reinvigorates the fantasies of artificial intelligence in this way. Merely increasing computer capacity does not mean machine consciousness is any closer but perhaps only that the fantasies become stronger, as artificial intelligence morphs into the discourse around artificial life and the messier biocomputational “wet” realm of living cells. 57.

pages: 903 words: 235,753

The Stack: On Software and Sovereignty by Benjamin H. Bratton

Turing envisioned his famous “machine” according to the tools of his time to involve an infinite amount of “tape” divided into cells that can store symbols, moved along a stationary read-write “head” that can alter those symbols, a “state register” that can map the current arrangement of symbols along the tape, and a “table” of instructions that tells the machine to rewrite or erase the symbol and to move the “head,” assuming a new state for the “register” to map. The Church-Turing thesis (developed through the 1940s and 1950s) would demonstrate that Turing's “machine” not only could simulate algorithms, but that a universal Turing machine, containing all possible such machines, could, in theory, calculate all logical problems that are in fact computable (a limit that Turing's paper sought to identify). The philosophical implications are thorny and paradoxical. At the same moment that Turing demonstrates the mechanical basis for synthetic logic by machines (suggesting real artificial intelligence), he partially delinks the correlation between philosophical thought and machinic calculation.

There is no planetary-scale computation, now a vast network of many billions of little Turing machines, that does not intake and absorb the Earth's chemistry in order to function. The Stack is a hungry machine, and while its curated population of algorithms may be all but massless, their processing of Earthly material is a physical event, and therefore the range of possible translations between information and mechanical appetites has another limit that is not mathematical but defined by the real finitude of substances that can force communication between both sides of this encounter.20 Furthermore, like any megamachine the Earth layer is as socially constrained as it is technologically configured, and so there are political economies of Turing machines that are only accessible through misaligned and uneven hierarchies of geography, energy, and programmability.21 This is made clear by unpacking and sifting through the hardware on which The Stack depends.

In trying to place the image, I also think about how just up the 101 freeway from where Deleuze was sitting, silicon was being repurposed as the physical medium of synthetic computational intelligence, in the areas near Palo Alto in a “valley” already having been named in honor of this element. For me these are conjoined, and not just by their geographic proximity: Deleuze on the beach contemplating (we might assume) what he called “the plane of immanence,” the field from which all potential forms emerge, and Intel's initial approximations of microprocessor technology for universal computation, putting a mini-Turing machine on a silicon wafer.5 In different ways and for different ends, both grapple with matter as vibrant, contingent, and mutable, as reproduced in the careful calculation of sets of differences drawn from particular virtual possibilities. At the end of the day, Deleuze's philosophy is more about chemistry than computation, continuities more than discrete digitalizations, but his philosophical imagery of worlds appearing from the multiplication of immanent processes and generic diagrams, on oscillations of the physical and the virtual, is not unfamiliar to the projects of information realism.

pages: 239 words: 56,531

Written just before the war, Turing’s paper “On Computable Numbers” was his greatest contribution to computer science. In it, he proposed the questions that still remain central to the discipline decades later. Turing suggested that it should be possible to make a “Universal Machine,” a computer that could simulate the performance of any other device. The fact that the analog machines of the late 1930s and early 1940s were far too slow to function as Universal Turing Machines did not affect his faith that such devices would come into existence. And with the stimulus of the war effort, they did. Within a decade, Turing was working on the Manchester Mark I computer—one of the first machines recognized as being a direct antecedent to the computers we use now. Turing proposed a universal machine that functioned as a stored program computer; in this setup, the programs, or software, could be swapped and modified, improved and abandoned, just as the hardware could and would be.

He assisted Christopher Strachey in producing what was probably the first artwork made with a computer: the love letter generator of 1952. 5 Strachey, working from a thousand-line piece of software (the longest yet written for the Mark I), created a program that randomly produced such sentimental and vaguely meaningless missives as: Darling Sweetheart, You are my avid fellow feeling. My affection curiously clings to your passionate wish. My liking yearns for your heart. You are my wistful sympathy: my tender liking. Yours beautifully M. U. C. Here, the Universal Turing Machine simulates mawkish Victorian sentimentality by choosing from a database of prewritten phrases that it then arranges into syntactically correct but stilted English. This trifle, inspired at least in part by the renown of Christopher’s uncle Lytton Strachey’s 1918 portrait of a generation, Eminent Victorians, is the product of a stored program computer, and as such may well be the first aesthetic object produced by the ancestors of the culture machine.

pages: 528 words: 146,459

Computer: A History of the Information Machine by Martin Campbell-Kelly, William Aspray, Nathan L. Ensmenger, Jeffrey R. Yost

Whereas Church relied on conventional mathematics to make his arguments, Turing used a conceptual computer, later known as the Turing Machine. The Turing Machine was a thought experiment rather than a realistic computer: the “machine” consisted of a scanning head that could read and write symbols on a tape of potentially infinite length, controlled by an “instruction table” (which we would now call a program). It was computing reduced to its simplest form. Turing also showed that his machine was “universal” in the sense that it could compute any function that it was possible to compute. Generally people described a computer as “general purpose” if it could solve a broad range of mathematical problems. Turing made a much stronger claim: that a universal computer could tackle problems not just in mathematics but in any tractable area of human knowledge. In short, the Turing Machine embodied all the logical capabilities of a modern computer.

Alan Turing was born in 1912 and at school he was drawn to science and practical experimenting. He won a scholarship to King’s College, Cambridge University, and graduated in mathematics with the highest honors in 1934. He became a Fellow of King’s College and, in 1936, published his classic paper “On Computable Numbers with an Application to the Entscheidungsproblem” in which he described the Turing Machine. Turing showed that not all mathematical questions were decidable, and that one could not always determine whether or not a mathematical function was computable. For nonmathematicians this was an obscure concept to grasp, although later—after World War II—Turing explained the idea in an article in Science News. Turing had a gift for easy explanation to general audiences and illustrated his argument with puzzles rather than mathematics.

[Back-of-book index excerpt, flattened by text extraction; entries run alphabetically from “GUI (graphical user interface)” through “International Business Machines,” and include “Instruction tables (Turing Machine), 60.”]

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence by Calum Chace

His work is estimated to have shortened the war by two years, but incredibly, his reward was to be prosecuted for homosexuality and obliged to accept injections of synthetic oestrogen which rendered him impotent. He died two years later and it took 57 years before a British government apologised for this barbaric behaviour. Before the war, in 1936, Turing had already devised a theoretical device called a Turing machine. It consists of an infinitely long tape divided into squares, each bearing a single symbol. Operating according to the directions of an instruction table, a reader moves the tape back and forth, reading one square – and one symbol – at a time. Together with his PhD supervisor Alonzo Church he formulated the Church-Turing thesis, which says that a Turing machine can simulate the logic of any computer algorithm. (That word “algorithm” crops up a lot in computer science. It simply means a set of rules, or instructions, for a computer to follow. An algorithm is not the same thing as a programme, which tells a computer how to handle a particular situation such as opening a spreadsheet or calculating the sum of a column of figures.

The algorithm builds an internal model and uses it to make predictions, which it tests against additional data, refining the model as it goes.) Turing is also famous for proposing a test for machine intelligence, the Turing Test, in which a machine demonstrates intelligence by rendering a panel of human judges unable to determine that it is not human (which is essentially the test we humans apply to each other).

The birth of computing

The first design for what was, in effect, a general-purpose computing machine came from Charles Babbage, a Victorian academic and inventor, long before Turing’s birth. Babbage never finished the construction of his devices, although working machines have recently been built from his designs. His Difference Engine (designed in 1822) would carry out basic mathematical functions, and the Analytical Engine (design never completed) would carry out general-purpose computation.
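The device Chace describes (an unbounded tape of symbols, a reading head, and an instruction table) is simple enough to sketch in code. This is a minimal illustration only, not Turing's own construction; the state names and the example instruction table below are invented for the demonstration.

```python
def run_turing_machine(table, tape, state="right", blank="_", max_steps=10_000):
    """Simulate a Turing machine.

    `table` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "L" or "R".  The machine stops on entering "halt".
    """
    cells = dict(enumerate(tape))  # sparse tape: square index -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# An instruction table that increments a binary number: scan right to the
# end of the input, then propagate a carry back toward the left.
increment = {
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
print(run_turing_machine(increment, "1011"))  # -> 1100 (11 + 1 = 12 in binary)
```

Everything the machine "knows" lives in the instruction table; the simulator itself is the same for every machine, which is the seed of Turing's universality result.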

pages: 960 words: 125,049

Mastering Ethereum: Building Smart Contracts and DApps by Andreas M. Antonopoulos, Gavin Wood Ph. D.

Specifically, he proved that the halting problem (whether it is possible, given an arbitrary program and its input, to determine whether the program will eventually stop running) is not solvable. Alan Turing further defined a system to be Turing complete if it can be used to simulate any Turing machine. Such a system is called a Universal Turing machine (UTM). Ethereum’s ability to execute a stored program, in a state machine called the Ethereum Virtual Machine, while reading and writing data to memory makes it a Turing-complete system and therefore a UTM. Ethereum can compute any algorithm that can be computed by any Turing machine, given the limitations of finite memory. Ethereum’s groundbreaking innovation is to combine the general-purpose computing architecture of a stored-program computer with a decentralized blockchain, thereby creating a distributed single-state (singleton) world computer.
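Ethereum's practical answer to the halting problem it cannot solve is gas: every instruction costs something, so every execution provably terminates. The sketch below is not the EVM; it is a toy jump machine with an invented instruction set, showing how a step budget bounds even a deliberately infinite loop.

```python
class OutOfGas(Exception):
    """Raised when an execution exhausts its step budget."""

def run(program, gas):
    """Interpret a toy machine where every instruction costs 1 gas.

    Instructions: ("set", reg, val), ("add", reg, val),
    ("jmp", target_index), ("halt",).
    """
    regs, pc = {}, 0
    while True:
        if gas == 0:
            raise OutOfGas("step budget exhausted")
        gas -= 1
        op = program[pc]
        if op[0] == "halt":
            return regs, gas
        if op[0] == "set":
            regs[op[1]] = op[2]
            pc += 1
        elif op[0] == "add":
            regs[op[1]] = regs.get(op[1], 0) + op[2]
            pc += 1
        elif op[0] == "jmp":
            pc = op[1]

# This program loops forever, but the gas limit still guarantees it stops.
loop_forever = [("set", "x", 0), ("add", "x", 1), ("jmp", 1)]
try:
    run(loop_forever, gas=1_000)
except OutOfGas:
    print("terminated by gas limit")
```

No oracle decides in advance whether a program halts; the metered interpreter simply makes non-termination impossible, at the price of sometimes cutting off long-but-finite computations.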

The transaction contains metadata such as the gas limit for that transaction.

Truffle. One of the most commonly used Ethereum development frameworks.

Turing complete. A concept named after English mathematician and computer scientist Alan Turing: a system of data-manipulation rules (such as a computer’s instruction set, a programming language, or a cellular automaton) is said to be “Turing complete” or “computationally universal” if it can be used to simulate any Turing machine.

Vitalik Buterin. A Russian–Canadian programmer and writer primarily known as the cofounder of Ethereum and of Bitcoin Magazine.

Vyper. A high-level programming language, similar to Serpent, with Python-like syntax. Intended to get closer to a pure functional language. Created by Vitalik Buterin.

Wallet. Software that holds secret keys. Used to access and control Ethereum accounts and interact with smart contracts.

pages: 370 words: 94,968

The Most Human Human: What Talking With Computers Teaches Us About What It Means to Be Alive by Brian Christian

To Ackley’s point, most work on computation has not traditionally been on dynamic systems, or interactive ones, or ones integrating data from the real world in real time. Indeed, theoretical models of the computer—the Turing machine, the von Neumann architecture—seem like reproductions of an idealized version of conscious, deliberate reasoning. As Ackley puts it, “The von Neumann machine is an image of one’s conscious mind where you tend to think: you’re doing long division, and you run this algorithm step-by-step. And that’s not how brains operate. And only in various circumstances is that how minds operate.” I spoke next with University of Massachusetts theoretical computer scientist Hava Siegelmann, who agreed. “Turing was very [mathematically] smart, and he suggested the Turing machine as a way to describe a mathematician.16 It’s [modeling] the way a person solves a problem, not the way he recognizes his mother.”

The very fact that we, as a rule, must deliberately “get exercise” bodes poorly: I imagine the middle-class city dweller paying money for a parking space or transit pass in lieu of walking a mile or two to the office, who then pays more money for a gym membership (and drives or buses there). I grew up three miles from the Atlantic Ocean; during the summer, tanning salons a block and a half from the beach would still be doing a brisk business. To see ourselves as distinct and apart from our fellow creatures is to see ourselves as distinct and apart from our bodies. The results of adopting this philosophy have been rather demonstrably weird. Turing Machines and the Corporeal IOU Wanting to get a handle on how these questions of soul and body intersect computer science, I called up the University of New Mexico’s and the Santa Fe Institute’s Dave Ackley, a professor in the field of artificial life. “To me,” he says, “and this is one of the rants that I’ve been on, that ever since von Neumann and Turing and the ENIAC guys15 built machines, the model that they’ve used is the model of the conscious mind—one thing at a time, nothing changing except by conscious thought—no interrupts, no communication from the outside world.

This fits in nicely with our sense of an inner homunculus pulling the levers and operating our body from a control room behind our eyeballs. It fits in nicely with Aristotle’s notion that thinking is the most human thing we can do. And so we compensate accordingly. I almost wonder if micromanagement comes from the same over-biasing of deliberate conscious awareness that led, both to and out of, the Turing machine model of computation underlying all of our computers today. Aware of everything, acting logically, from the top down, step-by-step. But bodies and brains are, of course, not like that at all. Micromanagement and out-of-control executive compensation are odd in a way that dovetails precisely with what’s odd about our rationalist, disembodied, brain-in-a-vat ideas about ourselves. When I fight off a disease bent on my cellular destruction, when I marvelously distribute energy and collect waste with astonishing alacrity even in my most seemingly fatigued moments, when I slip on ice and gyrate crazily but do not fall, when I unconsciously counter-steer my way into a sharp bicycle turn, taking advantage of physics I do not understand using a technique I am not even aware of using, when I somehow catch the dropped oranges before I know I’ve dropped them, when my wounds heal in my ignorance, I realize how much bigger I am than I think I am.

Engineering Security by Peter Gutmann

A better use of the time and effort required for user education would have been to concentrate on making the types of documents that are sent as attachments purely passive and unable to cause any action on the destination machine. A generalisation of this problem is that we have Turing machines everywhere — in the pursuit of extensibility, everything from Word documents to web site URLs has been turned into a programming language [626]. There’s even a standards group that manages the creation of such embedded Turing machines [627][628]49. While a pen-tester once reported that seeing Javascript embedded in DNS results get executed on the target machine came as a considerable surprise to him, the later discovery that the same thing worked just as well with DHCP packets [629][630] and SSIDs [631][632][633][634] (complete with ready-made tools to exploit them [635]) confirmed that we really do have Turing machines in the most surprising places. Another security pen-tester was in the process of creating one of those whoopee-cushion XML files that recursively expands to billions of nodes, and was about to upload it to a web server to watch the fireworks.

To the user they’re blurred into a single “do whatever needs to be done to this object” action, to the extent that even experienced Windows and Mac users have a hard time comprehending what Unix execute permission bits are for since for them there’s no distinct concept of “program execution” [636]. Another surprising location where Turing machines turn up is during the boot process of any reasonably recent computer (via the hardware-independent ACPI machine language), in fonts (via a stripped-down Postscript interpreter; this was used for one of the iPhone jailbreaks), and in UEFI, the Unified Extensible Firmware Interface (via EFI Byte Code, or EBC) [637]. One of the most unusual Turing machines is built into the RAR archiver. RAR attempts to limit what the code being interpreted by the Turing machine can do by requiring that actions like screen output run from data with a fixed CRC32 value, so that only static strings can be displayed. Since CRC32 isn’t a cryptographically strong hash function and it’s relatively easy to make your data have an arbitrary CRC value [638], all that’s required in order to bypass the check is to have your code modify the data to produce the expected CRC value, whereupon the Turing machine’s interpreter processes it without further ado [639].
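The weakness described here stems from CRC32 being a linear (strictly, affine) function of its input under XOR, so a desired CRC can be engineered algebraically rather than searched for. A small demonstration of that linearity (the messages are made up; this is not the RAR check itself): for any three equal-length inputs, the XOR of their CRCs equals the CRC of their XOR.

```python
import zlib

def xor_bytes(x, y, z):
    """XOR three equal-length byte strings together."""
    return bytes(a ^ b ^ c for a, b, c in zip(x, y, z))

# Three arbitrary equal-length messages: no cryptographic mixing at all.
m1, m2, m3 = b"pay $100 ", b"pay $999 ", b"transfer!"

lhs = zlib.crc32(m1) ^ zlib.crc32(m2) ^ zlib.crc32(m3)
rhs = zlib.crc32(xor_bytes(m1, m2, m3))
print(lhs == rhs)  # True
```

This algebraic structure is exactly why forcing data to a chosen CRC value takes a handful of XOR operations, never a brute-force search, in contrast to a cryptographic hash.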

Further Turing machines have been discovered in the x86 architecture’s interrupt handling and memory translation tables [640] and in ELF executable metadata [641], with the discovery of further Turing-machine mechanisms limited only by the imagination of security researchers. You can’t even trust hardcopy any more, since it’s a trivial task to use the programmability of printer languages like Postscript to have the screen display one thing (for example a payment value of $1,000) and the printout display another ($10,000 or $100, depending on which way you want to go) [642].

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence by John Brockman

Nowadays we have some novel performing entities, such as Apple Siri, Microsoft Cortana, Google Now, and Amazon Echo. These exciting modern services often camp it up with “female” vocal chat. They talk like Turing women—or, rather, they emit lines of dialog somewhat like voice-over actresses. However, they also offer swift access to vast fields of combinatorial Big Data that no human brain could ever contain, or will ever contain. These services are not stand-alone Turing Machines. They’re amorphous global networks, combing through clouds of Big Data, algorithmically cataloging responses from human users, providing real-time user response with wireless broadband, while wearing the pseudohuman mask of a fake individual so as to meet some basic interface-design needs. That’s what they are. Every aspect of the tired “artificial intelligence” metaphor actively gets in the way of our grasping how, why, where, and for whom that is done.

In fact, natural cognition is likely much more complex and detailed than our current incarnations of artificial intelligence or cognitive computing. For example, how sophisticated do we have to imagine natural cognition, when quantum coherence at room temperature can help birds in our garden sense the magnetic field? How complex do we have to imagine embodied cognition in octopi, when it’s possible to build Turing Machines made exclusively of artificial muscles? How should we answer these questions, when we’re still far from recording in full detail what’s going on in our brains? My guess is that in 200 years our current thinking machines will look as primitive as the original Mechanical Turk. However sophisticated they may become, our machines are still primitive compared to the resolution and efficiency of natural cognition.

MACHINES THAT THINK? NUTS!

STUART A. KAUFFMAN
Pioneer of biocomplexity research; affiliate, Institute for Systems Biology, Seattle; author, Reinventing the Sacred: A New View of Science, Reason, and Religion

The advent of quantum biology (light-harvesting molecules, bird navigation, perhaps smell) suggests that sticking to classical physics in biology may turn out to be simply stubborn. Now Turing Machines are discrete-state (0,1), discrete-time (T, T+1) subsets of classical physics. We all know they, like Shannon information, are merely syntactic. Wonderful mathematical results such as Gregory Chaitin’s omega—the probability that a random program will halt, which is totally non-computable and nonalgorithmic—tell us that the human mind, as Roger Penrose also argued, cannot be merely algorithmic. Mathematics is creative.

The Art of Computer Programming: Fundamental Algorithms by Donald E. Knuth

A suitable notation for coroutines in ALGOL-like languages was introduced in Dahl and Nygaard's SIMULA I [CACM 9 (1966), 671–678], and several excellent examples of coroutines (including replicated coroutines) appear in the book Structured Programming by O.-J. Dahl, E. W. Dijkstra, and C. A. R. Hoare, Chapter 3. The first interpretive routine may be said to be the "Universal Turing Machine," a Turing machine capable of simulating any other Turing machine. Turing machines are not actual computers; they are theoretical constructions used to prove that certain problems are unsolvable by algorithms. Interpretive routines in the conventional sense were mentioned by John Mauchly in his lectures at the Moore School in 1946. The most notable early interpreters, chiefly intended to provide a convenient means of doing floating point arithmetic, were certain routines for the Whirlwind I (by C.

At the time this chapter was first written, several interesting results of this kind had been obtained (notably by J. Hartmanis and R. E. Stearns) but only for special classes of Turing machines having multiple tapes and read/write heads. The Turing machine model is comparatively unrealistic, so these results tended to have little to do with practical problems. We must admit that, as the number n of nodes created by a linking automaton approaches infinity, we don't know how to build such a device physically, since we want the machine operations to take the same amount of time regardless of the size of n; if linking is represented by using addresses as in a computer memory, it is necessary to put a bound on the number of nodes, since the link fields have a fixed size. A multitape Turing machine is therefore a more realistic model when n approaches infinity. Yet it seems reasonable to believe that a linking automaton as described above leads to a more appropriate theory of the complexity of algorithms than Turing machines do, even when asymptotic formulas for large n are considered, because the theory is more likely to be relevant for practical values of n.

Yet it seems reasonable to believe that a linking automaton as described above leads to a more appropriate theory of the complexity of algorithms than Turing machines do, even when asymptotic formulas for large n are considered, because the theory is more likely to be relevant for practical values of n. Furthermore when n gets bigger than 10^30 or so, not even a one-tape Turing machine is realistic: It could never be built. Relevance is more important than realism. Many years have passed since the author wrote most of the comments above, and everybody can be glad that substantial progress has indeed been made on the theory of linking automata (now called pointer machines). But of course much still remains to be done.

General rules for programming have been discovered. Most of them have been used in the Kansas City freight yards for a long time.
— DERRICK LEHMER (1949)

You will, I am sure, agree with me ... that if page 534 finds us only in the second chapter, the length of the first one must have been really intolerable.
— SHERLOCK HOLMES, in The Valley of Fear (1888)

ANSWERS TO EXERCISES

I am not bound to please thee with my answers.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell

Modern revival of the topic in the context of global ecology: Garrett Hardin, “The tragedy of the commons,” Science 162 (1968): 1243–48.

30. It’s quite possible that even if we had tried to build intelligent machines from chemical reactions or biological cells, those assemblages would have turned out to be implementations of Turing machines in nontraditional materials. Whether an object is a general-purpose computer has nothing to do with what it’s made of.

31. Turing’s breakthrough paper defined what is now known as the Turing machine, the basis for modern computer science. The Entscheidungsproblem, or decision problem, in the title is the problem of deciding entailment in first-order logic: Alan Turing, “On computable numbers, with an application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 2nd ser., 42 (1936): 230–65.

32.

Turing’s paper introducing universality was one of the most important ever written. In it, he described a simple computing device that could accept as input the description of any other computing device, together with that second device’s input, and, by simulating the operation of the second device on its input, produce the same output that the second device would have produced. We now call this first device a universal Turing machine. To prove its universality, Turing introduced precise definitions for two new kinds of mathematical objects: machines and programs. Together, the machine and program define a sequence of events—specifically, a sequence of state changes in the machine and its memory. In the history of mathematics, new kinds of objects occur quite rarely. Mathematics began with numbers at the dawn of recorded history.
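The shape of Turing's construction, one fixed machine that accepts another machine's description together with that machine's input and reproduces its output, can be miniaturized. The sketch below simulates only finite-state transducers, far weaker than Turing machines, but the division of labor (one universal routine, many machine descriptions treated as data) is the same idea; the description format is invented for illustration.

```python
def universal(description, tape):
    """One fixed routine that simulates any described machine.

    `description` encodes a finite-state transducer: a start state plus a
    transition table mapping (state, symbol) -> (next_state, output_symbol).
    """
    state = description["start"]
    out = []
    for symbol in tape:
        state, emitted = description["table"][(state, symbol)]
        out.append(emitted)
    return "".join(out)

# The "program" is just data: this description flips every bit it reads.
flipper = {
    "start": "s",
    "table": {("s", "0"): ("s", "1"), ("s", "1"): ("s", "0")},
}
print(universal(flipper, "10110"))  # -> 01001
```

Handing `universal` a different description changes its behavior without changing a line of its code, which is the essence of the machine/program split Turing formalized.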

[Back-of-book index excerpt, flattened by text extraction; entries run alphabetically from “Tegmark, Max” through “Zuckerberg, Mark,” and include “Turing, Alan,” “Turing test, 40–41,” and “universal Turing machine, 33, 40–41.”]

About the Author. Stuart Russell is a professor of Computer Science and holder of the Smith-Zadeh Chair in Engineering at the University of California, Berkeley.

pages: 661 words: 187,613

The Language Instinct: How the Mind Creates Language by Steven Pinker

It took Alan Turing, the brilliant British mathematician and philosopher, to make the idea of a mental representation scientifically respectable. Turing described a hypothetical machine that could be said to engage in reasoning. In fact this simple device, named a Turing machine in his honor, is powerful enough to solve any problem that any computer, past, present, or future, can solve. And it clearly uses an internal symbolic representation—a kind of mentalese—without requiring a little man or any occult processes. By looking at how a Turing machine works, we can get a grasp of what it would mean for a human mind to think in mentalese as opposed to English. In essence, to reason is to deduce new pieces of knowledge from old ones. A simple example is the old chestnut from introductory logic: if you know that Socrates is a man and that all men are mortal, you can figure out that Socrates is mortal.
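Pinker's example of deduction, inferring that Socrates is mortal from "Socrates is a man" and "all men are mortal," is mechanical enough to run. The sketch below is a toy forward-chaining rule engine, not anything from the book; the fact and rule encodings are invented for the demonstration.

```python
def forward_chain(facts, rules):
    """Derive new facts until nothing more follows.

    Facts are (predicate, subject) pairs; a rule (p, q) means
    "everything that is p is also q" (a restricted modus ponens).
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for if_pred, then_pred in rules:
            for pred, subject in list(known):
                new_fact = (then_pred, subject)
                if pred == if_pred and new_fact not in known:
                    known.add(new_fact)
                    changed = True
    return known

facts = {("man", "Socrates")}
rules = [("man", "mortal")]  # all men are mortal
print(("mortal", "Socrates") in forward_chain(facts, rules))  # True
```

The derivation is purely formal symbol manipulation, no understanding required, which is precisely the point of the mentalese analogy.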

A grammar composed of a set of phrase structure rules, which build a deep-structure tree, and one or more transformational rules, which move the phrases in the deep structure to yield a surface-structure tree. transitive. See intransitive. Turing machine. A design for a simple computer consisting of a potentially infinite strip of paper, and a processor that can move along the paper and print or erase symbols on it in a sequence that depends on which symbol the processor is currently reading and which of several states it is in. Though too clumsy for practical use, a Turing machine is thought to be capable of computing anything that any digital computer, past, present, or future, can compute. Universal Grammar. The basic design underlying the grammars of all human languages; also refers to the circuitry in children’s brains that allows them to learn the grammar of their parents’ language.

The neuron at the receiving end adds up any signals coming in from excitatory synapses, subtracts any signals coming in from inhibitory synapses, and if the sum exceeds a threshold, the receiving neuron becomes active itself. A network of these toy neurons, if large enough, can serve as a computer, calculating the answer to any problem that can be specified precisely, just like the page-crawling Turing machine in Chapter 3 that could deduce that Socrates is mortal. That is because toy neurons can be wired together in a few simple ways that turn them into “logic gates,” devices that can compute the logical relations “and,” “or,” and “not” that underlie deduction. The meaning of the logical relation “and” is that the statement “A and B” is true if A is true and if B is true. An AND gate that computes that relation would be one that turns itself on if all of its inputs are on.
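The toy neurons described above (sum the excitatory inputs, subtract the inhibitory ones, fire past a threshold) really do yield logic gates. A minimal sketch, with weights and thresholds chosen here purely for illustration:

```python
def neuron(weights, threshold):
    """A toy neuron: fires (1) when the weighted sum of its inputs
    reaches the threshold.  Positive weights act as excitatory
    synapses, negative weights as inhibitory ones."""
    def fire(*inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return fire

AND = neuron([1, 1], threshold=2)  # on only when every input is on
OR  = neuron([1, 1], threshold=1)  # on when at least one input is on
NOT = neuron([-1], threshold=0)    # a single inhibitory synapse inverts

print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print(NOT(1), NOT(0))  # 0 1
```

Since AND, OR, and NOT together suffice for any Boolean function, a large enough network of such units can in principle compute anything a digital circuit can.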

pages: 285 words: 78,180

Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life by J. Craig Venter

In 1929 the young Irish crystallographer John Desmond Bernal (1901–1971) imagined the possibility of machines with a lifelike ability to reproduce themselves, in a “post-biological future” he described in The World, the Flesh & the Devil: “To make life itself will be only a preliminary stage. The mere making of life would only be important if we intended to allow it to evolve of itself anew.” A logical recipe to create these complex mechanisms was developed in the next decade. In 1936 Alan Turing, the cryptographer and pioneer of artificial intelligence, described what has come to be known as a Turing machine, a device whose behaviour is specified by a set of instructions written on a tape. Turing also defined a universal Turing machine, which can carry out any computation for which an instruction set can be written. This is the theoretical foundation of the digital computer. Turing’s ideas were developed further in the 1940s by the remarkable American mathematician and polymath John von Neumann, who conceived of a self-replicating machine. Just as Turing had envisaged a universal machine, so von Neumann envisaged a universal constructor.
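The classic software analogue of a self-reproducing automaton is a quine: a program whose output is its own source code. Von Neumann's insight that the blueprint must be both interpreted (to build the copy) and duplicated verbatim (to pass on) appears in miniature as a string that is both executed and quoted. A sketch, where the surrounding harness exists only to capture and check the output:

```python
import io
from contextlib import redirect_stdout

# A two-line Python quine: the string `s` plays the role of the blueprint,
# used once as data (quoted via %r) and once as instructions (the print).
program = "s = 's = %r\\nprint(s %% s)'\nprint(s % s)\n"

buf = io.StringIO()
with redirect_stdout(buf):
    exec(program)

print(buf.getvalue() == program)  # True: the program reproduced itself
```

Nothing biological happens here, but the same blueprint/copier structure is what von Neumann's universal constructor formalizes, and what DNA replication later turned out to embody.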

Watson and Crick described the elegantly functional molecular structure of the double helix, and how DNA is reproduced so its instructions can be passed down the generations. This is nature’s self-reproducing automaton. The onset of efforts to create another kind of self-reproducing automaton, along with the beginnings of artificial-life research, dates back to around this period, when the first modern computers came into use. The discovery of the coded nature of life’s genetic information system led naturally to parallels with Turing machines. Turing himself, in his key 1950 paper on artificial intelligence, discussed how survival of the fittest was “a slow method” that could possibly be given a boost, not least because an experimenter was not restricted to random mutations.26 Many began to believe that artificial life would emerge from complex logical interactions within a computer. Various streams of thought combined at this point: the theories of von Neumann, with his work on early computers and his self-reproducing automaton; of Turing, who posed basic questions about machine intelligence27; and of the American mathematician Norbert Wiener, who applied ideas from information theory and self-regulating processes to living things in the field of cybernetics,28 described in his book Cybernetics, published in 1948.

pages: 720 words: 197,129

The Innovators: How a Group of Inventors, Hackers, Geniuses and Geeks Created the Digital Revolution by Walter Isaacson

Gödel’s incompleteness theorem, the indeterminacy of quantum mechanics, and Turing’s answer to Hilbert’s third challenge all dealt blows to a mechanical, deterministic, predictable universe. Turing’s paper was published in 1937 with the not so snappy title “On Computable Numbers, with an Application to the Entscheidungsproblem.” His answer to Hilbert’s third question was useful for the development of mathematical theory. But far more important was the by-product of Turing’s proof: his concept of a Logical Computing Machine, which soon came to be known as a Turing machine. “It is possible to invent a single machine which can be used to compute any computable sequence,” he declared.10 Such a machine would be able to read the instructions of any other machine and carry out whatever task that machine could do. In essence, it embodied the dream of Charles Babbage and Ada Lovelace for a completely general-purpose universal machine. A different and less beautiful solution to the Entscheidungsproblem, with the clunkier name “untyped lambda calculus,” had been published earlier that year by Alonzo Church, a mathematician at Princeton.

The seismic shifts and simultaneous advances of 1937 were not directly caused by the publication of Turing’s paper. In fact, it got little notice at first. Turing asked his mother to send out reprints of it to the mathematical philosopher Bertrand Russell and a half dozen other famous scholars, but the only major review was by Alonzo Church, who could afford to be flattering because he had been ahead of Turing in solving Hilbert’s decision problem. Church was not only generous; he introduced the term Turing machine for what Turing had called a Logical Computing Machine. Thus at twenty-four, Turing’s name became indelibly stamped on one of the most important concepts of the digital age.12 CLAUDE SHANNON AND GEORGE STIBITZ AT BELL LABS There was another seminal theoretical breakthrough in 1937, similar to Turing’s in that it was purely a thought experiment. This one was the work of an MIT graduate student named Claude Shannon, who that year turned in the most influential master’s thesis of all time, a paper that Scientific American later dubbed “the Magna Carta of the Information Age.”13 Shannon grew up in a small Michigan town where he built model planes and amateur radios, then went on to major in electrical engineering and math at the University of Michigan.

In 1939 Zuse began work on a third model, the Z3, that used electromechanical relays both for the arithmetic unit and for the memory and control units. When it was completed in 1941, it became the first fully working all-purpose, programmable digital computer. Even though it did not have a way to directly handle conditional jumps and branching in the programs, it could theoretically perform as a universal Turing machine. Its major difference from later computers was that it used clunky electromagnetic relays rather than electronic components such as vacuum tubes or transistors. Zuse’s friend Schreyer went on to write a doctoral thesis, “The Tube Relay and the Techniques of Its Switching,” that advocated using vacuum tubes for a powerful and fast computer. But when he and Zuse proposed it to the German Army in 1942, the commanders said they were confident that they would win the war before the two years it would take to build such a machine.27 They were more interested in making weapons than computers.

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

Fortunately Bletchley Park had a secret weapon of its own—Alan Turing. Before the war, Turing had studied mathematics and encryption at Cambridge and Princeton. He had imagined an “automatic machine,” now known as a Turing machine. The automatic machine laid out the basic principles of computation itself. The Church-Turing hypothesis, which combined Turing’s work with that of his Princeton professor, mathematician Alonzo Church, really puts the starch in the pants of the study of artificial intelligence. It proposes that anything that can be computed by an algorithm, or program, can be computed by a Turing machine. Therefore, if brain processes can be expressed as a series of instructions—an algorithm—then a computer can process information the same way. In other words, unless there’s something mystical or magical about human thinking, intelligence can be achieved by a computer.
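The automatic machine the passage describes can be sketched in a few lines of Python (a hypothetical illustration, not anything from the book): a rule table, a tape, and a head that reads, writes, and moves. The toy rule table below simply flips every bit of its input and halts.

```python
# Minimal Turing machine sketch (illustrative names, invented rule table).
# A rule table maps (state, symbol) -> (new_state, new_symbol, move),
# where move is -1 (left) or +1 (right).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    # read back the non-blank portion of the tape, left to right
    cells = [tape[i] for i in sorted(tape) if tape[i] != blank]
    return state, "".join(cells)

# Rule table for a bit-flipper: scan right, inverting 0s and 1s.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_", +1),
}

state, output = run_turing_machine(flip, "10110")
print(state, output)   # -> halt 01001
```

Any algorithm expressible as such a rule table can, per the Church-Turing hypothesis, be run by a machine of this shape.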


pages: 492 words: 118,882

The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory by Kariappa Bheemaiah

Luckily for the Allies, hope lay in the form of some work that had been done a few years earlier by another Cambridge mathematician, Alan Turing. Along with his mentor, Max Newman, Turing set about designing and building automated machines (Turing Machines) that could decrypt secret German military communications (as documented in the popular movie, ‘The Imitation Game’). However, owing to an obsession for secrecy during the war years and for several years after that, the achievements made by Turing and the team at Bletchley Park in computer development were kept hidden from view. Instead of Turing Machines, over the same time period, a machine called the ENIAC (Electronic Numerical Integrator And Computer) was being developed by John Mauchly and Presper Eckert across the Atlantic. Mauchly, a physicist who was interested in meteorology, tried to develop a weather prediction model.

Charles Darwin, best known for the science of evolution, built his classification system on the work of Carl Linnaeus (1707-1778), the father of taxonomy. The Differential Analyser consisted of multiple rotating disks and cylinders driven by electric motors linked together with metal rods that were manually set up (sometimes taking up to two days) to solve any differential equation problem. … various militaries to communicate sensitive information by integrating the techniques of cryptography - a kind of natural selection. To combat this, pioneers such as Alan Turing and his mentor Max Newman set about designing and building automated machines (Turing Machines) that could decrypt these camouflaged communiqués. This effectively changed the use of the computer and increased the diversity of the kinds of computers. After the war, advances by notable inventors such as John Mauchly, Presper Eckert and John von Neumann (a veritable polymath) led to the creation of the EDVAC (Electronic Discrete Variable Automatic Computer), the first binary computer. With binary computers coming of age, there was an increasing need to develop software to give instructions to computers.

It was during the time of developing ENIAC that he met the renowned polymath, John von Neumann, and with his help went on to design a stored-program computer, the EDVAC (Electronic Discrete Variable Automatic Computer), the first binary computer (ENIAC was decimal). See Figure 4-11. Figure 4-11. General design of the Electronic Discrete Variable Automatic Computer. Reference Source: ‘The von Neumann Architecture’, The Computing Universe, 2014 From an abstract architecture perspective, von Neumann’s design is logically equivalent to Turing’s Universal Turing Machine. In fact, von Neumann had read Turing’s theoretical papers prior to designing his machine. Ultimately it was this simple design that was built upon by successive generations of computer scientists and led to the design of computers with multiple processors and the creation of parallel computing. The period following the war saw great strides being made in the hardware of computers.
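The core of the stored-program design, instructions and data sharing a single memory walked by a fetch-decode-execute loop, can be sketched as a toy machine in Python (the instruction set here is invented for illustration, not EDVAC's):

```python
# Toy stored-program machine: code and data live in the SAME memory
# array, and a fetch-decode-execute loop steps through it.
# (Hypothetical instruction set, for illustration only.)

def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]   # fetch
        pc += 2
        if op == "LOAD":                 # decode + execute
            acc = memory[arg]            # data fetched from the same memory
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program: mem[10] + mem[11] -> mem[12]. Cells 0-7 hold instructions,
# cells 10-12 hold data -- one undifferentiated memory.
mem = ["LOAD", 10, "ADD", 11, "STORE", 12, "HALT", 0, None, None, 2, 3, None]
print(run(mem)[12])   # -> 5
```

Because instructions are just memory contents, a program could in principle rewrite itself, which is precisely what makes the design logically equivalent to a universal Turing machine.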

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

By the Church–Turing thesis, all computable functions are computable by a Turing machine. Since any of the three forms of superintelligence could simulate a Turing machine (if given access to unlimited memory and allowed to operate indefinitely), they are by this formal criterion computationally equivalent. Indeed, an average human being (provided with unlimited scrap paper and unlimited time) could also implement a Turing machine, and thus is also equivalent by the same criterion. What matters for our purposes, however, is what these different systems can achieve in practice, with finite memory and in reasonable time. And the efficiency variations are so great that one can readily make some distinctions. For example, a typical individual with an IQ of 85 could be taught to implement a Turing machine. (Conceivably, it might even be possible to train some particularly gifted and docile chimpanzee to do this.)

Nets lacking hidden layers had previously been shown to have severely limited functionality (Minsky and Papert 1969). 26. E.g., MacKay (2003). 27. Murphy (2012). 28. Pearl (2009). 29. We suppress various technical details here in order not to unduly burden the exposition. We will have occasion to revisit some of these ignored issues in Chapter 12. 30. A program p is a description of string x if p, run on (some particular) universal Turing machine U, outputs x; we write this as U(p) = x. (The string x here represents a possible world.) The Kolmogorov complexity of x is then K(x) := min_p {l(p) : U(p) = x}, where l(p) is the length of p in bits. The “Solomonoff” probability of x is then defined as M(x) := Σ_p 2^(−l(p)), where the sum is taken over all (“minimal,” i.e., not necessarily halting) programs p for which U outputs a string starting with x (Hutter 2005). 31.
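K(x) is uncomputable in general, but any off-the-shelf compressor yields an upper bound on it: the compressed string plus a fixed-size decompressor is itself a program that outputs x. A rough sketch using Python's zlib (ignoring the constant overhead of the decompressor; the function name is illustrative):

```python
# Upper-bounding Kolmogorov complexity via compression: len(zlib.compress(x))
# bits/bytes is an upper bound on K(x) up to an additive constant,
# because "decompress this blob" is a program that prints x.
import zlib

def complexity_upper_bound(x: bytes) -> int:
    return len(zlib.compress(x, level=9))

regular = b"01" * 500                  # highly patterned: short description
print(complexity_upper_bound(regular) < len(regular))   # -> True
```

A patterned string compresses far below its own length, matching the intuition that simple worlds have short descriptions and hence high Solomonoff probability.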

pages: 171 words: 51,276

Infinity in the Palm of Your Hand: Fifty Wonders That Reveal an Extraordinary Universe by Marcus Chown

Precisely how it works—a read/write head changing the digits one at a time—is not really important. The crucial thing is that Turing’s machine can be fed a description, encoded in binary, of any other machine and will then simulate that machine. Because of this unprecedented ability, Turing called it a Universal Machine. Today, it is referred to as a Universal Turing Machine. Clearly, it is unrecognizable as a computer. But that is exactly what it is. A Universal Turing Machine is the simplest computer imaginable: the irreducible atom of computing. Ironically, Turing devised his machine-of-the-mind to show not what a computer can do but what it cannot do. As a mathematician, it was the ultimate limit of computers that interested him. Turing’s search for an incomputable problem did not take him long. In fact, hardly had he begun looking when he stumbled on a simple task that no computer, no matter how powerful, could ever carry out.
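The simple task Turing stumbled on is, in modern terms, the halting problem, and the diagonal argument behind it can be sketched in Python. The `halts` function below is a hypothetical stub, since the whole point is that no correct implementation of it can exist:

```python
# Sketch of Turing's diagonal argument. Suppose `halts(program, argument)`
# could correctly decide whether any program halts on any input. Then
# `trouble` defeats it -- so no such decider can exist.

def halts(program, argument) -> bool:
    ...   # assumed decider; provably impossible to implement in general

def trouble(program):
    if halts(program, program):      # if it would halt on itself...
        while True:                  # ...loop forever instead
            pass
    return "halted"                  # otherwise, halt

# trouble(trouble): if halts says True, trouble loops forever; if it says
# False, trouble halts. Either answer contradicts the decider.
print(trouble(trouble))   # -> halted  (only because our stub answers "no")
```

The stub returns None (falsy), so running this actually halts; with any genuinely correct `halts`, the call would contradict its own answer.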

Applied Cryptography: Protocols, Algorithms, and Source Code in C by Bruce Schneier

Complexity of Problems

Complexity theory also classifies the inherent complexity of problems, not just the complexity of particular algorithms used to solve problems. (Excellent introductions to this topic are [600, 211, 1226]; see also [1096, 27, 739].) The theory looks at the minimum time and space required to solve the hardest instance of a problem on a theoretical computer known as a Turing machine. A Turing machine is a finite-state machine with an infinite read-write memory tape. It turns out that a Turing machine is a realistic model of computation. Problems that can be solved with polynomial-time algorithms are called tractable, because they can usually be solved in a reasonable amount of time for reasonable-sized inputs. (The exact definition of “reasonable” depends on the circumstance.) Problems that cannot be solved in polynomial time are called intractable, because calculating their solution quickly becomes infeasible.
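A quick numeric illustration of why polynomial time counts as tractable and exponential time does not (plain Python, just comparing step counts for a cubic algorithm versus a brute-force exponential one):

```python
# Step counts for an n^3 algorithm vs. a 2^n one at modest input sizes.
# The polynomial grows; the exponential explodes.
for n in (10, 50, 100):
    print(f"n={n:>3}  n^3={n**3:>10,}  2^n={2**n:.2e}")
```

At n = 100 the cubic algorithm needs a million steps, trivially feasible, while 2^n is around 10^30, beyond any conceivable hardware.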

Problems can be divided into complexity classes, which depend on the complexity of their solutions. Figure 11.1 shows the more important complexity classes and their presumed relationships. (Unfortunately, not much about this material has been proved mathematically.) On the bottom, the class P consists of all problems that can be solved in polynomial time. The class NP consists of all problems that can be solved in polynomial time only on a nondeterministic Turing machine: a variant of a normal Turing machine that can make guesses. The machine guesses the solution to the problem—either by making “lucky guesses” or by trying all guesses in parallel—and checks its guess in polynomial time. NP’s relevance to cryptography is this: Many symmetric algorithms and all public-key algorithms can be cracked in nondeterministic polynomial time. Given a ciphertext C, the cryptanalyst simply guesses a plaintext, X, and a key, k, and in polynomial time runs the encryption algorithm on inputs X and k and checks whether the result is equal to C.
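The guess-and-check structure can be sketched with a toy single-byte XOR "cipher" (for illustration only; not a real cryptosystem, and not a scheme from the book): however hard the key is to find, verifying a guessed (X, k) pair takes only time linear in the message.

```python
# NP-style structure of cryptanalysis: FINDING (X, k) may require search,
# but CHECKING a guess is fast -- re-encrypt and compare to C.
# Toy single-byte XOR cipher, purely illustrative.

def encrypt(plaintext: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in plaintext)

def check_guess(ciphertext: bytes, plaintext_guess: bytes, key_guess: int) -> bool:
    # polynomial-time (here, linear) verification step
    return encrypt(plaintext_guess, key_guess) == ciphertext

C = encrypt(b"attack at dawn", key=0x5A)
print(check_guess(C, b"attack at dawn", 0x5A))   # -> True
print(check_guess(C, b"retreat now!!!", 0x42))   # -> False
```

A nondeterministic machine would "guess" the right pair in one step; a deterministic one must search the key space, which is what makes brute force expensive.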

Furthermore, this argument is not applicable to all classes of ciphers; in particular, it is not applicable to one-time pads—for any C, there are many X, k pairs that yield C when run through the encryption algorithm, but most of these Xs are nonsense, not legitimate plaintexts.

Figure 11.1 Complexity classes.

The class NP includes the class P, because any problem solvable in polynomial time on a deterministic Turing machine is also solvable in polynomial time on a nondeterministic Turing machine; the guessing stage can simply be omitted. If all NP problems are solvable in polynomial time on a deterministic machine, then P = NP. Although it seems obvious that some NP problems are much harder than others (a brute-force attack against an encryption algorithm versus encrypting a random block of plaintext), it has never been proven that P ≠ NP (or that P = NP).

pages: 209 words: 53,236

The Scandal of Money by George Gilder

Each of these thinkers attempted to define his philosophy in utilitarian and determinist mathematics. Addressing pure logic as math, Gödel concluded that even arithmetic cannot constitute a complete and coherent system. All logical schemes have to move beyond self-referential circularity and invoke axioms outside themselves. Turing explored the possibility of a complete and self-sufficient logical machine and found it an impossible dream. His “Turing machine” defined the abstract logical architecture of all computers. But all computers must depend on what Turing called human “oracles” to define their symbols, instructions, and programs and to interpret their output, which as a stream of off-and-on currents or charges is ostensibly meaningless.1 Shannon set out to create a purely mathematical definition of information and ended up providing a logical scheme of communication that depends on human subjectivity and creativity for meaning and purpose.


pages: 523 words: 143,139

Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian, Tom Griffiths

See, for example, Russell and Wefald, Do the Right Thing, and Horvitz and Zilberstein, “Computational Tradeoffs Under Bounded Resources.” Tom and his colleagues have used this approach to develop models of human cognition; see Griffiths, Lieder, and Goodman, “Rational Use of Cognitive Resources.” analogy to a human mathematician: In section 9 of Turing, “On Computable Numbers,” Turing justifies the choices made in defining what we now call a Turing machine by comparing them to operations that a person might carry out: a two-dimensional piece of paper becomes a one-dimensional tape, the person’s state of mind becomes the state of the machine, and symbols are written and read as the person or machine moves around on the paper. Computation is what a computer does, and at the time the only “computers” were people. we are irrational and error-prone: For example, see Gilovich, How We Know What Isn’t So; Ariely and Jones, Predictably Irrational; and Marcus, Kluge. 1.

man vs. nature: Appropriately, schoolchildren in the twenty-first century increasingly learn about “person vs. nature,” “person vs. self,” “person vs. person,” and “person vs. society.” “a clever man would put the poison into his own goblet”: The Princess Bride, screenplay by William Goldman; 20th Century Fox, 1987. “anticipating the anticipations of others”: Attributed to Keynes in Gregory Bergman, Isms, Adams Media, 2006. it was the halting problem that inspired Turing: Alan Turing considers the halting problem and proposes the Turing machine in “On Computable Numbers, with an Application to the Entscheidungsproblem” and “On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction.” “poker players call it ‘leveling’”: Dan Smith, personal interview, September 11, 2014. “You don’t have deuce–seven”: This took place at the “Full Tilt Poker Durrrr Million Dollar Challenge,” held at Les Ambassadeurs Club in London, November 17–19, 2009, and was televised on Sky Sports.


pages: 573 words: 157,767

From Bacteria to Bach and Back: The Evolution of Minds by Daniel C. Dennett

This is very hard to imagine or even to take seriously, and this has inspired some thinkers to conclude that since evolution couldn’t create a computer (or a computer program to run on it), human minds must not be products of natural selection alone, and the aspirations of Artificial Intelligence must be forlorn. The mathematician and physicist Roger Penrose (1989) is the most illustrious example. For the sake of argument let’s concede that evolution by natural selection could not directly evolve a living digital computer (a Turing machine tree or a Turing machine turtle, for example). But there is an indirect way: let natural selection first evolve human minds, and then they can intelligently design Hamlet, La Sagrada Familia, and the computer, among many other wonders. This bootstrapping process seems almost magical at first, even self-contradictory. Isn’t Shakespeare, or Gaudí, or Turing a more magnificent, brilliant “creation” than any of their brainchildren?

More importantly, he showed that if their instructions included conditional branching (if-then instructions, such as “if you observe 0, replace it with 1 and move left, and if you observe 1 leave it as is and move right, and change to state n.”), then these machines could pursue indefinitely complex paths determined by the instructions, which gave them a remarkable competence: they could do anything computational. In other words, a programmable digital computer is a Universal Turing Machine, capable of mimicking any special-purpose digital computer by following a set of instructions that implement that special-purpose computer in software.13 (You don’t have to rewire your smartphone to get it to do new tasks; just download an app and turn it into a star finder or translator or hand calculator or spell-checker or.…) A huge Design Space of information-processing was made accessible by Turing, and he foresaw that there was a traversable path from Absolute Ignorance to Artificial Intelligence, a long series of lifting steps in that Design Space.

No, and answering this important question is a major task for the rest of the book. The short explanation is that Turing himself is one of the twigs on the Tree of Life, and his artifacts, concrete and abstract, are indirectly products of the blind Darwinian processes in the same way spider webs and beaver dams are, so there is no radical discontinuity, no need for a skyhook, to get us from spiders and beaver dams to Turing and Turing machines. Still, there is a large gap to be filled, because Turing’s way of making things was strikingly different from the spider’s way and the beaver’s way, and we need a good evolutionary account of that difference. If competence without comprehension is so wonderfully fecund—capable of designing nightingales, after all—why do we need comprehension—capable of designing odes to nightingales and computers?

pages: 893 words: 199,542

Structure and interpretation of computer programs by Harold Abelson, Gerald Jay Sussman, Julie Sussman

This was first demonstrated in a clear way by Alan M. Turing (1912-1954), whose 1936 paper laid the foundations for theoretical computer science. In the paper, Turing presented a simple computational model – now known as a Turing machine – and argued that any “effective process” can be formulated as a program for such a machine. (This argument is known as the Church-Turing thesis.) Turing then implemented a universal machine, i.e., a Turing machine that behaves as an evaluator for Turing-machine programs. He used this framework to demonstrate that there are well-posed problems that cannot be computed by Turing machines (see exercise 4.15), and so by implication cannot be formulated as “effective processes.” Turing went on to make fundamental contributions to practical computer science as well. For example, he invented the idea of structuring programs using general-purpose subroutines.


pages: 1,387 words: 202,295

Structure and Interpretation of Computer Programs, Second Edition by Harold Abelson, Gerald Jay Sussman, Julie Sussman

This was first demonstrated in a clear way by Alan M. Turing (1912-1954), whose 1936 paper laid the foundations for theoretical computer science. In the paper, Turing presented a simple computational model—now known as a Turing machine—and argued that any “effective process” can be formulated as a program for such a machine. (This argument is known as the Church-Turing thesis.) Turing then implemented a universal machine, i.e., a Turing machine that behaves as an evaluator for Turing-machine programs. He used this framework to demonstrate that there are well-posed problems that cannot be computed by Turing machines (see Exercise 4.15), and so by implication cannot be formulated as “effective processes.” Turing went on to make fundamental contributions to practical computer science as well. For example, he invented the idea of structuring programs using general-purpose subroutines.

pages: 931 words: 79,142

Concepts, Techniques, and Models of Computer Programming by Peter Van-Roy, Seif Haridi

The records of table 3.1, HTML and XML documents, and the declarative user interfaces of section 3.8.2 can all be created and transformed easily by a program. A programmable declarativeness. This is as expressive as a Turing machine. (A Turing machine is a simple formal model of computation, first defined by Alan Turing, that is as powerful as any computer that can be built, as far as is known in the current state of computer science. That is, any computation that can be programmed on any computer can also be programmed on a Turing machine.) For example, table 2.1 defines a language at this level. See the introduction to chapter 6 for more on the relationship between the descriptive and programmable levels. There are two fundamentally different ways to view programmable declarativeness: A definitional view, where declarativeness is a property of the component implementation. For example, programs written in the declarative model are guaranteed to be declarative, because of properties of the model. An observational view, where declarativeness is a property of the component interface.

Figure 2.5 shows the three ways that the translation approach has been used for defining programming languages:

[Figure 2.5: Translation approaches to language semantics.]

The kernel language approach, used throughout the book, is intended for the programmer. Its concepts correspond directly to programming concepts. The foundational approach is intended for the mathematician. Examples are the Turing machine, the λ calculus (underlying functional programming), first-order logic (underlying logic programming), and the π calculus (to model concurrency). Because these calculi are intended for formal mathematical study, they have as few elements as possible. The abstract machine approach is intended for the implementor. Programs are translated into an idealized machine, which is traditionally called an abstract machine or a virtual machine. It is relatively easy to translate idealized machine code into real machine code.

The observational view follows the principle of abstraction: that to use a component it is enough to know its specification without knowing its implementation. The component just has to behave declaratively, i.e., as if it were independent, stateless, and deterministic, without necessarily being written in a declarative computation model. This book uses both the definitional and observational views. When we are interested in looking inside a component, we will use the definitional view. When we are interested in how a component behaves, we will use the observational view.
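The observational view can be illustrated in Python rather than the book's Oz (a hypothetical sketch): the component below keeps hidden mutable state, a memo table, yet every caller sees a stateless, deterministic function.

```python
# An observationally declarative component: internal mutable state (the
# cache) is invisible at the interface -- same input, same output, no
# observable side effects. Illustrative sketch, not code from the book.

def make_fib():
    cache = {0: 0, 1: 1}             # hidden mutable state
    def fib(n: int) -> int:
        if n not in cache:
            cache[n] = fib(n - 1) + fib(n - 2)
        return cache[n]
    return fib

fib = make_fib()
print(fib(30))   # -> 832040, however many times it is called
```

By the definitional view this component is not declarative (it mutates a dictionary); by the observational view it is, since its specification is exactly that of a pure function.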

pages: 561 words: 167,631

2312 by Kim Stanley Robinson

Titan, by far the largest Saturnian moon, is bigger than Mercury or Pluto. More about Titan later.

Extracts (9)

One question for computability: is the problem capable of producing a result
If a finite number of steps will produce an answer, it is a problem that can be solved by a Turing machine
Is the universe itself the equivalent of a Turing machine? This is not yet clear
Turing machines can’t always tell when the result has been obtained. No oracle machine is capable of solving its own halting problem
A Turing jump operator assigns to each problem X a successively harder problem, X prime. Setting a Turing machine the problem of making its own Turing jump creates a recursive effect called the Ouroboros
All problems solvable by quantum computers are also solvable by classical computers. Making use of quantum mechanical phenomena only increases speed of operation
two popular physical mechanisms, dots and liquids.

Wireless by Charles Stross

” “What’s thinking got to do with—” I stop. It’s useless pretending. “Fuck. Okay, you’re a research cell working on some ultimate black problem, and you’re using the Farm because it’s about the most secure environment anyone can imagine, and you’re emulating some kind of minimal universal Turing machine using the chessboard. Say, a 2,5 UTM—two registers, five operations—you can encode the registers positionally in the chessboard’s two dimensions, and use the moves to simulate any other universal Turing machine, or a transform in an eleven-dimensional manifold like AXIOM REFUGE—” Godel’s waving frantically. “She’s coming! She’s coming!” I hear doors clanging in the distance. Shit. “But why are you so afraid of the Nurses?” “Back channels,” Cantor says cryptically. “Alan, be a good lad and try to jam the door for a minute, will you?

“You’re a theoretical research cell, aren’t you?” “We prefer to call ourselves a think tank.” Cantor nods gravely. “Or even”—Mandelbrot takes a deep breath—“a brains trust!” “A-ha! AhaHAHAHA! Hic.” Godel covers his mouth, face reddening. “What do you think the rules are?” Cantor repeats, and they’re still staring at me, as if, as if . . . “Why does it matter?” I ask. I’m thinking that it could be anything; a 2,5 universal Turing machine encoded in the moves of the pawns—that would fit—whatever it is, it’s symbolic communication, very abstract, very pared-back, and if they’re doing it in this ultimately firewalled environment and expecting to report directly to the Board, it’s got to be way above my security clearance— “Because you’re acting cagey, lad. Which makes you too bright for your own good. Listen to me: just try to convince yourself that we’re playing chess, and Matron will let you out of here.”

pages: 239 words: 64,812

Geek Sublime: The Beauty of Code, the Code of Beauty by Vikram Chandra

As later grammarians put it, we are lakṣaṇaikacakṣuṣka, solely guided by rules. Correctness is guaranteed by the correct application of rules.13 The systematic, deterministic workings of these rules may remind you of the orderly on-and-off workings of logic gates. The Ashtadhyayi is, of course, an algorithm, a machine that consumes phonemes and morphemes and produces words and sentences. Panini’s machine—which is sometimes compared to the Turing machine—is also the first known instance of the application of algorithmic thinking to a domain outside of logic and mathematics. The influence of the Ashtadhyayi was and remains immense. In the Sanskrit ecumene, later grammarians suggested some additions and modifications, and other grammars were written before and after Panini’s intervention, but all have been overshadowed by this one “tersest and yet most complete grammar of any language.”14 The West discovered the Ashtadhyayi during the great flowering of Orientalist research and translation in the eighteenth and nineteenth centuries.
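The flavor of such rule-driven derivation can be sketched as a toy string-rewriting system in Python. The rules below are schematic inventions for illustration, not Panini's actual sutras:

```python
# A toy ordered rewriting system: apply the first matching rule,
# repeat until no rule applies. The morphology/sandhi rules here are
# invented, schematic stand-ins for the real grammar.

def derive(form: str, rules: list[tuple[str, str]]) -> str:
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:           # rules are ordered, like sutras
            if lhs in form:
                form = form.replace(lhs, rhs, 1)
                changed = True
                break
    return form

rules = [
    ("bhU", "bho"),      # strengthening of the root vowel (schematic)
    ("o+a", "av+a"),     # sandhi: o before a becomes av (schematic)
    ("+", ""),           # erase morpheme boundaries
]
print(derive("bhU+a+ti", rules))   # -> bhavati
```

Correctness is guaranteed purely by the rule applications: the same input and the same ordered rules always yield the same word.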

Ada, the Enchantress of Numbers: Poetical Science. Sausalito: Critical Connection, 2010. Kindle Edition. Torvalds, Linus. “Re: Stable Linux 2.6.25.10.” Gmane.org, July 15, 2008. http://article.gmane.org/gmane.linux.kernel/706950. Turing, Alan. “On Computable Numbers, with an Application to the Entscheidungsproblem (1936).” In The Annotated Turing: A Guided Tour through Alan Turing’s Historic Paper on Computability and the Turing Machine, by Charles Petzold. Indianapolis: Wiley, 2008. Urban, Hugh B. The Economics of Ecstasy: Tantra, Secrecy, and Power in Colonial Bengal. New York: Oxford University Press, 2001. ______. The Power of Tantra: Religion, Sexuality and the Politics of South Asian Studies. London: IB Tauris, 2009. Kindle edition. Varma, Roli. “Computing Self-Efficacy among Women in India.” Journal of Women and Minorities in Science and Engineering 16, no. 3 (2010): 257–74. ______.

The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns, Aaron Roth

We deliberately say “computation” and not “computers,” because for the purposes of this book (and perhaps even generally), the most important thing to know about theoretical computer science is that it views computation as a ubiquitous phenomenon, not one that is limited to technological artifacts. The scientific justification for this view originates with the staggeringly influential work of Alan Turing (the first theoretical computer scientist) in the 1930s, who demonstrated the universality of computational principles with his mathematical model now known as the Turing machine. Many trained in theoretical computer science, ourselves included, view the field and its tools not simply as another scientific discipline but as a way of seeing and understanding the world around us—perhaps much as those trained in theoretical physics in an earlier era saw their own field. So a theoretical computer scientist sees computation taking place everywhere—certainly in computers, but also in nature (in genetics, evolution, quantum mechanics, and neuroscience), in society (in markets and other systems of collective behavior), and beyond.

See also gender data and bias sexual orientation data, 25–26, 51–52, 86–89 Shapley, Lloyd, 129–30 The Shining (King), 118, 120 Shmatikov, Vitaly, 25 Simmons, Joe, 157–58 simple algorithms, 174 simulated game play, 134–35 single nucleotide polymorphisms (SNPs), 30–31 singularity, 180 Smith, Adam, 36 smoking, 27–28, 34–36, 39, 51–54 Snowden, Edward, 47–48 social awareness, 16–17, 131 social welfare, 97, 113, 115 societal norms and values, 12, 15–18, 20–21, 86, 134, 169–70 socioeconomic groups, 57 software engineers, 48–49 sorting algorithms, 4–5 spurious correlations, 150, 159 stable equilibriums, 99–100, 128 stable matchings, 128–30 standoffs, 98 statistics and adaptive data analysis, 159 and aggregate data, 22–23, 30–31 and algorithmic violations of fairness and privacy, 96 Bayesian, 38–39, 173 and the Bonferroni correction, 149 criminal sentencing, 14–15 and differential privacy, 40, 44–45, 47–52, 167 and fairness issues, 193–94 flawed statistical reasoning, 140–41 and interpretability of model outputs, 171–72 and investing scams, 138–41 and medical research, 34 and online shopping algorithms, 117 and p-hacking, 144–45, 153–55, 157–59, 161, 164, 169–70 statistical modeling, 90 statistical parity, 69–74, 84 and US Census data, 195 and “word embedding” models, 57–58, 63–64 stock investing, 81, 137–41 strategy, 97–102 Strava, 50–51 subgroup protections, 88–89 subjectivity, 86, 172 subpoenas, 41, 45–46, 48 “superfood” research, 143–44 superintelligent AI, 179–81, 185, 187 supervised machine learning, 63–64, 69–70, 183 supply and demand, 94–97 Supreme Court nomination hearings, 24 survey responses, 40–45 Sweeney, Latanya, 23 synthetic images, 132–35 target populations, 172–73 TD-Gammon program, 132 technological advances, 100–101, 103 TED Talks, 141–42 telemarketing calls, 38 temporal difference, 132 Tesauro, Gerry, 132 test preparation courses, 74–75 theoretical computer science, 11–13, 36 threshold rule, 75 Title VII, 15 tobacco research, 34–36 torturing data, 156–59 
traffic and navigation problems, 19–20, 101–11, 113–15, 179 training data, 61–62 transparency, 125–26, 170–71 trust, 45–47, 170–71, 194–95 “truthfulness” in game theory, 114 “tunable” parameters, 37–39, 125–26, 171 Turing, Alan, 11–12, 180 Turing Award, 133 Turing machine, 11 23andMe, 54–55 2020 Census, 49, 195 Twitter Predictor Game, 52–53 two-route navigation problem, 107 two-sided markets, 127 2001: A Space Odyssey (film), 184 typing, 118 underspecified problems, 183 unintended consequences, 6–8, 16–17, 184–85, 188 unique data points, 26–27 unsupervised learning, 63–64 upstream effects, 194 US Census Bureau, 49 US Constitution, 49 US Equal Employment Opportunity Commission, 86–87 user identifiers, 24 user modeling, 121 user ratings, 118–21 US military deployments, 50–51 US State Department, 15 validation sets, 162–63 value alignment problems, 184 values.

Prime Obsession:: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics by John Derbyshire

Andrew has also computed the first 100 zeros to 1,000 decimal places each.93 The first zero (I mean, of course, its imaginary part) begins 14.13472514173469379045725198356247027078425711569924 31756855674601499634298092567649490103931715610127 79202971548797436766142691469882254582505363239447 13778041338123720597054962195586586020055556672583 601077370020541098266150754278051744259130625448… V. There are stories behind Table 16-1. That A.M. Turing, for example, is the very same Alan Turing who worked in mathematical logic, developing the idea of the Turing Test (a way of deciding whether a computer or its program is intelligent), and of the Turing machine (a very general, theoretical type of computer, a thought experiment used to tackle certain problems in mathematical logic). There is a Turing Prize for achievement in computer science, awarded annually since 1966 by the Association for Computing Machinery, equivalent to a Fields Medal94 in mathematics, or to a Nobel Prize in other sciences. Turing was fascinated by the Riemann Hypothesis. By 1937 (his 26th year) he had made up his mind that the Hypothesis was false and conceived the idea of constructing a mechanical computing device with which to generate a counterexample—a zero off the critical line.

The method adopted for Principia Mathematica offered no guarantee against flaws, like the flaw Russell had spotted in Frege’s work. Hilbert’s “metamathematics” program tried to encompass both logic and mathematics in a more waterproof symbolism. This inspired the work of Kurt Gödel and Alan Turing. Gödel proved important theorems by attaching numbers to Hilbert-type symbols; Turing coded both instructions and data as arbitrary numbers in his “Turing machine” concept. Picking up on this idea, John von Neumann developed the stored-program concept on which all modern software is based, that code and data can be represented in the same way in a computer’s memory….
EPILOGUE
138. In a letter to his brother dated June 26, 1854, he mentioned a recurrence of mein altes Übel—“my old malady”—brought on by a spell of bad weather. 139. In the modern municipality of Verbania. 140.

., 373 Steiner, Jakob, 119 Step functions, 124, 297-302 Stern, Moritz, 27 Stevens, Wallace, 198 Stieltjes integral, 160 Stieltjes, Thomas, 154, 160, 161, 376 Stirling, James, 123 Strachey, Lytton, 370, 380 Summation sign (Σ), 78 “Sweet Betsy from Pike” (tune), 394, 395 Sylvester, James Joseph, 154, 225 T “Taiye,” 82-83; pl. 8 Teichmüller, Oswald, 255-256, 383 Teichmüller Theory, 383 Telegraph, electric, 120 Tenenbaum, Gérald, 389 Theory of Numbers (Hardy and Wright), 302 Theory of performances, 52 Theory of the Riemann Zeta-function, The (Titchmarsh), 217, 384 Thread, The (Davis), 122 Three-body problem, 314 Time reversal symmetry, 316 Titchmarsh, Edward Charles, 217, 258, 262, 394 Tocqueville, Alexis de, 118 Topology, 18, 121, 209, 374 Trace formula, 321, 388 Transcendental numbers, 174, 185, 354 Trigonometry, 18 Trinity College, Cambridge, 193, 223224, 225-226, 229, 287, 379, 380 Trinity Hall, Cambridge, 380 Truman, Harry S., 166 Turán, Paul, 238, 239, 378 Turing, Alan, 258, 261-262, 357, 377, 391; pl. 5 Turing machine, 261, 391 Turing Prize, 261 Turing Test, 261 Twiddle principle, 46 Twiddle sign, 45, 368 U Uncle Petros and Goldbach’s Conjecture (Doxiadis), 90 Universal Computer, The (Davis), 187 Universities, academies distinguished from, 30 University of Bordeaux, 158-159 University of Breslau, 93, 94 University of Bristol, England, 390 University of Cambridge, 259 University of Copenhagen, 228 INDEX 421 University of Leipzig, 270 University of Louvain, 161 University of Manchester, 259 University of Marburg, 270 University of Minnesota, 322, 357 University of Wales, Cardiff, 391 University of Washington in Seattle, 352 Upper bound, 235-236 Wilhelm I, German Kaiser, 160 William IV, King of England and Hanover, 26 Wolfram, Stephen, 389 Wright, Sir Edward, 302 V Z Vallée Poussin, Charles de la, x, 153, 155-156, 161, 189, 223, 232, 237, 352, 356, 376; pl. 
3 Value plane, 219-221, 335 Victoria, Queen of England, 26 Vienna Academy, 153 “Villikens and his Dinah” (song), 395 Vis viva equation, 313, 315 Volterra, Vito, 92 Vorhauer, Ulrike, 350, 390 z plane, 379 Zeno, 88 Zeros, 85 in conjugate pairs, 190-191 density of, 396 dividing by, 35 of a function, 139, 154, 160, 169, 190-192, 206, 211-212, 385 gradient, 110 mathematical legitimacy, 89 non-trivial, 77, 190-192, 198-199, 217, 221-222, 232, 289-290, 295 number of, 258 order of a, 385 of a polynomial, 173 power, 65, 66 spacing in critical strip, 217-218, 232, 290 trivial, 148, 169, 206 Zeta function, 135 Basel problem and, 63-65 on complex plane, 183, 213-216 critical line, 221-222 critical strip, 216 decomposition, 358 domain, 142-145, 205-206 expression, 77, 79, 137 graph, 142-144 Mertens’s function and, 250-251 Möbius function and, 250-251 W w plane, 379 Wagon, Stan, 389 Wallace, William, 92 Wave functions, 318 Weber, Heinrich, 29, 119, 257, 366 Weber, Wilhelm, 27, 120, 127, 374 Wedeniwski, Sebastian, 258, 259 Weierstrass, Karl, 135, 164 Weil, André, 270, 325, 385, 395; pl. 6 Weil Conjectures, 270, 355 Wendland, 22, 94 Weyl, Hermann, 170, 255, 385 Whitehead, Alfred North, 225 Whitemore, Hugh, 262 Wigner, Eugene, 282, 387 Wild Numbers, The (Schogt), 161 Wiles, Andrew, 90, 161, 245, 271, 354355 Y Yorke, James, 387 422 sieve of Eratosthenes and, 102-104, 138 values of, 79-81, 146-147, 263 visualization, 216-218 zeros of, 154, 160, 169, 190-192, 206, 211-212, 217-218, 221-222, 232-233, 234, 259-261, 287-288, 295, 395 INDEX ζ(s), 77.

pages: 250 words: 73,574

Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today's Computers by John MacCormick, Chris Bishop

But to assist his argument, Turing describes a particular type of machine (for Turing, a “machine” is what we would call a “computer” today) that can also do calculations. Part of the paper is devoted to demonstrating that certain calculations cannot be performed by these machines—this is the proof of undecidability, which we have discussed in detail already. But another part of the same paper makes a detailed and compelling argument that Turing's “machine” (read: computer) can perform any calculation done by a “computer” (read: human). You may be beginning to appreciate why it is difficult to overstate the seminal nature of Turing's “On Computable Numbers” paper. It not only defines and solves some of the most fundamental problems in computer science, but also strikes out into the heart of a philosophical minefield, making a persuasive case that human thought processes could be emulated by computers (which, remember, had not been invented yet!).

See database, table; virtual table tag Tale of Two Cities, A target value Taylor, A. J. P. TCP telegraph telephone. See phone terminate theology Thompson, Thomas M. threshold; soft title: of this book; of a web page to-do list to-do list trick Tom Sawyer training. See also learning training data transaction: abort; atomic; in a database; on the internet; rollback travel agent Traveling Salesman Problem trick, definition of TroubleMaker.exe Turing, Alan Turing machine Turing test TV Twain, Mark twenty questions, game of twenty-questions trick two-dimensional parity. See parity two-phase commit U.S. Civil War Ullman, Jeffrey D. uncomputable. See also undecidable undecidable. See also uncomputable undefined unicycle universe unlabeled Vazirani, Umesh verification Verisign video video game virtual table virtual table trick Waters, Alice web.

pages: 230

Purely Functional Data Structures by Chris Okasaki

It has since been used in many situations, including real-time queues [HM81], realtime deques [Hoo82, GT86, Sar86, CG93], catenable deques [BT95], and the order maintenance problem [DS87]. Deques Hood [Hoo82] first modified the real-time queues of [HM81] to obtain real-time deques based on global rebuilding. Several other researchers later duplicated this work [GT86, Sar86, CG93]. These implementations are all similar to techniques used to simulate multihead Turing machines [Sto70, FMR72, LS81]. Hoogerwoord [Hoo92] proposed amortized deques based on batched rebuilding, but, as always with batched rebuilding, his implementation is not efficient when used persistently. The real-time deques in Figure 8.4 first appeared in [Oka95c]. Coroutines and Lazy Evaluation Streams (and other lazy data structures) have frequently been used to implement a form of coroutining between the producer of a stream and the consumer of a stream.

Information Processing Letters, 14(5):205-206, July 1982. (p. 55) T. W. Butler. Computer response time and user performance. In Conference on Human Factors in Computing Systems, pages 58-62, December 1983. (p. 83) Richard S. Bird and Philip Wadler. Introduction to Functional Programming. Prentice Hall International, 1988. (p. 29) Tyng-Ruey Chuang and Benjamin Goldberg. Real-time deques, multihead Turing machines, and purely functional programming. In Conference on Functional Programming Languages and Computer Architecture, pages 289-298, June 1993. (pp. 109,113) Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to algorithms. MIT Press, 1990. (p. 27) Richard H. Connelly and F. Lockwood Morris. A generalization of the trie data structure. Mathematical Structures in Computer Science, 5(3):381-418, September 1995.

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future by Luke Dormehl

It is a vastly complex machine, many, many times more complicated than any other machine ever made with hands; but after all a machine. It has been likened to a steam engine. But that was before we knew as much about the way it works as we know now. It really is a gas engine: like the engine of an automobile, a motor boat, or a flying machine. One of Turing’s most significant concepts related to something called the Universal Turing Machine. Instead of computers being single-purpose machines used for just one function, he explained how they could be made to perform a variety of tasks by reading step-by-step instructions from a tape. By doing so, Turing wrote that the computer ‘could in fact be made to work as a model of any other machine’. This meant that it was not necessary to have infinite different machines carrying out different tasks.
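The idea can be sketched in a few lines of Python: one fixed simulator that is "made to work as a model of any other machine" depending on the instruction table it reads. The `run_tm` helper and both rule tables below are editorial inventions for illustration, not anything from Turing's own paper.

```python
# A minimal sketch of the universal-machine idea: the simulator is fixed;
# only the rule table (the "tape of step-by-step instructions") changes.

def run_tm(rules, tape, state="start", accept="halt", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), 0 (stay), or +1 (right). Blank is "_".
    Returns the final tape contents with surrounding blanks stripped.
    """
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Machine 1: flip every bit, halting at the first blank.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

# Machine 2: a unary "+1" -- append one mark to a run of 1s.
incrementer = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", 0),
}

print(run_tm(flipper, "0110"))     # -> 1001
print(run_tm(incrementer, "111"))  # -> 1111
```

The same `run_tm` function computes both results; swapping the table swaps the machine, which is the point of the passage above.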

pages: 1,280 words: 384,105

The Best of Best New SF by Gardner R. Dozois

Don’t tell me: our set of twenty thousand polysaccharide Wang Tiles just happens to form the Turing Machine for calculating pi.” “No. What they form is a universal Turing Machine. They can calculate anything at all – depending on the data they start with. Every daughter fragment is like a program being fed to a chemical computer. Growth executes the program.” “Ah.” Paolo’s curiosity was roused – but he was having some trouble picturing where the hypothetical Turing Machine put its read/write head. “Are you telling me only one tile changes between any two rows, where the ‘machine’ leaves its mark on the ‘data tape’ . . . ?” The mosaics he’d seen were a riot of complexity, with no two rows remotely the same. Karpal said, “No, no. Wang’s original example worked exactly like a standard Turing Machine, to simplify the argument . . . but the carpets are more like an arbitrary number of different computers with overlapping data, all working in parallel.

He said, “Hao Wang proved a powerful theorem, twenty-three hundred years ago. Think of a row of Wang Tiles as being like the data tape of a Turing Machine.” Paolo had the library grant him knowledge of the term; it was the original conceptual form of a generalized computing device, an imaginary machine which moved back and forth along a limitless one-dimensional data tape, reading and writing symbols according to a given set of rules. “With the right set of tiles, to force the right pattern, the next row of the tiling will look like the data tape after the Turing Machine has performed one step of its computation. And the row after that will be the data tape after two steps, and so on. For any given Turing Machine, there’s a set of Wang Tiles which can imitate it.” Paolo nodded amiably. He hadn’t heard of this particular quaint result, but it was hardly surprising.
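The row-by-row correspondence can be illustrated with a short sketch (the helper and example machine are mine, not Egan's): record the tape of an ordinary single-head machine after every step and read the snapshots as rows. Because one step rewrites only the cell under the head, consecutive rows differ in at most one position.

```python
# Illustrative sketch: successive tape snapshots of a standard single-head
# Turing machine, read as rows of a tiling. The tape is held at a fixed
# width for display; the example machine never runs off the right edge.

def tape_rows(rules, tape, state="start", accept="halt", width=8):
    rows = [list(tape.ljust(width, "_"))]
    head = 0
    while state != accept:
        symbol = rows[-1][head]
        state, new_symbol, move = rules[(state, symbol)]
        row = rows[-1][:]        # copy the previous row...
        row[head] = new_symbol   # ...and change a single cell
        rows.append(row)
        head += move
    return ["".join(r) for r in rows]

# A machine that overwrites a run of 1s with 0s, left to right.
eraser = {
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

rows = tape_rows(eraser, "111")
for r in rows:
    print(r)

# Consecutive rows differ in at most one cell -- the property a standard
# (non-parallel) machine imposes on the tiling:
diffs = [sum(a != b for a, b in zip(r1, r2)) for r1, r2 in zip(rows, rows[1:])]
assert all(d <= 1 for d in diffs)
```

The "carpets" of the story relax exactly this constraint: many overlapping computations in parallel, so arbitrarily many cells may change between rows.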

pages: 232

A Discipline of Programming by E. Dijkstra

This shows quite clearly why I regard general recursion as an order of magnitude more complicated than just repetition, and it therefore hurts me to see the semantics of the repetitive construct "while B do S" defined as that of the call "whiledo(B, S)" of the recursive procedure (described in ALGOL 60 syntax): procedure whiledo (condition, statement); begin if condition then begin statement; whiledo (condition, statement) end end Although correct, it hurts me, for I don't like to crack an egg with a sledgehammer, no matter how effective the sledgehammer is for doing so. For the generation of theoretical computing scientists that became involved in the subject during the sixties, the above recursive definition is often not only "the natural one", but even "the true one". In view of the fact that we cannot even define what a Turing machine is supposed to do without appealing to the notion of repetition, some redressing of the balance seemed indicated. For the absence of a bibliography I offer neither explanation nor apology. Acknowledgements. The following people have had a direct influence on this book, either by their willingness to discuss its intended contents or by commenting on (parts of) the finished manuscript: C.
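Dijkstra's ALGOL 60 definition transliterates almost directly; here is a sketch in Python, with zero-argument callables standing in for ALGOL's call-by-name parameters (an assumption of this translation, not part of the original text):

```python
# "while B do S" defined as the recursive procedure whiledo(B, S),
# transliterated from the ALGOL 60 fragment above.

def whiledo(condition, statement):
    if condition():
        statement()
        whiledo(condition, statement)

# Example: sum 1..5 with the recursive construct.
state = {"n": 5, "total": 0}

def step():
    state["total"] += state["n"]
    state["n"] -= 1

whiledo(lambda: state["n"] > 0, step)
print(state["total"])  # -> 15
```

Each recursive call re-evaluates `condition`, as call-by-name would, and the recursion bottoms out when the condition first fails, reproducing the behavior of "while B do S" (the sledgehammer Dijkstra objects to cracking this egg with).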

In other words, we accept an HSLM that is only able to simulate properly a subset of the computations that are guaranteed to terminate prop- erly when executed by the UM. Note 4. The notion of the liberal pre-condition is introduced here in recognition of the fact that the HSLM is so bounded. This is in sharp contrast to the very similar notion of "partial correctness", which has been introduced in connection with unbounded machines (such as Turing machines) because of the undecidability of the Halting Problem. (End of note 4.) One may raise the question -but I shall not answer it- of what the UM will do when started at an initial state for which we don't know whether it satisfies wp(S, T) or not. I take the position (from a philosophical point of view probably very shaky) that as long as we have not proved that the initial state satisfies wp(S, T), the UM may do as it likes, in the sense that we have no right to complain.

The Ages of Globalization by Jeffrey D. Sachs

Facebook, Google, and Amazon came out of nowhere to become, in a few short years, among the most powerful companies in the world. Smartphones are only a decade old, but they have already upended how we live. How did this revolution come about? The roots of the digital revolution can be traced to a remarkable paper by British genius Alan Turing, writing in 1936. Turing envisioned a new conceptual device, a universal computing machine—a Turing machine, as it became known—that could read an endless tape of 0s and 1s in order to calculate anything that could be calculated. Turing had conceptualized a general-purpose programmable computer before one had been invented. His ideas would fundamentally shape the digital revolution to come. Turing also made legendary contributions to the Allied war effort by showing how to use mathematical cryptography and an early electronic device to decipher the Nazi military secret code.

.), 170 social democracy, 202 social-democratic ethos, 201–3 social institutions, 19–20 societies: Eurasia with horse-based, 62–63, 65; Greek, 76–79; hierarchical structure of, 39; horse-based, 59; human, 38–40 soil nutrients, 19 Song Dynasty, 88–91, 89, 104 Soviet Union, 30, 161–62, 207 Spain, 97, 108–11, 110 state law, 71 steam engine, 4; global trading of, 137; in Industrial Age, 16–17, 131–34, 132; Watt patenting, 17 steel, 139 steppes: of Asia, 53; climate zones of, 53; Eurasian, 24, 54; horse domestication in, 59; migration from, 64 subsidiarity doctrine, 196, 203–4 sugar, 119–20 sugar plantations, 120 Summa Theologica (Aquinas), 78 Sun Yat-Sen, 147 sustainable agriculture, 13 sustainable development, 31, 183–85, 196–200; economic growth from, 187; governance of, 200; public goods for, 204–5; religious leaders on, 211–12; U.N. goals of, 198, 201–2 Sustainable Development Goals (SDG), 178, 198, 202 syphilis, 102 tabula rasa (blank-slate learning), 176 Taiping Rebellion, 147 tea infusion, 152 technologies, 1–2, 11, 18, 70; digital, 181; digital revolution and, 166; economic development from, 21; environmental impact of, 188–90; Eurasian advances in, 49; of farm villages, 45; geography and, 18; of Han Empire, 82; horses distributing, 64; inequalities and changes in, 30; information, 4–5; innovative designs for, 138–41; institutions and, 17; intelligent, 141; lucky latitudes innovations in, 50–51; military, 29–30; naval, 96; North America cut off from, 51–52; Old World, 21; for poverty reduction, 177; upheavals from, 130; U.S. 
advances in, 160; wireless, 181 tellurocracy (land power), 72 temperate zones: advantages to, 22–25; empires, 51; of Eurasia, 48; slavery in, 119 territorial competition, 28 tertiary sectors, 14–16 textile industry, 134; of Britain, 121; of India, 149; robots in, 186 thalassocracy (sea power), 72–73 theileria parva (equine piroplasmosis), 55 Thirty Years’ War, 156–57 Thucydides, 75 Timurid Empire, 83, 93, 93–94 tin mines, 61 tobacco, 119–20 Tokugawa Shogunate, 150 trade, 67 Trajan (emperor), 79 transistors, 171, 172 transnational cooperation, 205 transoceanic empires, 4 transportation vehicle, 54 transport systems, 203 Treaty of Tordesillas (1494), 109–10 Treaty of Versailles, 157 Treaty of Zaragoza (1529), 109–10 triangular trade, 119 tropical vector-borne diseases, 49, 117 tropical zones, 22–23 trucks, self-driving, 186 trypanosomiasis disease, 50 tsetse flies, 56, 152 Turing, Alan, 170, 173 Turing machine, 170 Turkish tribes, 88 Turse, Nick, 162 U.K. See United Kingdom Umayyad Empire, 87, 87 U.N. See United Nations UN Framework Convention on Climate Change (UNFCCC), 204 United Kingdom (U.K.), 144 United Nations (U.N.), 167; as anti-fascist alliance, 207; global center of gravity and, 209–10; reformation of, 207–10; Security Council, 210; sustainable development goals of, 198, 201–2; U.S. 
supporting, 208–10 United States (U.S.): birth of, 130–31; Britain’s economic dominance with, 154; China’s rising power and, 193; Civil War of, 161; Declaration of Independence of, 131; decolonization supported by, 166–67; Department of Defense, 171; dominant economy of, 159; economic development of, 154, 154; farmer’s food in, 15; GDP of, 154; geopolitical leadership of, 162; global hegemony of, 159–62, 168; global output of, 181; infrastructure development of, 161; lucky latitudes in, 49; military bases of, 162, 163; primary sector employment in, 16; R&D spending of, 182, 182; regime change operations of, 162; Soviet Union challenges to, 161–62; technology advances of, 160; UK comparisons with, 144; unlimited resources of, 121; U.N. supported by, 208–10; War of Independence of, 123; world output of, 155 Universal Declaration of Human Rights, 207 universal health care, 199–200 Upper Pleistocene, 37–40 Urban II (pope), 88 urbanization, 7; population and, 130; rates of, 8, 9; transformation to, 14–16 U.S.

pages: 336 words: 93,672

The Future of the Brain: Essays by the World's Leading Neuroscientists by Gary Marcus, Jeremy Freeman

And unlike worms and flies with their high degree of stereotypy, in which genetically determined neural circuits mediate innate behaviors, mammalian neocortical circuits are shaped by the experiences of their ancestors, in the form of genetic specialization within cortical regions, as well as by personal experiences in the form of synaptic learning, and exploit more general-purpose, flexible population coding principles that are highly sensitive to context. In that sense, the cortical column may be the closest that nature has come to evolving a universal Turing machine, a machine whose settings are adapted by a combination of genomic and learned (synaptic) mechanisms to the particular statistics of its input, be it visual, olfactory, linguistic, or otherwise. Of Men and Mice A deep understanding of the cortex necessitates querying the relevant microvariables, in particular spiking neurons, by recording the occurrences and timing of action potentials. Active neurons rapidly assemble and disassemble into far-flung coalitions that can be tracked from the sensory periphery to motor structures.

See also Human Brain Project (HBP); SyNAPSE project (IBM); whole-brain simulation simulome, 183 Skinner’s behaviorism, 206 Sligte, Ilja, 166 Smith, Stephen, 14 Society for Neuroscience, 258 Solstad, Trygve, 75 songbirds: FOXP2 gene, 155–56 Spaun (Semantic Pointer Architecture Unified Network): architecture of model, 130f; behavior and brain model, 126–27, 129, 131–32; flexible coordination, 132–33; neural firing patterns, 129; neurons, 127; reverse engineering, 133–34; serial working memory task in, 128f, 131 speech, 140; computational neuroanatomy of, 146; FOXP2 gene mutation, 151–52, 155; information, 145; perception, 144–46; production, 146, 187, 190 spinal cord injury, 229 Sporns, Olaf, 11, 65, 90–99 standard operating procedures (SOP), 33 star-nosed mole, 187 Stensola, Hanne, 72 Stensola, Tor, 72 stimulation: electrical, 11, 79, 195, 225–26 stroke, 229, 243 style computing, 213 subjective feelings, 269 Südhof, Thomas, 207 supercomputer: human brain as, 94 supervised learning, 206 SyNAPSE project (IBM): brain simulation, 125–26 synaptic connections: brain, 50 synaptic plasticity, 119, 221, 241 synaptic proteins: in situ immune microscopy of, 60–61 synchronization: neuronal interactions, 93 syntactic theory: language computations, 143–44; minimalism, 144 syntax, 140, 141, 147–48 Talairach, Jean, 5 Talairach Atlas, 10 Tank David, 19 Taube, Jeff, 75 Technical University of Munich, 121 technological innovation, 79 Thunder, 104 Tonegawa, Susumu, 259 top-down modeling, 85f, 112, 162, 171f, 267 touch receptors, 67 Tournoux, P., 5 transcranial magnetic stimulation (TMS), 228 transcriptome, 48 transducer, 246, 250 transistor, 82, 84, 85f, 86–88, 135, 177, 181, 183, 210, 221, 245, 250 traumatic brain injury, 194, 266 trilevel hypothesis: brain, 84–85 Tsuchiya, Nao, 168, 169 tuberculosis, 171 tuberous sclerosis, 241 tumors, 266 Turing machine, 26 23andMe, 198 Twitter, 103 two-photon imaging: mouse cortex, 107 two-photon microscopy, 32 two-photon tomography, 34 ulcerative 
colitis, 234 ultrasonic frequencies, 246 ultrasonic waves, 249 ultrasound, 250 Universidad Politécnica de Madrid, 116 University College London, 122, 177 University of California–San Diego, 177 University of Edinburgh, 115 University of Oslo, 115, 116 US BRAIN Initiative, 113, 124 US Human Connectome Project, 113 Vallortigara, Giorgio, 207 Vandenbroucke, Annelinde, 166 Van Essen, David, 12 variable binding: brain, 213–14; language, 212 Venter, Craig, 256 Vesalius, Andreas, 3, 4f vestibular system, 22 virtual brains: building, 97–99 virtual reality: whole brain neuroimaging and, 17–24 vision: restoration, 227, 230 Vision (Marr), 181 visual processing: stimuli, 163 visual responses: brute-force data collection, 105 visual-spatial extinction, 163–64 visual system: primates, 104–5 visual thalamus, 264 Vogt, Karl, 91 Vogt, Marthe, 4 von Economo, Constantin, 4 von Neumann, John, 208, 212–13 V2 neurons: hypothesis, 105–6 Waddington, Conrad, 189 Watson, James, 7, 46 Waxholm Space, 115 Werbos, Paul, 41 White, John, 12 whole-brain neuroimaging, 20–21, 17–24 whole-brain neuroscience: behavior as brain output, 121–22; building the brain, 118–19; ethics, 123; global collaboration, 123–24; global effort to understand brain, 124; modeling brain disorders and diseases, 122; unifying brain models, 120–21; validity of model, 119–20 whole-brain simulation: creating to understand, 111–13; neuroinformatics for computing, 113–15; next generation brain atlases, 115–17; ongoing debate, 267–68; predictive neuroscience, 117–18.

pages: 329 words: 88,954

Emergence by Steven Johnson

Turing’s war research had focused on detecting patterns lurking within the apparent chaos of code, but in his Manchester years, his mind gravitated toward a mirror image of the original code-breaking problem: how complex patterns could come into being by following simple rules. How does a seed know how to build a flower? Turing’s paper on morphogenesis—literally, “the beginning of shape”—turned out to be one of his seminal works, ranking up there with his more publicized papers and speculations: his work on Gödel’s undecidability problem, the Turing Machine, the Turing Test—not to mention his contributions to the physical design of the modern digital computer. But the morphogenesis paper was only the beginning of a shape—a brilliant mind sensing the outlines of a new problem, but not fully grasping all its intricacies. If Turing had been granted another few decades to explore the powers of self-assembly—not to mention access to the number-crunching horsepower of non-vacuum-tube computers—it’s not hard to imagine his mind greatly enhancing our subsequent understanding of emergent behavior.

pages: 353 words: 101,130

That's very nearly what Sophus is claiming lies behind the border: an enormous quantum computer that could perform any operation that falls under the general description of quantum physics--and in fact is in a superposition of states in which it's doing all of them." Mariama's eyes widened, but then she protested, "Sophus never puts it like that." "No, of course not," Yann agreed. "He's much too careful to use overheated language like that. 'The universe is a Deutsch-Bennett-Turing machine' is not a statement that goes down well with most physicists, since it has no empirically falsifiable content." He smiled mischievously. "It does remind me of something, though. If you ever want a good laugh, you should try some of the pre-Qusp anti-AI propaganda. I once read a glorious tract which asserted that as soon as there was intelligence without bodies, its 'unstoppable lust for processing power' would drive it to convert the whole Earth, and then the whole universe, into a perfectly efficient Planck-scale computer.

A quantum computer can simulate any quantum process; that's old news. It doesn't mean that there is a quantum computer underlying anything." "No," Tchicaya agreed. "But qubit network theory doesn't claim that. It just says that when you get to a low enough level, you have nothing left to lose by treating the system as if it were software. It's like all the proofs in applied algorithmic theory that are based on imagining Turing machines. No one complains that the real universe is conspicuously devoid of paper tape." "Old habits die hard," she confessed. "I'm still in mourning for the Sarumpaet rules, and they were disproved before I was born. They're what I was brought up on, they're what I've thought of all my life as the template for a physical theory. It's not easy adapting, even to Sophus's model." "Yeah. I really am grateful to you for trying this," Tchicaya said.

pages: 385 words: 98,015

Einstein's Unfinished Revolution: The Search for What Lies Beyond the Quantum by Lee Smolin

That talk, and other early anticipations of the idea, seemed to make little impression until David Deutsch, originally a specialist in quantum gravity who held a position at Oxford, proposed in 1989 an approach to quantum computation in the context of a paper on the foundations of mathematics and logic.2 In his paper, Deutsch introduced the idea of a universal quantum computer, analogous to a Turing machine. A few years later Peter Shor, a computer scientist working for an IBM research laboratory, proved that a quantum computer could factor large numbers much faster than a regular computer. At that point people began to take notice, because one application of being able to factor large numbers is that many of the codes now in common use could be broken. Research groups began to spring up around the world, and they quickly filled with brilliant young researchers, many of whom had a dual research strategy in which they would attack the problems in quantum foundations while contributing to the development of quantum computing.


pages: 349 words: 98,868

Nervous States: Democracy and the Decline of Reason by William Davies

In the years immediately before the Second World War, various mathematicians, philosophers, and psychologists mused on whether human thought and communication could be modeled as mathematical formulae. The British mathematician (and subsequently celebrated code breaker) Alan Turing’s 1937 paper, “On Computable Numbers,” imagined a “Turing Machine” which could be programmed to perform basic instructions in response to different symbols that it was fed in a random order. While the Turing Machine was never built, this vision signaled the leap from the abstract mathematics of computation to its technological construction. Humans would be required to program such machines, but the machines could then perform various acts of calculation on their own. This idea of a programmable machine is now so familiar to us that we often fail to notice the peculiar assumptions on which it rests.

The Art of Computer Programming by Donald Ervin Knuth

If it takes T(n) steps to multiply n-bit numbers, we can accomplish m-bit times n-bit multiplication by breaking the n-bit number into ⌈n/m⌉ m-bit groups, using ⌈n/m⌉T(m) + O(n + m) operations. The results cited in the text therefore give an estimated running time of O(n log m log log m) on Turing machines, or O(n log m) on machines with random access to words of bounded size, or O(n) on pointer machines. 15. The best upper bound known is O(n(log n)^2 log log n), due to M. J. Fischer and L. J. Stockmeyer [J. Comp. and Syst. Sci. 9 (1974), 317–331]; their construction works on multitape Turing machines, and is O(n log n) on pointer machines. The best lower bound known is of order n log n / log log n, due to M. S. Paterson, M. J. Fischer, and A. R. Meyer [SIAM/AMS Proceedings 7 (1974), 97–111]; this applies to multitape Turing machines but not to pointer machines. 16. Let 2^k be the smallest power of 2 that exceeds 2K. Set a_t ← … and b_t ← …, where u_t = 0 for t > K.
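The grouping argument above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Knuth's code: the function name is ours, and Python's built-in `*` stands in for the T(m)-step m-bit multiplier.

```python
def mul_by_groups(u, v, m):
    """Multiply u by v by splitting v into ceil(n/m) groups of m bits,
    multiplying group by group, and shifting the partial products into
    place.  Each group product is one m-bit x n-bit multiplication."""
    mask = (1 << m) - 1
    result, shift = 0, 0
    while v:
        group = v & mask              # low m bits of what remains of v
        result += (u * group) << shift
        v >>= m
        shift += m
    return result

# With a T(m)-step m-bit multiplier, the total cost is ceil(n/m)*T(m)
# plus O(n + m) work for the shifts and additions.
assert mul_by_groups(12345678901234567890, 98765432109876543210, 8) == \
    12345678901234567890 * 98765432109876543210
```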

A generalized spectral test, based on discrete Fourier transforms, can be used to test how well a sequence measures up to Definition Q1 [see A. Compagner, Physical Rev. E52 (1995), 5634–5645]. Still another interesting approach to a definition of randomness has been taken by Per Martin-Löf [Information and Control 9 (1966), 602–619]. Given a finite b-ary sequence X1, ..., XN, let l(X1, ..., XN) be the length of the shortest Turing machine program that generates this sequence. (Alternatively, we could use other classes of effective algorithms, such as those discussed in Section 1.1.) Then l(X1, ..., XN) is a measure of the "patternlessness" of the sequence, and we may equate this idea with randomness. The sequences of length N that maximize l(X1, ..., XN) may be called random. (From the standpoint of practical random number generation by computer, this is, of course, the worst definition of "randomness" that can be imagined!)
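Martin-Löf's shortest-program length l(X1, ..., XN) is uncomputable, but a computable stand-in conveys the idea: a patterned sequence admits a much shorter description than a patternless one. The zlib-based proxy below is purely illustrative and is not Martin-Löf's definition.

```python
import random
import zlib

def patternlessness(bits: str) -> int:
    """Rough, computable stand-in for l(X1,...,XN): the length in
    bytes of a compressed encoding of the sequence.  True
    shortest-program (Kolmogorov) complexity is uncomputable."""
    return len(zlib.compress(bits.encode(), 9))

regular = "01" * 500                  # highly patterned sequence
random.seed(42)
noisy = "".join(random.choice("01") for _ in range(1000))

# The patterned sequence compresses far better -- it has a much
# shorter description -- so by this measure it is far from random.
assert patternlessness(regular) < patternlessness(noisy)
```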

The running time will satisfy T(n) = O(nT(log n)); hence T(n) < Cn(C lg n)(C lg lg n)(C lg lg lg n)..., (52) where the product continues until reaching a factor with lg ... lg n < 1. Schönhage and Strassen showed how to improve this theoretical upper bound to O(n log n log log n) in their paper, by using integer numbers ω to carry out fast Fourier transforms on integers, modulo numbers of the form 2^e + 1. This upper bound applies to Turing machines, namely to computers with bounded memory and a finite number of arbitrarily long tapes. If we allow ourselves a more powerful computer, with random access to any number of words of bounded size, Schönhage has pointed out that the upper bound drops to O(n log n). For we can choose k = l and m = 6k, and we have time to build a complete multiplication table of all possible products xy for 0 ≤ x, y < 2^⌊m/12⌋. (The number of such products is 2^k or 2^(k+1), and we can compute each table entry by addition from one of its predecessors in O(k) steps, hence O(k·2^k) = O(n) steps will suffice for the calculation.)
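The parenthetical table-building argument can be sketched directly: every entry of the multiplication table is obtained from a predecessor by a single addition, so no multiplication instruction is ever needed. A small illustrative Python version (the function name and layout are ours):

```python
def build_mul_table(t):
    """Build the complete table of products x*y for 0 <= x, y < 2**t,
    computing each entry from a predecessor by one addition,
    via x*y = x*(y-1) + x.  No multiplication is used."""
    n = 1 << t
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(1, n):
            table[x][y] = table[x][y - 1] + x   # one addition per entry
    return table

# 2**(2t) entries, one addition each: the table costs O(2**(2t))
# additions to build, after which any small product is a lookup.
table = build_mul_table(4)
assert table[13][11] == 143
```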

pages: 387 words: 111,096

Enigma by Robert Harris

But the fourth—the fourth—will be solved purely electronically, using a relay rack and valves, linked to the bombe by this fat cable form, that looks like a -' Kramer cupped his hands into a circle '—well, that looks like a cobra, I guess. Using valves in sequence—that's a revolution. Never been done before. Your people say it should make the calculations a hundred times, maybe a thousand times, as fast.' Jericho said, almost to himself: 'A Turing machine.' 'A what?' 'An electronic computer.' 'Well, whatever you want to call it. It works in theory, that's the good news. And from what they're saying, this may be just the start. It seems they're planning some kind of super-bombe, all electronic, called Colossus.' Jericho had a sudden vision of Alan Turing, one winter afternoon, sitting cross-legged in his Cambridge study while the lamps came on outside, describing his dream of a universal calculating machine.

For the twentieth time his hand went to the bulge of cryptograms in his inside pocket. Two Englands, he thought. One England—this one—familiar, safe, obvious. But now another, secret England, secluded in the grounds of stately houses -Beaumanor, Gayhurst, Woburn, Adstock, Bletchley—an England of aerial farms and direction finders, clattering bombes and, soon, the glowing green and orange valves of Turing machines ('it should make the calculations a hundred times, maybe a thousand times as fast'). A new age beginning to be born in the parklands of the old. What was it that Hardy had written in his Apology? 'Real mathematics has no effect on war. No one has yet discovered any warlike purpose to be served by the theory of numbers.' The old boy couldn't guess the half of it. The bell tinkled again and Hester emerged from the post office holding a newspaper over her head like an umbrella.

pages: 416 words: 106,582

This Will Make You Smarter: 150 New Scientific Concepts to Improve Your Thinking by John Brockman

As you begin to form these and other concepts, the chaos on the screen gradually becomes more comprehensible. Developing concepts that carve nature at its joints is the first crucial step toward understanding, not only in the Game of Life but in science and in ordinary life as well. At a more advanced level, we discover that the Game of Life is Turing complete. That is, it’s possible to build a pattern that acts like a Universal Turing Machine (a computer that can simulate any other computer). Thus, any computable function could be implemented in the Game of Life—including perhaps a function that describes a universe like the one we inhabit. It’s also possible to build a universal constructor in the Game of Life, a pattern that can build many types of complex objects, including copies of itself. Nonetheless, the structures that evolve in the Game of Life are different from those we find in the real world: Game of Life structures tend to be fragile, in the sense that changing a single cell will often cause them to dissolve.
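The cellular dynamics underlying all of this are tiny. A minimal sketch of one Game of Life generation, plus the classic glider, the self-translating pattern from which moving parts of Life's universal Turing machines are assembled (the representation as a set of coordinates is our own choice):

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life.  `cells` is a set of
    (x, y) coordinates of live squares; the board is unbounded."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation iff it has exactly 3 live
    # neighbours, or it is live now and has exactly 2.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

# The glider translates itself one cell diagonally every four
# generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = life_step(after4)
assert after4 == {(x + 1, y + 1) for x, y in glider}
```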


pages: 518 words: 107,836

How Not to Network a Nation: The Uneasy History of the Soviet Internet (Information Policy) by Benjamin Peters

Researchers and historians of science remember his 1943 paper, coauthored with the enigmatic polymath Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which proposed models for neural networks in the brain that later became influential in the theory of automata, computation, and cybernetics. Their argument holds that the mind is, given certain reductions, equivalent to a Turing machine. In other words, with sufficient abstraction, it is possible to imagine the neural network in a mind as a logical circuit that is capable of carrying out any computable problem. In McCulloch’s words, he sought “a theory in terms so general that the creations of God and men almost exemplify it.”10 That “almost” packs much into its experimental epistemology. Although the conclusion that the mind functions as a computer has since been disputed and dismissed by several generations of neuroscience and cognitive science, the basic neurophysiological insights that McCulloch brought to cybernetics animated the midcentury cybernetic scene.
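The McCulloch-Pitts model itself is simple enough to sketch: a unit fires when the weighted sum of its binary inputs reaches a threshold, and such units compose into arbitrary logic circuits. The weights and the composition below are illustrative choices, not taken from the 1943 paper.

```python
def mp_neuron(weights, threshold):
    """A McCulloch-Pitts unit: outputs 1 iff the weighted sum of its
    0/1 inputs reaches the threshold."""
    def fire(*inputs):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)
    return fire

# Single units realize the basic gates ...
AND = mp_neuron([1, 1], threshold=2)
OR  = mp_neuron([1, 1], threshold=1)
NOT = mp_neuron([-1], threshold=0)

# ... and nets of units realize any Boolean function, e.g. XOR:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

assert [XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```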

The Soviet translation and adoption of cybernetics share with the other case studies glossed here an underlying fascination with the relationship of the mind to the machine, especially as seen in the biology and neurology of the British and Chilean cyberneticists. The mind-machine analog is a politically charged two-way street. Not only does cybernetics prompt us to think about how a logic machine (computer circuits or any other Turing machine) may function like a mind (a neural network), but it also raises McCulloch’s potent possibility that subsequent neuroscience has soundly rejected: the mind (neural network) might function like a logic machine (computer circuits). This reverse comparison (that a mind is like a machine) proves particularly enduring in later discussions of the design and development of national networks. The designers of major early cold war national networks in the Soviet Union, the United States, and Chile sought, implicitly or explicitly, to model their own self-governing national networks after cybernetic neural networks.

pages: 137 words: 36,231

Information: A Very Short Introduction by Luciano Floridi

These perform calculations through the interaction of continuously varying physical phenomena, such as the shadow cast by the gnomon on the dial of a sundial, the approximately regular flow of sand in an hourglass or of water in a water clock, and the mathematically constant swing of a pendulum. Clearly, it is not the use of a specific substance or reliance on a specific physical phenomenon that makes an information system analogue, but the fact that its operations are directly determined by the measurement of continuous, physical transformations of whatever solid, liquid, or gaseous matter is employed. There are analogue computers that use continuously varying voltages and a Turing machine (the logically idealized model of our personal computers) is a digital computer but may not be electrical. Given their physical nature, analogue computers operate in real time (i.e. time corresponding to time in the real world) and therefore can be used to monitor and control events as they happen, in a 1:1 relation between the time of the event and the time of computation (think of the hourglass).

pages: 124 words: 36,360

Kitten Clone: Inside Alcatel-Lucent by Douglas Coupland

The quantum computer is a device, still in its theoretical phase, that makes direct use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data. The difference between quantum computers and transistor-based digital computers lies in how data are represented: whereas digital computers require data to be encoded into binary digits (bits), quantum computation utilizes quantum properties to represent data and perform operations on these data. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers, like the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Richard Feynman in 1982. Although quantum computing is still in its infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits).

pages: 423 words: 21,637

On Lisp: Advanced Techniques for Common Lisp by Paul Graham

Extensibility, bottom-up programming, interactive development, source code transformation, embedded languages--this is where Lisp shows to advantage. In principle, of course, any Turing-equivalent programming language can do the same things as any other. But that kind of power is not what programming languages are about. In principle, anything you can do with a programming language you can do with a Turing machine; in practice, programming a Turing machine is not worth the trouble. So when I say that this book is about how to do things that are impossible in other languages, I don't mean "impossible" in the mathematical sense, but in the sense that matters for programming languages. That is, if you had to write some of the programs in this book in C, you might as well do it by writing a Lisp compiler in C first. Embedding Prolog in C, for example--can you imagine the amount of work that would take?
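Graham's point is easy to make concrete: on a Turing machine even a trivial task takes an explicit rule table. A minimal illustrative interpreter follows; the rule encoding (state, symbol) -> (new state, written symbol, move) is our own convention, not from the book.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine.  `rules` maps (state, symbol) to
    (new_state, written_symbol, move) with move in {-1, +1}; the
    machine halts when no rule applies.  Returns the final tape."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        key = (state, tape.get(head, blank))
        if key not in rules:
            break                        # no applicable rule: halt
        state, tape[head], move = rules[key]
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Even unary increment -- appending one '1' to a string of '1's --
# needs an explicit rule table; hence "not worth the trouble".
increment = {
    ("start", "1"): ("start", "1", +1),   # scan right over the 1s
    ("start", "_"): ("done",  "1", +1),   # write a 1 at the end, halt
}
assert run_tm(increment, "111") == "1111"
```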

pages: 124 words: 40,697

The Grand Design by Stephen Hawking, Leonard Mlodinow

In the Game of Life world, do composite objects exist that, after merely following the laws of that world for some generations, will spawn others of their kind? Not only were Conway and his students able to demonstrate that this is possible, but they even showed that such an object would be, in a sense, intelligent! What do we mean by that? To be precise, they showed that the huge conglomerations of squares that self-replicate are “universal Turing machines.” For our purposes that means that for any calculation a computer in our physical world can in principle carry out, if the machine were fed the appropriate input—that is, supplied the appropriate Game of Life world environment—then some generations later the machine would be in a state from which an output could be read that would correspond to the result of that computer calculation. To get a taste for how that works, consider what happens when gliders are shot at a simple 2 × 2 block of live squares.

pages: 331 words: 47,993

Artificial You: AI and the Future of Your Mind by Susan Schneider

(Notice that the nonphysical mind is not an abstract entity, however, as it has causal and temporal properties. Being nonspatial is a necessary condition of being abstract, but it is not a sufficient condition.) We might call this view Computational Cartesianism. This may sound odd, but experts on functionalism, like the philosopher Hilary Putnam, have long recognized that computations of a Turing machine can be implemented in a Cartesian soul.24 The picture that Computational Cartesianism offers of mind-body causation is perplexing, but so was the original Cartesian view that the mind, although nonspatiotemporal, somehow stands in a causal relationship with the physical world. Not all substance dualisms are this radical, in any case. For instance, consider non-Cartesian substance dualism, a view held by E.

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006 by Ben Goertzel, Pei Wang

We come to the game with numerous modules useful for …

3 Roughly speaking, the standard proof that a particular problem class C is hard is given by showing that you can map another hard problem class H into it. Since there is (believed to be) insufficient underlying structure to solve H rapidly, this then exhibits a subclass of C that is not (believed to be) rapidly solvable. In the case of Sokoban, the proof proceeds by constructing, for any given Turing machine with given finite tape length, a particular Sokoban instance that is solvable if and only if the Turing machine halts in an accepting state. This leaves (at least) two outs for solving problems in C rapidly. First, since the mapping is into, with range only a small subset of instances of C, almost all instances of C may be rapidly solvable. For example, in the case of Sokoban, it intuitively seems that random instances can only be hard if they have a high enough density of barrels and goals, because otherwise solving the different goals decouples.

pages: 492 words: 141,544

Red Moon by Kim Stanley Robinson

Declaration of Rights, 1793, number 12: Those who incite, expedite, subscribe to, execute or cause to be executed arbitrary legal instruments are guilty and ought to be punished. Number 34: There is oppression against the social body when a single one of its members is oppressed. “The smart red cloud” is an AI panopticonic array developed at Beijing’s University of Electronic Science and Technology. Extant and permeable. The theoretical literature on AI is perplexing. A Turing machine can effectively compute all problems that can be effectively computed by a Turing machine. Tautology as joke? Not obviously. The solution is impossible, therefore when it is solved it will be solved. This asserted in all seriousness. The analyst often found these sentences amusing. Hope as a tautology. Tautology as a hope. Inaccurate names and descriptions as a deliberate conjuring, as an appeal for funding. A form of begging. Begging is a hope.

pages: 759 words: 166,687

Between Human and Machine: Feedback, Control, and Computing Before Cybernetics by David A. Mindell

In this tradition any computer that fits Turing’s definition of a “universal” machine is equivalent to any other, and they can all be understood in a purely symbolic realm, without reference to hardware or architecture. This realization, of course, provided a foundation for the computer and cognitive sciences and has been a major reason for their success. Yet it says nothing about what the symbols inside the machine refer to or how they travel into or out of the machine to interact with the world. Put another way, the ideal Turing machine can calculate anything, but it does not do anything. Turing and his successors considered the formal systems of manipulating symbols according to rules and references to other symbols, but they did not consider what or how those symbols might signify.34 They thus effected the “separation” and “divorce” that Mumford noted between “print and first-hand experience,” between the symbolic and the concrete.


pages: 1,737 words: 491,616

Rationality: From AI to Zombies by Eliezer Yudkowsky

The formalism of Solomonoff induction measures the “complexity of a description” by the length of the shortest computer program which produces that description as an output. To talk about the “shortest computer program” that does something, you need to specify a space of computer programs, which requires a language and interpreter. Solomonoff induction uses Turing machines, or rather, bitstrings that specify Turing machines. What if you don’t like Turing machines? Then there’s only a constant complexity penalty to design your own universal Turing machine that interprets whatever code you give it in whatever programming language you like. Different inductive formalisms are penalized by a worst-case constant factor relative to each other, corresponding to the size of a universal interpreter for that formalism. In the better (in my humble opinion) versions of Solomonoff induction, the computer program does not produce a deterministic prediction, but assigns probabilities to strings.
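The flavor of the construction can be sketched with a toy program space. The three-symbol language and the 3^-length weighting below are invented for illustration; Solomonoff induction proper uses binary programs for a universal Turing machine.

```python
from itertools import product

def run(program):
    """Toy language: '0' and '1' emit a bit; 'd' doubles the output
    emitted so far."""
    out = ""
    for op in program:
        out = out + out if op == "d" else out + op
    return out

def prior_mass(prefix, max_len=6):
    """Total weight, at 3**-length per program, of all programs (up to
    max_len symbols) whose output starts with `prefix`."""
    return sum(
        3.0 ** -len(p)
        for n in range(1, max_len + 1)
        for p in ("".join(t) for t in product("01d", repeat=n))
        if run(p).startswith(prefix)
    )

# The patterned string has a short description ("01d" prints "0101"),
# so it receives more prior mass than the patternless "0110", whose
# shortest descriptions are longer.
assert prior_mass("0101") > prior_mass("0110")
```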

Benja Fallenstein commented: I think that while you can in this case never devise an empirical test whose outcome could logically prove irreducibility, there is no clear reason to believe that you cannot devise a test whose counterfactual outcome in an irreducible world would make irreducibility subjectively much more probable (given an Occamian prior). Without getting into reducibility/irreducibility, consider the scenario that the physical universe makes it possible to build a hypercomputer—that performs operations on arbitrary real numbers, for example—but that our brains do not actually make use of this: they can be simulated perfectly well by an ordinary Turing machine, thank you very much . . . Well, that’s a very intelligent argument, Benja Fallenstein. But I have a crushing reply to your argument, such that, once I deliver it, you will at once give up further debate with me on this particular point: You’re right. Alas, I don’t get modesty credit on this one, because after publishing the last essay I realized a similar flaw on my own—this one concerning Occam’s Razor and psychic powers: If beliefs and desires are irreducible and ontologically basic entities, or have an ontologically basic component not covered by existing science, that would make it far more likely that there was an ontological rule governing the interaction of different minds—an interaction which bypassed ordinary “material” means of communication like sound waves, known to existing science.

Wouldn’t there be a potential infinite recursion? But so long as the terms of the theory were being processed by human scientists, they just knew when an “observation” had occurred. You said an “observation” occurred whenever it had to occur in order for the experimental predictions to come out right—a subtle form of constant tweaking. (Remember, the basics of quantum theory were formulated before Alan Turing said anything about Turing machines, and way before the concept of computation was popularly known. The distinction between an effective formal theory, and one that required human interpretation, was not as clear then as now. Easy to pinpoint the problems in hindsight; you shouldn’t learn the lesson that problems are usually this obvious in foresight.) Looking back, it may seem like one meta-lesson to learn from history, is that philosophy really matters in science—it’s not just some adjunct of a separate academic field.

pages: 309 words: 54,839

Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum & Smart Contracts by David Gerard

So 4,700,000/16 seconds/21,000 = 13.99 transactions per second. [301] “Ethereum Transaction Chart”. Etherscan.io. [302] Joseph Young. “Ethereum Launches; But Leaked Chat Says Project Needs ‘Years More’”. CoinTelegraph, 1 August 2015. (archive) [303] e.g., Vlad Zamfir. “About my tweet from yesterday …” 5 March 2017. (archive) [304] “Vitalik’s Quantum Quest”. Bitcoin Error Log (blog), 16 August 2016. (archive) [305] Jordan Ash. “Why Turing Machines are Quantum.” Noospheer (blog), 4 September 2013. “If successful, it will have applications ranging from cryptography to finance, energy, medical care and beyond.” (archive) [306] O(sqrt(N)) rather than O(N), per Grover’s algorithm. Which is a pretty good speedup for as long as nobody else knows you have a quantum computer. [307] Vitalik Buterin. Comment on “Why does Greg Maxwell and many others from Bitcoin Core not respect Vitalik?”

pages: 523 words: 148,929

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku

Your computer is just as dumb today as it was yesterday.) So there are at least two approaches to modeling the brain. The first, the traditional top-down approach, is to treat robots like digital computers, and program all the rules of intelligence from the very beginning. A digital computer, in turn, can be broken down into something called a Turing machine, a hypothetical device introduced by the great British mathematician Alan Turing. A Turing machine consists of three basic components: an input, a central processor that digests this data, and an output. All digital computers are based on this simple model. The goal of this approach is to have a CD-ROM that has all the rules of intelligence codified on it. By inserting this disk, the computer suddenly springs to life and becomes intelligent.

pages: 489 words: 148,885

Accelerando by Stross, Charles

Aineko glances at Amber, sees her thunderous expression, and hastily changes the subject: "The solution was intuitively obvious, just not to humans. You're so verbal." Lifting a hind paw, she scratches behind her left ear for a moment then pauses, foot waving absentmindedly. "Besides, the CETI team was searching under the street lights while I was sniffing around in the grass. They kept trying to find primes; when that didn't work, they started trying to breed a Turing machine that would run it without immediately halting." Aineko lowers her paw daintily. "None of them tried treating it as a map of a connectionist system based on the only terrestrial components anyone had ever beamed out into deep space. Except me. But then, your mother had a hand in my wetware, too." "Treating it as a map –" Amber stops. "You were meant to penetrate Dad's corporate network?" "That's right," says the cat.

For example, a collapse of the false vacuum," Manfred insists, slightly uncoordinated and slurring his vowels under the influence of the first glass of fruit punch he's experienced in nigh-on twenty real-time years. His body is young and still relatively featureless, hair still growing out, and he's abandoned his old no-implants fetish at last to adopt an array of interfaces that let him internalize all the exocortex processes that he formerly ran on an array of dumb Turing machines outside his body. He's standing on his own sense of style and is the only person in the room who isn't wearing some variation of dinner jacket or classical evening dress. "Entangled exchange via routers is all very well, but it won't let us escape the universe itself – any phase change will catch up eventually, the network must have an end. And then where will we be, Sameena?" "I'm not disputing that."

pages: 479 words: 144,453

Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari

The most interesting emerging religion is Dataism, which venerates neither gods nor man – it worships data. 11 The Data Religion Dataism says that the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing.1 This may strike you as some eccentric fringe notion, but in fact it has already conquered most of the scientific establishment. Dataism was born from the explosive confluence of two scientific tidal waves. In the 150 years since Charles Darwin published On the Origin of Species, the life sciences have come to see organisms as biochemical algorithms. Simultaneously, in the eight decades since Alan Turing formulated the idea of a Turing Machine, computer scientists have learned to engineer increasingly sophisticated electronic algorithms. Dataism puts the two together, pointing out that exactly the same mathematical laws apply to both biochemical and electronic algorithms. Dataism thereby collapses the barrier between animals and machines, and expects electronic algorithms to eventually decipher and outperform biochemical algorithms.

pages: 202 words: 62,901

The People's Republic of Walmart: How the World's Biggest Corporations Are Laying the Foundation for Socialism by Leigh Phillips, Michal Rozworski

A distributed planning network of quite modest personal computers, linked by an economy-wide telecommunications system and employing a standardized system of product identification and computer databases, would be sufficient. It would, however, require universal access to computers and the free flow of information. Given a new lease on life by the advent of new technologies, the debate has continued into the 2000s. A 2002 rejoinder to the Cockshott-Cottrell perspective from Polish logician Witold Marciszewski of the University of Warsaw argued that socialist planning would require what are called super-Turing machines, or hypercomputers—theoretical computers that go beyond the computability of standard computers, which some claim are not only physically impossible to build, but logically impossible to devise. And in 2006, Robert Murphy, a young Austrian School economist with the Pacific Research Institute, a Californian free-market think tank, employed set theorist Georg Cantor’s diagonal argument to claim that the list of prices in any planning board’s matrix would need to contain not merely billions or trillions of prices, but—as with the set of all real numbers or set of all subsets of integers—an uncountably infinite number of them, therefore making economy-wide socialist calculation impossible in principle, not just in practice, because the full list of all prices could never be listed.

pages: 333 words: 64,581

Clean Agile: Back to Basics by Robert C. Martin

I also imagine that the first code he wrote for the Automatic Computing Engine, in 1946, was written in small steps, with lots of desk checking, and even some real testing. 1. Turing, A. M. 1936. On computable numbers, with an application to the Entscheidungsproblem [proof]. Proceedings of the London Mathematical Society, 2 (published 1937), 42(1):230–65. The best way to understand this paper is to read Charles Petzold’s masterpiece: Petzold, C. 2008. The Annotated Turing: A Guided Tour through Alan Turing’s Historic Paper on Computability and the Turing Machine. Indianapolis, IN: Wiley. The early days of software are loaded with examples of behavior that we would now describe as Agile. For example, the programmers who wrote the control software for the Mercury space capsule worked in half-day steps that were punctuated by unit tests. Much has been written elsewhere about this period. Craig Larman and Vic Basili wrote a history that is summarized on Ward Cunningham’s wiki,2 and also in Larman’s book, Agile & Iterative Development: A Manager’s Guide.3 But Agile was not the only game in town.

pages: 259 words: 67,456

The Mythical Man-Month by Brooks, Jr. Frederick P.

., "System quality through structured programming," AFIPS Proc FJCC, 41-1 (1972), pp. 339-343. Dahl, O. J., E. W. Dijkstra, and C. A. R. Hoare, Structured Programming. London and New York: Academic Press, 1972. This volume contains the fullest treatment. See also Dijkstra's germinal letter, "GOTO statement considered harmful," CACM, 11, 3 (March, 1968), pp. 147-148. Bohm, C., and A. Jacopini, "Flow diagrams, Turing machines, and languages with only two formation rules," CACM, 9, 5 (May, 1966), pp. 366-371. Codd, E. F., E. S. Lowry, E. McDonough, and C. A. Scalzi, "Multiprogramming STRETCH: Feasibility considerations," CACM, 2, 11 (Nov., 1959), pp. 13-17. Strachey, C., "Time sharing in large fast computers," Proc. Int. Conf. on Info. Processing, UNESCO (June, 1959), pp. 336-341. See also Codd's remarks on p. 341, where he reported progress on work like that proposed in Strachey's paper.

pages: 218 words: 63,471

How We Got Here: A Slightly Irreverent History of Technology and Markets by Andy Kessler

The Robinson used relays and other electronics and required two sets of paper tapes to read 2000 characters a second. Progress was slow as the tape kept ripping. Two London-based post office engineers, Tommy Flowers and Alan Coombes, improved on the Robinson, more than doubling the speed, by using 2400 vacuum tubes and a number of servomotors, instead of relays and fragile paper tape. But it was still a Turing machine. Its name was Colossus, a mighty name, and it was moved to Bletchley Park and started operating in December 1943. It had both logic and memory and since it could change its program based on which code needed to be deciphered, it was a true Turing programmable computer. The Allies relied on the Colossus to help determine Nazi troop concentrations and the best D-Day landing points. *** But in 1945, after VE and VJ days and the end of the war, we all know what happened next.

Raw Data Is an Oxymoron by Lisa Gitelman

For the fundamentally unproductive maintenance of the difference between man and machine from a systems theoretical perspective, see Peter Fuchs, “Kommunikation mit Computern? Zur Korrektur einer Fragestellung,” Sociologia Internationalis 29 (1991): 1–30, esp. 8f. 38. For the only ontological assumption of systems theory, see the first sentence of the first chapter of the outline of a general theory, Luhmann, Soziale Systeme, 30. 39. Ibid., 240, my emphasis of the term, which (not coincidentally?) refers to the Turing machine. 40. Luhmann, Die Gesellschaft der Gesellschaft, 82. 41. See first the foundational chapter in Luhmann, Soziale Systeme, 191ff. In addition, on the question of the extent to which communication applies to computers, see Peter Fuchs, “Kommunikation mit Computern? Zur Korrektur einer Fragestellung,” Sociologia Internationalis 29 (1991): 1–30. 42. Luhmann, Die Gesellschaft der Gesellschaft, 82f. 43.

pages: 266 words: 79,297

Forge Your Future with Open Source by VM (Vicky) Brasseur

Sometimes it feels like you live your life in a maze of twisty little passages, all alike. Now you can code your way out. Jamis Buck (286 pages) ISBN: 9781680500554 $38 Good Math Mathematics is beautiful—and it can be fun and exciting as well as practical. Good Math is your guide to some of the most intriguing topics from two thousand years of mathematics: from Egyptian fractions to Turing machines; from the real meaning of numbers to proof trees, group symmetry, and mechanical computation. If you’ve ever wondered what lay beyond the proofs you struggled to complete in high school geometry, or what limits the capabilities of the computer on your desk, this is the book for you. Mark C. Chu-Carroll (282 pages) ISBN: 9781937785338 $34 * * *

pages: 252 words: 79,452

To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death by Mark O'Connell

Chichester: John Wiley & Sons, 2014. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014. Čapek, Karel. R.U.R. (Rossum’s Universal Robots): A Fantastic Melodrama. Trans. Claudia Novack. London: Penguin, 2004. Chamayou, Grégoire. Drone Theory. London: Penguin, 2015. Cicurel, Ronald, and Miguel Nicolelis. The Relativistic Brain: How It Works and Why It Cannot Be Simulated by a Turing Machine. Montreux: Kios Press, 2015. Clarke, Arthur C. The City and the Stars. New York: Harcourt, Brace, 1956. Descartes, René. Discourse on Method and Meditations on First Philosophy. Trans. Donald A. Cress. Indianapolis: Hackett Classics, 1998. ———. Treatise of Man. Trans. Thomas Steele Hall. Amherst, NY: Prometheus, 2003. Dick, Philip K. Do Androids Dream of Electric Sheep? New York: Doubleday, 1968.

pages: 292 words: 88,319

The Infinite Book: A Short Guide to the Boundless, Timeless and Endless by John D. Barrow

So ½ S = ¼ + ⅛ + 1/16 + ... = S – ½, since the right-hand side is just the series without its first term, which is ½. Hence, ½ S = ½ and S = 1. 5. H. Weyl, Philosophy of Mathematics and Natural Science, Princeton University Press, 1949, p. 42. Weyl’s mention of decision procedures and machines is interesting. Mathematics had just emerged from a pre-war period which saw the advent in print of Alan Turing’s ‘Turing machine’, the archetypal universal computer that is indistinguishable from a human calculator (the original meaning of the word ‘computer’) and the question, answered in the negative by Turing, of whether a finite computing machine would be able to decide the truth or falsity of all statements of mathematics in a finite time. Turing showed that there exist uncomputable problems for which the machine would never halt if set to work on deciding them.

pages: 322 words: 88,197

Wonderland: How Play Made the Modern World by Steven Johnson

You could dream up new melodies for it, instruct it to produce new patterns of sound. The water clocks the Banu Musa built were automated. But their “instrument” was endowed with a higher-level property. It was programmable. Conceptually, this was a massive leap forward: machines designed specifically to be open-ended in their functionality, machines controlled by code and not just mechanics. A direct line of logic connects the “Instrument Which Plays by Itself” to the Turing machines that have so transformed life in the modern age. You can think of the instrument as the moment where the Manichean divide between hardware and software first opened up. An invention that itself makes invention easier, faster, more receptive to trial and error. A virtual machine. Yet it must have been hard to see (or hear) its significance at the time. To the untutored spectator, it might have paled beside the animated peacocks.

pages: 311 words: 94,732

The Rapture of the Nerds by Cory Doctorow, Charles Stross

Tell your guidebooks or familiars or whatever to download Exhibit B for you. As you can see, the genome of the said item is chimeric and shows signs of crude tampering, but it’s largely derived from Drosophila, Mus musculus, and a twenty-first-century situationist artist or politician called Sarah Palin. Large chunks of its genome appear to be wholly artificial, though, written entirely in Arabic, and there’s an aqueous-phase Turing machine partially derived from octopus ribosomes to interpret them. It looks as if something has been trying to use the sharia code as a platform for implementing a legal virtual machine. We’re not sure why, unless it’s an obscure joke.” “Does the metasphere have a sense of humor?” Huw says. He clears his throat—the dust must be getting to him, because it feels as if he’s developing a ticklish cough.

pages: 317 words: 101,074

It is hard to sort out the paternity of the modern computer, because much of the thinking and work was done in the United States and Britain during World War II under the cloak of wartime secrecy. Three major contributors were Alan Turing, Claude Shannon, and John von Neumann. In the mid-1930s, Alan Turing, like Babbage a superlative Cambridge-trained British mathematician, proposed what is known today as a Turing machine. It was his version of a completely general-purpose calculating machine that could be instructed to work with almost any kind of information. In the late 1930s, when Claude Shannon was still a student, he demonstrated that a machine executing logical instructions could manipulate information. His insight, the subject of his master's thesis, was about how computer circuits—closed for true and open for false—could perform logical operations, using the number 1 to represent "true" and 0 to represent "false."
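Shannon's insight can be sketched directly: with 1 standing for a closed switch ("true") and 0 for an open one ("false"), switches wired in series behave like logical AND and switches wired in parallel like OR. A toy model (the function names are mine, purely for illustration):

```python
# Shannon's master's-thesis observation: relay circuits compute logic.
# 1 = closed switch / true, 0 = open switch / false.

def series(a, b):
    # Current flows through two switches in series only if both are closed: AND.
    return a & b

def parallel(a, b):
    # Current flows through two switches in parallel if either is closed: OR.
    return a | b

# Truth tables for both circuits.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  series={series(a, b)}  parallel={parallel(a, b)}")
```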

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by Amy Webb

The Society of Mind. New York: Simon & Schuster, 1985. Neema, S. “Assured Autonomy.” Defense Advanced Research Projects Agency. https://www.darpa.mil/program/assured-autonomy. Osnos, E. Age of Ambition: Chasing Fortune, Truth, and Faith in the New China. New York: Farrar, Straus, and Giroux, 2015. Petzold, C. The Annotated Turing: A Guided Tour Through Alan Turing’s Historic Paper on Computability and the Turing Machine. Indianapolis, IN: Wiley Publishing, 2008. Pylyshyn, Z. W., ed. The Robot’s Dilemma: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex, 1987. Riedl, M. O. “The Lovelace 2.0 Test of Artificial Creativity and Intelligence.” https://arxiv.org/pdf/1410.6142.pdf. Schneier, B. “The Internet of Things Is Wildly Insecure—and Often Unpatchable.” Wired, January 6, 2014. https://www.wired.com/2014/01/theres-no-good-way-to-patch-the-Internet-of-things-and-thats-a-huge-problem/.

pages: 366 words: 107,145

Fuller Memorandum by Stross, Charles

I can't see: my arm is a monstrous, distracting wall of ache, I'm still handcuffed, and now they've hooded me and pinioned my ankles. So much for making a run for it. They're obviously taking me somewhere indoors-- Indoors? Something tells me that, yes, we are indoors now. Maybe it's the lack of fresh air, or the echoes, or the ground beneath this trolley's wheels. We must be nearly there. I distract myself, trying to recall the transition table for Cantor's 2,5 Universal Turing Machine--the one with the five chess pieces and the board. I was always crap at chess, never really got into it deeply enough at school, but I understand UTMs, and if I can hold enough moves in my head before the gray stuff turns to Swiss cheese I might be able to code something up. Damn it, Bob, you're a magician! Think of something! But it all blurs, when you're in pain. Like most of my ilk I work best in a nice warm office, with a honking great monitor on my desk and a can of Pringles in front of me.

pages: 335 words: 107,779

Some Remarks by Neal Stephenson

The debate on free will vs. determinism is no more settled today than it was at the time of the Leibniz-Clarke Correspondence, and so in that sense (at least) Monadology is still interesting as a gambit, which different observers might see as heroic, ingenious, or desperate, to cut that Gordian knot by making free minds or souls into the fundamental components of the universe. 2. Leibniz’s interpreters made use of the vocabulary at their disposal to translate his terminology into words such as “mind,” “soul,” “cognition,” “endeavour,” etc. This, however, was before the era of information theory, Turing machines, and digital computers, which have supplied us with a new set of concepts, a lexicon, and a rigorous science pertaining to things that, like monads, perform a sort of cogitation but are neither divine nor human. A translator of Leibniz’s work, beginning in a.d. 2010 from a blank sheet of paper, would, I submit, be more likely to use words like “computer” and “computation” than “soul” and “cognition.”

pages: 477 words: 106,069

The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century by Steven Pinker

The new entries in AHD 5 are a showcase for the linguistic exuberance and recent cultural history of the Anglosphere: Abrahamic, air rage, amuse-bouche, backward-compatible, brain freeze, butterfly effect, carbon footprint, camel toe, community policing, crowdsourcing, Disneyfication, dispensationalism, dream catcher, earbud, emo, encephalization, farklempt, fashionista, fast-twitch, Goldilocks zone, grayscale, Grinch, hall of mirrors, hat hair, heterochrony, infographics, interoperable, Islamofascism, jelly sandal, jiggy, judicial activism, ka-ching, kegger, kerfuffle, leet, liminal, lipstick lesbian, manboob, McMansion, metabolic syndrome, nanobot, neuroethics, nonperforming, off the grid, Onesie, overdiagnosis, parkour, patriline, phish, quantum entanglement, queer theory, quilling, race-bait, recursive, rope-a-dope, scattergram, semifreddo, sexting, tag-team, time-suck, tranche, ubuntu, unfunny, universal Turing machine, vacuum energy, velociraptor, vocal percussion, waterboard, webmistress, wetware, Xanax, xenoestrogen, x-ray fish, yadda yadda yadda, yellow dog, yutz, Zelig, zettabyte, zipline If I were allowed to take just one book to the proverbial desert island, it might be a dictionary. who and whom. When Groucho Marx was once asked a long and orotund question, he replied, “Whom knows?” A 1928 short story by George Ade contains the line “‘Whom are you?’

pages: 407 words: 104,622

The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution by Gregory Zuckerman

Watson Research Center, 172 Thorp, Edward, 30, 97–98, 127–29, 130, 163 tick data, 112 Toll, John, 33 tradeable effects, 111 trading errors, 166 trading signals, 3, 83–84, 203–5, 246–47, 312 trenders, 73 trend following, 96, 100 Trump, Donald, xviii, 281–94, 302, 304–5 Trump, Ivanka, 281 Trump, Melania, 285 Trump National Golf Club, 282 Tsai, Gerald, Jr., 123 Turing, Alan (Turing machine), 3, 148 “turtles,” 125 Tversky, Amos, 152 twenty-four-hour effect, 109 20th Century Fox, 10–11 Two Sigma Investments, 310, 312 Tykhe Capital, 256 United Airlines, 166 United Church of Christ, 87–88 United Fruit Company, 19 University of California, Berkeley, 3, 17–19, 20, 38, 68–69, 92–93, 95 University of California, Irvine, 81 University of California, Los Angeles, 36–37 University of Cambridge, 147 University of Chicago, 30, 72, 256 University of Erlangen-Nuremberg, 300–301 University of Illinois, 171 University of New Mexico, 169–70 University of Pennsylvania, 176, 185, 236, 270 University of Rochester, 169 value style of investing, 96 Vietnam War, 31–32, 48 Villani, Dario, 308 Vinik, Jeffrey, 163 Volcker, Paul, 65 Volfbeyn, Pavel, 238, 241, 242, 252–54 von Neumann, John, 67 Wadsworth, Jack, Jr., 89 Wallace, Mike, 13 Wall Street (movie), 106 Wall Street Journal, 57, 76, 122, 124, 128, 146, 172, 198, 275, 294, 303, 318 Walters, Barbara, 13 Wander, Wolfgang, 300–301, 300n Ward, Kelli, 304 WarGames (movie), 192 Washington Post, 282 weekend effect, 109–10 Weinberger, Peter, 201, 233–34 Weinstein, Boaz, 299 Welch, Lloyd, 46–48 West Meadow Beach, 34, 235 Wheeler, Langdon, 106 white supremacism, 292–93, 299–300 Whitney, Glen, at Renaissance, 235–36 compensation, 200–201, 229 departure, 262 job interviews, 233 Kononenko and, 241, 242–43, 262 Mercer and, 231–32, 235 Wild One, The (movie), 17 Wiles, Andrew, 69–70 Witten, Edward, 38 World Bank, 56 WorldCom, 226 World Trade Center mosque controversy, 278 Yale University, 176 Yang, Chen Ning, 33 Yau, Shing-Tung, 35 Yiannopoulos, Milo, 300, 302 Zeno’s paradoxes, 12 ABOUT THE AUTHOR Gregory Zuckerman is the author of The Greatest Trade Ever and The Frackers, and is a Special Writer at the Wall Street Journal.

pages: 402 words: 110,972

Nerds on Wall Street: Math, Machines and Wired Markets by David J. Leinweber

Thinking Machines, founded by computational superstar Danny Hillis (son-in-law of Marvin Minsky, the pope of symbolic AI), gathered some of the leading lights to build and program massive machines with up to 64K (65,536) processors. That is a lot more than one, but still a lot less than the 100 billion neurons in the brain. You don’t need a machine with a billion processors to try out solutions that would use them. A simulator will do fine, if not as fast. For theory buffs, this is an example of the idea of a universal computation; a Turing machine or its equivalent can emulate anything you want. The Nintendo 64 emulators you can run on your PC to play Pac-Man are another. The neural net movement exploited this idea, seeking to realize learning by mimicking structure and function. Another branch of the turn to biologically inspired approaches to learning used the intriguing idea of mimicking evolution. The mechanics of evolution at the chromosome level—the processes of mutation and crossover, dominant and recessive traits—are understood well.

pages: 561 words: 120,899

The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant From Two Centuries of Controversy by Sharon Bertsch McGrayne

Their ignorance proved fortunate. Despite the strange reputation of British mathematicians, the operational head of GC&CS prepared for war by quietly recruiting a few nonlinguists—“men of the Professor type”5—from Oxford and Cambridge universities. Among that handful of men was Alan Mathison Turing, who would father the modern computer, computer science, software, artificial intelligence, the Turing machine, the Turing test—and the modern Bayesian revival. Turing had studied pure mathematics at Cambridge and Princeton, but his passion was bridging the gap between abstract logic and the concrete world. More than a genius, Turing had imagination and vision. He had also developed an almost unique set of interests: the abstract mathematics of topology and logic; the applied mathematics of probability; the experimental derivation of fundamental principles; the construction of machines that could think; and codes and ciphers.

pages: 387 words: 119,409

Work Rules!: Insights From Inside Google That Will Transform How You Live and Lead by Laszlo Bock

xii I ran the abbreviation “NP” by my close friend Gus Mattammal, who has degrees in math, physics, and business and is director of Advantage Testing of Silicon Valley, an elite tutoring and test preparation firm. I figured if anyone could explain NP, he could. Gus told me, “Class NP contains all computational problems such that the corresponding decision problem can be solved in polynomial time by a nondeterministic Turing machine.” Ummm.… He then translated for me: “Unless you’re a computer scientist, ‘NP problems’ can just be used to stand for ‘really, really hard problems to solve.’ ” xiii In the spring of 2012, we started deploying algorithms to better match candidates with jobs. By mid-2013, hiring yield had increased 28 percent (that is, for every 1,000 applicants, we are hiring 28 percent more people than in the past).
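Gus's definition has an equivalent, friendlier reading: NP is the class of decision problems whose "yes" answers come with a certificate that a deterministic machine can check in polynomial time. Subset-sum makes the asymmetry concrete (my own illustrative example, not from the text): finding a subset that hits the target is the hard part; checking a proposed subset is easy.

```python
# Certificate view of NP: verification is cheap even when search is hard.

def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time check that the indices in `certificate` pick out a
    subset of `numbers` summing to `target`."""
    return sum(numbers[i] for i in certificate) == target

# Indices 2 and 4 pick out 4 and 5, which sum to the target 9.
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # -> True
```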

pages: 409 words: 125,611

The Great Divide: Unequal Societies and What We Can Do About Them by Joseph E. Stiglitz

A lot of intellectual effort has been devoted to devising better ways of maximizing advertising and marketing budgets—targeting customers, especially the affluent, who might actually buy the product. But standards of living might have been raised even more if all of this innovative talent had been allocated to more fundamental research—or even to more applied research that could have led to new products. Yes, being better connected with each other, through Facebook or Twitter, is valuable. But how can we compare these innovations with those like the laser, the transistor, the Turing machine, and the mapping of the human genome, each of which has led to a flood of transformative products? Of course, there are grounds for a sigh of relief. Although we may not know how much recent technological innovations are contributing to our well-being, at least we know that, unlike the wave of financial innovations that marked the precrisis global economy, they have had a positive effect. ______________ * Project Syndicate, March 9, 2014.

pages: 566 words: 122,184

Code: The Hidden Language of Computer Hardware and Software by Charles Petzold

In Great Britain, the Colossus computer (first operational in 1943) was dedicated to cracking the German "Enigma" code-making machine. Contributing to this project (and to some later British computer projects) was Alan M. Turing (1912–1954), who is most famous these days for writing two influential papers. The first, published in 1937, pioneered the concept of "computability," which is an analysis of what computers can and can't do. He conceived of an abstract model of a computer that's now known as the Turing Machine. The second famous paper Turing wrote was on the subject of artificial intelligence. He introduced a test for machine intelligence that's now known as the Turing Test. At the Moore School of Electrical Engineering (University of Pennsylvania), J. Presper Eckert (1919–1995) and John Mauchly (1907–1980) designed the ENIAC (Electronic Numerical Integrator and Computer). It used 18,000 vacuum tubes and was completed in late 1945.

When Computers Can Think: The Artificial Intelligence Singularity by Anthony Berglas, William Black, Samantha Thalind, Max Scratchmann, Michelle Estes

Turing halting problem The first line of arguments are based on the limits to computation proved by Alan Turing and Kurt Gödel in the 1930s. Long before significant real computers could be built, Turing created a very simple theoretical computer in which programs could be written. He then proved that any other more sophisticated computer could not have any more computational power than his simple machine. In other words, if you could write a program on a more complex computer, then that program could be translated to run on his Turing Machine. Being a logician Turing was unconcerned about practical details as to how long the program would take to run, but he showed that once a computer had some basic characteristics it could run any program that could be written. This includes any program that could be implemented with neurons. Turing then used a clever argument to show that there are some programs that cannot be written at all.
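Turing's "clever argument" can be sketched in code: assume a total function `halts(program, input)` existed, then build a program that defeats it. The `halts` oracle below is a placeholder that cannot actually be implemented; that impossibility is the whole point of the argument.

```python
# Sketch of the halting-problem diagonal argument. `halts` is a
# hypothetical oracle assumed for contradiction -- no such total
# function can be written, which is what Turing proved.

def halts(program, data):
    # Supposed to return True iff program(data) eventually halts.
    raise NotImplementedError("no such oracle can exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about
    # `program` run on itself.
    if halts(program, program):
        while True:      # oracle says it halts -> loop forever
            pass
    return "halted"      # oracle says it loops -> halt immediately

# Asking about troublemaker(troublemaker) is contradictory:
#   halts(...) == True  implies it loops forever;
#   halts(...) == False implies it halts.
# So the assumed oracle cannot exist.
```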

Jennifer Morgue by Stross, Charles

This realm, being hosted on Bosch, is scattered with traps that are superclassed into a bunch of scanner routines from Project Aurora and sniff for any taint of the real supernatural. Probably he whiffed of Laundry business — and that set off one of the traps, which yanked him in." "How do you get inside a game?" asks Pinky, looking hopeful. "Could you get me into Grand Theft Auto: Castro Club Extreme?" Brains glances at him in evident disgust. "You can virtualize any universal Turing machine," he sniffs. "Okay, Bob. What precisely do you need from us in order to get the kid out of there?" I point to the laptop: "I need that, running the Dungeon Master client inside the game. Plus a class four summoning grid, and a lot of luck." My guts clench. "Make that a lot more luck than usual." "Running the DM client — " Brains goes cross-eyed for a moment " — is it reentrant?" "It will be."

pages: 331 words: 60,536

The Sovereign Individual: How to Survive and Thrive During the Collapse of the Welfare State by James Dale Davidson, Rees Mogg

The fact that genetically influenced sacrifice on behalf of the nation-state often militated against the evolutionary purpose of kin selection also tells you that humans are adaptable enough to adjust to many circumstances for which we were not genetically programmed in the conditions of the Stone Age. As Tudge elaborates in describing the "extreme generalness" of human beings: "We are the animal equivalent of the Turing machine: the universal device that can be turned to any task." 72 Which tendency will come to the surface in the coming transition crisis? Probably both. 224 The commercialization of sovereignty itself depends upon the willingness of hundreds of thousands of Sovereign Individuals and many millions of others to deploy their assets in the "First Bank of Nowhere" in order to secure immunity from direct compulsion.

Mastering Blockchain, Second Edition by Imran Bashir

Script is a limited language, however, in the sense that it only allows essential operations that are necessary for executing transactions, but it does not allow for arbitrary program development. Think of it as a calculator that only supports standard preprogrammed arithmetic operations. As such, the Bitcoin script language cannot be called Turing complete. In simple words, a Turing complete language is one that can perform any computation. It is named after Alan Turing, who developed the idea of the Turing machine, which can run any algorithm, however complex. Turing complete languages need loops and branching capability to perform complex computations. Therefore, Bitcoin's scripting language is not Turing complete, whereas Ethereum's Solidity language is. To facilitate arbitrary program development on a blockchain, a Turing complete programming language is needed, and it is now a very desirable feature of blockchains.
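The calculator analogy can be made concrete: a straight-line stack evaluator with no loops or backward jumps executes at most one step per opcode and therefore always terminates, which is precisely why such a language is not Turing complete. The opcode names below are simplified stand-ins of my own, not actual Bitcoin Script opcodes:

```python
# A straight-line stack evaluator: one forward pass over the script,
# no jumps, so it is guaranteed to halt in len(script) steps.

def run_script(script):
    stack = []
    for op, *args in script:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "EQUAL":
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
    return stack.pop()

# The kind of fixed check a transaction script performs: does 2 + 3 equal 5?
print(run_script([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 5), ("EQUAL",)]))
```

Adding loops and conditional backward jumps to this evaluator would make halting no longer guaranteed, and that unbounded looping is exactly the capability Turing completeness requires.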

pages: 579 words: 183,063

Tribe of Mentors: Short Life Advice From the Best in the World by Timothy Ferriss

Janna Levin TW/IG: @jannalevin jannalevin.com JANNA LEVIN is the Tow Professor of physics and astronomy at Barnard College of Columbia University, and has contributed to an understanding of black holes, the cosmology of extra dimensions, and gravitational waves in the shape of spacetime. She is also director of sciences at Pioneer Works, a cultural center dedicated to experimentation, education, and production across disciplines. Her books include How the Universe Got Its Spots and a novel, A Madman Dreams of Turing Machines, which won the PEN/Bingham Prize. She was recently named a Guggenheim Fellow, a grant awarded to those “who have demonstrated exceptional capacity for productive scholarship.” Her latest book, Black Hole Blues and Other Songs from Outer Space, is the inside story on the discovery of the century: the sound of spacetime ringing from the collision of two black holes over a billion years ago. * * * How has a failure, or apparent failure, set you up for later success?

pages: 647 words: 43,757

Types and Programming Languages by Benjamin C. Pierce

., Q, will appear in the position of U—in effect, performing an "indirect branch" through register 1 to the stream of instructions represented by Q. Conditional constructs and arithmetic (successor, predecessor, and zero-test) can be encoded using a generalization of this trick. Putting all of this together, we arrive at a proof of undecidability via a reduction from two-counter machines—a simple variant on ordinary Turing machines, consisting of a finite control and two counters, each holding a natural number—to subtyping statements. 28.5.5 Theorem [Pierce, 1994]: For each two-counter machine M, there exists a subtyping statement S(M) such that S(M) is derivable in full F<: iff the execution of M halts. Thus, if we could decide whether any subtype statement is provable, then we could also decide whether any given two-counter machine will eventually halt.
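The two-counter machine model mentioned in the proof is easy to make concrete: a program counter over a finite instruction table, plus two natural-number counters supporting increment and decrement-or-branch-on-zero. A sketch with my own instruction encoding (not the one used in Pierce's reduction):

```python
# Two-counter machine interpreter: finite control + two counters.
# Instructions: ("inc", reg, next_pc)
#               ("decjz", reg, next_if_nonzero, next_if_zero)

def run_2cm(program, counters, start=0, halt=-1, max_steps=100_000):
    pc = start
    c = list(counters)
    for _ in range(max_steps):
        if pc == halt:
            return tuple(c)
        instr = program[pc]
        if instr[0] == "inc":
            _, reg, nxt = instr
            c[reg] += 1
            pc = nxt
        else:  # "decjz": decrement, or jump if the counter is zero
            _, reg, nz, z = instr
            if c[reg] == 0:
                pc = z
            else:
                c[reg] -= 1
                pc = nz
    raise RuntimeError("step budget exhausted (machine may not halt)")

# Example program: transfer counter 0 into counter 1,
# computing (0, a + b) from (a, b).
transfer = {
    0: ("decjz", 0, 1, -1),  # if c0 == 0 halt, else c0 -= 1
    1: ("inc", 1, 0),        # c1 += 1, loop back
}
print(run_2cm(transfer, (3, 5)))  # -> (0, 8)
```

Despite this simplicity, two-counter machines are Turing complete, which is why reducing their halting problem to subtyping statements proves the undecidability result.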

pages: 798 words: 240,182

The Transhumanist Reader by Max More, Natasha Vita-More

Other Logics for Nanocomputers Before going into other extensions of conventional digital logic, there is another form of nanocomputer that may appear earlier for technological reasons. That’s the molecular biocomputer. Figure 18.4 Register operation. Originally published in Extropy 13 (1994). Copyright © Max More. Imagine that a DNA molecule is a tape, upon which is written 2 bits of information per base pair (the DNA molecule is a long string of adenine-thymine and guanine-cytosine pairs). Imagine, in particular, this to be the tape of a Turing machine, which is represented by some humongous clump of special-purpose enzymes that reads the “tape,” changes state, replaces a base pair with a new one, and slides up and down the “tape.” If one could design the enzyme clump using conventional molecular biology techniques (and each of the individual functions it needs to do are done somewhere, somehow, by some natural enzyme) you’d have a molecular computer.

pages: 1,201 words: 233,519

Coders at Work by Peter Seibel

Seibel: Do you ever wish you had focused more on the design of the language, as a language? Knuth: I don't know. I guess so. In a way I resent having every language be universal because they'll be universal in a different way. It's a little bit like Unix having 30 definitions of regular expressions under one roof— depending on which part of Unix you're using you've got a slightly different flavor of regular expressions. If every tool that you have includes a Turing machine inside, is this really the way to go? I was really thinking of TeX as something that the more programming it had in it, the less it was doing its real mission of typesetting. When I put in the calculation of prime numbers into the TeX manual I was not thinking of this as the way to use TeX. I was thinking, “Oh, by the way, look at this: dogs can stand on their hind legs and TeX can calculate prime numbers.”

God Created the Integers: The Mathematical Breakthroughs That Changed History by Stephen Hawking

The third challenge remained. Turing set out to answer it, working on it from the spring of 1935 to the spring of 1936. In order to do so, he needed to make precise the notion of a decision procedure. He needed to formalize it. Perhaps inspired by a childhood interest in typewriters, Turing did this by expressing the concept of a decision procedure in terms of machines, machines now known as Turing machines. Turing recognized that typewriters can only write onto a sheet of paper. They have no ability to interpret the sheet of paper. That is left to the human typist. Turing realized that to eliminate the human component, the machine needed to be able to read input as well as write output. Abstracting and simplifying as much as possible, Turing supposed that his machines operated on a tape composed of squares, each of which could either contain a mark or be blank.

It moves right leaving symbols in place as they are found, and then writes a symbol when it encounters the first empty square, which represents the end of the first number and the start of the second. Now change the configuration to the following set of rules. Keep moving right leaving symbols in place as they are found in marked squares until coming to an empty square. This marks the end of the second number. Move left and erase the symbol found. A Turing machine with a slightly more complicated table of behavior can be defined to multiply two numbers. But these were simple processes. Turing realized that, in contrast, a decision-making machine of the sort required by Hilbert’s decision problem could not be directly constructed. At most, its existence could only be inferred. And if no such machine existed, that could only be inferred, perhaps by taking an inventory of all such machines, which is precisely what Turing did.
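The addition procedure described in this excerpt can be written out as an explicit machine table and simulated. The state names, the '1' marks, and the '_' blank below are illustrative choices, not taken from the book:

```python
# Minimal simulator for the unary-addition Turing machine sketched above.
# Tape: two unary numbers (runs of '1') separated by one blank ('_').
# The machine bridges the gap with a '1', runs to the end of the second
# number, then erases one '1' to restore the correct total.
def add_unary(tape):
    tape = list(tape) + ["_"]          # extra blank so the head can pass the end
    pos, state = 0, "fill_gap"
    transitions = {
        # (state, symbol): (write, move, next state)
        ("fill_gap",  "1"): ("1", +1, "fill_gap"),
        ("fill_gap",  "_"): ("1", +1, "seek_end"),   # bridge the two numbers
        ("seek_end",  "1"): ("1", +1, "seek_end"),
        ("seek_end",  "_"): ("_", -1, "erase_one"),  # end of second number
        ("erase_one", "1"): ("_",  0, "halt"),       # erase the surplus mark
    }
    while state != "halt":
        write, move, state = transitions[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return "".join(tape).strip("_")

# 3 + 2 in unary: add_unary("111_11") yields "11111"
```

Multiplication, as the excerpt notes, needs only a somewhat larger table of the same kind.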

pages: 1,799 words: 532,462

The Codebreakers: The Comprehensive History of Secret Communication From Ancient Times to the Internet by David Kahn

To prove it, he envisioned a mechanism that could move to the left or to the right an infinitely long tape marked into squares, and could read and change or read and leave unchanged the blank or the mark—the 0 or the 1—in each square. He demonstrated that this machine could compute anything that could be calculated. Then he showed that even this machine could not tell whether the unknown problems could be solved. This machine, later called the “universal Turing machine,” has come to be recognized as the idealization of general-purpose computers and Turing, therefore, as the intellectual father of the computer. This genius turned his mind to the problem of solving Enigma messages. He took the Poles’ bombe and advanced it by a quantum leap. He conceived a device that would take a cryptogram’s presumed plaintext—as the Poles had done with AnX, only longer—and run it through all possible rotor combinations until it found one that would yield the known ciphertext from the presumed plaintext.

., 975-76 Turning grilles, 308-09 Tut Latin, 822 Two-letter differential, 840-41, 847 Typewriter keyboards, 740-41 TYPEX, 510 Tyro, T, 89 Tyronian notes, 89 U-2 aircraft, 693, 720 U-110, 977 U-158, 504 U-505, 506 U-boats, 273, 466, 504-07 UBCHI, 301, 304 Ugaritic literature, 900 ULTRA, 601 Ultra Secret, The, 979 Unbreakable cipher, 398-400 Unicity distance, 750 Unicity point, 750 United States Air Force Security Service, 680-81 Army, 1, 12, 398, 427, 574-75, 577 Central Bureau, 577, 578 Central Intelligence Agency, 681, 684 colonial cryptology, 174-86 Data Encryption Standard, 979-81, 983 National Defense Research Committee, 558-60 Navy, 5, 12, 252, 386-88, 408, 415-19 passim, 503-04, 680, 969, 971 Philippines, Navy cryptanalytic unit, 10, 25, 47, 563, 564 poor pre-World War II cryptography, 488-89 2nd Signal Service Battalion, 576-77 Signal Companies (Radio Intelligence), 507, 578-79 Signal school, 321, 324, 325 Signal Security Service, 575, 611, 678 solution of American messages, 187, 460, 496-98, 556-57, 671 State Department, 488-501 superiority of current American cryptology, 730 takes world lead, 385 see also Army Security Agency; black chambers, American; censorship; Code Compilation Section; Combat Intelligence Unit; Federal Bureau of Investigation; Friedman; FRUPAC; G.2 A.6; Hitt; M-94; M-134; M-138; M-209; National Security Agency; Office of Strategic Services; OP-20-G; Radio Intelligence Division; Rochefort; Safford; SIGABA; Signal Intelligence Service; Signal Security Agency; SIGTOT; Stager; War Department Telegraph Code; word transposition Univacs, 726 Universal Pocket Code, 490 Universal Trade Code, 359, 846 Universal Turing machine, 976 Uruk, 76 Valerianus, P., 903 Valério, L. P. E., 240, 242 Vanek, V., 540 Variant Beaufort, 203, 242 Vassilyev, A. T., 619 Vatican. See papal cryptology Vātsyāyana, 74 Venice, 109-10, 114, 858 Ventris, M. G. F., 922-37 passim Verkuyl, J. A., 691 Vernam, G. S., 394, 403, 612 system, 395-403, 492, 501, 612, 825, 830, 984 Verne, J., 793-94 Vetterlein, Engineer, 555 Viaris, Marquis G.

pages: 1,076 words: 67,364

Haskell Programming: From First Principles by Christopher Allen, Julie Moronuki

Chapter 1 All You Need is Lambda Anything from almost nothing Even the greatest mathematicians, the ones that we would put into our mythology of great mathematicians, had to do a great deal of leg work in order to get to the solution in the end. Daniel Tammett 1.1 All You Need is Lambda This chapter provides a very brief introduction to the lambda calculus, a model of computation devised in the 1930s by Alonzo Church. A calculus is a method of calculation or reasoning; the lambda calculus is one process for formalizing a method. Like Turing machines, the lambda calculus formalizes the concept of effective computability, thus determining which problems, or classes of problems, can be solved. You may be wondering where the Haskell is. You may be contemplating skipping this chapter. You may feel tempted to skip ahead to the fun stuff when we build a project. DON’T. We’re starting from first principles here so that when we get around to building projects you know what you’re doing.
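The equivalence this passage points at, that the lambda calculus captures effective computability just as Turing machines do, can be glimpsed with Church numerals. The sketch below uses Python closures rather than Haskell, purely so it stands alone in this listing:

```python
# Church numerals: the number n is the function that applies f to x, n times.
# Everything here is bare lambdas, mirroring the untyped lambda calculus.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a numeral by applying it to "add one", starting from 0.
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
# to_int(plus(three)(three)) evaluates to 6
```

Arithmetic, pairs, booleans, and recursion can all be encoded the same way, which is why the calculus suffices as a complete model of computation.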

pages: 1,758 words: 342,766

Code Complete (Developer Best Practices) by Steve McConnell

[bib36entry69] Boehm, Barry. 2000b. "Unifying Software Engineering and Systems Engineering," IEEE Computer, March 2000, 114–116. [bib36entry70] Boehm-Davis, Deborah, Sylvia Sheppard, and John Bailey. 1987. "Program Design Languages: How Much Detail Should They Include?" International Journal of Man-Machine Studies 27, no. 4: 337–47. [bib36entry71] Böhm, C., and G. Jacopini. 1966. "Flow Diagrams, Turing Machines and Languages with Only Two Formation Rules." Communications of the ACM 9, no. 5: 366–71. [bib36entry72] Booch, Grady. 1987. Software Engineering with Ada, 2d ed. Menlo Park, CA: Benjamin/Cummings. [bib36entry73] Booch, Grady. 1994. Object Oriented Analysis and Design with Applications, 2d ed. Boston, MA: Addison-Wesley. [bib36entry74] Booth, Rick. 1997. Inner Loops: A Sourcebook for Fast 32-bit Software Development.

pages: 1,263 words: 371,402

The Year's Best Science Fiction: Twenty-Sixth Annual Collection by Gardner Dozois

My data mining had quickly turned up recurring segments, chunks of organised data differing only in detail. And it was Wilson’s intuition that these things were bits of executable code: programs you could run. Even as expressed in the Eaglets’ odd flowing language, he thought he recognised logical loops, start and stop statements. Mathematics may or may not be universal, but computing seems to be—my brother had found Turing machines, buried deep in an alien database. Wilson translated the segments into a human mathematical programming language, and set them to run on a dedicated processor. They turned out to be like viruses. Once downloaded on almost any computer substrate they organised themselves, investigated their environment, started to multiply, and quickly grew, accessing the data banks that had been downloaded from the stars with them.

pages: 1,351 words: 385,579

The Better Angels of Our Nature: Why Violence Has Declined by Steven Pinker

Bennett, “Abducted: The Amber Alert system is more effective as theater than as a way to protect children,” Boston Globe, Jul. 20, 2008. 213. Kids hit by parents driving kids: Skenazy, 2009, p. 176. 214. Counterproductive kidnapping alerts: D. Bennett, “Abducted: The Amber Alert system is more effective as theater than as a way to protect children,” Boston Globe, Jul. 20, 2008. 215. Alan Turing: Hodges, 1983. 216. Turing machines: Turing, 1936. 217. Can machines think?: Turing, 1950. 218. State-sponsored homophobia, past: Fone, 2000. Present: Ottosson, 2009. 219. More homophobia against gay men: Fone, 2000. More laws against male homosexuality: Ottosson, 2006. 220. More hate crimes against men: U.S. Department of Justice, FBI, 2008 Hate crime statistics, table 4, http://www2.fbi.gov/ucr/hc2008/data/table_04.html. 221.