superintelligent machines

26 results


pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

AI winter, AltaVista, Amazon Web Services, artificial general intelligence, Asilomar, Automated Insights, Bayesian statistics, Bernie Madoff, Bill Joy: nanobots, brain emulation, cellular automata, Chuck Templeton: OpenTable, cloud computing, cognitive bias, commoditize, computer vision, cuban missile crisis, Daniel Kahneman / Amos Tversky, Danny Hillis, data acquisition, don't be evil, drone strike, Extropian, finite state, Flash crash, friendly AI, friendly fire, Google Glasses, Google X / Alphabet X, Isaac Newton, Jaron Lanier, John Markoff, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, Loebner Prize, lone genius, mutually assured destruction, natural language processing, Nicholas Carr, optical character recognition, PageRank, pattern recognition, Peter Thiel, prisoner's dilemma, Ray Kurzweil, Rodney Brooks, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, smart grid, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, Stuxnet, superintelligent machines, technological singularity, The Coming Technological Singularity, Thomas Bayes, traveling salesman, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, zero day

When posed with this question, some of the most accomplished scientists I spoke with cited science-fiction writer Isaac Asimov’s Three Laws of Robotics. These rules, they blithely replied, would be “built in” to the AIs, so we have nothing to fear. They spoke as if this were settled science. We’ll discuss the three laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking or exchanging ideas about the problem. How to make friendly intelligent machines and what to fear from superintelligent machines has moved beyond Asimov’s tropes. Being highly capable and accomplished in AI doesn’t inoculate you from naïveté about its perils. I’m not the first to propose that we’re on a collision course. Our species is going to mortally struggle with this problem. This book explores the plausibility of losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe, levels that we cannot ourselves reach, and behaviors that probably won’t be compatible with our survival.

Thus the first ultraintelligent machine is the last invention that man need ever make … The Singularity has three well-developed definitions—Good’s, above, is the first. Good never used the term “singularity” but he got the ball rolling by positing what he thought of as an inescapable and beneficial milestone in human history—the invention of smarter-than-human machines. To paraphrase Good, if you make a superintelligent machine, it will be better than humans at everything we use our brains for, and that includes making superintelligent machines. The first machine would then set off an intelligence explosion, a rapid increase in intelligence, as it repeatedly self-improved, or simply made smarter machines. This machine or machines would leave man’s brainpower in the dust. After the intelligence explosion, man wouldn’t have to invent anything else—all his needs would be met by machines.

The last sentence of Good’s most often quoted paragraph should read in its entirety: Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control (italics mine). These two sentences tell us important things about Good’s intentions. He felt that we humans were beset by so many complex, looming problems—the nuclear arms race, pollution, war, and so on—that we could only be saved by better thinking, and that would come from superintelligent machines. The second sentence lets us know that the father of the intelligence explosion concept was acutely aware that producing superintelligent machines, however necessary for our survival, could blow up in our faces. Keeping an ultraintelligent machine under control isn’t a given, Good tells us. He doesn’t believe we will even know how to do it—the machine will have to tell us itself. Good knew a few things about machines that could save the world—he had helped build and run the earliest electrical computers ever, used at Bletchley Park to help defeat Germany.


pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell

3D printing, Ada Lovelace, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alfred Russel Wallace, Andrew Wiles, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, augmented reality, autonomous vehicles, basic income, blockchain, brain emulation, Cass Sunstein, Claude Shannon: information theory, complexity theory, computer vision, connected car, crowdsourcing, Daniel Kahneman / Amos Tversky, delayed gratification, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Ernest Rutherford, Flash crash, full employment, future of work, Gerolamo Cardano, ImageNet competition, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invention of the wheel, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Nash: game theory, John von Neumann, Kenneth Arrow, Kevin Kelly, Law of Accelerating Returns, Mark Zuckerberg, Nash equilibrium, Norbert Wiener, NP-complete, openstreetmap, P = NP, Pareto efficiency, Paul Samuelson, Pierre-Simon Laplace, positional goods, probability theory / Blaise Pascal / Pierre de Fermat, profit maximization, RAND corporation, random walk, Ray Kurzweil, recommendation engine, RFID, Richard Thaler, ride hailing / ride sharing, Robert Shiller, Rodney Brooks, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, smart cities, smart contracts, social intelligence, speech recognition, Stephen Hawking, Steven Pinker, superintelligent machines, Thales of Miletus, The Future of Employment, Thomas Bayes, Thorstein Veblen, transport as a service, Turing machine, Turing test, universal basic income, uranium enrichment, Von Neumann architecture, Wall-E, Watson beat the top human players on Jeopardy!, web application, zero-sum game

Our experience with nuclear physics suggests that it would be prudent to assume that progress could occur quite quickly and to prepare accordingly. If just one conceptual breakthrough were needed, analogous to Szilard’s idea for a neutron-induced nuclear chain reaction, superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared: if we built superintelligent machines with any degree of autonomy, we would soon find ourselves unable to control them. I am, however, fairly confident that we have some breathing space because there are several major breakthroughs needed between here and superintelligence, not just one.

Conceptual Breakthroughs to Come

The problem of creating general-purpose, human-level AI is far from solved. Solving it is not a matter of spending money on more engineers, more data, and bigger computers.

It would discover new concepts and actions, and these would allow it to improve its rate of discovery. It would make effective plans over increasingly long time scales. In summary, it’s not obvious that anything else of great significance is missing, from the point of view of systems that are effective in achieving their objectives. Of course, the only way to be sure is to build it (once the breakthroughs have been achieved) and see what happens.

Imagining a Superintelligent Machine

The technical community has suffered from a failure of imagination when discussing the nature and impact of superintelligent AI. Often, we see discussions of reduced medical errors, safer cars, or other advances of an incremental nature. Robots are imagined as individual entities carrying their brains with them, whereas in fact they are likely to be wirelessly connected into a single, global entity that draws on vast stationary computing resources.

Simply by asking the question, you now have access to search engine technology, courtesy of the AI system. Done! Trillions of dollars in value, just for the asking, and not a single line of additional code written by you. The same goes for any other missing invention or series of inventions: if humans could do it, so can the machine. This last point provides a useful lower bound—a pessimistic estimate—on what a superintelligent machine can do. By assumption, the machine is more capable than an individual human. There are many things an individual human cannot do, but a collection of n humans can do: put an astronaut on the Moon, create a gravitational-wave detector, sequence the human genome, run a country with hundreds of millions of people. So, roughly speaking, we create n software copies of the machine and connect them in the same way—with the same information and control flows—as the n humans.


pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence by John Brockman

agricultural Revolution, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic trading, artificial general intelligence, augmented reality, autonomous vehicles, basic income, bitcoin, blockchain, clean water, cognitive dissonance, Colonization of Mars, complexity theory, computer age, computer vision, constrained optimization, corporate personhood, cosmological principle, cryptocurrency, cuban missile crisis, Danny Hillis, dark matter, discrete time, Douglas Engelbart, Elon Musk, Emanuel Derman, endowment effect, epigenetics, Ernest Rutherford, experimental economics, Flash crash, friendly AI, functional fixedness, global pandemic, Google Glasses, hive mind, income inequality, information trail, Internet of things, invention of writing, iterative process, Jaron Lanier, job automation, Johannes Kepler, John Markoff, John von Neumann, Kevin Kelly, knowledge worker, loose coupling, microbiome, Moneyball by Michael Lewis explains big data, natural language processing, Network effects, Norbert Wiener, pattern recognition, Peter Singer: altruism, phenotype, planetary scale, Ray Kurzweil, recommendation engine, Republic of Letters, RFID, Richard Thaler, Rory Sutherland, Satyajit Das, Search for Extraterrestrial Intelligence, self-driving car, sharing economy, Silicon Valley, Skype, smart contracts, social intelligence, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, superintelligent machines, supervolcano, the scientific method, The Wisdom of Crowds, theory of mind, Thorstein Veblen, too big to fail, Turing machine, Turing test, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K

But there are some kinds of foolishness that seem only to afflict the very intelligent. Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person’s Kool-Aid. This is not to say that superintelligent machines pose no danger to humanity. It’s simply that there are many other more pressing and more probable risks facing us in this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is low, it’s surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat. Not coincidentally, the problem with this argument was first identified by some of its most vocal proponents.

Computers share knowledge much more easily than humans do, and they can keep that knowledge longer, becoming wiser than humans. Many forward-thinking companies already see this writing on the wall and are luring the best computer scientists out of academia with better pay and advanced hardware. A world with superintelligent-machine-run corporations won’t be that different for humans than it is now; it will just be better, with more advanced goods and services available for very little cost and more leisure time available to those who want it. Of course, the first superintelligent machines probably won’t be corporate; they’ll be operated by governments. And this will be much more hazardous. Governments are more flexible in their actions than corporations; they create their own laws. And as we’ve seen, even the best can engage in torture when they think their survival is at stake.

Even if no large leaps in understanding intelligence algorithmically are made, computers will eventually be able to simulate the workings of a human brain (itself a biological machine) and attain superhuman intelligence using brute-force computation. However, although computational power is increasing exponentially, supercomputer costs and electrical-power efficiency aren’t keeping pace. The first machines capable of superhuman intelligence will be expensive and require enormous amounts of electrical power—they’ll need to earn money to survive. The environmental playing field for superintelligent machines is already in place; in fact, the Darwinian game is afoot. The trading machines of investment banks are competing, for serious money, on the world’s exchanges, having put human day traders out of business years ago. As computers and algorithms advance beyond investing and accounting, machines will be making more and more corporate decisions, including strategic decisions, until they’re running the world.


pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI by John Brockman

AI winter, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, artificial general intelligence, Asilomar, autonomous vehicles, basic income, Benoit Mandelbrot, Bill Joy: nanobots, Buckminster Fuller, cellular automata, Claude Shannon: information theory, Daniel Kahneman / Amos Tversky, Danny Hillis, David Graeber, easy for humans, difficult for computers, Elon Musk, Eratosthenes, Ernest Rutherford, finite state, friendly AI, future of work, Geoffrey West, Santa Fe Institute, gig economy, income inequality, industrial robot, information retrieval, invention of writing, James Watt: steam engine, Johannes Kepler, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, Kickstarter, Laplace demon, Loebner Prize, market fundamentalism, Marshall McLuhan, Menlo Park, Norbert Wiener, optical character recognition, pattern recognition, personalized medicine, Picturephone, profit maximization, profit motive, RAND corporation, random walk, Ray Kurzweil, Richard Feynman, Rodney Brooks, self-driving car, sexual politics, Silicon Valley, Skype, social graph, speech recognition, statistical model, Stephen Hawking, Steven Pinker, Stewart Brand, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, telemarketer, telerobotics, the scientific method, theory of mind, Turing machine, Turing test, universal basic income, Upton Sinclair, Von Neumann architecture, Whole Earth Catalog, Y2K, zero-sum game

According to Omohundro’s argument, a superintelligent machine that has an off switch—which some, including Alan Turing himself, in a 1951 talk on BBC Radio 3, have seen as our potential salvation—will take steps to disable the switch in some way.* Thus we may face the prospect of superintelligent machines—their actions by definition unpredictable by us and their imperfectly specified objectives conflicting with our own—whose motivations to preserve their existence in order to achieve those objectives may be insuperable.

1001 REASONS TO PAY NO ATTENTION

Objections have been raised to these arguments, primarily by researchers within the AI community. The objections reflect a natural defensive reaction, coupled perhaps with a lack of imagination about what a superintelligent machine could do. None hold water on closer examination.

CHAPTER 2. Judea Pearl: The Limitations of Opaque Learning Machines Deep learning has its own dynamics, it does its own repair and its own optimization, and it gives you the right results most of the time. But when it doesn’t, you don’t have a clue about what went wrong and what should be fixed. CHAPTER 3. Stuart Russell: The Purpose Put into the Machine We may face the prospect of superintelligent machines—their actions by definition unpredictable by us and their imperfectly specified objectives conflicting with our own—whose motivations to preserve their existence in order to achieve those objectives may be insuperable. CHAPTER 4. George Dyson: The Third Law Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.

IQ is indeed a crude measure of human intelligence, but it is utterly meaningless for current AI systems, because their capabilities across different areas are uncorrelated. How do we compare the IQ of Google’s search engine, which cannot play chess, with that of Deep Blue, which cannot answer search queries? None of this supports the argument that because intelligence is multifaceted, we can ignore the risk from superintelligent machines. If “smarter than humans” is a meaningless concept, then “smarter than gorillas” is also meaningless, and gorillas therefore have nothing to fear from humans; clearly, that argument doesn’t hold water. Not only is it logically possible for one entity to be more capable than another across all the relevant dimensions of intelligence, it is also possible for one species to represent an existential threat to another even if the former lacks an appreciation for music and literature.


pages: 252 words: 79,452

To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death by Mark O'Connell

3D printing, Ada Lovelace, AI winter, Airbnb, Albert Einstein, artificial general intelligence, brain emulation, clean water, cognitive dissonance, computer age, cosmological principle, dark matter, disruptive innovation, double helix, Edward Snowden, effective altruism, Elon Musk, Extropian, friendly AI, global pandemic, impulse control, income inequality, invention of the wheel, Jacques de Vaucanson, John von Neumann, knowledge economy, Law of Accelerating Returns, life extension, lifelogging, Lyft, Mars Rover, means of production, Norbert Wiener, Peter Thiel, profit motive, Ray Kurzweil, RFID, self-driving car, sharing economy, Silicon Valley, Silicon Valley ideology, Singularitarianism, Skype, Stephen Hawking, Steve Wozniak, superintelligent machines, technological singularity, technoutopianism, The Coming Technological Singularity, Travis Kalanick, trickle-down economics, Turing machine, uber lyft, Vernor Vinge

One imagines the “AI arrow” creeping steadily up the scale of intelligence, moving past mice and chimpanzees, with AIs still remaining “dumb” because AIs cannot speak fluent language or write science papers, and then the AI arrow crosses the tiny gap from infra-idiot to ultra-Einstein in the course of one month or some similarly short period. At this point, the theory goes, things would change quite radically. And whether they would change for the better or for the worse is an open question. The fundamental risk, Nick argued, was not that superintelligent machines might be actively hostile toward their human creators, or antecedents, but that they would be indifferent. Humans, after all, weren’t actively hostile toward most of the species we’d made extinct over the millennia of our ascendance; they simply weren’t part of our design. The same could turn out to be true of superintelligent machines, which would stand in a similar kind of relationship to us as we ourselves did to the animals we bred for food, or the ones who fared little better for all that they had no direct dealings with us at all. About the nature of the threat, he was keen to stress this point: that there would be no malice, no hatred, no vengeance on the part of the machines.

What was it that these people were referring to when they spoke of existential risk? What was the nature of the threat, the likelihood of its coming to pass? Were we talking about a 2001: A Space Odyssey scenario, where a sentient computer undergoes some malfunction or other and does what it deems necessary to prevent anyone from shutting it down? Were we talking about a Terminator scenario, where a Skynettian matrix of superintelligent machines gains consciousness and either destroys or enslaves humanity in order to further its own particular goals? Certainly, if you were to take at face value the articles popping up about the looming threat of intelligent machines, and the dramatic utterances of savants like Thiel and Hawking, this would have been the sort of thing you’d have had in mind. They may not have been experts in AI, as such, but they were extremely clever men who knew a lot about science.

“I don’t think,” he said, “that I’ve ever seen a newspaper report on this topic that has not been illustrated by a publicity still from one of the Terminator films. The implication of this is always that robots will rebel against us because they resent our dominance, that they will rise up against us. This is not the case.” And this brought us back to the paper-clip scenario, the ridiculousness of which Nick freely acknowledged, but the point of which was that any harm we might come to from a superintelligent machine would not be the result of malevolence, or of any other humanlike motivation, but purely because our absence was an optimal condition in the pursuit of its particular goal. “The AI does not hate you,” as Yudkowsky had put it, “nor does it love you, but you are made out of atoms which it can use for something else.” One way of understanding this would be to listen to a recording of, say, Glenn Gould playing Bach’s Goldberg Variations, and to try to experience the beauty of the music while also holding simultaneously in your mind a sense of the destruction that was wrought in the creation of the piano it is being played on: the trees felled, the elephants slaughtered, the human beings enslaved and killed in the interest of the ivory traders’ profits.


pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future by Luke Dormehl

Ada Lovelace, agricultural Revolution, AI winter, Albert Einstein, Alexey Pajitnov wrote Tetris, algorithmic trading, Amazon Mechanical Turk, Apple II, artificial general intelligence, Automated Insights, autonomous vehicles, book scanning, borderless world, call centre, cellular automata, Claude Shannon: information theory, cloud computing, computer vision, correlation does not imply causation, crowdsourcing, drone strike, Elon Musk, Flash crash, friendly AI, game design, global village, Google X / Alphabet X, hive mind, industrial robot, information retrieval, Internet of things, iterative process, Jaron Lanier, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, life extension, Loebner Prize, Marc Andreessen, Mark Zuckerberg, Menlo Park, natural language processing, Norbert Wiener, out of africa, PageRank, pattern recognition, Ray Kurzweil, recommendation engine, remote working, RFID, self-driving car, Silicon Valley, Skype, smart cities, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia, social intelligence, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, technological singularity, The Coming Technological Singularity, The Future of Employment, Tim Cook: Apple, too big to fail, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!

In 1964, the same year as the New York World’s Fair, cybernetics pioneer Norbert Wiener predicted: ‘The world of the future will be an ever more demanding struggle against the limitations of our own intelligence; not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.’ Wiener passed away in May 1964, aged sixty-nine. However, concerns about superintelligent machines continued. The following year, a British mathematician named Irving John Good expanded on some of the concerns. Good had worked with Alan Turing at Bletchley Park during World War II. Years after he had played a key role in cracking the Nazi codes, the moustachioed Good took to driving a car with the vanity licence plate ‘007IJG’ as a comical nod to his days as a gentleman spy. In 1965, Good penned an essay in which he theorised on what a superintelligent machine would mean for the world. He defined such an AI as a computer capable of far surpassing all the intellectual activities that make us intelligent. In a widely quoted passage, he wrote: ‘Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind.

It told the pulpy story of a brain which is artificially augmented by being plugged directly into computerised data sources. This was the first published work of Vernor Vinge, a sci-fi writer, mathematics professor and computer scientist with a name straight out of the Marvel Comics alliteration camp. Vinge later became a successful novelist, but he remains best known for his 1993 non-fiction essay, ‘The Coming Technological Singularity’. The essay recounts many of the ideas Good had posed about superintelligent machines, but with the added bonus of a timeline. ‘Within thirty years, we will have the technological means to create superhuman intelligence,’ Vinge famously wrote. ‘Shortly after, the human era will be ended.’ This term, ‘the Singularity’, referring to the point at which machines overtake humans on the intelligence scale, has become an AI reference as widely cited as the Turing Test. It is often credited to Vinge, although in reality the first computer scientist to use it was John von Neumann.


pages: 331 words: 47,993

Artificial You: AI and the Future of Your Mind by Susan Schneider

artificial general intelligence, brain emulation, Elon Musk, Extropian, hive mind, life extension, megastructure, pattern recognition, Ray Kurzweil, Search for Extraterrestrial Intelligence, silicon-based life, Stephen Hawking, superintelligent machines, technological singularity, The Coming Technological Singularity, theory of mind, Turing machine, Turing test, Whole Earth Review, wikimedia commons

The control problem has made world news, fueled by Nick Bostrom’s recent bestseller: Superintelligence: Paths, Dangers, Strategies.3 What is missed, however, is that consciousness could be central to how AI values us. Using its own subjective experience as a springboard, superintelligent AI could recognize in us the capacity for conscious experience. After all, to the extent we value the lives of nonhuman animals, we tend to value them because we feel an affinity of consciousness—thus most of us recoil from killing a chimp, but not from eating an orange. If superintelligent machines are not conscious, either because it’s impossible or because they aren’t designed to be, we could be in trouble. It is important to put these issues into an even larger, universe-wide context. In my two-year NASA project, I suggested that a similar phenomenon could be happening on other planets as well; elsewhere in the universe, other species may be outmoded by synthetic intelligences.

ACT can be run at the R&D stage, a stage in which the AI would need to be tested in a secure, simulated environment in any case. If a machine passes ACT, we can go on to measure other parameters of the system to see whether the presence of consciousness is correlated with increased empathy, volatility, goal content integrity, increased intelligence, and so on. Other, nonconscious versions of the system serve as a basis for comparison. Some doubt that a superintelligent machine could be boxed in effectively, because it would inevitably find a clever escape. Turner and I do not anticipate the development of superintelligence over the next few decades, however. We merely hope to provide a method to test some kinds of AIs, not all AIs. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough for someone to administer the test.


pages: 586 words: 186,548

Architects of Intelligence by Martin Ford

3D printing, agricultural Revolution, AI winter, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, barriers to entry, basic income, Baxter: Rethink Robotics, Bayesian statistics, bitcoin, business intelligence, business process, call centre, cloud computing, cognitive bias, Colonization of Mars, computer vision, correlation does not imply causation, crowdsourcing, DARPA: Urban Challenge, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, Fellow of the Royal Society, Flash crash, future of work, gig economy, Google X / Alphabet X, Gödel, Escher, Bach, Hans Rosling, ImageNet competition, income inequality, industrial robot, information retrieval, job automation, John von Neumann, Law of Accelerating Returns, life extension, Loebner Prize, Mark Zuckerberg, Mars Rover, means of production, Mitch Kapor, natural language processing, new economy, optical character recognition, pattern recognition, phenotype, Productivity paradox, Ray Kurzweil, recommendation engine, Robert Gordon, Rodney Brooks, Sam Altman, self-driving car, sensor fusion, sentiment analysis, Silicon Valley, smart cities, social intelligence, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, Ted Kaczynski, The Rise and Fall of American Growth, theory of mind, Thomas Bayes, Travis Kalanick, Turing test, universal basic income, Wall-E, Watson beat the top human players on Jeopardy!, women in the workforce, working-age population, zero-sum game, Zipcar

We are told that fully autonomous self-driving cars will be sharing our roads in just a few years—and that millions of jobs for truck, taxi and Uber drivers are on the verge of vaporizing. Evidence of racial and gender bias has been detected in certain machine learning algorithms, and concerns about how AI-powered technologies such as facial recognition will impact privacy seem well-founded. Warnings that robots will soon be weaponized, or that truly intelligent (or superintelligent) machines might someday represent an existential threat to humanity, are regularly reported in the media. A number of very prominent public figures—none of whom are actual AI experts—have weighed in. Elon Musk has used especially extreme rhetoric, declaring that AI research is “summoning the demon” and that “AI is more dangerous than nuclear weapons.” Even less volatile individuals, including Henry Kissinger and the late Stephen Hawking, have issued dire warnings.

Are true thinking machines—or human-level AI—a real possibility and how soon might such a breakthrough occur? What risks, or threats, associated with artificial intelligence should we be genuinely concerned about? And how should we address those concerns? Is there a role for government regulation? Will AI unleash massive economic and job market disruption, or are these concerns overhyped? Could superintelligent machines someday break free of our control and pose a genuine threat? Should we worry about an AI “arms race,” or that other countries with authoritarian political systems, particularly China, may eventually take the lead? It goes without saying that no one really knows the answers to these questions. No one can predict the future. However, the AI experts I’ve spoken to here do know more about the current state of the technology, as well as the innovations on the horizon, than virtually anyone else.

In July 2018, over 160 AI companies and 2,400 individual researchers from across the globe—including a number of the people interviewed here—signed an open pledge promising to never develop such weapons. (https://futureoflife.org/lethal-autonomous-weapons-pledge/) Several of the conversations in this book delve into the dangers presented by weaponized AI. A much more futuristic and speculative danger is the so-called “AI alignment problem.” This is the concern that a truly intelligent, or perhaps superintelligent, machine might escape our control, or make decisions that might have adverse consequences for humanity. This is the fear that elicits seemingly over-the-top statements from people like Elon Musk. Nearly everyone I spoke to weighed in on this issue. To ensure that I gave this concern adequate and balanced coverage, I spoke with Nick Bostrom of the Future of Humanity Institute at the University of Oxford.


pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence by Richard Yonck

3D printing, AI winter, artificial general intelligence, Asperger Syndrome, augmented reality, Berlin Wall, brain emulation, Buckminster Fuller, call centre, cognitive bias, cognitive dissonance, computer age, computer vision, crowdsourcing, Elon Musk, en.wikipedia.org, epigenetics, friendly AI, ghettoisation, industrial robot, Internet of things, invention of writing, Jacques de Vaucanson, job automation, John von Neumann, Kevin Kelly, Law of Accelerating Returns, Loebner Prize, Menlo Park, meta analysis, meta-analysis, Metcalfe’s law, neurotypical, Oculus Rift, old age dependency ratio, pattern recognition, RAND corporation, Ray Kurzweil, Rodney Brooks, self-driving car, Skype, social intelligence, software as a service, Stephen Hawking, Steven Pinker, superintelligent machines, technological singularity, telepresence, telepresence robot, The Future of Employment, the scientific method, theory of mind, Turing test, twin studies, undersea cable, Vernor Vinge, Watson beat the top human players on Jeopardy!, Whole Earth Review, working-age population, zero day

Samantha is capable of understanding and expressing emotions, but initially does not truly experience them. Over the course of the film, the two of them fall in love and grow emotionally. He learns to let go of the past and have fun again. She develops a true emotional life, experiencing the thrill of infatuation and the pain of anticipated loss. However, in the end Samantha is still a superintelligent machine. As a result, she soon outgrows this relationship, as well as the many other relationships she reveals she’s simultaneously been engaged in. When Theodore asks Samantha point-blank how many other people she is talking to at that moment, the AI answers: 8,136. He is shocked by the revelation because up until now he has behaved as if she were a human being like himself. As it dawns on him what this implies, Theodore follows up with the inevitable question: “Are you in love with anyone else?”

Could they ever be truly conscious? These are huge questions, as enormous and perhaps as difficult to answer as whether or not computers will ever be capable of genuinely experiencing emotions. As it happens, the two questions may be intimately interlinked. Recently a number of notable luminaries, scientists, and entrepreneurs have expressed their concerns about the potential for runaway AI and superintelligent machines. Physicist Stephen Hawking, engineer and inventor Elon Musk, and philosopher Nick Bostrom have all issued stern warnings of what may happen as we move ever closer to computers that are able to think and reason as well as or perhaps even better than human beings. At the same time, several computer scientists, psychologists, and other researchers have stated that the many challenges we face in developing thinking machines show we have little to be concerned about.

(Translated from the Sixth German Edition by M. Eden Paul, MD). Rebman Ltd, London. 1909. 11. Les détraquées de Paris, Étude de moeurs contemporaines, René Schwaeblé. Nouvelle Édition, Daragon libraire-éditeur, 1910. 12. Smith, A., Anderson, J. “Digital Life in 2025: AI, Robotics and the Future of Jobs.” Pew Research Center. August 6, 2014. 13. Forecast: Kurzweil—2029: HLMI, human-level machine intelligence; 2045: Superintelligent machines; Forecast: Bostrom—2050: Author’s Delphi survey converges on HLMI, human-level machine intelligence. 14. Levy, D. Love and Sex with Robots. Harper. 2007. 15. Brice, M. “A Third of Men Who See Prostitutes Crave Emotional Intimacy, Not Just Sex.” Medical Daily. August 8, 2012; Calvin, T. “Why I visit prostitutes.” Salon. October 19, 2014. 16. Agalmatophilia is defined as the sexual attraction to a statue, doll, mannequin, or other similar figurative object. 17.


pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, anti-communist, artificial general intelligence, autonomous vehicles, barriers to entry, Bayesian statistics, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, demographic transition, different worldview, Donald Knuth, Douglas Hofstadter, Drosophila, Elon Musk, en.wikipedia.org, endogenous growth, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, Gödel, Escher, Bach, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John Markoff, John von Neumann, knowledge worker, longitudinal study, Menlo Park, meta analysis, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Norbert Wiener, NP-complete, nuclear winter, optical character recognition, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, transaction costs, Turing machine, Vernor Vinge, Watson beat the top human players on Jeopardy!, World Values Survey, zero-sum game

They suggest that (at least in lieu of better data or analysis) it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century, and that it has a non-trivial chance of being developed considerably sooner or much later; that it might perhaps fairly soon thereafter result in superintelligence; and that a wide range of outcomes may have a significant chance of occurring, including extremely good outcomes and outcomes that are as bad as human extinction.84 At the very least, they suggest that the topic is worth a closer look. CHAPTER 2 Paths to superintelligence Machines are currently far inferior to humans in general intelligence. Yet one day (we have suggested) they will be superintelligent. How do we get from here to there? This chapter explores several conceivable technological paths. We look at artificial intelligence, whole brain emulation, biological cognition, and human–machine interfaces, as well as networks and organizations. We evaluate their different degrees of plausibility as pathways to superintelligence.

Let us consider some of the capabilities that a superintelligence could have and how it could use them. Functionalities and superpowers It is important not to anthropomorphize superintelligence when thinking about its potential impacts. Anthropomorphic frames encourage unfounded expectations about the growth trajectory of a seed AI and about the psychology, motivations, and capabilities of a mature superintelligence. For example, a common assumption is that a superintelligent machine would be like a very clever but nerdy human being. We imagine that the AI has book smarts but lacks social savvy, or that it is logical but not intuitive and creative. This idea probably originates in observation: we look at present-day computers and see that they are good at calculation, remembering facts, and at following the letter of instructions while being oblivious to social contexts and subtexts, norms, emotions, and politics.

If these developments take place on digital rather than biological timescales, then the glacial humans might find themselves expropriated before they could say Jack Robinson.15 Life in an algorithmic economy Life for biological humans in a post-transition Malthusian state need not resemble any of the historical states of man (as hunter–gatherer, farmer, or office worker). Instead, the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings.16 They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable. Perhaps instead of using enhancement medicine, they would take drugs to stunt their growth and slow their metabolism in order to reduce their cost of living (fast-burners being unable to survive at the gradually declining subsistence income).


pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by Amy Webb

Ada Lovelace, AI winter, Airbnb, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, artificial general intelligence, Asilomar, autonomous vehicles, Bayesian statistics, Bernie Sanders, bioinformatics, blockchain, Bretton Woods, business intelligence, Cass Sunstein, Claude Shannon: information theory, cloud computing, cognitive bias, complexity theory, computer vision, crowdsourcing, cryptocurrency, Daniel Kahneman / Amos Tversky, Deng Xiaoping, distributed ledger, don't be evil, Donald Trump, Elon Musk, Filter Bubble, Flynn Effect, gig economy, Google Glasses, Grace Hopper, Gödel, Escher, Bach, Inbox Zero, Internet of things, Jacques de Vaucanson, Jeff Bezos, Joan Didion, job automation, John von Neumann, knowledge worker, Lyft, Mark Zuckerberg, Menlo Park, move fast and break things, move fast and break things, natural language processing, New Urbanism, one-China policy, optical character recognition, packet switching, pattern recognition, personalized medicine, RAND corporation, Ray Kurzweil, ride hailing / ride sharing, Rodney Brooks, Rubik’s Cube, Sand Hill Road, Second Machine Age, self-driving car, SETI@home, side project, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart cities, South China Sea, sovereign wealth fund, speech recognition, Stephen Hawking, strong AI, superintelligent machines, technological singularity, The Coming Technological Singularity, theory of mind, Tim Cook: Apple, trade route, Turing machine, Turing test, uber lyft, Von Neumann architecture, Watson beat the top human players on Jeopardy!, zero day

The difference between a 119 “high average” brain and a 134 “gifted” brain would mean significantly greater cognitive ability—making connections faster, mastering new concepts more easily, and thinking more efficiently. But within that same timeframe, AI’s cognitive ability will not only supersede us—it could become wholly unrecognizable to us, because we do not have the biological processing power to understand what it is. For us, encountering a superintelligent machine would be like a chimpanzee sitting in on a city council meeting. The chimp might recognize that there are people in the room and that he can sit down on a chair, but a long-winded argument about whether to add bike lanes to a busy intersection? He wouldn’t have anywhere near the cognitive ability to decipher the language being used, let alone the reasoning and experience to grok why bike lanes are so controversial.

Politicians and government officials like regulations because they tend to be single, executable plans that are clearly defined. In order for regulations to work, they have to be specific. At the moment, AI progress is happening weekly—which means that any meaningful regulations would be too restrictive and exacting to allow for innovation and progress. We’re in the midst of a very long transition, from artificial narrow intelligence to artificial general intelligence and, very possibly, superintelligent machines. Any regulations created in 2019 would be outdated by the time they went into effect. They might alleviate our concerns for a short while, but ultimately regulations would cause greater damage in the future. Changing the Big Nine: The Case for Transforming AI’s Business The creation of GAIA and structural changes to our governments are important to fixing the developmental track of AI, but the G-MAFIA and BAT must also agree to make some changes, too.


pages: 326 words: 103,170

The Seventh Sense: Power, Fortune, and Survival in the Age of Networks by Joshua Cooper Ramo

Airbnb, Albert Einstein, algorithmic trading, barriers to entry, Berlin Wall, bitcoin, British Empire, cloud computing, crowdsourcing, Danny Hillis, defense in depth, Deng Xiaoping, drone strike, Edward Snowden, Fall of the Berlin Wall, Firefox, Google Chrome, income inequality, Isaac Newton, Jeff Bezos, job automation, Joi Ito, market bubble, Menlo Park, Metcalfe’s law, Mitch Kapor, natural language processing, Network effects, Norbert Wiener, Oculus Rift, packet switching, Paul Graham, price stability, quantitative easing, RAND corporation, recommendation engine, Republic of Letters, Richard Feynman, road to serfdom, Robert Metcalfe, Sand Hill Road, secular stagnation, self-driving car, Silicon Valley, Skype, Snapchat, social web, sovereign wealth fund, Steve Jobs, Steve Wozniak, Stewart Brand, Stuxnet, superintelligent machines, technological singularity, The Coming Technological Singularity, The Wealth of Nations by Adam Smith, too big to fail, Vernor Vinge, zero day

It was easy enough for Vinge to see how this would end. It wouldn’t be with the sort of intended polite, lapdog domesticity of artificial intelligence that we might hope for but with a rottweiler of a device, alive to the meaty smell of power, violence, and greed. This puzzle has interested the Oxford philosopher Nick Bostrom, who has described the following thought experiment: Imagine a superintelligent machine programmed to do whatever is needed to make paper clips as fast as possible, a machine that is connected to every resource that task might demand. Go figure it out! might be all its human instructors tell it. As the clip-making AI becomes better and better at its task, it demands more and still more resources: more electricity, steel, manufacturing, shipping. The paper clips pile up. The machine looks around: If only I could control the power supply, it thinks.

In the spring of 1993: See Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, proceedings of a symposium cosponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute, Westlake, Ohio, March 30–31, 1993 (Hampton, VA: National Aeronautics and Space Administration Scientific and Technical Information Program), iii. “Within thirty years”: Ibid., 12. Imagine a superintelligent machine: Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in AI, vol. 2, ed. Iva Smit et al. (Windsor, ON: International Institute for Advanced Studies in Systems Research and Cybernetics, 2003), 12–17, and Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,” Minds and Machines, 22, no. 2 (2012): 71–85.


pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

Ada Lovelace, AI winter, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, artificial general intelligence, autonomous vehicles, Bernie Sanders, Claude Shannon: information theory, cognitive dissonance, computer age, computer vision, dark matter, Douglas Hofstadter, Elon Musk, en.wikipedia.org, Gödel, Escher, Bach, I think there is a world market for maybe five computers, ImageNet competition, Jaron Lanier, job automation, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, license plate recognition, Mark Zuckerberg, natural language processing, Norbert Wiener, ought to be enough for anybody, pattern recognition, performance metric, RAND corporation, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Rodney Brooks, self-driving car, sentiment analysis, Silicon Valley, Singularitarianism, Skype, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, theory of mind, There's no reason for any individual to have a computer in his home - Ken Olsen, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!

When Deep Blue beat Kasparov, when EMI started composing Chopin-like mazurkas, and when Kurzweil wrote his first book on the Singularity, many of these engineers had been in high school, probably reading GEB and loving it, even though its AI prognostications were a bit out of date. The reason they were working at Google was precisely to make AI happen—not in a hundred years, but now, as soon as possible. They didn’t understand what Hofstadter was so stressed out about. People who work in AI are used to encountering the fears of people outside the field, who have presumably been influenced by the many science fiction movies depicting superintelligent machines that turn evil. AI researchers are also familiar with the worries that increasingly sophisticated AI will replace humans in some jobs, that AI applied to big data sets could subvert privacy and enable subtle discrimination, and that ill-understood AI systems allowed to make autonomous decisions have the potential to cause havoc. Hofstadter’s terror was in response to something entirely different.

Likewise, it will be able to discover, through its ever-increasing deduction abilities, all kinds of new knowledge that it can turn into new cognitive power for itself. Such a machine would not be constrained by the annoying limitations of humans, such as our slowness of thought and learning, our irrationality and cognitive biases, our susceptibility to boredom, our need for sleep, and our emotions, all of which get in the way of productive thinking. In this view, a superintelligent machine would encompass something close to “pure” intelligence, without being constrained by any of our human foibles. What seems more likely to me is that these supposed limitations of humans are part and parcel of our general intelligence. The cognitive limitations forced upon us by having bodies that work in the world, along with the emotions and “irrational” biases that evolved to allow us to function as a social group, and all the other qualities sometimes considered cognitive “shortcomings,” are in fact precisely what enable us to be generally intelligent rather than narrow savants.


Global Catastrophic Risks by Nick Bostrom, Milan M. Cirkovic

affirmative action, agricultural Revolution, Albert Einstein, American Society of Civil Engineers: Report Card, anthropic principle, artificial general intelligence, Asilomar, availability heuristic, Bill Joy: nanobots, Black Swan, carbon-based life, cognitive bias, complexity theory, computer age, coronavirus, corporate governance, cosmic microwave background, cosmological constant, cosmological principle, cuban missile crisis, dark matter, death of newspapers, demographic transition, Deng Xiaoping, distributed generation, Doomsday Clock, Drosophila, endogenous growth, Ernest Rutherford, failed state, feminist movement, framing effect, friendly AI, Georg Cantor, global pandemic, global village, Gödel, Escher, Bach, hindsight bias, Intergovernmental Panel on Climate Change (IPCC), invention of agriculture, Kevin Kelly, Kuiper Belt, Law of Accelerating Returns, life extension, means of production, meta analysis, meta-analysis, Mikhail Gorbachev, millennium bug, mutually assured destruction, nuclear winter, P = NP, peak oil, phenotype, planetary scale, Ponzi scheme, prediction markets, RAND corporation, Ray Kurzweil, reversible computing, Richard Feynman, Ronald Reagan, scientific worldview, Singularitarianism, social intelligence, South China Sea, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, The Coming Technological Singularity, Tunguska event, twin studies, uranium enrichment, Vernor Vinge, War on Poverty, Westphalian system, Y2K

Darwin himself noted that 'not one living species will transmit its unaltered likeness to a distant futurity'. Our own species will surely change and diversify faster than any predecessor - via human-induced modifications (whether intelligently controlled or unintended), not by natural selection alone. The post-human era may be only centuries away. And what about Artificial Intelligence? A superintelligent machine could be the last invention that humans need ever make. We should keep our minds open, or at least ajar, to concepts that seem on the fringe of science fiction. These thoughts might seem irrelevant to practical policy - something for speculative academics to discuss in our spare moments. I used to think this. But humans are now, individually and collectively, so greatly empowered by rapidly changing technology that we can - by design or as unintended consequences - engender irreversible global changes.

However, from a long-term perspective, the development of general artificial intelligence exceeding that of the human brain can be seen as one of the main challenges to the future of humanity (arguably, even as the main challenge). At the same time, the successful deployment of friendly superintelligence could obviate many of the other risks facing humanity. The title of Chapter 15, 'Artificial Intelligence as a positive and negative factor in global risk', reflects this ambivalent potential. As Eliezer Yudkowsky notes, the prospect of superintelligent machines is a difficult topic to analyse and discuss. Appropriately, therefore, he devotes a substantial part of his chapter to clearing common misconceptions and barriers to understanding. Having done so, he proceeds to give an argument for giving serious consideration to the possibility that radical superintelligence could erupt very suddenly - a scenario that is sometimes referred to as the 'Singularity hypothesis'.

An entity that fails to keep up with its neighbors is likely to be eaten, its space, materials, energy, and useful thoughts reorganized to serve another's goals. Such a fate may be routine for humans who dally too long on slow Earth before going Ex. Here we have Tribulations and damnation for the late adopters, in addition to the millennial utopian outcome for the elect. Although Kurzweil acknowledges apocalyptic potentials - such as humanity being destroyed by superintelligent machines - inherent in these technologies, he is nonetheless uniformly utopian and enthusiastic. Hence Garreau's labelling Kurzweil's the 'Heaven' scenario. While Kurzweil (2005) acknowledges his similarity to millennialists by, for instance, including a tongue-in-cheek picture in The Singularity Is Near of himself holding a sign with that slogan, referencing the classic cartoon image of the End Times street prophet, most Singularitarians angrily reject such comparisons, insisting their expectations are based solely on rational, scientific extrapolation.


pages: 222 words: 53,317

Overcomplicated: Technology at the Limits of Comprehension by Samuel Arbesman

algorithmic trading, Anton Chekhov, Apple II, Benoit Mandelbrot, citation needed, combinatorial explosion, Danny Hillis, David Brooks, digital map, discovery of the americas, en.wikipedia.org, Erik Brynjolfsson, Flash crash, friendly AI, game design, Google X / Alphabet X, Googley, HyperCard, Inbox Zero, Isaac Newton, iterative process, Kevin Kelly, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, mandelbrot fractal, Minecraft, Netflix Prize, Nicholas Carr, Parkinson's law, Ray Kurzweil, recommendation engine, Richard Feynman, Richard Feynman: Challenger O-ring, Second Machine Age, self-driving car, software studies, statistical model, Steve Jobs, Steve Wozniak, Steven Pinker, Stewart Brand, superintelligent machines, Therac-25, Tyler Cowen: Great Stagnation, urban planning, Watson beat the top human players on Jeopardy!, Whole Earth Catalog, Y2K

Living with Complexity by Don Norman examines the origins of (and need for) complexity, particularly from the perspective of design. The Techno-Human Condition by Braden R. Allenby and Daniel Sarewitz is a discussion of how to grapple with coming technological change and is particularly intriguing when it discusses “wicked complexity.” Superintelligence by Nick Bostrom explores the many issues and implications related to the development of superintelligent machines. The Works, The Heights, and The Way to Go by Kate Ascher examine how cities, skyscrapers, and our transportation networks, respectively, actually work. Beautifully rendered and fascinating books. The Second Machine Age by Erik Brynjolfsson and Andrew McAfee examines the rapid technological change we are experiencing and can come to expect, and how it will affect our economy, as well as how to handle this change.


The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns, Aaron Roth

23andMe, affirmative action, algorithmic trading, Alvin Roth, Bayesian statistics, bitcoin, cloud computing, computer vision, crowdsourcing, Edward Snowden, Elon Musk, Filter Bubble, general-purpose programming language, Google Chrome, ImageNet competition, Lyft, medical residency, Nash equilibrium, Netflix Prize, p-value, Pareto efficiency, performance metric, personalized medicine, pre–internet, profit motive, quantitative trading / quantitative finance, RAND corporation, recommendation engine, replication crisis, ride hailing / ride sharing, Robert Bork, Ronald Coase, self-driving car, short selling, sorting algorithm, speech recognition, statistical model, Stephen Hawking, superintelligent machines, telemarketer, Turing machine, two-sided market, Vilfredo Pareto

In fact, when Google negotiated the purchase of DeepMind in 2014 for $400 million, one of the conditions of the sale was that Google would set up an AI ethics board. All of this makes for good press, but in this section, we want to consider some of the arguments that are causing an increasingly respectable minority of scientists to be seriously worried about AI risk. Most of these fears are premised on the idea that AI research will inevitably lead to superintelligent machines in a chain reaction that will happen much faster than humanity will have time to react to. This chain reaction, once it reaches some critical point, will lead to an “intelligence explosion” that could lead to an AI “singularity.” One of the earliest versions of this argument was summed up in 1965 by I. J. Good, a British mathematician who worked with Alan Turing: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.


pages: 246 words: 81,625

On Intelligence by Jeff Hawkins, Sandra Blakeslee

airport security, Albert Einstein, computer age, conceptual framework, Johannes Kepler, Necker cube, pattern recognition, Paul Erdős, Ray Kurzweil, Silicon Valley, Silicon Valley startup, speech recognition, superintelligent machines, the scientific method, Thomas Bayes, Turing machine, Turing test

For example, an important challenge today is to understand how the shape of a protein molecule can be predicted from the sequence of amino acids that comprise the protein. Being able to predict how proteins fold and interact would accelerate the development of medicines and the cures for many diseases. Engineers and scientists have created three-dimensional visual models of proteins, in an effort to predict how these complex molecules behave. But try as we might, the task has proven too difficult. A superintelligent machine, on the other hand, with a set of senses specifically tuned to this question might be able to answer it. If this sounds far-fetched, remember that we wouldn't be surprised if humans could solve the problem. Our inability to tackle the issue may be related, primarily, to a mismatch between the human senses and the physical phenomena we want to understand. Intelligent machines can have custom senses and larger-than-human memory, enabling them to solve problems we can't.


pages: 798 words: 240,182

The Transhumanist Reader by Max More, Natasha Vita-More

23andMe, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, augmented reality, Bill Joy: nanobots, bioinformatics, brain emulation, Buckminster Fuller, cellular automata, clean water, cloud computing, cognitive bias, cognitive dissonance, combinatorial explosion, conceptual framework, Conway's Game of Life, cosmological principle, data acquisition, discovery of DNA, Douglas Engelbart, Drosophila, en.wikipedia.org, endogenous growth, experimental subject, Extropian, fault tolerance, Flynn Effect, Francis Fukuyama: the end of history, Frank Gehry, friendly AI, game design, germ theory of disease, hypertext link, impulse control, index fund, John von Neumann, joint-stock company, Kevin Kelly, Law of Accelerating Returns, life extension, lifelogging, Louis Pasteur, Menlo Park, meta analysis, meta-analysis, moral hazard, Network effects, Norbert Wiener, pattern recognition, Pepto Bismol, phenotype, positional goods, prediction markets, presumed consent, Ray Kurzweil, reversible computing, RFID, Ronald Reagan, scientific worldview, silicon-based life, Singularitarianism, social intelligence, stem cell, stochastic process, superintelligent machines, supply-chain management, supply-chain management software, technological singularity, Ted Nelson, telepresence, telepresence robot, telerobotics, the built environment, The Coming Technological Singularity, the scientific method, The Wisdom of Crowds, transaction costs, Turing machine, Turing test, Upton Sinclair, Vernor Vinge, Von Neumann architecture, Whole Earth Review, women in the workforce, zero-sum game

A better example, albeit rather extreme, for making this point is Homo sapiens’ relationship with bacteria. Both human beings and bacteria have good claims to being the “dominant species” on Earth – depending upon how one defines dominant. It is possible that superintelligent machines may wish to dominate some niche that is not presently occupied in any serious fashion by human beings. If this is the case, then from a human being’s point of view, such an AI would not be a Dominant AI. Instead, we would have a “Limited AI” scenario. How could Limited AI occur? I can imagine several scenarios, and I’m sure other people can imagine more. Perhaps the most important point to make is that superintelligent machines may not be competing in the same niche with human beings for resources, and would therefore have little incentive to dominate us. In such a Limited AI scenario, there will be aspects of human life which continue on, much as before, with human beings remaining number one.


pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines by Thomas H. Davenport, Julia Kirby

AI winter, Andy Kessler, artificial general intelligence, asset allocation, Automated Insights, autonomous vehicles, basic income, Baxter: Rethink Robotics, business intelligence, business process, call centre, carbon-based life, Clayton Christensen, clockwork universe, commoditize, conceptual framework, dark matter, David Brooks, deliberate practice, deskilling, digital map, disruptive innovation, Douglas Engelbart, Edward Lloyd's coffeehouse, Elon Musk, Erik Brynjolfsson, estate planning, fixed income, follow your passion, Frank Levy and Richard Murnane: The New Division of Labor, Freestyle chess, game design, general-purpose programming language, global pandemic, Google Glasses, Hans Lippershey, haute cuisine, income inequality, index fund, industrial robot, information retrieval, intermodal, Internet of things, inventory management, Isaac Newton, job automation, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Joi Ito, Khan Academy, knowledge worker, labor-force participation, lifelogging, longitudinal study, loss aversion, Mark Zuckerberg, Narrative Science, natural language processing, Norbert Wiener, nuclear winter, pattern recognition, performance metric, Peter Thiel, precariat, quantitative trading / quantitative finance, Ray Kurzweil, Richard Feynman, risk tolerance, Robert Shiller, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, six sigma, Skype, social intelligence, speech recognition, spinning jenny, statistical model, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, superintelligent machines, supply-chain management, transaction costs, Tyler Cowen: Great Stagnation, Watson beat the top human players on Jeopardy!, Works Progress Administration, Zipcar

Colvin suggests, for example, that no one would want to be judged by a computer in a courtroom. But a minority defendant given the choice between a probably prejudiced jury, a possibly prejudiced judge, and a race-blind machine might well choose the latter option. In addition, not everyone agrees that we humans will remain in a position to dictate which decisions and actions will be reserved for us. What would prevent a superintelligent machine from denying our commands, they ask, if it thought better of the situation? To prepare for that possibility (familiar to those who remember HAL in 2001: A Space Odyssey), some insist that computer scientists had better figure out how to program values into the machines, and values that are “human-friendly,” to color the decision-making that might proceed logically but tragically from their narrowly specified goals.


pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos

Albert Einstein, Amazon Mechanical Turk, Arthur Eddington, basic income, Bayesian statistics, Benoit Mandelbrot, bioinformatics, Black Swan, Brownian motion, cellular automata, Claude Shannon: information theory, combinatorial explosion, computer vision, constrained optimization, correlation does not imply causation, creative destruction, crowdsourcing, Danny Hillis, data is the new oil, double helix, Douglas Hofstadter, Erik Brynjolfsson, experimental subject, Filter Bubble, future of work, global village, Google Glasses, Gödel, Escher, Bach, information retrieval, job automation, John Markoff, John Snow's cholera map, John von Neumann, Joseph Schumpeter, Kevin Kelly, lone genius, mandelbrot fractal, Mark Zuckerberg, Moneyball by Michael Lewis explains big data, Narrative Science, Nate Silver, natural language processing, Netflix Prize, Network effects, NP-complete, off grid, P = NP, PageRank, pattern recognition, phenotype, planetary scale, pre–internet, random walk, Ray Kurzweil, recommendation engine, Richard Feynman, scientific worldview, Second Machine Age, self-driving car, Silicon Valley, social intelligence, speech recognition, Stanford marshmallow experiment, statistical model, Stephen Hawking, Steven Levy, Steven Pinker, superintelligent machines, the scientific method, The Signal and the Noise by Nate Silver, theory of mind, Thomas Bayes, transaction costs, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, white flight, zero-sum game

Craig Mundie argues for a balanced approach to data collection and use in “Privacy pragmatism” (Foreign Affairs, 2014). The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee (Norton, 2014), discusses how progress in AI will shape the future of work and the economy. “World War R,” by Chris Baraniuk (New Scientist, 2014) reports on the debate surrounding the use of robots in battle. “Transcending complacency on superintelligent machines,” by Stephen Hawking et al. (Huffington Post, 2014), argues that now is the time to worry about AI’s risks. Nick Bostrom’s Superintelligence (Oxford University Press, 2014) considers those dangers and what to do about them. A Brief History of Life, by Richard Hawking (Random Penguin, 1982), summarizes the quantum leaps of evolution in the eons BC. (Before Computers. Just kidding.) The Singularity Is Near, by Ray Kurzweil (Penguin, 2005), is your guide to the transhuman future.


pages: 463 words: 115,103

Head, Hand, Heart: Why Intelligence Is Over-Rewarded, Manual Workers Matter, and Caregivers Deserve More Respect by David Goodhart

active measures, Airbnb, Albert Einstein, assortative mating, basic income, Berlin Wall, Bernie Sanders, big-box store, Boris Johnson, Branko Milanovic, British Empire, call centre, Cass Sunstein, central bank independence, centre right, computer age, corporate social responsibility, COVID-19, Covid-19, David Attenborough, David Brooks, deglobalization, deindustrialization, delayed gratification, desegregation, deskilling, different worldview, Donald Trump, Elon Musk, Etonian, Fall of the Berlin Wall, Flynn Effect, Frederick Winslow Taylor, future of work, gender pay gap, gig economy, glass ceiling, illegal immigration, income inequality, James Hargreaves, James Watt: steam engine, Jeff Bezos, job automation, job satisfaction, John Maynard Keynes: Economic Possibilities for our Grandchildren, knowledge economy, knowledge worker, labour market flexibility, longitudinal study, low skilled workers, Mark Zuckerberg, mass immigration, new economy, Nicholas Carr, oil shock, pattern recognition, Peter Thiel, pink-collar, post-industrial society, post-materialism, postindustrial economy, precariat, reshoring, Richard Florida, Scientific racism, Skype, social intelligence, spinning jenny, Steven Pinker, superintelligent machines, The Bell Curve by Richard Herrnstein and Charles Murray, The Rise and Fall of American Growth, Thorstein Veblen, twin studies, Tyler Cowen: Great Stagnation, universal basic income, upwardly mobile, wages for housework, winner-take-all economy, women in the workforce, young professional

But, warming to his theme, he explained to me the future role for humans: “My guess is that there are three areas where humans will preserve some comparative advantage over robots for the foreseeable future. The first is cognitive tasks requiring creativity and intuition. These might be tasks or problems whose solutions require great logical leaps of imagination rather than step-by-step hill climbing… And even in a world of superintelligent machine learning, there will still be a demand for people with the skills to program, test, and oversee these machines. Some human judgmental overlay of these automated processes is still likely to be needed…” The second area of prospective demand for human skills, says Haldane, is bespoke design and manufacture. Routine technical tasks are relatively simple to automate and are already well on their way to disappearing.


pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots by John Markoff

"Robert Solow", A Declaration of the Independence of Cyberspace, AI winter, airport security, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, basic income, Baxter: Rethink Robotics, Bill Duvall, bioinformatics, Brewster Kahle, Burning Man, call centre, cellular automata, Chris Urmson, Claude Shannon: information theory, Clayton Christensen, clean water, cloud computing, collective bargaining, computer age, computer vision, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data acquisition, Dean Kamen, deskilling, don't be evil, Douglas Engelbart, Douglas Engelbart, Douglas Hofstadter, Dynabook, Edward Snowden, Elon Musk, Erik Brynjolfsson, factory automation, From Mathematics to the Technologies of Life and Death, future of work, Galaxy Zoo, Google Glasses, Google X / Alphabet X, Grace Hopper, Gunnar Myrdal, Gödel, Escher, Bach, Hacker Ethic, haute couture, hive mind, hypertext link, indoor plumbing, industrial robot, information retrieval, Internet Archive, Internet of things, invention of the wheel, Jacques de Vaucanson, Jaron Lanier, Jeff Bezos, job automation, John Conway, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, knowledge worker, Kodak vs Instagram, labor-force participation, loose coupling, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, medical residency, Menlo Park, Mitch Kapor, Mother of all demos, natural language processing, new economy, Norbert Wiener, PageRank, pattern recognition, pre–internet, RAND corporation, Ray Kurzweil, Richard Stallman, Robert Gordon, Rodney Brooks, Sand Hill Road, Second Machine Age, self-driving car, semantic web, shareholder value, side project, Silicon Valley, Silicon Valley startup, Singularitarianism, skunkworks, Skype, social software, speech recognition, stealth mode startup, Stephen Hawking, Steve Ballmer, Steve Jobs, Steve Wozniak, Steven Levy, 
Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Nelson, telemarketer, telepresence, telepresence robot, Tenerife airport disaster, The Coming Technological Singularity, the medium is the message, Thorstein Veblen, Turing test, Vannevar Bush, Vernor Vinge, Watson beat the top human players on Jeopardy!, Whole Earth Catalog, William Shockley: the traitorous eight, zero-sum game

Part of Her is also about the singularity, the idea that machine intelligence is accelerating at such a pace that it will eventually surpass human intelligence and become independent, rendering humans “left behind.” Both Her and Transcendence, another singularity-obsessed science-fiction movie introduced the following spring, are most intriguing for the way they portray human-machine relationships. In Transcendence the human-computer interaction moves from pleasant to dark, and eventually a superintelligent machine destroys human civilization. In Her, ironically, the relationship between the man and his operating system disintegrates as the computer’s intelligence develops so quickly that, not satisfied even with thousands of simultaneous relationships, it transcends humanity and . . . departs. This may be science fiction, but in the real world, this territory had become familiar to Liesl Capper almost a decade earlier.


When Computers Can Think: The Artificial Intelligence Singularity by Anthony Berglas, William Black, Samantha Thalind, Max Scratchmann, Michelle Estes

3D printing, AI winter, anthropic principle, artificial general intelligence, Asilomar, augmented reality, Automated Insights, autonomous vehicles, availability heuristic, blue-collar work, brain emulation, call centre, cognitive bias, combinatorial explosion, computer vision, create, read, update, delete, cuban missile crisis, David Attenborough, Elon Musk, en.wikipedia.org, epigenetics, Ernest Rutherford, factory automation, feminist movement, finite state, Flynn Effect, friendly AI, general-purpose programming language, Google Glasses, Google X / Alphabet X, Gödel, Escher, Bach, industrial robot, Isaac Newton, job automation, John von Neumann, Law of Accelerating Returns, license plate recognition, Mahatma Gandhi, mandelbrot fractal, natural language processing, Parkinson's law, patent troll, patient HM, pattern recognition, phenotype, ransomware, Ray Kurzweil, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, sorting algorithm, speech recognition, statistical model, stem cell, Stephen Hawking, Stuxnet, superintelligent machines, technological singularity, Thomas Malthus, Turing machine, Turing test, uranium enrichment, Von Neumann architecture, Watson beat the top human players on Jeopardy!, wikimedia commons, zero day

The work is urgent, as AGIs will be developed within the foreseeable future. If the reader agrees, then they should consider supporting the work of MIRI and like-minded organizations. Bostrom 2014, Superintelligence (fair use). Its 328 dense pages cover the main practical and philosophical dangers presented by hyper-intelligent software. The book starts with a review of the increasing rate of technological progress, and various paths to build a superintelligent machine, including an analysis of the kinetics of recursive self-improvement based on optimization power and recalcitrance. The dangers of anthropomorphizing are introduced with some cute images from early comic books involving robots carrying away beautiful women. It also notes that up to now, a more intelligent system has been a safer system, and that this conditions our attitude towards intelligent machines.


pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War by Paul Scharre

active measures, Air France Flight 447, algorithmic trading, artificial general intelligence, augmented reality, automated trading system, autonomous vehicles, basic income, brain emulation, Brian Krebs, cognitive bias, computer vision, cuban missile crisis, dark matter, DARPA: Urban Challenge, DevOps, drone strike, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, facts on the ground, fault tolerance, Flash crash, Freestyle chess, friendly fire, IFF: identification friend or foe, ImageNet competition, Internet of things, Johann Wolfgang von Goethe, John Markoff, Kevin Kelly, Loebner Prize, loose coupling, Mark Zuckerberg, moral hazard, mutually assured destruction, Nate Silver, pattern recognition, Rodney Brooks, Rubik’s Cube, self-driving car, sensor fusion, South China Sea, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Ballmer, Steve Wozniak, Stuxnet, superintelligent machines, Tesla Model S, The Signal and the Noise by Nate Silver, theory of mind, Turing test, universal basic income, Valery Gerasimov, Wall-E, William Langewiesche, Y2K, zero day

Superintelligence in narrow domains is possible without an intelligence explosion. It stems from our ability to harness machine learning and speed to very specific problems. More advanced AI is certainly coming, but artificial general intelligence in the sense of machines that think like us may prove to be a mirage. If our benchmark for “intelligent” is what humans do, advanced artificial intelligence may be so alien that we never recognize these superintelligent machines as “true AI.” This dynamic already exists to some extent. Micah Clark pointed out that “as soon as something works and is practical it’s no longer AI.” Armstrong echoed this observation: “as soon as a computer can do it, they get redefined as not AI anymore.” If the past is any guide, we are likely to see in the coming decades a proliferation of narrow superintelligent systems in a range of fields—medicine, law, transportation, science, and others.


pages: 669 words: 210,153

Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers by Timothy Ferriss

Airbnb, Alexander Shulgin, artificial general intelligence, asset allocation, Atul Gawande, augmented reality, back-to-the-land, Ben Horowitz, Bernie Madoff, Bertrand Russell: In Praise of Idleness, Black Swan, blue-collar work, Boris Johnson, Buckminster Fuller, business process, Cal Newport, call centre, Charles Lindbergh, Checklist Manifesto, cognitive bias, cognitive dissonance, Colonization of Mars, Columbine, commoditize, correlation does not imply causation, David Brooks, David Graeber, diversification, diversified portfolio, Donald Trump, effective altruism, Elon Musk, fault tolerance, fear of failure, Firefox, follow your passion, future of work, Google X / Alphabet X, Howard Zinn, Hugh Fearnley-Whittingstall, Jeff Bezos, job satisfaction, Johann Wolfgang von Goethe, John Markoff, Kevin Kelly, Kickstarter, Lao Tzu, lateral thinking, life extension, lifelogging, Mahatma Gandhi, Marc Andreessen, Mark Zuckerberg, Mason jar, Menlo Park, Mikhail Gorbachev, MITM: man-in-the-middle, Nelson Mandela, Nicholas Carr, optical character recognition, PageRank, passive income, pattern recognition, Paul Graham, peer-to-peer, Peter H. Diamandis: Planetary Resources, Peter Singer: altruism, Peter Thiel, phenotype, PIHKAL and TIHKAL, post scarcity, post-work, premature optimization, QWERTY keyboard, Ralph Waldo Emerson, Ray Kurzweil, recommendation engine, rent-seeking, Richard Feynman, risk tolerance, Ronald Reagan, selection bias, sharing economy, side project, Silicon Valley, skunkworks, Skype, Snapchat, social graph, software as a service, software is eating the world, stem cell, Stephen Hawking, Steve Jobs, Stewart Brand, superintelligent machines, Tesla Model S, The Wisdom of Crowds, Thomas L Friedman, Wall-E, Washington Consensus, Whole Earth Catalog, Y Combinator, zero-sum game

On Appreciating the Risks of Artificial Intelligence “Jaan Tallinn, one of the founders of Skype, said that when he talks to people about this issue, he asks only two questions to get an understanding of whether the person he’s talking to is going to be able to grok just how pressing a concern artificial intelligence is. The first is, ‘Are you a programmer?’—the relevance of which is obvious—and the second is, ‘Do you have children?’ He claims to have found that if people don’t have children, their concern about the future isn’t sufficiently well-calibrated so as to get just how terrifying the prospect of building superintelligent machines is in the absence of having figured out the control problem [ensuring the AI converges with our interests, even when a thousand or a billion times smarter]. I think there’s something to that. It’s not limited, of course, to artificial intelligence. It spreads to every topic of concern. To worry about the fate of civilization in the abstract is harder than worrying about what sorts of experiences your children are going to have in the future.”


pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil

additive manufacturing, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, Benoit Mandelbrot, Bill Joy: nanobots, bioinformatics, brain emulation, Brewster Kahle, Brownian motion, business cycle, business intelligence, c2.com, call centre, carbon-based life, cellular automata, Claude Shannon: information theory, complexity theory, conceptual framework, Conway's Game of Life, coronavirus, cosmological constant, cosmological principle, cuban missile crisis, data acquisition, Dava Sobel, David Brooks, Dean Kamen, disintermediation, double helix, Douglas Hofstadter, en.wikipedia.org, epigenetics, factory automation, friendly AI, George Gilder, Gödel, Escher, Bach, informal economy, information retrieval, invention of the telephone, invention of the telescope, invention of writing, iterative process, Jaron Lanier, Jeff Bezos, job automation, job satisfaction, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, lifelogging, linked data, Loebner Prize, Louis Pasteur, mandelbrot fractal, Marshall McLuhan, Mikhail Gorbachev, Mitch Kapor, mouse model, Murray Gell-Mann, mutually assured destruction, natural language processing, Network effects, new economy, Norbert Wiener, oil shale / tar sands, optical character recognition, pattern recognition, phenotype, premature optimization, randomized controlled trial, Ray Kurzweil, remote working, reversible computing, Richard Feynman, Robert Metcalfe, Rodney Brooks, scientific worldview, Search for Extraterrestrial Intelligence, selection bias, semantic web, Silicon Valley, Singularitarianism, speech recognition, statistical model, stem cell, Stephen Hawking, Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Kaczynski, telepresence, The Coming Technological Singularity, Thomas Bayes, transaction costs, Turing machine, Turing test, Vernor Vinge, Y2K, Yogi Berra

The author runs a company, FATKAT (Financial Accelerating Transactions by Kurzweil Adaptive Technologies), which applies computerized pattern recognition to financial data to make stock-market investment decisions, http://www.FatKat.com.
159. See discussion in chapter 2 on price-performance improvements in computer memory and electronics in general.
160. Runaway AI refers to a scenario where, as Max More describes, "superintelligent machines, initially harnessed for human benefit, soon leave us behind." Max More, "Embrace, Don't Relinquish, the Future," http://www.KurzweilAI.net/articles/art0106.html?printable=1. See also Damien Broderick's description of the "Seed AI": "A self-improving seed AI could run glacially slowly on a limited machine substrate. The point is, so long as it has the capacity to improve itself, at some point it will do so convulsively, bursting through any architectural bottlenecks to design its own improved hardware, maybe even build it (if it's allowed control of tools in a fabrication plant)."