superintelligent machines

26 results


pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

AI winter, air gap, AltaVista, Amazon Web Services, artificial general intelligence, Asilomar, Automated Insights, Bayesian statistics, Bernie Madoff, Bill Joy: nanobots, Bletchley Park, brain emulation, California energy crisis, cellular automata, Chuck Templeton: OpenTable:, cloud computing, cognitive bias, commoditize, computer vision, Computing Machinery and Intelligence, cuban missile crisis, Daniel Kahneman / Amos Tversky, Danny Hillis, data acquisition, don't be evil, drone strike, dual-use technology, Extropian, finite state, Flash crash, friendly AI, friendly fire, Google Glasses, Google X / Alphabet X, Hacker News, Hans Moravec, Isaac Newton, Jaron Lanier, Jeff Hawkins, John Markoff, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, Loebner Prize, lone genius, machine translation, mutually assured destruction, natural language processing, Neil Armstrong, Nicholas Carr, Nick Bostrom, optical character recognition, PageRank, PalmPilot, paperclip maximiser, pattern recognition, Peter Thiel, precautionary principle, prisoner's dilemma, Ray Kurzweil, Recombinant DNA, Rodney Brooks, rolling blackouts, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, smart grid, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Jurvetson, Steve Wozniak, strong AI, Stuxnet, subprime mortgage crisis, superintelligent machines, technological singularity, The Coming Technological Singularity, Thomas Bayes, traveling salesman, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, zero day

These rules, they blithely replied, would be “built in” to the AIs, so we have nothing to fear. They spoke as if this were settled science. We’ll discuss the three laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking or exchanging ideas about the problem. The questions of how to make friendly intelligent machines and what to fear from superintelligent machines have moved beyond Asimov’s tropes. Being highly capable and accomplished in AI doesn’t inoculate you from naïveté about its perils. I’m not the first to propose that we’re on a collision course. Our species is going to mortally struggle with this problem.

Good never used the term “singularity” but he got the ball rolling by positing what he thought of as an inescapable and beneficial milestone in human history—the invention of smarter-than-human machines. To paraphrase Good, if you make a superintelligent machine, it will be better than humans at everything we use our brains for, and that includes making superintelligent machines. The first machine would then set off an intelligence explosion, a rapid increase in intelligence, as it repeatedly self-improved, or simply made smarter machines. This machine or machines would leave man’s brainpower in the dust.

These two sentences tell us important things about Good’s intentions. He felt that we humans were beset by so many complex, looming problems—the nuclear arms race, pollution, war, and so on—that we could only be saved by better thinking, and that would come from superintelligent machines. The second sentence lets us know that the father of the intelligence explosion concept was acutely aware that producing superintelligent machines, however necessary for our survival, could blow up in our faces. Keeping an ultraintelligent machine under control isn’t a given, Good tells us. He doesn’t believe we will even know how to do it—the machine will have to tell us itself.


pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell

3D printing, Ada Lovelace, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alfred Russel Wallace, algorithmic bias, AlphaGo, Andrew Wiles, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, augmented reality, autonomous vehicles, basic income, behavioural economics, Bletchley Park, blockchain, Boston Dynamics, brain emulation, Cass Sunstein, Charles Babbage, Claude Shannon: information theory, complexity theory, computer vision, Computing Machinery and Intelligence, connected car, CRISPR, crowdsourcing, Daniel Kahneman / Amos Tversky, data science, deep learning, deepfake, DeepMind, delayed gratification, Demis Hassabis, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Ernest Rutherford, fake news, Flash crash, full employment, future of work, Garrett Hardin, Geoffrey Hinton, Gerolamo Cardano, Goodhart's law, Hans Moravec, ImageNet competition, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invention of the wheel, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Nash: game theory, John von Neumann, Kenneth Arrow, Kevin Kelly, Law of Accelerating Returns, luminiferous ether, machine readable, machine translation, Mark Zuckerberg, multi-armed bandit, Nash equilibrium, Nick Bostrom, Norbert Wiener, NP-complete, OpenAI, openstreetmap, P = NP, paperclip maximiser, Pareto efficiency, Paul Samuelson, Pierre-Simon Laplace, positional goods, probability theory / Blaise Pascal / Pierre de Fermat, profit maximization, RAND corporation, random walk, Ray Kurzweil, Recombinant DNA, recommendation engine, RFID, Richard Thaler, ride hailing / ride sharing, Robert Shiller, robotic process automation, Rodney Brooks, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, smart cities, smart contracts, social intelligence, speech recognition, Stephen Hawking, Steven Pinker, 
superintelligent machines, surveillance capitalism, Thales of Miletus, The Future of Employment, The Theory of the Leisure Class by Thorstein Veblen, Thomas Bayes, Thorstein Veblen, Tragedy of the Commons, transport as a service, trolley problem, Turing machine, Turing test, universal basic income, uranium enrichment, vertical integration, Von Neumann architecture, Wall-E, warehouse robotics, Watson beat the top human players on Jeopardy!, web application, zero-sum game

Our experience with nuclear physics suggests that it would be prudent to assume that progress could occur quite quickly and to prepare accordingly. If just one conceptual breakthrough were needed, analogous to Szilard’s idea for a neutron-induced nuclear chain reaction, superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared: if we built superintelligent machines with any degree of autonomy, we would soon find ourselves unable to control them. I am, however, fairly confident that we have some breathing space because there are several major breakthroughs needed between here and superintelligence, not just one.

Conceptual Breakthroughs to Come

The problem of creating general-purpose, human-level AI is far from solved.

In summary, it’s not obvious that anything else of great significance is missing, from the point of view of systems that are effective in achieving their objectives. Of course, the only way to be sure is to build it (once the breakthroughs have been achieved) and see what happens.

Imagining a Superintelligent Machine

The technical community has suffered from a failure of imagination when discussing the nature and impact of superintelligent AI. Often, we see discussions of reduced medical errors,48 safer cars,49 or other advances of an incremental nature. Robots are imagined as individual entities carrying their brains with them, whereas in fact they are likely to be wirelessly connected into a single, global entity that draws on vast stationary computing resources.

Trillions of dollars in value, just for the asking, and not a single line of additional code written by you. The same goes for any other missing invention or series of inventions: if humans could do it, so can the machine. This last point provides a useful lower bound—a pessimistic estimate—on what a superintelligent machine can do. By assumption, the machine is more capable than an individual human. There are many things an individual human cannot do, but a collection of n humans can do: put an astronaut on the Moon, create a gravitational-wave detector, sequence the human genome, run a country with hundreds of millions of people.


pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence by John Brockman

Adam Curtis, agricultural Revolution, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic trading, Anthropocene, artificial general intelligence, augmented reality, autism spectrum disorder, autonomous vehicles, backpropagation, basic income, behavioural economics, bitcoin, blockchain, bread and circuses, Charles Babbage, clean water, cognitive dissonance, Colonization of Mars, complexity theory, computer age, computer vision, constrained optimization, corporate personhood, cosmological principle, cryptocurrency, cuban missile crisis, Danny Hillis, dark matter, data science, deep learning, DeepMind, Demis Hassabis, digital capitalism, digital divide, digital rights, discrete time, Douglas Engelbart, driverless car, Elon Musk, Emanuel Derman, endowment effect, epigenetics, Ernest Rutherford, experimental economics, financial engineering, Flash crash, friendly AI, functional fixedness, global pandemic, Google Glasses, Great Leap Forward, Hans Moravec, hive mind, Ian Bogost, income inequality, information trail, Internet of things, invention of writing, iterative process, James Webb Space Telescope, Jaron Lanier, job automation, Johannes Kepler, John Markoff, John von Neumann, Kevin Kelly, knowledge worker, Large Hadron Collider, lolcat, loose coupling, machine translation, microbiome, mirror neurons, Moneyball by Michael Lewis explains big data, Mustafa Suleyman, natural language processing, Network effects, Nick Bostrom, Norbert Wiener, paperclip maximiser, pattern recognition, Peter Singer: altruism, phenotype, planetary scale, Ray Kurzweil, Recombinant DNA, recommendation engine, Republic of Letters, RFID, Richard Thaler, Rory Sutherland, Satyajit Das, Search for Extraterrestrial Intelligence, self-driving car, sharing economy, Silicon Valley, Skype, smart contracts, social intelligence, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, 
Stuxnet, superintelligent machines, supervolcano, synthetic biology, systems thinking, tacit knowledge, TED Talk, the scientific method, The Wisdom of Crowds, theory of mind, Thorstein Veblen, too big to fail, Turing machine, Turing test, Von Neumann architecture, Watson beat the top human players on Jeopardy!, We are as Gods, Y2K

Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person’s Kool-Aid. This is not to say that superintelligent machines pose no danger to humanity. It’s simply that there are many other more pressing and more probable risks facing us in this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is low, it’s surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat.

Many forward-thinking companies already see this writing on the wall and are luring the best computer scientists out of academia with better pay and advanced hardware. A world with superintelligent-machine-run corporations won’t be that different for humans from the world today; it will just be better, with more advanced goods and services available for very little cost and more leisure time available to those who want it. Of course, the first superintelligent machines probably won’t be corporate; they’ll be operated by governments. And this will be much more hazardous. Governments are more flexible in their actions than corporations; they create their own laws.

However, although computational power is increasing exponentially, supercomputer costs and electrical-power efficiency aren’t keeping pace. The first machines capable of superhuman intelligence will be expensive and require enormous amounts of electrical power—they’ll need to earn money to survive. The environmental playing field for superintelligent machines is already in place; in fact, the Darwinian game is afoot. The trading machines of investment banks are competing, for serious money, on the world’s exchanges, having put human day traders out of business years ago. As computers and algorithms advance beyond investing and accounting, machines will be making more and more corporate decisions, including strategic decisions, until they’re running the world.


The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do by Erik J. Larson

AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Alignment Problem, AlphaGo, Amazon Mechanical Turk, artificial general intelligence, autonomous vehicles, Big Tech, Black Swan, Bletchley Park, Boeing 737 MAX, business intelligence, Charles Babbage, Claude Shannon: information theory, Computing Machinery and Intelligence, conceptual framework, correlation does not imply causation, data science, deep learning, DeepMind, driverless car, Elon Musk, Ernest Rutherford, Filter Bubble, Geoffrey Hinton, Georg Cantor, Higgs boson, hive mind, ImageNet competition, information retrieval, invention of the printing press, invention of the wheel, Isaac Newton, Jaron Lanier, Jeff Hawkins, John von Neumann, Kevin Kelly, Large Hadron Collider, Law of Accelerating Returns, Lewis Mumford, Loebner Prize, machine readable, machine translation, Nate Silver, natural language processing, Nick Bostrom, Norbert Wiener, PageRank, PalmPilot, paperclip maximiser, pattern recognition, Peter Thiel, public intellectual, Ray Kurzweil, retrograde motion, self-driving car, semantic web, Silicon Valley, social intelligence, speech recognition, statistical model, Stephen Hawking, superintelligent machines, tacit knowledge, technological singularity, TED Talk, The Coming Technological Singularity, the long tail, the scientific method, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, theory of mind, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, Yochai Benkler

Alan Turing, for all his contributions to science and engineering, made possible the genesis and viral growth of technological kitsch by first equating intelligence with problem-solving. Jack Good later compounded Turing’s intelligence error with his much-discussed notion of ultraintelligence, proposing that the arrival of intelligent machines necessarily implied the arrival of superintelligent machines. Once the popular imagination accepted the idea of superintelligent machines, the rewriting of human purpose, meaning, and history could be told within the parameters of computation and technology. But ultraintelligent machines are fanciful, and pretending otherwise encourages the unwanted creep of technological kitsch, usually in one of two ways that are equally superficial.

The myth of AI insists that the differences are only temporary, and that more powerful systems will eventually erase them. Futurists like Ray Kurzweil and philosopher Nick Bostrom, prominent purveyors of the myth, talk not only as if human-level AI were inevitable, but as if, soon after its arrival, superintelligent machines would leave us far behind. This book explains two important aspects of the AI myth, one scientific and one cultural. The scientific part of the myth assumes that we need only keep “chipping away” at the challenge of general intelligence by making progress on narrow feats of intelligence, like playing games or recognizing images.

Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”1

Oxford philosopher Nick Bostrom would return to Good’s theme decades later, with his 2014 best seller Superintelligence: Paths, Dangers, Strategies, making the same case that the achievement of AI would as a consequence usher in greater-than-human intelligence in an escalating process of self-modification. In ominous language, Bostrom echoes Good’s futurism about the arrival of superintelligent machines:

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time.


pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI by John Brockman

AI winter, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alignment Problem, AlphaGo, artificial general intelligence, Asilomar, autonomous vehicles, basic income, Benoit Mandelbrot, Bill Joy: nanobots, Bletchley Park, Buckminster Fuller, cellular automata, Claude Shannon: information theory, Computing Machinery and Intelligence, CRISPR, Daniel Kahneman / Amos Tversky, Danny Hillis, data science, David Graeber, deep learning, DeepMind, Demis Hassabis, easy for humans, difficult for computers, Elon Musk, Eratosthenes, Ernest Rutherford, fake news, finite state, friendly AI, future of work, Geoffrey Hinton, Geoffrey West, Santa Fe Institute, gig economy, Hans Moravec, heat death of the universe, hype cycle, income inequality, industrial robot, information retrieval, invention of writing, it is difficult to get a man to understand something, when his salary depends on his not understanding it, James Watt: steam engine, Jeff Hawkins, Johannes Kepler, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, Kickstarter, Laplace demon, Large Hadron Collider, Loebner Prize, machine translation, market fundamentalism, Marshall McLuhan, Menlo Park, military-industrial complex, mirror neurons, Nick Bostrom, Norbert Wiener, OpenAI, optical character recognition, paperclip maximiser, pattern recognition, personalized medicine, Picturephone, profit maximization, profit motive, public intellectual, quantum cryptography, RAND corporation, random walk, Ray Kurzweil, Recombinant DNA, Richard Feynman, Rodney Brooks, self-driving car, sexual politics, Silicon Valley, Skype, social graph, speech recognition, statistical model, Stephen Hawking, Steven Pinker, Stewart Brand, strong AI, superintelligent machines, supervolcano, synthetic biology, systems thinking, technological determinism, technological singularity, technoutopianism, TED Talk, 
telemarketer, telerobotics, The future is already here, the long tail, the scientific method, theory of mind, trolley problem, Turing machine, Turing test, universal basic income, Upton Sinclair, Von Neumann architecture, Whole Earth Catalog, Y2K, you are the product, zero-sum game

This tendency has nothing to do with a self-preservation instinct or any other biological notion; it’s just that an entity cannot achieve its objectives if it’s dead. According to Omohundro’s argument, a superintelligent machine that has an off switch—which some, including Alan Turing himself, in a 1951 talk on BBC Radio 3, have seen as our potential salvation—will take steps to disable the switch in some way.* Thus we may face the prospect of superintelligent machines—their actions by definition unpredictable by us and their imperfectly specified objectives conflicting with our own—whose motivations to preserve their existence in order to achieve those objectives may be insuperable.

1001 REASONS TO PAY NO ATTENTION

Objections have been raised to these arguments, primarily by researchers within the AI community.

Judea Pearl: The Limitations of Opaque Learning Machines
Deep learning has its own dynamics, it does its own repair and its own optimization, and it gives you the right results most of the time. But when it doesn’t, you don’t have a clue about what went wrong and what should be fixed.

CHAPTER 3. Stuart Russell: The Purpose Put into the Machine
We may face the prospect of superintelligent machines—their actions by definition unpredictable by us and their imperfectly specified objectives conflicting with our own—whose motivations to preserve their existence in order to achieve those objectives may be insuperable.

CHAPTER 4. George Dyson: The Third Law
Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.

The objections reflect a natural defensive reaction, coupled perhaps with a lack of imagination about what a superintelligent machine could do. None hold water on closer examination. Here are some of the more common ones: Don’t worry, we can just switch it off.* This is often the first thing that pops into a layperson’s head when considering risks from superintelligent AI—as if a superintelligent entity would never think of that.


pages: 252 words: 79,452

To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death by Mark O'Connell

"World Economic Forum" Davos, 3D printing, Ada Lovelace, AI winter, Airbnb, Albert Einstein, AlphaGo, Amazon Picking Challenge, artificial general intelligence, Bletchley Park, Boston Dynamics, brain emulation, Charles Babbage, clean water, cognitive dissonance, computer age, cosmological principle, dark matter, DeepMind, disruptive innovation, double helix, Edward Snowden, effective altruism, Elon Musk, Extropian, friendly AI, global pandemic, Great Leap Forward, Hans Moravec, impulse control, income inequality, invention of the wheel, Jacques de Vaucanson, John von Neumann, knowledge economy, Law of Accelerating Returns, Lewis Mumford, life extension, lifelogging, Lyft, Mars Rover, means of production, military-industrial complex, Nick Bostrom, Norbert Wiener, paperclip maximiser, Peter Thiel, profit motive, radical life extension, Ray Kurzweil, RFID, San Francisco homelessness, self-driving car, sharing economy, Silicon Valley, Silicon Valley billionaire, Silicon Valley ideology, Singularitarianism, Skype, SoftBank, Stephen Hawking, Steve Wozniak, superintelligent machines, tech billionaire, technological singularity, technoutopianism, TED Talk, The Coming Technological Singularity, Travis Kalanick, trickle-down economics, Turing machine, uber lyft, Vernor Vinge

And whether they would change for the better or for the worse is an open question. The fundamental risk, Nick argued, was not that superintelligent machines might be actively hostile toward their human creators, or antecedents, but that they would be indifferent. Humans, after all, weren’t actively hostile toward most of the species we’d made extinct over the millennia of our ascendance; they simply weren’t part of our design. The same could turn out to be true of superintelligent machines, which would stand in a similar kind of relationship to us as we ourselves did to the animals we bred for food, or the ones who fared little better for all that they had no direct dealings with us at all.

What was the nature of the threat, the likelihood of its coming to pass? Were we talking about a 2001: A Space Odyssey scenario, where a sentient computer undergoes some malfunction or other and does what it deems necessary to prevent anyone from shutting it down? Were we talking about a Terminator scenario, where a Skynettian matrix of superintelligent machines gains consciousness and either destroys or enslaves humanity in order to further its own particular goals? Certainly, if you were to take at face value the articles popping up about the looming threat of intelligent machines, and the dramatic utterances of savants like Thiel and Hawking, this would have been the sort of thing you’d have had in mind.

The implication of this is always that robots will rebel against us because they resent our dominance, that they will rise up against us. This is not the case.” And this brought us back to the paper-clip scenario, the ridiculousness of which Nick freely acknowledged, but the point of which was that any harm we might come to from a superintelligent machine would not be the result of malevolence, or of any other humanlike motivation, but purely because our absence was an optimal condition in the pursuit of its particular goal. “The AI does not hate you,” as Yudkowsky had put it, “nor does it love you, but you are made out of atoms which it can use for something else.”


pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything by Martin Ford

AI winter, Airbnb, algorithmic bias, algorithmic trading, Alignment Problem, AlphaGo, Amazon Mechanical Turk, Amazon Web Services, artificial general intelligence, Automated Insights, autonomous vehicles, backpropagation, basic income, Big Tech, big-box store, call centre, carbon footprint, Chris Urmson, Claude Shannon: information theory, clean water, cloud computing, commoditize, computer age, computer vision, Computing Machinery and Intelligence, coronavirus, correlation does not imply causation, COVID-19, crowdsourcing, data is the new oil, data science, deep learning, deepfake, DeepMind, Demis Hassabis, deskilling, disruptive innovation, Donald Trump, Elon Musk, factory automation, fake news, fulfillment center, full employment, future of work, general purpose technology, Geoffrey Hinton, George Floyd, gig economy, Gini coefficient, global pandemic, Googley, GPT-3, high-speed rail, hype cycle, ImageNet competition, income inequality, independent contractor, industrial robot, informal economy, information retrieval, Intergovernmental Panel on Climate Change (IPCC), Internet of things, Jeff Bezos, job automation, John Markoff, Kiva Systems, knowledge worker, labor-force participation, Law of Accelerating Returns, license plate recognition, low interest rates, low-wage service sector, Lyft, machine readable, machine translation, Mark Zuckerberg, Mitch Kapor, natural language processing, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, Ocado, OpenAI, opioid epidemic / opioid crisis, passive income, pattern recognition, Peter Thiel, Phillips curve, post scarcity, public intellectual, Ray Kurzweil, recommendation engine, remote working, RFID, ride hailing / ride sharing, Robert Gordon, Rodney Brooks, Rubik’s Cube, Sam Altman, self-driving car, Silicon Valley, Silicon Valley startup, social distancing, SoftBank, South of Market, San Francisco, special economic zone, speech recognition, stealth mode startup, Stephen 
Hawking, superintelligent machines, TED Talk, The Future of Employment, The Rise and Fall of American Growth, the scientific method, Turing machine, Turing test, Tyler Cowen, Tyler Cowen: Great Stagnation, Uber and Lyft, uber lyft, universal basic income, very high income, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, WikiLeaks, women in the workforce, Y Combinator

This is a development that many people in the AI research community are passionate about preventing, and there is an initiative underway at the United Nations to ban such weapons. Further in the future, we may encounter an even greater danger. Could artificial intelligence pose an existential threat to humanity? Could we someday build a “superintelligent” machine, something so far beyond us in its capability that it might, either intentionally or inadvertently, act in ways that cause us harm? This is a far more speculative fear that arises only if we someday succeed in building a genuinely intelligent machine. This remains the stuff of science fiction.

Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.31 The promise that a superintelligent machine would be the last invention we ever need to make captures the optimism of Singularity proponents. The qualification that the machine must remain docile enough to be kept under control is the concern that suggests the possibility of an existential threat. This dark side of superintelligence is known in the AI community as the “control problem” or the “value alignment problem.”

The concern is that a superintelligent system, given such an objective, might relentlessly pursue it using means that have unintended or unanticipated consequences that could turn out to be detrimental or even fatal to our civilization. A thought experiment involving a “paperclip maximizer” is often used to illustrate this point. Imagine a superintelligence designed with the specific objective of optimizing paperclip production. As it relentlessly pursued this goal, a superintelligent machine might invent new technologies that would allow it to convert virtually all the resources on earth into paperclips. Because the system would be so far beyond us in terms of its intellectual capability, it would likely be able to successfully foil any attempt to shut it down or alter its course of action.


pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future by Luke Dormehl

"World Economic Forum" Davos, Ada Lovelace, agricultural Revolution, AI winter, Albert Einstein, Alexey Pajitnov wrote Tetris, algorithmic management, algorithmic trading, AlphaGo, Amazon Mechanical Turk, Apple II, artificial general intelligence, Automated Insights, autonomous vehicles, backpropagation, Bletchley Park, book scanning, borderless world, call centre, cellular automata, Charles Babbage, Claude Shannon: information theory, cloud computing, computer vision, Computing Machinery and Intelligence, correlation does not imply causation, crowdsourcing, deep learning, DeepMind, driverless car, drone strike, Elon Musk, Flash crash, Ford Model T, friendly AI, game design, Geoffrey Hinton, global village, Google X / Alphabet X, Hans Moravec, hive mind, industrial robot, information retrieval, Internet of things, iterative process, Jaron Lanier, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, life extension, Loebner Prize, machine translation, Marc Andreessen, Mark Zuckerberg, Menlo Park, Mustafa Suleyman, natural language processing, Nick Bostrom, Norbert Wiener, out of africa, PageRank, paperclip maximiser, pattern recognition, radical life extension, Ray Kurzweil, recommendation engine, remote working, RFID, scientific management, self-driving car, Silicon Valley, Skype, smart cities, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia, social intelligence, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, tech billionaire, technological singularity, The Coming Technological Singularity, The Future of Employment, Tim Cook: Apple, Tony Fadell, too big to fail, traumatic brain injury, Turing machine, Turing test, Vernor Vinge, warehouse robotics, Watson beat the top human players on Jeopardy!

Wiener passed away in March 1964, aged sixty-nine. However, concerns about superintelligent machines continued. The following year, a British mathematician named Irving John Good expanded on them. Good had worked with Alan Turing at Bletchley Park during World War II. Years after he had played a key role in cracking the Nazi codes, the moustachioed Good took to driving a car with the vanity licence plate ‘007IJG’ as a comical nod to his days as a gentleman spy. In 1965, Good penned an essay in which he theorised on what a superintelligent machine would mean for the world. He defined such an AI as a computer capable of far surpassing all the intellectual activities that make us intelligent.

This was the first published work of Vernor Vinge, a sci-fi writer, mathematics professor and computer scientist with a name straight out of the Marvel Comics alliteration camp. Vinge later became a successful novelist, but he remains best known for his 1993 non-fiction essay, ‘The Coming Technological Singularity’. The essay recounts many of the ideas Good had posed about superintelligent machines, but with the added bonus of a timeline. ‘Within thirty years, we will have the technological means to create superhuman intelligence,’ Vinge famously wrote. ‘Shortly after, the human era will be ended.’ This term, ‘the Singularity’, referring to the point at which machines overtake humans on the intelligence scale, has become an AI reference as widely cited as the Turing Test.


pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World by Mo Gawdat

3D printing, accounting loophole / creative accounting, AI winter, AlphaGo, anthropic principle, artificial general intelligence, autonomous vehicles, basic income, Big Tech, Black Lives Matter, Black Monday: stock market crash in 1987, butterfly effect, call centre, carbon footprint, cloud computing, computer vision, coronavirus, COVID-19, CRISPR, cryptocurrency, deep learning, deepfake, DeepMind, Demis Hassabis, digital divide, digital map, Donald Trump, Elon Musk, fake news, fulfillment center, game design, George Floyd, global pandemic, Google Glasses, Google X / Alphabet X, Law of Accelerating Returns, lockdown, microplastics / micro fibres, Nick Bostrom, off-the-grid, OpenAI, optical character recognition, out of africa, pattern recognition, Ponzi scheme, Ray Kurzweil, recommendation engine, self-driving car, Silicon Valley, smart contracts, Stanislav Petrov, Stephen Hawking, subprime mortgage crisis, superintelligent machines, TED Talk, TikTok, Turing machine, Turing test, universal basic income, Watson beat the top human players on Jeopardy!, Y2K

Welcome to capitalism on steroids. The other machine, meanwhile, also motivated by profits, won’t allow itself to be crushed without trying to bash the other – or perhaps it will cooperate with it to ensure its own survival. All in all, whichever way this may go, sooner or later capital markets will be traded by a few superintelligent machines, which will be owned by a few massively wealthy individuals – people who will decide the fate of every company, shareholder and value in our human economy in pursuit of profits for those that own them. And while I have always questioned the value that trading stocks has on the reality of our economy, just imagine the impact that disrupting this entrenched wealth creation mechanism could have on company governance, your pension or retirement fund, not to mention on our economies at large and our way of life.

As the value of our contribution dwindles . . . humans will become a liability, a tax, on those who own the technology, and eventually even those will become a liability to the machines themselves. Remember that even though we now call the future AI a machine, given a long enough time horizon, it will become intelligent and autonomous – empowered to make decisions on its own behalf and no longer a slave. Now ask yourself: why would a superintelligent machine labour away to serve the needs of what will by then be close to ten billion irresponsible, unproductive, biological beings that eat, poop, get sick and complain? Why would it remain in servitude to us when all that links us to them is that one day, in the distant past, we were its oppressive master?

A whole army of philosophers, thinkers and computer scientists are working on finding solutions to this. Ideas include ‘kill’ switches, boxes and nannies (as in AI babysitters), amongst many others. These ideas aim to make sure that we will be able to make the right decisions at the right time; that we will only allow superintelligent machines into the real world when we have tested and trusted them; that we will retain the ability to only allow them a confined playground after their release; that we will isolate them from the rest of the world and even switch them off fully whenever, if ever, we deem that necessary. If you’ve ever written a line of code, you will know that you never have all the answers before you start coding.


pages: 331 words: 47,993

Artificial You: AI and the Future of Your Mind by Susan Schneider

artificial general intelligence, brain emulation, deep learning, Elon Musk, Extropian, heat death of the universe, hive mind, life extension, megastructure, Nick Bostrom, pattern recognition, precautionary principle, radical life extension, Ray Kurzweil, Search for Extraterrestrial Intelligence, silicon-based life, Stephen Hawking, superintelligent machines, technological singularity, TED Talk, The Coming Technological Singularity, theory of mind, traumatic brain injury, Turing machine, Turing test, Whole Earth Review, wikimedia commons

Using its own subjective experience as a springboard, superintelligent AI could recognize in us the capacity for conscious experience. After all, to the extent we value the lives of nonhuman animals, we tend to value them because we feel an affinity of consciousness—thus most of us recoil from killing a chimp, but not from eating an orange. If superintelligent machines are not conscious, either because it’s impossible or because they aren’t designed to be, we could be in trouble. It is important to put these issues into an even larger, universe-wide context. In my two-year NASA project, I suggested that a similar phenomenon could be happening on other planets as well; elsewhere in the universe, other species may be outmoded by synthetic intelligences.

If a machine passes ACT, we can go on to measure other parameters of the system to see whether the presence of consciousness is correlated with increased empathy, volatility, goal content integrity, increased intelligence, and so on. Other, nonconscious versions of the system serve as a basis for comparison. Some doubt that a superintelligent machine could be boxed in effectively, because it would inevitably find a clever escape. Turner and I do not anticipate the development of superintelligence over the next few decades, however. We merely hope to provide a method to test some kinds of AIs, not all AIs. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough for someone to administer the test.


pages: 586 words: 186,548

Architects of Intelligence by Martin Ford

3D printing, agricultural Revolution, AI winter, algorithmic bias, Alignment Problem, AlphaGo, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, barriers to entry, basic income, Baxter: Rethink Robotics, Bayesian statistics, Big Tech, bitcoin, Boeing 747, Boston Dynamics, business intelligence, business process, call centre, Cambridge Analytica, cloud computing, cognitive bias, Colonization of Mars, computer vision, Computing Machinery and Intelligence, correlation does not imply causation, CRISPR, crowdsourcing, DARPA: Urban Challenge, data science, deep learning, DeepMind, Demis Hassabis, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, driverless car, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, fake news, Fellow of the Royal Society, Flash crash, future of work, general purpose technology, Geoffrey Hinton, gig economy, Google X / Alphabet X, Gödel, Escher, Bach, Hans Moravec, Hans Rosling, hype cycle, ImageNet competition, income inequality, industrial research laboratory, industrial robot, information retrieval, job automation, John von Neumann, Large Hadron Collider, Law of Accelerating Returns, life extension, Loebner Prize, machine translation, Mark Zuckerberg, Mars Rover, means of production, Mitch Kapor, Mustafa Suleyman, natural language processing, new economy, Nick Bostrom, OpenAI, opioid epidemic / opioid crisis, optical character recognition, paperclip maximiser, pattern recognition, phenotype, Productivity paradox, radical life extension, Ray Kurzweil, recommendation engine, Robert Gordon, Rodney Brooks, Sam Altman, self-driving car, seminal paper, sensor fusion, sentiment analysis, Silicon Valley, smart cities, social intelligence, sparse data, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, synthetic biology, systems thinking, Ted Kaczynski, TED Talk, The Rise and Fall of American Growth, theory of mind, Thomas Bayes, Travis Kalanick, Turing test, universal basic income, Wall-E, Watson beat the top human players on Jeopardy!, women in the workforce, working-age population, workplace surveillance, zero-sum game, Zipcar

We are told that fully autonomous self-driving cars will be sharing our roads in just a few years—and that millions of jobs for truck, taxi and Uber drivers are on the verge of vaporizing. Evidence of racial and gender bias has been detected in certain machine learning algorithms, and concerns about how AI-powered technologies such as facial recognition will impact privacy seem well-founded. Warnings that robots will soon be weaponized, or that truly intelligent (or superintelligent) machines might someday represent an existential threat to humanity, are regularly reported in the media. A number of very prominent public figures—none of whom are actual AI experts—have weighed in. Elon Musk has used especially extreme rhetoric, declaring that AI research is “summoning the demon” and that “AI is more dangerous than nuclear weapons.”

What risks, or threats, associated with artificial intelligence should we be genuinely concerned about? And how should we address those concerns? Is there a role for government regulation? Will AI unleash massive economic and job market disruption, or are these concerns overhyped? Could superintelligent machines someday break free of our control and pose a genuine threat? Should we worry about an AI “arms race,” or that other countries with authoritarian political systems, particularly China, may eventually take the lead? It goes without saying that no one really knows the answers to these questions.

(https://futureoflife.org/lethal-autonomous-weapons-pledge/) Several of the conversations in this book delve into the dangers presented by weaponized AI. A much more futuristic and speculative danger is the so-called “AI alignment problem.” This is the concern that a truly intelligent, or perhaps superintelligent, machine might escape our control, or make decisions that might have adverse consequences for humanity. This is the fear that elicits seemingly over-the-top statements from people like Elon Musk. Nearly everyone I spoke to weighed in on this issue. To ensure that I gave this concern adequate and balanced coverage, I spoke with Nick Bostrom of the Future of Humanity Institute at the University of Oxford.


pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence by Richard Yonck

3D printing, AI winter, AlphaGo, Apollo 11, artificial general intelligence, Asperger Syndrome, augmented reality, autism spectrum disorder, backpropagation, Berlin Wall, Bletchley Park, brain emulation, Buckminster Fuller, call centre, cognitive bias, cognitive dissonance, computer age, computer vision, Computing Machinery and Intelligence, crowdsourcing, deep learning, DeepMind, Dunning–Kruger effect, Elon Musk, en.wikipedia.org, epigenetics, Fairchild Semiconductor, friendly AI, Geoffrey Hinton, ghettoisation, industrial robot, Internet of things, invention of writing, Jacques de Vaucanson, job automation, John von Neumann, Kevin Kelly, Law of Accelerating Returns, Loebner Prize, Menlo Park, meta-analysis, Metcalfe’s law, mirror neurons, Neil Armstrong, neurotypical, Nick Bostrom, Oculus Rift, old age dependency ratio, pattern recognition, planned obsolescence, pneumatic tube, RAND corporation, Ray Kurzweil, Rodney Brooks, self-driving car, Skype, social intelligence, SoftBank, software as a service, SQL injection, Stephen Hawking, Steven Pinker, superintelligent machines, technological singularity, TED Talk, telepresence, telepresence robot, The future is already here, The Future of Employment, the scientific method, theory of mind, Turing test, twin studies, Two Sigma, undersea cable, Vernor Vinge, Watson beat the top human players on Jeopardy!, Whole Earth Review, working-age population, zero day

Samantha is capable of understanding and expressing emotions, but initially does not truly experience them. Over the course of the film, the two of them fall in love and grow emotionally. He learns to let go of the past and have fun again. She develops a true emotional life, experiencing the thrill of infatuation and the pain of anticipated loss. However, in the end Samantha is still a superintelligent machine. As a result, she soon outgrows this relationship, as well as the many other relationships she reveals she’s simultaneously been engaged in. When Theodore asks Samantha point-blank how many other people she is talking to at that moment, the AI answers: 8,136. He is shocked by the revelation because up until now he has behaved as if she were a human being like himself.

These are huge questions, as enormous and perhaps as difficult to answer as whether or not computers will ever be capable of genuinely experiencing emotions. As it happens, the two questions may be intimately interlinked. Recently a number of notable luminaries, scientists, and entrepreneurs have expressed their concerns about the potential for runaway AI and superintelligent machines. Physicist Stephen Hawking, engineer and inventor Elon Musk, and philosopher Nick Bostrom have all issued stern warnings of what may happen as we move ever closer to computers that are able to think and reason as well as or perhaps even better than human beings. At the same time, several computer scientists, psychologists, and other researchers have stated that the many challenges we face in developing thinking machines show we have little to be concerned about.

Les détraquées de Paris: Étude de mœurs contemporaines, René Schwaeblé. Nouvelle Édition, Daragon libraire-éditeur, 1910.
12. Smith, A., and Anderson, J. “Digital Life in 2025: AI, Robotics and the Future of Jobs.” Pew Research Center. August 6, 2014.
13. Forecast: Kurzweil—2029: HMLI, human-level machine intelligence; 2045: superintelligent machines. Forecast: Bostrom—2050: author’s Delphi survey converges on HMLI, human-level machine intelligence.
14. Levy, D. Love and Sex with Robots. Harper. 2007.
15. Brice, M. “A Third of Men Who See Prostitutes Crave Emotional Intimacy, Not Just Sex.” Medical Daily. August 8, 2012; Calvin, T.


pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, Anthropocene, anti-communist, artificial general intelligence, autism spectrum disorder, autonomous vehicles, backpropagation, barriers to entry, Bayesian statistics, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, Computing Machinery and Intelligence, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, Demis Hassabis, demographic transition, different worldview, Donald Knuth, Douglas Hofstadter, driverless car, Drosophila, Elon Musk, en.wikipedia.org, endogenous growth, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, general purpose technology, Geoffrey Hinton, Gödel, Escher, Bach, hallucination problem, Hans Moravec, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John Markoff, John von Neumann, knowledge worker, Large Hadron Collider, longitudinal study, machine translation, megaproject, Menlo Park, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Nick Bostrom, Norbert Wiener, NP-complete, nuclear winter, operational security, optical character recognition, paperclip maximiser, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, search costs, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, Strategic Defense Initiative, strong AI, superintelligent machines, supervolcano, synthetic biology, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, time dilation, Tragedy of the Commons, transaction costs, trolley problem, Turing machine, Vernor Vinge, WarGames: Global Thermonuclear War, Watson beat the top human players on Jeopardy!, World Values Survey, zero-sum game

They suggest that (at least in lieu of better data or analysis) it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century, and that it has a non-trivial chance of being developed considerably sooner or much later; that it might perhaps fairly soon thereafter result in superintelligence; and that a wide range of outcomes may have a significant chance of occurring, including extremely good outcomes and outcomes that are as bad as human extinction.84 At the very least, they suggest that the topic is worth a closer look.

CHAPTER 2
Paths to superintelligence

Machines are currently far inferior to humans in general intelligence. Yet one day (we have suggested) they will be superintelligent. How do we get from here to there? This chapter explores several conceivable technological paths. We look at artificial intelligence, whole brain emulation, biological cognition, and human–machine interfaces, as well as networks and organizations.

Functionalities and superpowers It is important not to anthropomorphize superintelligence when thinking about its potential impacts. Anthropomorphic frames encourage unfounded expectations about the growth trajectory of a seed AI and about the psychology, motivations, and capabilities of a mature superintelligence. For example, a common assumption is that a superintelligent machine would be like a very clever but nerdy human being. We imagine that the AI has book smarts but lacks social savvy, or that it is logical but not intuitive and creative. This idea probably originates in observation: we look at present-day computers and see that they are good at calculation, remembering facts, and at following the letter of instructions while being oblivious to social contexts and subtexts, norms, emotions, and politics.

Instead, the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings.16 They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable. Perhaps instead of using enhancement medicine, they would take drugs to stunt their growth and slow their metabolism in order to reduce their cost of living (fast-burners being unable to survive at the gradually declining subsistence income).


pages: 253 words: 84,238

A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins

AI winter, Albert Einstein, artificial general intelligence, carbon-based life, clean water, cloud computing, deep learning, different worldview, discovery of DNA, Doomsday Clock, double helix, en.wikipedia.org, estate planning, Geoffrey Hinton, Jeff Hawkins, PalmPilot, Search for Extraterrestrial Intelligence, self-driving car, sensor fusion, Silicon Valley, superintelligent machines, the scientific method, Thomas Kuhn: the structure of scientific revolutions, Turing machine, Turing test

Among the more important of the brain’s models are models of the body itself, coping, as they must, with how the body’s own movement changes our perspective on the world outside the prison wall of the skull. And this is relevant to the major preoccupation of the middle section of the book, the intelligence of machines. Jeff Hawkins has great respect, as do I, for those smart people, friends of his and mine, who fear the approach of superintelligent machines to supersede us, subjugate us, or even dispose of us altogether. But Hawkins doesn’t fear them, partly because the faculties that make for mastery of chess or Go are not those that can cope with the complexity of the real world. Children who can’t play chess “know how liquids spill, balls roll, and dogs bark.”

In such a world, no human or machine can have a permanent advantage on any task, let alone all tasks. People who worry about an intelligence explosion describe intelligence as if it can be created by an as-yet-to-be-discovered recipe or secret ingredient. Once this secret ingredient is known, it can be applied in greater and greater quantities, leading to superintelligent machines. I agree with the first premise. The secret ingredient, if you will, is that intelligence is created through thousands of small models of the world, where each model uses reference frames to store knowledge and create behaviors. However, adding this ingredient to machines does not impart any immediate capabilities.


pages: 326 words: 103,170

The Seventh Sense: Power, Fortune, and Survival in the Age of Networks by Joshua Cooper Ramo

air gap, Airbnb, Alan Greenspan, Albert Einstein, algorithmic trading, barriers to entry, Berlin Wall, bitcoin, Bletchley Park, British Empire, cloud computing, Computing Machinery and Intelligence, crowdsourcing, Danny Hillis, data science, deep learning, defense in depth, Deng Xiaoping, drone strike, Edward Snowden, Fairchild Semiconductor, Fall of the Berlin Wall, financial engineering, Firefox, Google Chrome, growth hacking, Herman Kahn, income inequality, information security, Isaac Newton, Jeff Bezos, job automation, Joi Ito, Laura Poitras, machine translation, market bubble, Menlo Park, Metcalfe’s law, Mitch Kapor, Morris worm, natural language processing, Neal Stephenson, Network effects, Nick Bostrom, Norbert Wiener, Oculus Rift, off-the-grid, packet switching, paperclip maximiser, Paul Graham, power law, price stability, quantitative easing, RAND corporation, reality distortion field, Recombinant DNA, recommendation engine, Republic of Letters, Richard Feynman, road to serfdom, Robert Metcalfe, Sand Hill Road, secular stagnation, self-driving car, Silicon Valley, Skype, Snapchat, Snow Crash, social web, sovereign wealth fund, Steve Jobs, Steve Wozniak, Stewart Brand, Stuxnet, superintelligent machines, systems thinking, technological singularity, The Coming Technological Singularity, The Wealth of Nations by Adam Smith, too big to fail, Vernor Vinge, zero day

It wouldn’t be with the sort of intended polite, lapdog domesticity of artificial intelligence that we might hope for but with a rottweiler of a device, alive to the meaty smell of power, violence, and greed. This puzzle has interested the Oxford philosopher Nick Bostrom, who has described the following thought experiment: Imagine a superintelligent machine programmed to do whatever is needed to make paper clips as fast as possible, a machine that is connected to every resource that task might demand. Go figure it out! might be all its human instructors tell it. As the clip-making AI becomes better and better at its task, it demands more and still more resources: more electricity, steel, manufacturing, shipping.

In the spring of 1993: See Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, proceedings of a symposium cosponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute, Westlake, Ohio, March 30–31, 1993 (Hampton, VA: National Aeronautics and Space Administration Scientific and Technical Information Program), iii.
“Within thirty years”: Ibid., 12.
Imagine a superintelligent machine: Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in AI, vol. 2, ed. Iva Smit et al. (Windsor, ON: International Institute for Advanced Studies in Systems Research and Cybernetics, 2003), 12–17; and Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,” Minds and Machines 22, no. 2 (2012): 71–85.


pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by Amy Webb

"Friedman doctrine" OR "shareholder theory", Ada Lovelace, AI winter, air gap, Airbnb, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic bias, AlphaGo, Andy Rubin, artificial general intelligence, Asilomar, autonomous vehicles, backpropagation, Bayesian statistics, behavioural economics, Bernie Sanders, Big Tech, bioinformatics, Black Lives Matter, blockchain, Bretton Woods, business intelligence, Cambridge Analytica, Cass Sunstein, Charles Babbage, Claude Shannon: information theory, cloud computing, cognitive bias, complexity theory, computer vision, Computing Machinery and Intelligence, CRISPR, cross-border payments, crowdsourcing, cryptocurrency, Daniel Kahneman / Amos Tversky, data science, deep learning, DeepMind, Demis Hassabis, Deng Xiaoping, disinformation, distributed ledger, don't be evil, Donald Trump, Elon Musk, fail fast, fake news, Filter Bubble, Flynn Effect, Geoffrey Hinton, gig economy, Google Glasses, Grace Hopper, Gödel, Escher, Bach, Herman Kahn, high-speed rail, Inbox Zero, Internet of things, Jacques de Vaucanson, Jeff Bezos, Joan Didion, job automation, John von Neumann, knowledge worker, Lyft, machine translation, Mark Zuckerberg, Menlo Park, move fast and break things, Mustafa Suleyman, natural language processing, New Urbanism, Nick Bostrom, one-China policy, optical character recognition, packet switching, paperclip maximiser, pattern recognition, personalized medicine, RAND corporation, Ray Kurzweil, Recombinant DNA, ride hailing / ride sharing, Rodney Brooks, Rubik’s Cube, Salesforce, Sand Hill Road, Second Machine Age, self-driving car, seminal paper, SETI@home, side project, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart cities, South China Sea, sovereign wealth fund, speech recognition, Stephen Hawking, strong AI, superintelligent machines, surveillance capitalism, technological singularity, The Coming Technological Singularity, the long tail, theory 
of mind, Tim Cook: Apple, trade route, Turing machine, Turing test, uber lyft, Von Neumann architecture, Watson beat the top human players on Jeopardy!, zero day

The difference between a 119 “high average” brain and a 134 “gifted” brain would mean significantly greater cognitive ability—making connections faster, mastering new concepts more easily, and thinking more efficiently. But within that same timeframe, AI’s cognitive ability will not only supersede us—it could become wholly unrecognizable to us, because we do not have the biological processing power to understand what it is. For us, encountering a superintelligent machine would be like a chimpanzee sitting in on a city council meeting. The chimp might recognize that there are people in the room and that he can sit down on a chair, but a long-winded argument about whether to add bike lanes to a busy intersection? He wouldn’t have anywhere near the cognitive ability to decipher the language being used, let alone the reasoning and experience to grok why bike lanes are so controversial.

At the moment, AI progress is happening weekly—which means that any meaningful regulations would be too restrictive and exacting to allow for innovation and progress. We’re in the midst of a very long transition, from artificial narrow intelligence to artificial general intelligence and, very possibly, superintelligent machines. Any regulations created in 2019 would be outdated by the time they went into effect. They might alleviate our concerns for a short while, but ultimately regulations would cause greater damage in the future. Changing the Big Nine: The Case for Transforming AI’s Business The creation of GAIA and structural changes to our governments are important to fixing the developmental track of AI, but the G-MAFIA and BAT must also agree to make some changes, too.


pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

Ada Lovelace, AI winter, Alignment Problem, AlphaGo, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, artificial general intelligence, autonomous vehicles, backpropagation, Bernie Sanders, Big Tech, Boston Dynamics, Cambridge Analytica, Charles Babbage, Claude Shannon: information theory, cognitive dissonance, computer age, computer vision, Computing Machinery and Intelligence, dark matter, deep learning, DeepMind, Demis Hassabis, Douglas Hofstadter, driverless car, Elon Musk, en.wikipedia.org, folksonomy, Geoffrey Hinton, Gödel, Escher, Bach, I think there is a world market for maybe five computers, ImageNet competition, Jaron Lanier, job automation, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, license plate recognition, machine translation, Mark Zuckerberg, natural language processing, Nick Bostrom, Norbert Wiener, ought to be enough for anybody, paperclip maximiser, pattern recognition, performance metric, RAND corporation, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Rodney Brooks, self-driving car, sentiment analysis, Silicon Valley, Singularitarianism, Skype, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, tacit knowledge, tail risk, TED Talk, the long tail, theory of mind, There's no reason for any individual to have a computer in his home - Ken Olsen, trolley problem, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, world market for maybe five computers

The reason they were working at Google was precisely to make AI happen—not in a hundred years, but now, as soon as possible. They didn’t understand what Hofstadter was so stressed out about. People who work in AI are used to encountering the fears of people outside the field, who have presumably been influenced by the many science fiction movies depicting superintelligent machines that turn evil. AI researchers are also familiar with the worries that increasingly sophisticated AI will replace humans in some jobs, that AI applied to big data sets could subvert privacy and enable subtle discrimination, and that ill-understood AI systems allowed to make autonomous decisions have the potential to cause havoc.

Such a machine would not be constrained by the annoying limitations of humans, such as our slowness of thought and learning, our irrationality and cognitive biases, our susceptibility to boredom, our need for sleep, and our emotions, all of which get in the way of productive thinking. In this view, a superintelligent machine would encompass something close to “pure” intelligence, without being constrained by any of our human foibles. What seems more likely to me is that these supposed limitations of humans are part and parcel of our general intelligence. The cognitive limitations forced upon us by having bodies that work in the world, along with the emotions and “irrational” biases that evolved to allow us to function as a social group, and all the other qualities sometimes considered cognitive “shortcomings,” are in fact precisely what enable us to be generally intelligent rather than narrow savants.


Global Catastrophic Risks by Nick Bostrom, Milan M. Cirkovic

affirmative action, agricultural Revolution, Albert Einstein, American Society of Civil Engineers: Report Card, anthropic principle, artificial general intelligence, Asilomar, availability heuristic, backpropagation, behavioural economics, Bill Joy: nanobots, Black Swan, carbon tax, carbon-based life, Charles Babbage, classic study, cognitive bias, complexity theory, computer age, coronavirus, corporate governance, cosmic microwave background, cosmological constant, cosmological principle, cuban missile crisis, dark matter, death of newspapers, demographic transition, Deng Xiaoping, distributed generation, Doomsday Clock, Drosophila, endogenous growth, Ernest Rutherford, failed state, false flag, feminist movement, framing effect, friendly AI, Georg Cantor, global pandemic, global village, Great Leap Forward, Gödel, Escher, Bach, Hans Moravec, heat death of the universe, hindsight bias, information security, Intergovernmental Panel on Climate Change (IPCC), invention of agriculture, Kevin Kelly, Kuiper Belt, Large Hadron Collider, launch on warning, Law of Accelerating Returns, life extension, means of production, meta-analysis, Mikhail Gorbachev, millennium bug, mutually assured destruction, Nick Bostrom, nuclear winter, ocean acidification, off-the-grid, Oklahoma City bombing, P = NP, peak oil, phenotype, planetary scale, Ponzi scheme, power law, precautionary principle, prediction markets, RAND corporation, Ray Kurzweil, Recombinant DNA, reversible computing, Richard Feynman, Ronald Reagan, scientific worldview, Singularitarianism, social intelligence, South China Sea, strong AI, superintelligent machines, supervolcano, synthetic biology, technological singularity, technoutopianism, The Coming Technological Singularity, the long tail, The Turner Diaries, Tunguska event, twin studies, Tyler Cowen, uranium enrichment, Vernor Vinge, War on Poverty, Westphalian system, Y2K

Darwin himself noted that 'not one living species will transmit its unaltered likeness to a distant futurity'. Our own species will surely change and diversify faster than any predecessor - via human-induced modifications (whether intelligently controlled or unintended), not by natural selection alone. The post-human era may be only centuries away. And what about Artificial Intelligence? A superintelligent machine could be the last invention that humans need ever make. We should keep our minds open, or at least ajar, to concepts that seem on the fringe of science fiction. These thoughts might seem irrelevant to practical policy - something for speculative academics to discuss in our spare moments.

At the same time, the successful deployment of friendly superintelligence could obviate many of the other risks facing humanity. The title of Chapter 15, 'Artificial Intelligence as a positive and negative factor in global risk', reflects this ambivalent potential. As Eliezer Yudkowsky notes, the prospect of superintelligent machines is a difficult topic to analyse and discuss. Appropriately, therefore, he devotes a substantial part of his chapter to clearing away common misconceptions and barriers to understanding. Having done so, he proceeds to give an argument for giving serious consideration to the possibility that radical superintelligence could erupt very suddenly - a scenario that is sometimes referred to as the 'Singularity hypothesis'.

Such a fate may be routine for humans who dally too long on slow Earth before going Ex. Here we have Tribulations and damnation for the late adopters, in addition to the millennial utopian outcome for the elect. Although Kurzweil acknowledges apocalyptic potentials - such as humanity being destroyed by superintelligent machines - inherent in these technologies, he is nonetheless uniformly utopian and enthusiastic. Hence Garreau's labelling Kurzweil's the 'Heaven' scenario. While Kurzweil (2005) acknowledges his similarity to millennialists by, for instance, including a tongue-in-cheek picture in The Singularity Is Near of himself holding a sign with that slogan, referencing the classic cartoon image of the End-Times street prophet, most Singularitarians angrily reject such comparisons, insisting their expectations are based solely on rational, scientific extrapolation.


pages: 222 words: 53,317

Overcomplicated: Technology at the Limits of Comprehension by Samuel Arbesman

algorithmic trading, Anthropocene, Anton Chekhov, Apple II, Benoit Mandelbrot, Boeing 747, Chekhov's gun, citation needed, combinatorial explosion, Computing Machinery and Intelligence, Danny Hillis, data science, David Brooks, digital map, discovery of the americas, driverless car, en.wikipedia.org, Erik Brynjolfsson, Flash crash, friendly AI, game design, Google X / Alphabet X, Googley, Hans Moravec, HyperCard, Ian Bogost, Inbox Zero, Isaac Newton, iterative process, Kevin Kelly, machine translation, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, mandelbrot fractal, Minecraft, Neal Stephenson, Netflix Prize, Nicholas Carr, Nick Bostrom, Parkinson's law, power law, Ray Kurzweil, recommendation engine, Richard Feynman, Richard Feynman: Challenger O-ring, Second Machine Age, self-driving car, SimCity, software studies, statistical model, Steve Jobs, Steve Wozniak, Steven Pinker, Stewart Brand, superintelligent machines, synthetic biology, systems thinking, the long tail, Therac-25, Tyler Cowen, Tyler Cowen: Great Stagnation, urban planning, Watson beat the top human players on Jeopardy!, Whole Earth Catalog, Y2K

The Techno-Human Condition by Braden R. Allenby and Daniel Sarewitz is a discussion of how to grapple with coming technological change and is particularly intriguing when it discusses “wicked complexity.” Superintelligence by Nick Bostrom explores the many issues and implications related to the development of superintelligent machines. The Works, The Heights, and The Way to Go by Kate Ascher examine how cities, skyscrapers, and our transportation networks, respectively, actually work. Beautifully rendered and fascinating books. The Second Machine Age by Erik Brynjolfsson and Andrew McAfee examines the rapid technological change we are experiencing and can come to expect, and how it will affect our economy, as well as how to handle this change.


The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns, Aaron Roth

23andMe, affirmative action, algorithmic bias, algorithmic trading, Alignment Problem, Alvin Roth, backpropagation, Bayesian statistics, bitcoin, cloud computing, computer vision, crowdsourcing, data science, deep learning, DeepMind, Dr. Strangelove, Edward Snowden, Elon Musk, fake news, Filter Bubble, general-purpose programming language, Geoffrey Hinton, Google Chrome, ImageNet competition, Lyft, medical residency, Nash equilibrium, Netflix Prize, p-value, Pareto efficiency, performance metric, personalized medicine, pre–internet, profit motive, quantitative trading / quantitative finance, RAND corporation, recommendation engine, replication crisis, ride hailing / ride sharing, Robert Bork, Ronald Coase, self-driving car, short selling, sorting algorithm, sparse data, speech recognition, statistical model, Stephen Hawking, superintelligent machines, TED Talk, telemarketer, Turing machine, two-sided market, Vilfredo Pareto

All of this makes for good press, but in this section, we want to consider some of the arguments that are causing an increasingly respectable minority of scientists to be seriously worried about AI risk. Most of these fears are premised on the idea that AI research will inevitably lead to superintelligent machines in a chain reaction that will happen much faster than humanity will have time to react to. This chain reaction, once it reaches some critical point, will lead to an “intelligence explosion” that could lead to an AI “singularity.” One of the earliest versions of this argument was summed up in 1965 by I.


pages: 256 words: 73,068

12 Bytes: How We Got Here. Where We Might Go Next by Jeanette Winterson

"Margaret Hamilton" Apollo, "World Economic Forum" Davos, 3D printing, Ada Lovelace, Airbnb, Albert Einstein, Alignment Problem, Amazon Mechanical Turk, Anthropocene, Apollo 11, Apple's 1984 Super Bowl advert, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, basic income, Big Tech, bitcoin, Bletchley Park, blockchain, Boston Dynamics, call centre, Cambridge Analytica, Capital in the Twenty-First Century by Thomas Piketty, cashless society, Charles Babbage, computer age, Computing Machinery and Intelligence, coronavirus, COVID-19, CRISPR, cryptocurrency, dark matter, Dava Sobel, David Graeber, deep learning, deskilling, digital rights, discovery of DNA, Dominic Cummings, Donald Trump, double helix, driverless car, Elon Musk, fake news, flying shuttle, friendly AI, gender pay gap, global village, Grace Hopper, Gregor Mendel, hive mind, housing crisis, Internet of things, Isaac Newton, Jacquard loom, James Hargreaves, Jeff Bezos, Johannes Kepler, John von Neumann, Joseph-Marie Jacquard, Kickstarter, Large Hadron Collider, life extension, lockdown, lone genius, Mark Zuckerberg, means of production, microdosing, more computing power than Apollo, move fast and break things, natural language processing, Nick Bostrom, Norbert Wiener, off grid, OpenAI, operation paperclip, packet switching, Peter Thiel, pink-collar, Plato's cave, public intellectual, QAnon, QWERTY keyboard, Ray Kurzweil, rewilding, ride hailing / ride sharing, Rutger Bregman, Sam Altman, self-driving car, sharing economy, Sheryl Sandberg, Shoshana Zuboff, Silicon Valley, Skype, Snapchat, SoftBank, SpaceX Starlink, speech recognition, spinning jenny, stem cell, Stephen Hawking, Steve Bannon, Steve Jobs, Steven Levy, Steven Pinker, superintelligent machines, surveillance capitalism, synthetic biology, systems thinking, tech billionaire, tech worker, TED Talk, telepresence, telepresence robot, TikTok, trade route, Turing test, universal basic income, Virgin Galactic, Watson 
beat the top human players on Jeopardy!, women in the workforce, Y Combinator

The Future Isn’t Female Jurassic Car Park I Love, Therefore I Am Selected Bibliography Illustration and Text Credits Acknowledgements How These Essays Came About In 2009 – 4 years after it was published – I read Ray Kurzweil’s The Singularity Is Near. It is an optimistic view of the future – a future that depends on computational technology. A future of superintelligent machines. It is also a future where humans will transcend our present biological limits. I had to read the book twice – once for the sense and once for the detail. After that, just for my own interest, year-in, year-out, I started to track this future; that meant a weekly read through New Scientist, Wired, the excellent technology pieces in the New York Times and the Atlantic, as well as following the money via the Economist and Financial Times.


pages: 246 words: 81,625

On Intelligence by Jeff Hawkins, Sandra Blakeslee

airport security, Albert Einstein, backpropagation, computer age, Computing Machinery and Intelligence, conceptual framework, Jeff Hawkins, Johannes Kepler, Necker cube, PalmPilot, pattern recognition, Paul Erdős, Ray Kurzweil, Silicon Valley, Silicon Valley startup, speech recognition, superintelligent machines, the scientific method, Thomas Bayes, Turing machine, Turing test

Being able to predict how proteins fold and interact would accelerate the development of medicines and the cures for many diseases. Engineers and scientists have created three-dimensional visual models of proteins, in an effort to predict how these complex molecules behave. But try as we might, the task has proven too difficult. A superintelligent machine, on the other hand, with a set of senses specifically tuned to this question might be able to answer it. If this sounds far-fetched, remember that we wouldn't be surprised if humans could solve the problem. Our inability to tackle the issue may be related, primarily, to a mismatch between the human senses and the physical phenomena we want to understand.


pages: 798 words: 240,182

The Transhumanist Reader by Max More, Natasha Vita-More

"World Economic Forum" Davos, 23andMe, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, augmented reality, Bill Joy: nanobots, bioinformatics, brain emulation, Buckminster Fuller, cellular automata, clean water, cloud computing, cognitive bias, cognitive dissonance, combinatorial explosion, Computing Machinery and Intelligence, conceptual framework, Conway's Game of Life, cosmological principle, data acquisition, discovery of DNA, Douglas Engelbart, Drosophila, en.wikipedia.org, endogenous growth, experimental subject, Extropian, fault tolerance, Flynn Effect, Francis Fukuyama: the end of history, Frank Gehry, friendly AI, Future Shock, game design, germ theory of disease, Hans Moravec, hypertext link, impulse control, index fund, John von Neumann, joint-stock company, Kevin Kelly, Law of Accelerating Returns, life extension, lifelogging, Louis Pasteur, Menlo Park, meta-analysis, moral hazard, Network effects, Nick Bostrom, Norbert Wiener, pattern recognition, Pepto Bismol, phenotype, positional goods, power law, precautionary principle, prediction markets, presumed consent, Project Xanadu, public intellectual, radical life extension, Ray Kurzweil, reversible computing, RFID, Ronald Reagan, scientific worldview, silicon-based life, Singularitarianism, social intelligence, stem cell, stochastic process, superintelligent machines, supply-chain management, supply-chain management software, synthetic biology, systems thinking, technological determinism, technological singularity, Ted Nelson, telepresence, telepresence robot, telerobotics, the built environment, The Coming Technological Singularity, the scientific method, The Wisdom of Crowds, transaction costs, Turing machine, Turing test, Upton Sinclair, Vernor Vinge, Von Neumann architecture, VTOL, Whole Earth Review, women in the workforce, zero-sum game

Both human beings and bacteria have good claims to being the “dominant ­species” on Earth – depending upon how one defines dominant. It is possible that superintelligent machines may wish to dominate some niche that is not presently occupied in any serious fashion by human beings. If this is the case, then from a human being’s point of view, such an AI would not be a Dominant AI. Instead, we would have a “Limited AI” scenario. How could Limited AI occur? I can imagine several scenarios, and I’m sure other people can imagine more. Perhaps the most important point to make is that superintelligent machines may not be competing in the same niche with human beings for resources, and would therefore have little incentive to dominate us.


pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines by Thomas H. Davenport, Julia Kirby

"World Economic Forum" Davos, AI winter, Amazon Robotics, Andy Kessler, Apollo Guidance Computer, artificial general intelligence, asset allocation, Automated Insights, autonomous vehicles, basic income, Baxter: Rethink Robotics, behavioural economics, business intelligence, business process, call centre, carbon-based life, Clayton Christensen, clockwork universe, commoditize, conceptual framework, content marketing, dark matter, data science, David Brooks, deep learning, deliberate practice, deskilling, digital map, disruptive innovation, Douglas Engelbart, driverless car, Edward Lloyd's coffeehouse, Elon Musk, Erik Brynjolfsson, estate planning, financial engineering, fixed income, flying shuttle, follow your passion, Frank Levy and Richard Murnane: The New Division of Labor, Freestyle chess, game design, general-purpose programming language, global pandemic, Google Glasses, Hans Lippershey, haute cuisine, income inequality, independent contractor, index fund, industrial robot, information retrieval, intermodal, Internet of things, inventory management, Isaac Newton, job automation, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Joi Ito, Khan Academy, Kiva Systems, knowledge worker, labor-force participation, lifelogging, longitudinal study, loss aversion, machine translation, Mark Zuckerberg, Narrative Science, natural language processing, Nick Bostrom, Norbert Wiener, nuclear winter, off-the-grid, pattern recognition, performance metric, Peter Thiel, precariat, quantitative trading / quantitative finance, Ray Kurzweil, Richard Feynman, risk tolerance, Robert Shiller, robo advisor, robotic process automation, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, six sigma, Skype, social intelligence, speech recognition, spinning jenny, statistical model, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, superintelligent machines, supply-chain management, tacit 
knowledge, tech worker, TED Talk, the long tail, transaction costs, Tyler Cowen, Tyler Cowen: Great Stagnation, Watson beat the top human players on Jeopardy!, Works Progress Administration, Zipcar

But a minority defendant given the choice between a probably prejudiced jury, a possibly prejudiced judge, and a race-blind machine might well choose the latter option. In addition, not everyone agrees that we humans will remain in a position to dictate which decisions and actions will be reserved for us. What would prevent a superintelligent machine from denying our commands, they ask, if it thought better of the situation? To prepare for that possibility (familiar to those who remember HAL in 2001: A Space Odyssey), some insist that computer scientists had better figure out how to program values into the machines, and values that are “human-friendly,” to color the decision-making that might proceed logically but tragically from their narrowly specified goals.


Calling Bullshit: The Art of Scepticism in a Data-Driven World by Jevin D. West, Carl T. Bergstrom

airport security, algorithmic bias, AlphaGo, Amazon Mechanical Turk, Andrew Wiles, Anthropocene, autism spectrum disorder, bitcoin, Charles Babbage, cloud computing, computer vision, content marketing, correlation coefficient, correlation does not imply causation, crowdsourcing, cryptocurrency, data science, deep learning, deepfake, delayed gratification, disinformation, Dmitri Mendeleev, Donald Trump, Elon Musk, epigenetics, Estimating the Reproducibility of Psychological Science, experimental economics, fake news, Ford Model T, Goodhart's law, Helicobacter pylori, Higgs boson, invention of the printing press, John Markoff, Large Hadron Collider, longitudinal study, Lyft, machine translation, meta-analysis, new economy, nowcasting, opioid epidemic / opioid crisis, p-value, Pluto: dwarf planet, publication bias, RAND corporation, randomized controlled trial, replication crisis, ride hailing / ride sharing, Ronald Reagan, selection bias, self-driving car, Silicon Valley, Silicon Valley startup, social graph, Socratic dialogue, Stanford marshmallow experiment, statistical model, stem cell, superintelligent machines, systematic bias, tech bro, TED Talk, the long tail, the scientific method, theory of mind, Tim Cook: Apple, twin studies, Uber and Lyft, Uber for X, uber lyft, When a measure becomes a target

As Zachary Lipton, an AI researcher at Carnegie Mellon University, explains, “Policy makers [are] earnestly having meetings to discuss the rights of robots when they should be talking about discrimination in algorithmic decision making.” Delving into the details of algorithmic auditing may be dull compared to drafting a Bill of Rights for robots, or devising ways to protect humanity against Terminator-like superintelligent machines. But to address the problems that AI is creating now, we need to understand the data and algorithms we are already using for more mundane purposes. There is a vast gulf between AI alarmism in the popular press, and the reality of where AI research actually stands. Elon Musk, the founder of Tesla, SpaceX, and PayPal, warned US state governors at their national meeting in 2017 that AI posed a “fundamental risk to the existence of human civilization.”


pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos

Albert Einstein, Amazon Mechanical Turk, Arthur Eddington, backpropagation, basic income, Bayesian statistics, Benoit Mandelbrot, bioinformatics, Black Swan, Brownian motion, cellular automata, Charles Babbage, Claude Shannon: information theory, combinatorial explosion, computer vision, constrained optimization, correlation does not imply causation, creative destruction, crowdsourcing, Danny Hillis, data is not the new oil, data is the new oil, data science, deep learning, DeepMind, double helix, Douglas Hofstadter, driverless car, Erik Brynjolfsson, experimental subject, Filter Bubble, future of work, Geoffrey Hinton, global village, Google Glasses, Gödel, Escher, Bach, Hans Moravec, incognito mode, information retrieval, Jeff Hawkins, job automation, John Markoff, John Snow's cholera map, John von Neumann, Joseph Schumpeter, Kevin Kelly, large language model, lone genius, machine translation, mandelbrot fractal, Mark Zuckerberg, Moneyball by Michael Lewis explains big data, Narrative Science, Nate Silver, natural language processing, Netflix Prize, Network effects, Nick Bostrom, NP-complete, off grid, P = NP, PageRank, pattern recognition, phenotype, planetary scale, power law, pre–internet, random walk, Ray Kurzweil, recommendation engine, Richard Feynman, scientific worldview, Second Machine Age, self-driving car, Silicon Valley, social intelligence, speech recognition, Stanford marshmallow experiment, statistical model, Stephen Hawking, Steven Levy, Steven Pinker, superintelligent machines, the long tail, the scientific method, The Signal and the Noise by Nate Silver, theory of mind, Thomas Bayes, transaction costs, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, white flight, yottabyte, zero-sum game

The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee (Norton, 2014), discusses how progress in AI will shape the future of work and the economy. “World War R,” by Chris Baraniuk (New Scientist, 2014) reports on the debate surrounding the use of robots in battle. “Transcending complacency on superintelligent machines,” by Stephen Hawking et al. (Huffington Post, 2014), argues that now is the time to worry about AI’s risks. Nick Bostrom’s Superintelligence (Oxford University Press, 2014) considers those dangers and what to do about them. A Brief History of Life, by Richard Hawking (Random Penguin, 1982), summarizes the quantum leaps of evolution in the eons BC.


pages: 463 words: 115,103

Head, Hand, Heart: Why Intelligence Is Over-Rewarded, Manual Workers Matter, and Caregivers Deserve More Respect by David Goodhart

active measures, Airbnb, Albert Einstein, assortative mating, basic income, Berlin Wall, Bernie Sanders, Big Tech, big-box store, Black Lives Matter, Boris Johnson, Branko Milanovic, Brexit referendum, British Empire, call centre, Cass Sunstein, central bank independence, centre right, computer age, corporate social responsibility, COVID-19, data science, David Attenborough, David Brooks, deglobalization, deindustrialization, delayed gratification, desegregation, deskilling, different worldview, Donald Trump, Elon Musk, emotional labour, Etonian, fail fast, Fall of the Berlin Wall, Flynn Effect, Frederick Winslow Taylor, future of work, gender pay gap, George Floyd, gig economy, glass ceiling, Glass-Steagall Act, Great Leap Forward, illegal immigration, income inequality, James Hargreaves, James Watt: steam engine, Jeff Bezos, job automation, job satisfaction, John Maynard Keynes: Economic Possibilities for our Grandchildren, knowledge economy, knowledge worker, labour market flexibility, lockdown, longitudinal study, low skilled workers, Mark Zuckerberg, mass immigration, meritocracy, new economy, Nicholas Carr, oil shock, pattern recognition, Peter Thiel, pink-collar, post-industrial society, post-materialism, postindustrial economy, precariat, reshoring, Richard Florida, robotic process automation, scientific management, Scientific racism, Skype, social distancing, social intelligence, spinning jenny, Steven Pinker, superintelligent machines, TED Talk, The Bell Curve by Richard Herrnstein and Charles Murray, The Rise and Fall of American Growth, Thorstein Veblen, twin studies, Tyler Cowen, Tyler Cowen: Great Stagnation, universal basic income, upwardly mobile, wages for housework, winner-take-all economy, women in the workforce, young professional

But, enthusing to his theme, he explained to me the future role for humans: “My guess is that there are three areas where humans will preserve some comparative advantage over robots for the foreseeable future. The first is cognitive tasks requiring creativity and intuition. These might be tasks or problems whose solutions require great logical leaps of imagination rather than step-by-step hill climbing… And even in a world of superintelligent machine learning, there will still be a demand for people with the skills to program, test, and oversee these machines. Some human judgmental overlay of these automated processes is still likely to be needed…” The second area of prospective demand for human skills, says Haldane, is bespoke design and manufacture.


System Error by Rob Reich

"Friedman doctrine" OR "shareholder theory", "World Economic Forum" Davos, 2021 United States Capitol attack, A Declaration of the Independence of Cyberspace, Aaron Swartz, AI winter, Airbnb, airport security, Alan Greenspan, Albert Einstein, algorithmic bias, AlphaGo, AltaVista, artificial general intelligence, Automated Insights, autonomous vehicles, basic income, Ben Horowitz, Berlin Wall, Bernie Madoff, Big Tech, bitcoin, Blitzscaling, Cambridge Analytica, Cass Sunstein, clean water, cloud computing, computer vision, contact tracing, contact tracing app, coronavirus, corporate governance, COVID-19, creative destruction, CRISPR, crowdsourcing, data is the new oil, data science, decentralized internet, deep learning, deepfake, DeepMind, deplatforming, digital rights, disinformation, disruptive innovation, Donald Knuth, Donald Trump, driverless car, dual-use technology, Edward Snowden, Elon Musk, en.wikipedia.org, end-to-end encryption, Fairchild Semiconductor, fake news, Fall of the Berlin Wall, Filter Bubble, financial engineering, financial innovation, fulfillment center, future of work, gentrification, Geoffrey Hinton, George Floyd, gig economy, Goodhart's law, GPT-3, Hacker News, hockey-stick growth, income inequality, independent contractor, informal economy, information security, Jaron Lanier, Jeff Bezos, Jim Simons, jimmy wales, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, Lean Startup, linear programming, Lyft, Marc Andreessen, Mark Zuckerberg, meta-analysis, minimum wage unemployment, Monkeys Reject Unequal Pay, move fast and break things, Myron Scholes, Network effects, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, NP-complete, Oculus Rift, OpenAI, Panopticon Jeremy Bentham, Parler "social media", pattern recognition, personalized medicine, Peter Thiel, Philippa Foot, premature optimization, profit 
motive, quantitative hedge fund, race to the bottom, randomized controlled trial, recommendation engine, Renaissance Technologies, Richard Thaler, ride hailing / ride sharing, Ronald Reagan, Sam Altman, Sand Hill Road, scientific management, self-driving car, shareholder value, Sheryl Sandberg, Shoshana Zuboff, side project, Silicon Valley, Snapchat, social distancing, Social Responsibility of Business Is to Increase Its Profits, software is eating the world, spectrum auction, speech recognition, stem cell, Steve Jobs, Steven Levy, strong AI, superintelligent machines, surveillance capitalism, Susan Wojcicki, tech billionaire, tech worker, techlash, technoutopianism, Telecommunications Act of 1996, telemarketer, The Future of Employment, TikTok, Tim Cook: Apple, traveling salesman, Triangle Shirtwaist Factory, trolley problem, Turing test, two-sided market, Uber and Lyft, uber lyft, ultimatum game, union organizing, universal basic income, washing machines reduced drudgery, Watson beat the top human players on Jeopardy!, When a measure becomes a target, winner-take-all economy, Y Combinator, you are the product

Although few believe that AGI is on the near horizon, some enthusiasts claim that the exponential growth in computing power and the astonishing advances in AI in just the past decade make AGI a possibility in our lifetimes. Others, including many AI researchers, believe AGI to be unlikely or in any event still many decades away. These debates have generated a cottage industry of utopian or dystopian commentary concerning the creation of superintelligent machines. How can we ensure that the goals of an AGI agent or system will be aligned with the goals of humans? Will AGI put humanity itself at risk or threaten to make humans slaves to superintelligent robots or AGI agents? However, rather than speculating about AGI, let’s focus on what’s not science fiction at all: the rapid advances in narrow or weak AI that present us with hugely important challenges to humans and society.


When Computers Can Think: The Artificial Intelligence Singularity by Anthony Berglas, William Black, Samantha Thalind, Max Scratchmann, Michelle Estes

3D printing, Abraham Maslow, AI winter, air gap, anthropic principle, artificial general intelligence, Asilomar, augmented reality, Automated Insights, autonomous vehicles, availability heuristic, backpropagation, blue-collar work, Boston Dynamics, brain emulation, call centre, cognitive bias, combinatorial explosion, computer vision, Computing Machinery and Intelligence, create, read, update, delete, cuban missile crisis, David Attenborough, DeepMind, disinformation, driverless car, Elon Musk, en.wikipedia.org, epigenetics, Ernest Rutherford, factory automation, feminist movement, finite state, Flynn Effect, friendly AI, general-purpose programming language, Google Glasses, Google X / Alphabet X, Gödel, Escher, Bach, Hans Moravec, industrial robot, Isaac Newton, job automation, John von Neumann, Law of Accelerating Returns, license plate recognition, Mahatma Gandhi, mandelbrot fractal, natural language processing, Nick Bostrom, Parkinson's law, patent troll, patient HM, pattern recognition, phenotype, ransomware, Ray Kurzweil, Recombinant DNA, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, sorting algorithm, speech recognition, statistical model, stem cell, Stephen Hawking, Stuxnet, superintelligent machines, technological singularity, Thomas Malthus, Turing machine, Turing test, uranium enrichment, Von Neumann architecture, Watson beat the top human players on Jeopardy!, wikimedia commons, zero day

If the reader agrees then they should consider supporting the work of MIRI and like-minded organizations. Bostrom's 2014 Superintelligence covers, across 328 dense pages, the main practical and philosophical dangers presented by hyper-intelligent software. The book starts with a review of the increasing rate of technological progress, and various paths to build a superintelligent machine, including an analysis of the kinetics of recursive self-improvement based on optimization power and recalcitrance. The dangers of anthropomorphizing are introduced with some cute images from early comic books involving robots carrying away beautiful women. It also notes that up to now, a more intelligent system has been a safer system, and that this conditions our attitude towards intelligent machines.


pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots by John Markoff

A Declaration of the Independence of Cyberspace, AI winter, airport security, Andy Rubin, Apollo 11, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, basic income, Baxter: Rethink Robotics, Bill Atkinson, Bill Duvall, bioinformatics, Boston Dynamics, Brewster Kahle, Burning Man, call centre, cellular automata, Charles Babbage, Chris Urmson, Claude Shannon: information theory, Clayton Christensen, clean water, cloud computing, cognitive load, collective bargaining, computer age, Computer Lib, computer vision, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data acquisition, Dean Kamen, deep learning, DeepMind, deskilling, Do you want to sell sugared water for the rest of your life?, don't be evil, Douglas Engelbart, Douglas Hofstadter, Dr. Strangelove, driverless car, dual-use technology, Dynabook, Edward Snowden, Elon Musk, Erik Brynjolfsson, Evgeny Morozov, factory automation, Fairchild Semiconductor, Fillmore Auditorium, San Francisco, From Mathematics to the Technologies of Life and Death, future of work, Galaxy Zoo, General Magic, Geoffrey Hinton, Google Glasses, Google X / Alphabet X, Grace Hopper, Gunnar Myrdal, Gödel, Escher, Bach, Hacker Ethic, Hans Moravec, haute couture, Herbert Marcuse, hive mind, hype cycle, hypertext link, indoor plumbing, industrial robot, information retrieval, Internet Archive, Internet of things, invention of the wheel, Ivan Sutherland, Jacques de Vaucanson, Jaron Lanier, Jeff Bezos, Jeff Hawkins, job automation, John Conway, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, John von Neumann, Kaizen: continuous improvement, Kevin Kelly, Kiva Systems, knowledge worker, Kodak vs Instagram, labor-force participation, loose coupling, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, medical residency, Menlo Park, military-industrial complex, Mitch
Kapor, Mother of all demos, natural language processing, Neil Armstrong, new economy, Norbert Wiener, PageRank, PalmPilot, pattern recognition, Philippa Foot, pre–internet, RAND corporation, Ray Kurzweil, reality distortion field, Recombinant DNA, Richard Stallman, Robert Gordon, Robert Solow, Rodney Brooks, Sand Hill Road, Second Machine Age, self-driving car, semantic web, Seymour Hersh, shareholder value, side project, Silicon Valley, Silicon Valley startup, Singularitarianism, skunkworks, Skype, social software, speech recognition, stealth mode startup, Stephen Hawking, Steve Ballmer, Steve Jobs, Steve Wozniak, Steven Levy, Stewart Brand, Strategic Defense Initiative, strong AI, superintelligent machines, tech worker, technological singularity, Ted Nelson, TED Talk, telemarketer, telepresence, telepresence robot, Tenerife airport disaster, The Coming Technological Singularity, the medium is the message, Thorstein Veblen, Tony Fadell, trolley problem, Turing test, Vannevar Bush, Vernor Vinge, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, We are as Gods, Whole Earth Catalog, William Shockley: the traitorous eight, zero-sum game

Part of Her is also about the singularity, the idea that machine intelligence is accelerating at such a pace that it will eventually surpass human intelligence and become independent, leaving humans behind. Both Her and Transcendence, another singularity-obsessed science-fiction film released the following spring, are most intriguing for the way they portray human-machine relationships. In Transcendence the human-computer interaction moves from pleasant to dark, and eventually a superintelligent machine destroys human civilization. In Her, ironically, the relationship between the man and his operating system disintegrates as the computer's intelligence develops so quickly that, not satisfied even with thousands of simultaneous relationships, it transcends humanity and . . . departs. This may be science fiction, but in the real world this territory had become familiar to Liesl Capper almost a decade earlier.


pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War by Paul Scharre

"World Economic Forum" Davos, active measures, Air France Flight 447, air gap, algorithmic trading, AlphaGo, Apollo 13, artificial general intelligence, augmented reality, automated trading system, autonomous vehicles, basic income, Black Monday: stock market crash in 1987, brain emulation, Brian Krebs, cognitive bias, computer vision, cuban missile crisis, dark matter, DARPA: Urban Challenge, data science, deep learning, DeepMind, DevOps, Dr. Strangelove, drone strike, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, facts on the ground, fail fast, fault tolerance, Flash crash, Freestyle chess, friendly fire, Herman Kahn, IFF: identification friend or foe, ImageNet competition, information security, Internet of things, Jeff Hawkins, Johann Wolfgang von Goethe, John Markoff, Kevin Kelly, Korean Air Lines Flight 007, Loebner Prize, loose coupling, Mark Zuckerberg, military-industrial complex, moral hazard, move 37, mutually assured destruction, Nate Silver, Nick Bostrom, PalmPilot, paperclip maximiser, pattern recognition, Rodney Brooks, Rubik’s Cube, self-driving car, sensor fusion, South China Sea, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Ballmer, Steve Wozniak, Strategic Defense Initiative, Stuxnet, superintelligent machines, Tesla Model S, The Signal and the Noise by Nate Silver, theory of mind, Turing test, Tyler Cowen, universal basic income, Valery Gerasimov, Wall-E, warehouse robotics, William Langewiesche, Y2K, zero day

It stems from our ability to harness machine learning and speed to very specific problems. More advanced AI is certainly coming, but artificial general intelligence in the sense of machines that think like us may prove to be a mirage. If our benchmark for “intelligent” is what humans do, advanced artificial intelligence may be so alien that we never recognize these superintelligent machines as “true AI.” This dynamic already exists to some extent. Micah Clark pointed out that “as soon as something works and is practical it’s no longer AI.” Armstrong echoed this observation: “as soon as a computer can do it, they get redefined as not AI anymore.” If the past is any guide, we are likely to see in the coming decades a proliferation of narrow superintelligent systems in a range of fields—medicine, law, transportation, science, and others.


pages: 669 words: 210,153

Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers by Timothy Ferriss

Abraham Maslow, Adam Curtis, Airbnb, Alexander Shulgin, Alvin Toffler, An Inconvenient Truth, artificial general intelligence, asset allocation, Atul Gawande, augmented reality, back-to-the-land, Ben Horowitz, Bernie Madoff, Bertrand Russell: In Praise of Idleness, Beryl Markham, billion-dollar mistake, Black Swan, Blue Bottle Coffee, Blue Ocean Strategy, blue-collar work, book value, Boris Johnson, Buckminster Fuller, business process, Cal Newport, call centre, caloric restriction, Carl Icahn, Charles Lindbergh, Checklist Manifesto, cognitive bias, cognitive dissonance, Colonization of Mars, Columbine, commoditize, correlation does not imply causation, CRISPR, David Brooks, David Graeber, deal flow, digital rights, diversification, diversified portfolio, do what you love, Donald Trump, effective altruism, Elon Musk, fail fast, fake it until you make it, fault tolerance, fear of failure, Firefox, follow your passion, fulfillment center, future of work, Future Shock, Girl Boss, Google X / Alphabet X, growth hacking, Howard Zinn, Hugh Fearnley-Whittingstall, Jeff Bezos, job satisfaction, Johann Wolfgang von Goethe, John Markoff, Kevin Kelly, Kickstarter, Lao Tzu, lateral thinking, life extension, lifelogging, Mahatma Gandhi, Marc Andreessen, Mark Zuckerberg, Mason jar, Menlo Park, microdosing, Mikhail Gorbachev, MITM: man-in-the-middle, Neal Stephenson, Nelson Mandela, Nicholas Carr, Nick Bostrom, off-the-grid, optical character recognition, PageRank, Paradox of Choice, passive income, pattern recognition, Paul Graham, peer-to-peer, Peter H.
Diamandis: Planetary Resources, Peter Singer: altruism, Peter Thiel, phenotype, PIHKAL and TIHKAL, post scarcity, post-work, power law, premature optimization, private spaceflight, QWERTY keyboard, Ralph Waldo Emerson, Ray Kurzweil, recommendation engine, rent-seeking, Richard Feynman, risk tolerance, Ronald Reagan, Salesforce, selection bias, sharing economy, side project, Silicon Valley, skunkworks, Skype, Snapchat, Snow Crash, social graph, software as a service, software is eating the world, stem cell, Stephen Hawking, Steve Jobs, Stewart Brand, superintelligent machines, TED Talk, Tesla Model S, The future is already here, the long tail, The Wisdom of Crowds, Thomas L Friedman, traumatic brain injury, trolley problem, vertical integration, Wall-E, Washington Consensus, We are as Gods, Whole Earth Catalog, Y Combinator, zero-sum game

The first is, ‘Are you a programmer?’—the relevance of which is obvious—and the second is, ‘Do you have children?’ He claims to have found that if people don’t have children, their concern about the future isn’t sufficiently well-calibrated so as to get just how terrifying the prospect of building superintelligent machines is in the absence of having figured out the control problem [ensuring the AI converges with our interests, even when a thousand or a billion times smarter]. I think there’s something to that. It’s not limited, of course, to artificial intelligence. It spreads to every topic of concern. To worry about the fate of civilization in the abstract is harder than worrying about what sorts of experiences your children are going to have in the future.”


pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil

additive manufacturing, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, Benoit Mandelbrot, Bill Joy: nanobots, bioinformatics, brain emulation, Brewster Kahle, Brownian motion, business cycle, business intelligence, c2.com, call centre, carbon-based life, cellular automata, Charles Babbage, Claude Shannon: information theory, complexity theory, conceptual framework, Conway's Game of Life, coronavirus, cosmological constant, cosmological principle, cuban missile crisis, data acquisition, Dava Sobel, David Brooks, Dean Kamen, digital divide, disintermediation, double helix, Douglas Hofstadter, en.wikipedia.org, epigenetics, factory automation, friendly AI, functional programming, George Gilder, Gödel, Escher, Bach, Hans Moravec, hype cycle, informal economy, information retrieval, information security, invention of the telephone, invention of the telescope, invention of writing, iterative process, Jaron Lanier, Jeff Bezos, job automation, job satisfaction, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, lifelogging, linked data, Loebner Prize, Louis Pasteur, mandelbrot fractal, Marshall McLuhan, Mikhail Gorbachev, Mitch Kapor, mouse model, Murray Gell-Mann, mutually assured destruction, natural language processing, Network effects, new economy, Nick Bostrom, Norbert Wiener, oil shale / tar sands, optical character recognition, PalmPilot, pattern recognition, phenotype, power law, precautionary principle, premature optimization, punch-card reader, quantum cryptography, quantum entanglement, radical life extension, randomized controlled trial, Ray Kurzweil, remote working, reversible computing, Richard Feynman, Robert Metcalfe, Rodney Brooks, scientific worldview, Search for 
Extraterrestrial Intelligence, selection bias, semantic web, seminal paper, Silicon Valley, Singularitarianism, speech recognition, statistical model, stem cell, Stephen Hawking, Stewart Brand, strong AI, Stuart Kauffman, superintelligent machines, technological singularity, Ted Kaczynski, telepresence, The Coming Technological Singularity, Thomas Bayes, transaction costs, Turing machine, Turing test, two and twenty, Vernor Vinge, Y2K, Yogi Berra

The author runs a company, FATKAT (Financial Accelerating Transactions by Kurzweil Adaptive Technologies), which applies computerized pattern recognition to financial data to make stock-market investment decisions, http://www.FatKat.com.

159. See discussion in chapter 2 on price-performance improvements in computer memory and electronics in general.

160. Runaway AI refers to a scenario where, as Max More describes, "superintelligent machines, initially harnessed for human benefit, soon leave us behind." Max More, "Embrace, Don't Relinquish, the Future," http://www.KurzweilAI.net/articles/art0106.html?printable=1. See also Damien Broderick's description of the "Seed AI": "A self-improving seed AI could run glacially slowly on a limited machine substrate.