strong AI

53 results


pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil

additive manufacturing, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, Benoit Mandelbrot, Bill Joy: nanobots, bioinformatics, brain emulation, Brewster Kahle, Brownian motion, business cycle, business intelligence, c2.com, call centre, carbon-based life, cellular automata, Charles Babbage, Claude Shannon: information theory, complexity theory, conceptual framework, Conway's Game of Life, coronavirus, cosmological constant, cosmological principle, cuban missile crisis, data acquisition, Dava Sobel, David Brooks, Dean Kamen, digital divide, disintermediation, double helix, Douglas Hofstadter, en.wikipedia.org, epigenetics, factory automation, friendly AI, functional programming, George Gilder, Gödel, Escher, Bach, Hans Moravec, hype cycle, informal economy, information retrieval, information security, invention of the telephone, invention of the telescope, invention of writing, iterative process, Jaron Lanier, Jeff Bezos, job automation, job satisfaction, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, lifelogging, linked data, Loebner Prize, Louis Pasteur, mandelbrot fractal, Marshall McLuhan, Mikhail Gorbachev, Mitch Kapor, mouse model, Murray Gell-Mann, mutually assured destruction, natural language processing, Network effects, new economy, Nick Bostrom, Norbert Wiener, oil shale / tar sands, optical character recognition, PalmPilot, pattern recognition, phenotype, power law, precautionary principle, premature optimization, punch-card reader, quantum cryptography, quantum entanglement, radical life extension, randomized controlled trial, Ray Kurzweil, remote working, reversible computing, Richard Feynman, Robert Metcalfe, Rodney Brooks, scientific worldview, Search for Extraterrestrial Intelligence, selection bias, semantic web, seminal paper, Silicon Valley, Singularitarianism, speech recognition, statistical model, stem cell, Stephen Hawking, Stewart Brand, strong AI, Stuart Kauffman, superintelligent machines, technological singularity, Ted Kaczynski, telepresence, The Coming Technological Singularity, Thomas Bayes, transaction costs, Turing machine, Turing test, two and twenty, Vernor Vinge, Y2K, Yogi Berra

A key question regarding the Singularity is whether the "chicken" (strong AI) or the "egg" (nanotechnology) will come first. In other words, will strong AI lead to full nanotechnology (molecular-manufacturing assemblers that can turn information into physical products), or will full nanotechnology lead to strong AI? The logic of the first premise is that strong AI would imply superhuman AI for the reasons just cited, and superhuman AI would be in a position to solve any remaining design problems required to implement full nanotechnology. The second premise is based on the realization that the hardware requirements for strong AI will be met by nanotechnology-based computation.

However, I do expect that full MNT will emerge prior to strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI). As revolutionary as nanotechnology will be, strong AI will have far more profound consequences. Nanotechnology is powerful but not necessarily intelligent. We can devise ways of at least trying to manage the enormous powers of nanotechnology, but superintelligence innately cannot be controlled. Runaway AI. Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely.

Once we can take down these signs, we'll have Turing-level machines, and the era of strong AI will have started. This era will creep up on us. As long as there are any discrepancies between human and machine performance—areas in which humans outperform machines—strong AI skeptics will seize on these differences. But our experience in each area of skill and knowledge is likely to follow that of Kasparov. Our perceptions of performance will shift quickly from pathetic to daunting as the knee of the exponential curve is reached for each human capability. How will strong AI be achieved? Most of the material in this book is intended to lay out the fundamental requirements for both hardware and software and explain why we can be confident that these requirements will be met in nonbiological systems.


pages: 346 words: 97,890

The Road to Conscious Machines by Michael Wooldridge

Ada Lovelace, AI winter, algorithmic bias, AlphaGo, Andrew Wiles, Anthropocene, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, basic income, Bletchley Park, Boeing 747, British Empire, call centre, Charles Babbage, combinatorial explosion, computer vision, Computing Machinery and Intelligence, DARPA: Urban Challenge, deep learning, deepfake, DeepMind, Demis Hassabis, don't be evil, Donald Trump, driverless car, Elaine Herzberg, Elon Musk, Eratosthenes, factory automation, fake news, future of work, gamification, general purpose technology, Geoffrey Hinton, gig economy, Google Glasses, intangible asset, James Watt: steam engine, job automation, John von Neumann, Loebner Prize, Minecraft, Mustafa Suleyman, Nash equilibrium, Nick Bostrom, Norbert Wiener, NP-complete, P = NP, P vs NP, paperclip maximiser, pattern recognition, Philippa Foot, RAND corporation, Ray Kurzweil, Rodney Brooks, self-driving car, Silicon Valley, Stephen Hawking, Steven Pinker, strong AI, technological singularity, telemarketer, Tesla Model S, The Coming Technological Singularity, The Future of Employment, the scientific method, theory of mind, Thomas Bayes, Thomas Kuhn: the structure of scientific revolutions, traveling salesman, trolley problem, Turing machine, Turing test, universal basic income, Von Neumann architecture, warehouse robotics

But that doesn’t imply (to me at least) that machine consciousness is impossible – just that machine consciousness would be different. Nagel’s argument is one of many that have been set out in an attempt to show strong AI to be impossible. Let’s take a look at the best-known of these. Is Strong AI Impossible? Nagel’s argument is closely related to a common-sense objection to the possibility of strong AI, which says that it is not possible because there is something special about people. This intuitive response starts from the view that computers are different from people because people are animate objects, but computers are not.

For all that they may excel at what they do, they are nothing more than software components optimized to carry out a specific narrow task. Since I believe we are a long way from General AI, it naturally follows that I should be even more dubious about the prospects for strong AI: the idea of machines that are, like us, conscious, self-aware, truly autonomous beings. Nevertheless, in this final chapter, let’s indulge ourselves. Even though strong AI is not anywhere in prospect, we can still have some fun thinking about it, and speculating about how we might progress towards it. So, let’s take a trip together down the road to conscious machines. We’ll imagine what the landscape might look like, which obstacles we might meet and what sights we can expect to see on the way.

, and to argue that certain forms of consciousness must remain beyond our comprehension (we can’t imagine what it would be like to be a bat). But his test can be applied to computers, and most people seem to believe that it isn’t like anything to be a computer, any more than a toaster. For this reason, Nagel’s ‘What is it like’ argument has been used against the possibility of strong AI. Strong AI is impossible, according to this argument, because computers cannot be conscious by Nagel’s argument. I am personally not convinced by this argument, because asking ‘What is it like to be a …’ is, for me, nothing more than an appeal to our intuition. Our intuition works well at separating out the obvious cases – orang-utans and toasters – but I don’t see why we should expect it to be a reliable guide in the more subtle cases, or cases that are far outside our own experience of the natural world – such as AI.


The Book of Why: The New Science of Cause and Effect by Judea Pearl, Dana Mackenzie

affirmative action, Albert Einstein, AlphaGo, Asilomar, Bayesian statistics, computer age, computer vision, Computing Machinery and Intelligence, confounding variable, correlation coefficient, correlation does not imply causation, Daniel Kahneman / Amos Tversky, data science, deep learning, DeepMind, driverless car, Edmond Halley, Elon Musk, en.wikipedia.org, experimental subject, Great Leap Forward, Gregor Mendel, Isaac Newton, iterative process, John Snow's cholera map, Loebner Prize, loose coupling, Louis Pasteur, Menlo Park, Monty Hall problem, pattern recognition, Paul Erdős, personalized medicine, Pierre-Simon Laplace, placebo effect, Plato's cave, prisoner's dilemma, probability theory / Blaise Pascal / Pierre de Fermat, randomized controlled trial, Recombinant DNA, selection bias, self-driving car, seminal paper, Silicon Valley, speech recognition, statistical model, Stephen Hawking, Steve Jobs, strong AI, The Design of Experiments, the scientific method, Thomas Bayes, Turing test

Typical examples are introducing new price structures or subsidies or changing the minimum wage. In technical terms, machine-learning methods today provide us with an efficient way of going from finite sample estimates to probability distributions, and we still need to get from distributions to cause-effect relations. When we start talking about strong AI, causal models move from a luxury to a necessity. To me, a strong AI should be a machine that can reflect on its actions and learn from past mistakes. It should be able to understand the statement “I should have acted differently,” whether it is told as much by a human or arrives at that conclusion itself. The counterfactual interpretation of this statement reads, “I have done X = x, and the outcome was Y = y.
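The counterfactual reading Pearl describes can be made concrete with a small worked example. The sketch below is a hypothetical illustration, not drawn from the book: it assumes a toy linear structural model Y = 2X + U with unobserved noise U, and runs the standard abduction-action-prediction steps to answer "had I done X = x' instead, what would Y have been?"

```python
# Minimal counterfactual sketch over an assumed toy structural model: Y = 2*X + U.
# The model, variable names, and numbers are illustrative, not taken from the book.

def counterfactual_y(x_observed: float, y_observed: float, x_alternative: float) -> float:
    """Answer "had I done X = x_alternative, what would Y have been?" for Y = 2X + U."""
    # Abduction: recover the noise term consistent with what actually happened.
    u = y_observed - 2 * x_observed
    # Action: replace the observed choice X = x with the alternative X = x'.
    # Prediction: recompute Y under the same noise but the new action.
    return 2 * x_alternative + u

# "I have done X = 1, and the outcome was Y = 5; had I done X = 0, the outcome would have been 3."
print(counterfactual_y(x_observed=1, y_observed=5, x_alternative=0))
```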

Finally, when we start to adjust our own software, that is when we begin to take moral responsibility for our actions. This responsibility may be an illusion at the level of neural activation but not at the level of self-awareness software. Encouraged by these possibilities, I believe that strong AI with causal understanding and agency capabilities is a realizable promise, and this raises the question that science fiction writers have been asking since the 1950s: Should we be worried? Is strong AI a Pandora’s box that we should not open? Recently public figures like Elon Musk and Stephen Hawking have gone on record saying that we should be worried. On Twitter, Musk said that AIs were “potentially more dangerous than nukes.”

In the late 1980s, I realized that machines’ lack of understanding of causal relations was perhaps the biggest roadblock to giving them human-level intelligence. In the last chapter of this book, I will return to my roots, and together we will explore the implications of the Causal Revolution for artificial intelligence. I believe that strong AI is an achievable goal and one not to be feared precisely because causality is part of the solution. A causal reasoning module will give machines the ability to reflect on their mistakes, to pinpoint weaknesses in their software, to function as moral entities, and to converse naturally with humans about their own choices and intentions.


pages: 261 words: 10,785

The Lights in the Tunnel by Martin Ford

Alan Greenspan, Albert Einstein, Bear Stearns, Bill Joy: nanobots, Black-Scholes formula, business cycle, call centre, carbon tax, cloud computing, collateralized debt obligation, commoditize, Computing Machinery and Intelligence, creative destruction, credit crunch, double helix, en.wikipedia.org, factory automation, full employment, income inequality, index card, industrial robot, inventory management, invisible hand, Isaac Newton, job automation, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, knowledge worker, low skilled workers, mass immigration, Mitch Kapor, moral hazard, pattern recognition, prediction markets, Productivity paradox, Ray Kurzweil, Robert Solow, Search for Extraterrestrial Intelligence, Silicon Valley, Stephen Hawking, strong AI, technological singularity, the long tail, Thomas L Friedman, Turing test, Vernor Vinge, War on Poverty, warehouse automation, warehouse robotics

While narrow AI is increasingly deployed to solve real world problems and attracts most of the current commercial interest, the Holy Grail of artificial intelligence is, of course, strong AI—the construction of a truly intelligent machine. The realization of strong AI would mean the existence of a machine that is genuinely competitive with, or perhaps even superior to, a human being in its ability to reason and conceive ideas. The arguments I have made in this book do not depend on strong AI, but it is worth noting that if truly intelligent machines were built and became affordable, the trends I have predicted here would likely be amplified, and the economic impact would certainly be dramatic and might unfold in an accelerating fashion. Research into strong AI has suffered because of some overly optimistic predictions and expectations back in the 1980s—long before computer hardware was fast enough to make true machine intelligence feasible.

When reality fell far short of the projections, focus and financial backing shifted away from research into strong AI. Nonetheless, there is evidence that the vastly superior performance and affordability of today’s processors is helping to revitalize the field. Research into strong AI can be roughly divided into two main approaches. The direct computational approach attempts to extend traditional, algorithmic computing into the realm of true intelligence. This involves the development of sophisticated software applications that exhibit general reasoning.

Once researchers gain an understanding of the basic operating principles of the brain, it may be possible to build an artificial intelligence based on that framework. This would not be an exact replication of a human brain; instead, it would be something completely new, but based on a similar architecture. When might strong AI become reality—if ever? I suspect that if you were to survey the top experts working in the field, you would get a fairly wide range of estimates. Optimists might say it will happen within the next 20 to 30 years. A more cautious group would place it 50 or more years in the future, and some might argue that it will never happen.


pages: 797 words: 227,399

Wired for War: The Robotics Revolution and Conflict in the 21st Century by P. W. Singer

agricultural Revolution, Albert Einstein, Alvin Toffler, Any sufficiently advanced technology is indistinguishable from magic, Atahualpa, barriers to entry, Berlin Wall, Bill Joy: nanobots, Bletchley Park, blue-collar work, borderless world, Boston Dynamics, Charles Babbage, Charles Lindbergh, clean water, Craig Reynolds: boids flock, cuban missile crisis, digital divide, digital map, Dr. Strangelove, en.wikipedia.org, Ernest Rutherford, failed state, Fall of the Berlin Wall, Firefox, Ford Model T, Francisco Pizarro, Frank Gehry, friendly fire, Future Shock, game design, George Gilder, Google Earth, Grace Hopper, Hans Moravec, I think there is a world market for maybe five computers, if you build it, they will come, illegal immigration, industrial robot, information security, interchangeable parts, Intergovernmental Panel on Climate Change (IPCC), invention of gunpowder, invention of movable type, invention of the steam engine, Isaac Newton, Jacques de Vaucanson, job automation, Johann Wolfgang von Goethe, junk bonds, Law of Accelerating Returns, Mars Rover, Menlo Park, mirror neurons, Neal Stephenson, New Urbanism, Nick Bostrom, no-fly zone, PalmPilot, paperclip maximiser, pattern recognition, precautionary principle, private military company, RAND corporation, Ray Kurzweil, RFID, robot derives from the Czech word robota Czech, meaning slave, Rodney Brooks, Ronald Reagan, Schrödinger's Cat, Silicon Valley, social intelligence, speech recognition, Stephen Hawking, Strategic Defense Initiative, strong AI, technological singularity, The Coming Technological Singularity, The Wisdom of Crowds, Timothy McVeigh, Turing test, Vernor Vinge, Virgin Galactic, Wall-E, warehouse robotics, world market for maybe five computers, Yogi Berra

Computers eventually develop to the equivalent of human intelligence (“strong AI”) and then rapidly push past any attempts at human control. Ray Kurzweil explains how this would work. “As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI, but takes less time than the cycle before it as is the nature of technological evolution. The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating super-intelligence.”

There was even one robot that became the equivalent of artificially stupid or suicidal, that is, a robot that evolved to constantly make the worst possible decision. This idea of robots one day being able to problem-solve, create, and even develop personalities past what their human designers intended is what some call “strong AI.” That is, the computer might learn so much that, at a certain point, it is not just mimicking human capabilities but has finally equaled, and even surpassed, its creators’ human intelligence. This is the essence of the so-called Turing test. Alan Turing was one of the pioneers of AI, who worked on the early computers like Colossus that helped crack the German codes during World War II.

The cost/performance ratio of Internet service providers is doubling every twelve months. Internet bandwidth backbone is doubling roughly every twelve months. The number of human genes mapped per year doubles every eighteen months. The resolution of brain scans (a key to understanding how the brain works, an important part of creating strong AI) doubles every twelve months. And, as a by-product, the number of personal and service robots has so far doubled every nine months. The darker side of these trends has been exponential change in our capability not merely to create, but also to destroy. The modern-day bomber jet has roughly half a million times the killing capacity of the Roman legionnaire carrying a sword in hand.


pages: 185 words: 43,609

Zero to One: Notes on Startups, or How to Build the Future by Peter Thiel, Blake Masters

Airbnb, Alan Greenspan, Albert Einstein, Andrew Wiles, Andy Kessler, Berlin Wall, clean tech, cloud computing, crony capitalism, discounted cash flows, diversified portfolio, do well by doing good, don't be evil, Elon Musk, eurozone crisis, Fairchild Semiconductor, heat death of the universe, income inequality, Jeff Bezos, Larry Ellison, Lean Startup, life extension, lone genius, Long Term Capital Management, Lyft, Marc Andreessen, Mark Zuckerberg, Max Levchin, minimum viable product, Nate Silver, Network effects, new economy, Nick Bostrom, PalmPilot, paypal mafia, Peter Thiel, pets.com, power law, profit motive, Ralph Waldo Emerson, Ray Kurzweil, self-driving car, shareholder value, Sheryl Sandberg, Silicon Valley, Silicon Valley startup, Singularitarianism, software is eating the world, Solyndra, Steve Jobs, strong AI, Suez canal 1869, tech worker, Ted Kaczynski, Tesla Model S, uber lyft, Vilfredo Pareto, working poor

The logical endpoint to this substitutionist thinking is called “strong AI”: computers that eclipse humans on every important dimension. Of course, the Luddites are terrified by the possibility. It even makes the futurists a little uneasy; it’s not clear whether strong AI would save humanity or doom it. Technology is supposed to increase our mastery over nature and reduce the role of chance in our lives; building smarter-than-human computers could actually bring chance back with a vengeance. Strong AI is like a cosmic lottery ticket: if we win, we get utopia; if we lose, Skynet substitutes us out of existence. But even if strong AI is a real possibility rather than an imponderable mystery, it won’t happen anytime soon: replacement by computers is a worry for the 22nd century.

[Back-of-book index excerpt] … string theory; strong AI; substitution, complementarity vs.; Suez Canal …


pages: 198 words: 59,351

The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning by Justin E. H. Smith

3D printing, Ada Lovelace, Adrian Hon, agricultural Revolution, algorithmic management, artificial general intelligence, Big Tech, Charles Babbage, clean water, coronavirus, COVID-19, cryptocurrency, dark matter, disinformation, Donald Trump, drone strike, Elon Musk, game design, gamification, global pandemic, GPT-3, Internet of things, Isaac Newton, Jacquard loom, Jacques de Vaucanson, Jaron Lanier, jimmy wales, Joseph-Marie Jacquard, Kuiper Belt, Mark Zuckerberg, Marshall McLuhan, meme stock, new economy, Nick Bostrom, Norbert Wiener, packet switching, passive income, Potemkin village, printed gun, QAnon, Ray Kurzweil, Republic of Letters, Silicon Valley, Skype, strong AI, technological determinism, theory of mind, TikTok, Tragedy of the Commons, trolley problem, Turing machine, Turing test, you are the product

And those who do this also tend to see our computers differently than Leibniz saw his stepped reckoner: not as prostheses to which we outsource those computational activities of the mind that can be done without real thought, but as rivals or equals, as artificially generated kin, or as mutant enemies. In this respect, though the defenders of strong AI and of various species of the computational theory of mind might accuse defenders of the mill argument and its variants of attachment to a will-o’-the-wisp, to a vestige of prescientific thinking, they are sooner the ones who follow in the footsteps of the alchemists such as Roger Bacon, and of the people who feared the alchemists and their dark conjurings. The imminent arrival of strong AI is in many respects a neo-alchemist idea, of no more real interest in our efforts to understand the promises and threats of technology than any of the other forces medieval conjurers sought to awaken, and charlatans pretended to awaken, and chiliasts warned against awakening.

Judgment, in Cantwell Smith’s view, is “the normative ideal to which … we should hold full-blooded human intelligence—a form of dispassionate deliberative thought, grounded in ethical commitment and responsible action, appropriate to the situation in which it is deployed.”11 Cantwell Smith cites the philosopher and scholar of existential phenomenology John Haugeland, with whom he aligns his own views very closely, according to whom the thing that distinguishes computers from us the most is that, unlike us, “they don’t give a damn.”12 In Cantwell Smith’s gloss of this distinction, things will only begin to matter to computers when “they develop committed and deferential existential engagement with the world.”13 Now, it is not certain that such deferential engagement can only be instantiated in a non-mechanical mind, and it is possible that if reckoning just keeps getting streamlined and quicker, eventually it will cross over into judgment. But mere possibility, as opposed to concrete evidence, is not a very strong foundation for speculation about the inevitable emergence of strong AI, that is, of AI that matches or surpasses human beings in its power of judgment. Few theorists of the coming AI takeover, again, see the dawning of AI consciousness as a necessary or even likely part of this projected scenario. Yet the way they talk about it is often confused enough to make it unclear whether they envision conscious machines as a likely development.

If at least some of the people who are convinced of the likelihood of a singularity moment or an imminent AI takeover are themselves unconvinced that the AI in question must be “strong,” must experience its own consciousness as we do, this only makes it more surprising that those who are attracted to the simulation argument—who overlap to some considerable extent with those who defend the singularity thesis—have at least implicitly allowed strong AI—consciousness, reflective judgment, and so on—to sneak back into the account of what artificial intelligence does or in principle is capable of doing. Again, that this is what they have done is clear from the implicit commitments of the simulation hypothesis we have already considered: we know ourselves, from immediate first-person experience, to be conscious beings; therefore, if it is possible that we are artificial simulations created in the same way that we create our own artificial simulations with our computers, then it is possible that artificial simulations may be conscious beings.


pages: 268 words: 109,447

The Cultural Logic of Computation by David Golumbia

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, American ideology, Benoit Mandelbrot, Bletchley Park, borderless world, business process, cellular automata, citizen journalism, Claude Shannon: information theory, computer age, Computing Machinery and Intelligence, corporate governance, creative destruction, digital capitalism, digital divide, en.wikipedia.org, finite state, folksonomy, future of work, Google Earth, Howard Zinn, IBM and the Holocaust, iterative process, Jaron Lanier, jimmy wales, John von Neumann, Joseph Schumpeter, late capitalism, Lewis Mumford, machine readable, machine translation, means of production, natural language processing, Norbert Wiener, One Laptop per Child (OLPC), packet switching, RAND corporation, Ray Kurzweil, RFID, Richard Stallman, semantic web, Shoshana Zuboff, Slavoj Žižek, social web, stem cell, Stephen Hawking, Steve Ballmer, Stewart Brand, strong AI, supply-chain management, supply-chain management software, technological determinism, Ted Nelson, telemarketer, The Wisdom of Crowds, theory of mind, Turing machine, Turing test, Vannevar Bush, web application, Yochai Benkler

Perhaps because language per se is a much more objective part of the social world than is the abstraction called “thinking,” however, the history of computational linguistics reveals a particular dynamism with regard to the data it takes as its object—exaggerated claims, that is, are frequently met with material tests that confirm or disconfirm theses. Accordingly, CL can claim more practical successes than can the program of Strong AI, but at the same time demonstrates with particular clarity where ideology meets material constraints. Computers invite us to view languages on their terms: on the terms by which computers use formal systems that we have recently decided to call languages—that is, programming languages. But these closed systems, subject to univocal, correct, “activating” interpretations, look little like human language practices, which seem not just to allow but to thrive on ambiguity, context, and polysemy.

(Even in Turing’s original statement of the Test, the interlocutors are supposed to be passing dialogue back and forth in written form, because Turing sees the obvious inability of machines to adequately mimic human speech as a separate question from whether computers can process language.) By focusing on written exemplars, CL and NLP have pursued a program that has much in common with the “Strong AI” programs of the 1960s and 1970s that Hubert Dreyfus (1992), John Haugeland (1985), John Searle (1984, 1992), and others have so effectively critiqued. This program has two distinct aspects, which although they are joined intellectually, are often pursued with apparent independence from each other—yet at the same time, the mere presence of the phrase “computational linguistics” in a title is often not at all enough to distinguish which program the researcher has in mind.

We find it easy to divorce technology from the raced, gendered, politicized parts of our own world—to think of computers as “pure” technology in much the same way that mechanical rationality is supposed to be pure reason. But when we look closer we see something much more reflective of our world and its political history in this technology than we might think at first. The “strong AI” movements of the late 1960s and 1970s, for example, represent and even implement powerful gender ideologies (Adam 1998). In what turns out in retrospect to be a field of study devoted to a mistaken metaphor (see especially Dreyfus 1992), according to which the brain primarily computes like any other Turing machine, we see advocates unusually invested in the idea that they might be creating something like life—in resuscitating the Frankenstein story that may, in fact, be inapplicable to the world of computing itself.


pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

Ada Lovelace, AI winter, Alignment Problem, AlphaGo, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, artificial general intelligence, autonomous vehicles, backpropagation, Bernie Sanders, Big Tech, Boston Dynamics, Cambridge Analytica, Charles Babbage, Claude Shannon: information theory, cognitive dissonance, computer age, computer vision, Computing Machinery and Intelligence, dark matter, deep learning, DeepMind, Demis Hassabis, Douglas Hofstadter, driverless car, Elon Musk, en.wikipedia.org, folksonomy, Geoffrey Hinton, Gödel, Escher, Bach, I think there is a world market for maybe five computers, ImageNet competition, Jaron Lanier, job automation, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, license plate recognition, machine translation, Mark Zuckerberg, natural language processing, Nick Bostrom, Norbert Wiener, ought to be enough for anybody, paperclip maximiser, pattern recognition, performance metric, RAND corporation, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Rodney Brooks, self-driving car, sentiment analysis, Silicon Valley, Singularitarianism, Skype, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, tacit knowledge, tail risk, TED Talk, the long tail, theory of mind, There's no reason for any individual to have a computer in his home - Ken Olsen, trolley problem, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, world market for maybe five computers

In this widely read, controversial piece, Searle introduced the concepts of “strong” and “weak” AI in order to distinguish between two philosophical claims made about AI programs. While many people today use the phrase strong AI to mean “AI that can perform most tasks as well as a human” and weak AI to mean the kind of narrow AI that currently exists, Searle meant something different by these terms. For Searle, the strong AI claim would be that “the appropriately programmed digital computer does not just simulate having a mind; it literally has a mind.”13 In contrast, in Searle’s terminology, weak AI views computers as tools to simulate human intelligence and does not make any claims about them “literally” having a mind.14 We’re back to the philosophical question I was discussing with my mother: Is there a difference between “simulating a mind” and “literally having a mind”?

There is no danger of duplicating it anytime soon.”10 The roboticist (and former director of MIT’s AI Lab) Rodney Brooks agreed, stating that we “grossly overestimate the capabilities of machines—those of today and of the next few decades.”11 The psychologist and AI researcher Gary Marcus went so far as to assert that in the quest to create “strong AI”—that is, general human-level AI—“there has been almost no progress.”12 I could go on and on with dueling quotations. In short, what I found is that the field of AI is in turmoil. Either a huge amount of progress has been made, or almost none at all. Either we are within spitting distance of “true” AI, or it is centuries away.

Like my mother, Searle believes there is a fundamental difference, and he argued that strong AI is impossible even in principle.15 The Turing Test Searle’s article was spurred in part by Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” which had proposed a way to cut through the Gordian knot of “simulated” versus “actual” intelligence. Declaring that “the original question ‘Can a machine think?’


pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI by John Brockman

AI winter, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alignment Problem, AlphaGo, artificial general intelligence, Asilomar, autonomous vehicles, basic income, Benoit Mandelbrot, Bill Joy: nanobots, Bletchley Park, Buckminster Fuller, cellular automata, Claude Shannon: information theory, Computing Machinery and Intelligence, CRISPR, Daniel Kahneman / Amos Tversky, Danny Hillis, data science, David Graeber, deep learning, DeepMind, Demis Hassabis, easy for humans, difficult for computers, Elon Musk, Eratosthenes, Ernest Rutherford, fake news, finite state, friendly AI, future of work, Geoffrey Hinton, Geoffrey West, Santa Fe Institute, gig economy, Hans Moravec, heat death of the universe, hype cycle, income inequality, industrial robot, information retrieval, invention of writing, it is difficult to get a man to understand something, when his salary depends on his not understanding it, James Watt: steam engine, Jeff Hawkins, Johannes Kepler, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, Kickstarter, Laplace demon, Large Hadron Collider, Loebner Prize, machine translation, market fundamentalism, Marshall McLuhan, Menlo Park, military-industrial complex, mirror neurons, Nick Bostrom, Norbert Wiener, OpenAI, optical character recognition, paperclip maximiser, pattern recognition, personalized medicine, Picturephone, profit maximization, profit motive, public intellectual, quantum cryptography, RAND corporation, random walk, Ray Kurzweil, Recombinant DNA, Richard Feynman, Rodney Brooks, self-driving car, sexual politics, Silicon Valley, Skype, social graph, speech recognition, statistical model, Stephen Hawking, Steven Pinker, Stewart Brand, strong AI, superintelligent machines, supervolcano, synthetic biology, systems thinking, technological determinism, technological singularity, technoutopianism, TED Talk, telemarketer, telerobotics, The future is already here, the long tail, the scientific method, theory of mind, trolley problem, Turing machine, Turing test, universal basic income, Upton Sinclair, Von Neumann architecture, Whole Earth Catalog, Y2K, you are the product, zero-sum game

He wanted to argue, with John Searle and Roger Penrose, that “Strong AI” is impossible, but there are no good arguments for that conclusion. After all, everything we now know suggests that, as I have put it, we are robots made of robots made of robots . . . down to the motor proteins and their ilk, with no magical ingredients thrown in along the way. Weizenbaum’s more important and defensible message was that we should not strive to create Strong AI and should be extremely cautious about the AI systems that we can create and have already created. As one might expect, the defensible thesis is a hybrid: AI (Strong AI) is possible in principle but not desirable.

I believe that charting these barriers may be no less important than banging our heads against them. Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points. Such systems cannot reason about “What if?” questions and, therefore, cannot serve as the basis for Strong AI—that is, artificial intelligence that emulates human-level reasoning and competence. To achieve human-level intelligence, learning machines need the guidance of a blueprint of reality, a model—similar to a road map that guides us in driving through an unfamiliar city. To be more specific, current learning machines improve their performance by optimizing parameters for a stream of sensory inputs received from the environment.
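One way to picture the contrast Pearl draws here is to compare a model-blind fit with a query posed against an explicit causal model on the same toy data. The sketch below is an assumed illustration (the confounded data-generating process, variable names, and numbers are mine, not the chapter's): a regression of Y on X reports a strong association, while the assumed causal model, in which X has no effect on Y, answers the interventional "what if we set X?" question with zero.

```python
import random

random.seed(0)

# Assumed toy world for illustration: a confounder Z drives both X and Y; X itself does not affect Y.
samples = []
for _ in range(10_000):
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 0.1)
    y = 3 * z + random.gauss(0, 0.1)
    samples.append((x, y))

# Model-blind mode: fit a straight line to the cloud of (x, y) points.
n = len(samples)
mean_x = sum(x for x, _ in samples) / n
mean_y = sum(y for _, y in samples) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / sum((x - mean_x) ** 2 for x, _ in samples)
print(f"Observed association (regression slope of Y on X): {slope:.2f}")  # roughly 3

# Causal mode: intervening on X cuts the Z -> X link, so under this model setting X leaves Y unchanged.
print("Effect of do(X := x) on Y under the assumed causal model: 0")
```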

It was this wild modeling strategy, not Babylonian extrapolation, that jolted Eratosthenes (276–194 BC) to perform one of the most creative experiments in the ancient world and calculate the circumference of the Earth. Such an experiment would never have occurred to a Babylonian data fitter. Model-blind approaches impose intrinsic limitations on the cognitive tasks that Strong AI can perform. My general conclusion is that human-level AI cannot emerge solely from model-blind learning machines; it requires the symbiotic collaboration of data and models. Data science is a science only to the extent that it facilitates the interpretation of data—a two-body problem, connecting data to reality.


pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders by Mariya Yao, Adelyn Zhou, Marlene Jia

Airbnb, algorithmic bias, AlphaGo, Amazon Web Services, artificial general intelligence, autonomous vehicles, backpropagation, business intelligence, business process, call centre, chief data officer, cognitive load, computer vision, conceptual framework, data science, deep learning, DeepMind, en.wikipedia.org, fake news, future of work, Geoffrey Hinton, industrial robot, information security, Internet of things, iterative process, Jeff Bezos, job automation, machine translation, Marc Andreessen, natural language processing, new economy, OpenAI, pattern recognition, performance metric, price discrimination, randomized controlled trial, recommendation engine, robotic process automation, Salesforce, self-driving car, sentiment analysis, Silicon Valley, single source of truth, skunkworks, software is eating the world, source of truth, sparse data, speech recognition, statistical model, strong AI, subscription business, technological singularity, The future is already here

To avoid confusion, technical experts in the field of AI prefer to use the term Artificial General Intelligence (AGI) to refer to machines with human-level or higher intelligence, capable of abstracting concepts from limited experience and transferring knowledge between domains. AGI is also called “Strong AI” to differentiate from “Weak AI” or “Narrow AI,” which refers to systems designed for one specific task and whose capabilities are not easily transferable to other systems. We go into more detail about the distinction between AI and AGI in our Machine Intelligence Continuum in Chapter 2. Though Deep Blue, which beat the world champion in chess in 1997, and AlphaGo, which did the same for the game of Go in 2016, have achieved impressive results, all of the AI systems we have today are “Weak AI.”

You can also host competitions on Kaggle or similar platforms. Provide a problem, a dataset, and a prize purse to attract competitors. This is a good way to get international talent to work on your problem and will also build your reputation as a company that supports AI. As with any industry, like attracts like. Dominant tech companies build strong AI departments by hiring superstar leaders. Google and Facebook attracted university professors and AI research pioneers such as Geoffrey Hinton, Fei-Fei Li, and Yann LeCun with plum appointments and endless resources. These professors either take a sabbatical from their universities or split their time between academia and industry.

To meet business needs in the short term, consider evaluating third-party solutions built by vendors who specialize in applying AI to enterprise functions.(58) Both startups and established enterprise vendors offer solutions to address common pain points for all departments, including sales and marketing, finance, operations and back-office, customer support, and even HR and recruiting. Emphasize Your Company’s Unique Advantages At the end of an interview cycle, a strong AI candidate will have multiple offers in hand. In order to close the candidate, you’ll need to differentiate your company from others. In addition to compensation, culture, and other general fit criteria, AI talent tends to evaluate offers on the following areas: Availability of Data Candidates want to be able to train their models with as much data as possible.


pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence by Calum Chace

3D printing, Ada Lovelace, AI winter, Airbnb, Alvin Toffler, artificial general intelligence, augmented reality, barriers to entry, basic income, bitcoin, Bletchley Park, blockchain, brain emulation, Buckminster Fuller, Charles Babbage, cloud computing, computer age, computer vision, correlation does not imply causation, credit crunch, cryptocurrency, cuban missile crisis, deep learning, DeepMind, dematerialisation, Demis Hassabis, discovery of the americas, disintermediation, don't be evil, driverless car, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, everywhere but in the productivity statistics, Flash crash, friendly AI, Geoffrey Hinton, Google Glasses, hedonic treadmill, hype cycle, industrial robot, Internet of things, invention of agriculture, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, life extension, low skilled workers, machine translation, Mahatma Gandhi, means of production, mutually assured destruction, Neil Armstrong, Nicholas Carr, Nick Bostrom, paperclip maximiser, pattern recognition, peer-to-peer, peer-to-peer model, Peter Thiel, radical life extension, Ray Kurzweil, Robert Solow, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, Silicon Valley ideology, Skype, South Sea Bubble, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Jobs, strong AI, technological singularity, TED Talk, The future is already here, The Future of Employment, theory of mind, Turing machine, Turing test, universal basic income, Vernor Vinge, wage slave, Wall-E, zero-sum game

If you want to survive this coming fourth phase in the next few decades and prepare for it, you cannot afford NOT to read Chace’s book. Prof. Dr. Hugo de Garis, author of The Artilect War, former director of the Artificial Brain Lab, Xiamen University, China. Advances in AI are set to affect progress in all other areas in the coming decades. If this momentum leads to the achievement of strong AI within the century, then in the words of one field leader it would be “the biggest event in human history”. Now is therefore a perfect time for the thoughtful discussion of challenges and opportunities that Chace provides. Surviving AI is an exceptionally clear, well-researched and balanced introduction to a complex and controversial topic, and is a compelling read to boot.

Whether intelligence resides in the machine or in the software is analogous to the question of whether it resides in the neurons in your brain or in the electrochemical signals that they transmit and receive. Fortunately we don’t need to answer that question here. ANI and AGI We do need to discriminate between two very different types of artificial intelligence: artificial narrow intelligence (ANI) and artificial general intelligence (AGI (4)), which are also known as weak AI and strong AI, and as ordinary AI and full AI. The easiest way to do this is to say that artificial general intelligence, or AGI, is an AI which can carry out any cognitive function that a human can. We have long had computers which can add up much better than any human, and computers which can play chess better than the best human chess grandmaster.

Informed scepticism about near-term AGI We should take more seriously the arguments of very experienced AI researchers who claim that although the AGI undertaking is possible, it won’t be achieved for a very long time. Rodney Brooks, a veteran AI researcher and robot builder, says “I think it is a mistake to be worrying about us developing [strong] AI any time in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.” Andrew Ng at Baidu and Yann LeCun at Facebook are of a similar mind, as we saw in the last chapter.


pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop 2006 by Ben Goertzel, Pei Wang

AI winter, artificial general intelligence, backpropagation, bioinformatics, brain emulation, classic study, combinatorial explosion, complexity theory, computer vision, Computing Machinery and Intelligence, conceptual framework, correlation coefficient, epigenetics, friendly AI, functional programming, G4S, higher-order functions, information retrieval, Isaac Newton, Jeff Hawkins, John Conway, Loebner Prize, Menlo Park, natural language processing, Nick Bostrom, Occam's razor, p-value, pattern recognition, performance metric, precautionary principle, Ray Kurzweil, Rodney Brooks, semantic web, statistical model, strong AI, theory of mind, traveling salesman, Turing machine, Turing test, Von Neumann architecture, Y2K

The next major step in this direction was the May 2006 AGIRI Workshop, of which this volume is essentially a proceedings. The term AGI, artificial general intelligence, was introduced as a modern successor to the earlier strong AI. Artificial General Intelligence What is artificial general intelligence? The AGIRI website lists several features, describing machines • with human-level, and even superhuman, intelligence, • that generalize their knowledge across different domains, • that reflect on themselves, • and that create fundamental innovations and insights. Even strong AI wouldn’t push for this much, and this general, an intelligence. Can there be such an artificial general intelligence? I think there can be, but that it can’t be done with a brain in a vat, with humans providing input and utilizing computational output.

Machine learning algorithms may be applied quite broadly in a variety of contexts, but the breadth and generality in this case is supplied largely by the human user of the algorithm; any particular machine learning program, considered as a holistic system taking in inputs and producing outputs without detailed human intervention, can solve only problems of a very specialized sort. Specified in this way, what we call AGI is similar to some other terms that have been used by other authors, such as “strong AI” [7], “human-level AI” [8], “true synthetic intelligence” [9], “general intelligent system” [10], and even “thinking machine” [11]. Though no term is perfect, we chose to use “AGI” because it correctly stresses the general nature of the research goal and scope, without committing too much to any theory or technique.

In addition to giving a quick overview of the LIDA conceptual model, and its underlying computational technology, we argue for the LIDA architecture’s role as a foundational architecture for an AGI. Finally, lessons for AGI researchers drawn from the model and its architecture are discussed. Introduction Early AI researchers aimed at what was later called “strong AI,” the simulation of human-level intelligence. One of AI’s founders, Herbert Simon, claimed (circa 1957) that “… there are now in the world machines that think, that learn and that create.” He went on to predict that within 10 years a computer would beat a grandmaster at chess, would prove an “important new mathematical theorem,” and would write music of “considerable aesthetic value.”


pages: 283 words: 81,376

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe by William Poundstone

Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, Arthur Eddington, Bayesian statistics, behavioural economics, Benoit Mandelbrot, Berlin Wall, bitcoin, Black Swan, conceptual framework, cosmic microwave background, cosmological constant, cosmological principle, CRISPR, cuban missile crisis, dark matter, DeepMind, digital map, discounted cash flows, Donald Trump, Doomsday Clock, double helix, Dr. Strangelove, Eddington experiment, Elon Musk, Geoffrey Hinton, Gerolamo Cardano, Hans Moravec, heat death of the universe, Higgs boson, if you see hoof prints, think horses—not zebras, index fund, Isaac Newton, Jaron Lanier, Jeff Bezos, John Markoff, John von Neumann, Large Hadron Collider, mandelbrot fractal, Mark Zuckerberg, Mars Rover, Neil Armstrong, Nick Bostrom, OpenAI, paperclip maximiser, Peter Thiel, Pierre-Simon Laplace, Plato's cave, probability theory / Blaise Pascal / Pierre de Fermat, RAND corporation, random walk, Richard Feynman, ride hailing / ride sharing, Rodney Brooks, Ronald Reagan, Ronald Reagan: Tear down this wall, Sam Altman, Schrödinger's Cat, Search for Extraterrestrial Intelligence, self-driving car, Silicon Valley, Skype, Stanislav Petrov, Stephen Hawking, strong AI, tech billionaire, Thomas Bayes, Thomas Malthus, time value of money, Turing test

But this might be all on the surface. Inside, the AI-bot could be empty, what philosophers call a zombie. It would have no soul, no subjectivity, no inner spark of whatever it is that makes us what we are. Bostrom’s trilemma takes strong AI as a given. Maybe it should be called a quadrilemma, with strong AI as the fourth leg of the stool. But for most of those following what Bostrom is saying, strong AI is taken for granted. If simulated people have real feelings, then simulation is an ethically fraught enterprise. A simulation of global history would recreate famine, plague, natural disasters, murders, wars, slavery, and genocide.

Most of today’s AI researchers, and most in the tech community generally, believe that something that acts like a human and talks like a human and thinks like a human—to a sufficiently subtle degree—would have “a mind in exactly the same sense human beings have minds,” in philosopher John Searle’s words. This view is known as “strong AI.” Searle is among a dissenting faction of philosophers, and regular folk, who are not so sure about that. Almost all contemporary philosophers agree in principle that code could pass the Turing test, that it could be programmed to insist on having private moods and emotions, and that it could narrate a stream of consciousness as convincing as any human’s.


pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

AI winter, air gap, AltaVista, Amazon Web Services, artificial general intelligence, Asilomar, Automated Insights, Bayesian statistics, Bernie Madoff, Bill Joy: nanobots, Bletchley Park, brain emulation, California energy crisis, cellular automata, Chuck Templeton: OpenTable:, cloud computing, cognitive bias, commoditize, computer vision, Computing Machinery and Intelligence, cuban missile crisis, Daniel Kahneman / Amos Tversky, Danny Hillis, data acquisition, don't be evil, drone strike, dual-use technology, Extropian, finite state, Flash crash, friendly AI, friendly fire, Google Glasses, Google X / Alphabet X, Hacker News, Hans Moravec, Isaac Newton, Jaron Lanier, Jeff Hawkins, John Markoff, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, Loebner Prize, lone genius, machine translation, mutually assured destruction, natural language processing, Neil Armstrong, Nicholas Carr, Nick Bostrom, optical character recognition, PageRank, PalmPilot, paperclip maximiser, pattern recognition, Peter Thiel, precautionary principle, prisoner's dilemma, Ray Kurzweil, Recombinant DNA, Rodney Brooks, rolling blackouts, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, smart grid, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Jurvetson, Steve Wozniak, strong AI, Stuxnet, subprime mortgage crisis, superintelligent machines, technological singularity, The Coming Technological Singularity, Thomas Bayes, traveling salesman, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, zero day

“Chapter eight is the deeply intertwined promise and peril in GNR [genetics, nanotechnology, and robotics] and I go into pretty graphic detail on the downsides of those three areas of technology. And the downside of robotics, which really refers to AI, is the most profound because intelligence is the most important phenomenon in the world. Inherently there is no absolute protection against strong AI.” Kurzweil’s book does underline the dangers of genetic engineering and nanotechnology, but it gives only a couple of anemic pages to strong AI, the old name for AGI. And in that chapter he also argues that relinquishment, or turning our backs on some technologies because they’re too dangerous, as advocated by Bill Joy and others, isn’t just a bad idea, but an immoral one.

We’ve seen how all of these drives will lead to very bad outcomes without extremely careful planning and programming. And we’re compelled to ask ourselves, are we capable of such careful work? Do you, like me, look around the world at expensive and lethal accidents and wonder how we’ll get it right the first time with very strong AI? Three-Mile Island, Chernobyl, Fukushima—in these nuclear power plant catastrophes, weren’t highly qualified designers and administrators trying their best to avoid the disasters that befell them? The 1986 Chernobyl meltdown occurred during a safety test. All three disasters are what organizational theorist Charles Perrow would call “normal accidents.”

It will be capable of thinking, planning, and gaming its makers. No other tool does anything like that. Kurzweil believes that a way to limit the dangerous aspects of AI, especially ASI, is to pair it with humans through intelligence augmentation—IA. From his uncomfortable metal chair the optimist said, “As I have pointed out, strong AI is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such it will reflect our values because it will be us.” And so, the argument goes, it will be as “safe” as we are. But, as I told Kurzweil, Homo sapiens are not known to be particularly harmless when in contact with one another, other animals, or the environment.


pages: 219 words: 63,495

50 Future Ideas You Really Need to Know by Richard Watson

23andMe, 3D printing, access to a mobile phone, Albert Einstein, Alvin Toffler, artificial general intelligence, augmented reality, autonomous vehicles, BRICs, Buckminster Fuller, call centre, carbon credits, Charles Babbage, clean water, cloud computing, collaborative consumption, computer age, computer vision, crowdsourcing, dark matter, dematerialisation, Dennis Tito, digital Maoism, digital map, digital nomad, driverless car, Elon Musk, energy security, Eyjafjallajökull, failed state, Ford Model T, future of work, Future Shock, gamification, Geoffrey West, Santa Fe Institute, germ theory of disease, global pandemic, happiness index / gross national happiness, Higgs boson, high-speed rail, hive mind, hydrogen economy, Internet of things, Jaron Lanier, life extension, Mark Shuttleworth, Marshall McLuhan, megacity, natural language processing, Neil Armstrong, Network effects, new economy, ocean acidification, oil shale / tar sands, pattern recognition, peak oil, personalized medicine, phenotype, precision agriculture, private spaceflight, profit maximization, RAND corporation, Ray Kurzweil, RFID, Richard Florida, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Skype, smart cities, smart meter, smart transportation, space junk, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, supervolcano, synthetic biology, tech billionaire, telepresence, The Wisdom of Crowds, Thomas Malthus, Turing test, urban decay, Vernor Vinge, Virgin Galactic, Watson beat the top human players on Jeopardy!, web application, women in the workforce, working-age population, young professional

So what happens when machine intelligence starts to rival that of its human designers? Before we descend into this rabbit hole we should first split AI in two. “Strong AI” is the term generally used to describe true thinking machines. “Weak AI” (sometimes known as “Narrow AI”) is intelligence intended to supplement rather than exceed human intelligence. So far most machines are preprogrammed or taught logical courses of action. But in the future, machines with strong AI will be able to learn as they go and respond to unexpected events. The implications? Think of automated disease diagnosis and surgery, military planning and battle command, customer-service avatars, artificial creativity and autonomous robots that predict and then respond to crime (a “Department of Future Crime”—see also Chapter 32 and Biocriminology).

Glossary 3D printer A way to produce 3D objects from digital instructions and layered materials dispersed or sprayed on via a printer. Affective computing Machines and systems that recognize or simulate human affects or emotions. AGI Artificial general intelligence, a term usually used to describe strong AI (the opposite of narrow or weak AI). It is machine intelligence that is equivalent to, or exceeds, human intelligence and it’s usually regarded as the long-term goal of AI research and development. Ambient intelligence Electronic or artificial environments that recognize the presence of other machines or people and respond to their needs.


pages: 742 words: 137,937

The Future of the Professions: How Technology Will Transform the Work of Human Experts by Richard Susskind, Daniel Susskind

23andMe, 3D printing, Abraham Maslow, additive manufacturing, AI winter, Albert Einstein, Amazon Mechanical Turk, Amazon Robotics, Amazon Web Services, Andrew Keen, Atul Gawande, Automated Insights, autonomous vehicles, Big bang: deregulation of the City of London, big data - Walmart - Pop Tarts, Bill Joy: nanobots, Blue Ocean Strategy, business process, business process outsourcing, Cass Sunstein, Checklist Manifesto, Clapham omnibus, Clayton Christensen, clean water, cloud computing, commoditize, computer age, Computer Numeric Control, computer vision, Computing Machinery and Intelligence, conceptual framework, corporate governance, creative destruction, crowdsourcing, Daniel Kahneman / Amos Tversky, data science, death of newspapers, disintermediation, Douglas Hofstadter, driverless car, en.wikipedia.org, Erik Brynjolfsson, Evgeny Morozov, Filter Bubble, full employment, future of work, Garrett Hardin, Google Glasses, Google X / Alphabet X, Hacker Ethic, industrial robot, informal economy, information retrieval, interchangeable parts, Internet of things, Isaac Newton, James Hargreaves, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Joseph Schumpeter, Khan Academy, knowledge economy, Large Hadron Collider, lifelogging, lump of labour, machine translation, Marshall McLuhan, Metcalfe’s law, Narrative Science, natural language processing, Network effects, Nick Bostrom, optical character recognition, Paul Samuelson, personalized medicine, planned obsolescence, pre–internet, Ray Kurzweil, Richard Feynman, Second Machine Age, self-driving car, semantic web, Shoshana Zuboff, Skype, social web, speech recognition, spinning jenny, strong AI, supply-chain management, Susan Wojcicki, tacit knowledge, TED Talk, telepresence, The Future of Employment, the market place, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, Tragedy of the Commons, transaction costs, Turing test, Two Sigma, warehouse robotics, Watson beat the top human players on Jeopardy!, WikiLeaks, world market for maybe five computers, Yochai Benkler, young professional

In the language of some AI scientists and philosophers of the 1980s, these systems would be labelled, perhaps a little pejoratively, as ‘weak AI’ rather than ‘strong AI’.8 Broadly speaking, ‘weak AI’ is a term applied to systems that appear, behaviourally, to engage in intelligent human-like thought but in fact enjoy no form of consciousness; whereas systems that exhibit ‘strong AI’ are those that, it is maintained, do have thoughts and cognitive states. On this latter view, the brain is often equated with the digital computer. Today, fascination with ‘strong AI’ is perhaps more intense than ever, even though really big questions remain unanswered and unanswerable.

Undeterred by these philosophical challenges, books and projects abound on building brains and creating minds.9 In the 1980s, in our speeches, we used to joke about the claim of one of the fathers of AI, Marvin Minsky, who reportedly said that ‘the next generation of computers will be so intelligent, we will be lucky if they keep us around as household pets’.10 Today, it is no longer laugh-worthy or science-fictional11 to contemplate a future in which our computers are vastly more intelligent than us—this prospect is discussed at length in Superintelligence by Nick Bostrom, who runs the Future of Humanity Institute at the Oxford Martin School at the University of Oxford.12 Ironically, this growth in confidence in the possibility of ‘strong AI’, at least in part, has been fuelled by the success of Watson itself. The irony here is that Watson in fact belongs in the category of ‘weak AI’, and it is precisely because it cannot meaningfully be said to think that the system is not deemed very interesting by some AI scientists, psychologists, and philosophers. For pragmatists (like us) rather than purists, whether Watson is an example of ‘weak’ or ‘strong’ AI is of little moment. Pragmatists are interested in high-performing systems, whether or not they can think.


pages: 245 words: 64,288

Robots Will Steal Your Job, But That's OK: How to Survive the Economic Collapse and Be Happy by Pistono, Federico

3D printing, Albert Einstein, autonomous vehicles, bioinformatics, Buckminster Fuller, cloud computing, computer vision, correlation does not imply causation, en.wikipedia.org, epigenetics, Erik Brynjolfsson, Firefox, future of work, gamification, George Santayana, global village, Google Chrome, happiness index / gross national happiness, hedonic treadmill, illegal immigration, income inequality, information retrieval, Internet of things, invention of the printing press, Jeff Hawkins, jimmy wales, job automation, John Markoff, Kevin Kelly, Khan Academy, Kickstarter, Kiva Systems, knowledge worker, labor-force participation, Lao Tzu, Law of Accelerating Returns, life extension, Loebner Prize, longitudinal study, means of production, Narrative Science, natural language processing, new economy, Occupy movement, patent troll, pattern recognition, peak oil, post scarcity, QR code, quantum entanglement, race to the bottom, Ray Kurzweil, recommendation engine, RFID, Rodney Brooks, selection bias, self-driving car, seminal paper, slashdot, smart cities, software as a service, software is eating the world, speech recognition, Steven Pinker, strong AI, synthetic biology, technological singularity, TED Talk, Turing test, Vernor Vinge, warehouse automation, warehouse robotics, women in the workforce

A machine able to pass the Turing test is said to have achieved human-level intelligence, or at least perceived intelligence (whether we consider that to be true intelligence or not is irrelevant for the purpose of the argument). Some people call this Strong Artificial Intelligence (Strong AI), and many see Strong AI as an unachievable myth, because the brain is mysterious, and so much more than the sum of its individual components. They claim that the brain operates using unknown, possibly unintelligible quantum mechanical processes, and any effort to reach or even surpass it using mechanical machines is pure fantasy.

Others claim that the brain is just a biological machine, not much different from any other machine, and that it is merely a matter of time before we can surpass it using our artificial creations. This is certainly a fascinating topic, one that would require a thorough examination. Perhaps I will explore it in another book. For now, let us concentrate on the present, on what we know for sure, and on the upcoming future. As we will see, there is no need for machines to achieve Strong AI in order to change the nature of the economy, employment, and our lives, forever. We will start by looking at what intelligence is, how it can be useful, and if machines have become intelligent, perhaps even more so than us. Chapter 5 Intelligence There is a great deal of confusion regarding the meaning of the word intelligence, mainly because nobody really knows what it is.


pages: 281 words: 71,242

World Without Mind: The Existential Threat of Big Tech by Franklin Foer

artificial general intelligence, back-to-the-land, Berlin Wall, big data - Walmart - Pop Tarts, Big Tech, big-box store, Buckminster Fuller, citizen journalism, Colonization of Mars, computer age, creative destruction, crowdsourcing, data is the new oil, data science, deep learning, DeepMind, don't be evil, Donald Trump, Double Irish / Dutch Sandwich, Douglas Engelbart, driverless car, Edward Snowden, Electric Kool-Aid Acid Test, Elon Musk, Evgeny Morozov, Fall of the Berlin Wall, Filter Bubble, Geoffrey Hinton, global village, Google Glasses, Haight Ashbury, hive mind, income inequality, intangible asset, Jeff Bezos, job automation, John Markoff, Kevin Kelly, knowledge economy, Law of Accelerating Returns, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, means of production, move fast and break things, new economy, New Journalism, Norbert Wiener, off-the-grid, offshore financial centre, PageRank, Peace of Westphalia, Peter Thiel, planetary scale, Ray Kurzweil, scientific management, self-driving car, Silicon Valley, Singularitarianism, software is eating the world, Steve Jobs, Steven Levy, Stewart Brand, strong AI, supply-chain management, TED Talk, the medium is the message, the scientific method, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, Thomas L Friedman, Thorstein Veblen, Upton Sinclair, Vernor Vinge, vertical integration, We are as Gods, Whole Earth Catalog, yellow journalism

In Kurzweil’s telling, the singularity is when artificial intelligence becomes all-powerful, when computers are capable of designing and building other computers. This superintelligence will, of course, create a superintelligence even more powerful than itself—and so on, down the posthuman generations. At that point, all bets are off—“strong AI and nanotechnology can create any product, any situation, any environment that we can imagine at will.” As a scientist, Kurzweil believes in precision. When he makes predictions, he doesn’t chuck darts; he extrapolates data. In fact, he’s loaded everything we know about the history of human technology onto his computer and run the numbers.

There’s a school of incrementalists, who cherish everything that has been accomplished to date—victories like the PageRank algorithm or the software that allows ATMs to read the scrawled writing on checks. This school holds out little to no hope that computers will ever acquire anything approximating human consciousness. Then there are the revolutionaries who gravitate toward Kurzweil and the singularitarian view. They aim to build computers with either “artificial general intelligence” or “strong AI.” For most of Google’s history, it trained its efforts on incremental improvements. During that earlier era, the company was run by Eric Schmidt—an older, experienced manager, whom Google’s investors forced Page and Brin to accept as their “adult” supervisor. That’s not to say that Schmidt was timid.

he made an appearance on Steve Allen’s game show, I’ve Got a Secret: Ray Kurzweil, “I’ve Got a Secret,” 1965, https://www.youtube.com/watch?v=X4Neivqp2K4. “to invent things so that the blind could see”: Steve Rabinowitz quoted in Transcendent Man, directed by Barry Ptolemy, 2011. “profoundly sad, lonely feeling that I really can’t bear it”: Transcendent Man. “strong AI and nanotechnology can create any product”: Ray Kurzweil, The Singularity Is Near (Viking Penguin, 2005), 299. “Each epoch of evolution has progressed more rapidly”: Kurzweil, Singularity, 40. “version 1.0 biological bodies”: Kurzweil, Singularity, 9. “We will be software, not hardware”: Ray Kurzweil, The Age of Spiritual Machines (Viking Penguin, 1999), 129.


pages: 586 words: 186,548

Architects of Intelligence by Martin Ford

3D printing, agricultural Revolution, AI winter, algorithmic bias, Alignment Problem, AlphaGo, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, barriers to entry, basic income, Baxter: Rethink Robotics, Bayesian statistics, Big Tech, bitcoin, Boeing 747, Boston Dynamics, business intelligence, business process, call centre, Cambridge Analytica, cloud computing, cognitive bias, Colonization of Mars, computer vision, Computing Machinery and Intelligence, correlation does not imply causation, CRISPR, crowdsourcing, DARPA: Urban Challenge, data science, deep learning, DeepMind, Demis Hassabis, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, driverless car, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, fake news, Fellow of the Royal Society, Flash crash, future of work, general purpose technology, Geoffrey Hinton, gig economy, Google X / Alphabet X, Gödel, Escher, Bach, Hans Moravec, Hans Rosling, hype cycle, ImageNet competition, income inequality, industrial research laboratory, industrial robot, information retrieval, job automation, John von Neumann, Large Hadron Collider, Law of Accelerating Returns, life extension, Loebner Prize, machine translation, Mark Zuckerberg, Mars Rover, means of production, Mitch Kapor, Mustafa Suleyman, natural language processing, new economy, Nick Bostrom, OpenAI, opioid epidemic / opioid crisis, optical character recognition, paperclip maximiser, pattern recognition, phenotype, Productivity paradox, radical life extension, Ray Kurzweil, recommendation engine, Robert Gordon, Rodney Brooks, Sam Altman, self-driving car, seminal paper, sensor fusion, sentiment analysis, Silicon Valley, smart cities, social intelligence, sparse data, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, synthetic biology, systems thinking, Ted Kaczynski, TED Talk, The Rise and Fall of American Growth, theory of mind, Thomas Bayes, Travis Kalanick, Turing test, universal basic income, Wall-E, Watson beat the top human players on Jeopardy!, women in the workforce, working-age population, workplace surveillance , zero-sum game, Zipcar

MARTIN FORD: So, you believe that the capability to think causally is critical to achieving what you’d call strong AI or AGI, artificial general intelligence? JUDEA PEARL: I have no doubt that it is essential. Whether it is sufficient, I’m not sure. However, causal reasoning doesn’t solve every problem of general AI. It doesn’t solve the object recognition problem, and it doesn’t solve the language understanding problem. We basically solved the cause-effect puzzle, and we can learn a lot from these solutions so that we can help the other tasks circumvent their obstacles. MARTIN FORD: Do you think that strong AI or AGI is feasible? Is that something you think will happen someday?

A breakthrough that allowed machines to efficiently learn in a truly unsupervised way would likely be considered one of the biggest events in AI so far, and an important waypoint on the road to human-level AI. ARTIFICIAL GENERAL INTELLIGENCE (AGI) refers to a true thinking machine. AGI is typically considered to be more or less synonymous with the terms HUMAN-LEVEL AI or STRONG AI. You’ve likely seen several examples of AGI—but they have all been in the realm of science fiction. HAL from 2001 A Space Odyssey, the Enterprise’s main computer (or Mr. Data) from Star Trek, C3PO from Star Wars and Agent Smith from The Matrix are all examples of AGI. Each of these fictional systems would be capable of passing the TURING TEST—in other words, these AI systems could carry out a conversation so that they would be indistinguishable from a human being.

MARTIN FORD: It sounds like your strategy is to attract AI talent in part by offering the opportunity and infrastructure to found a startup venture. ANDREW NG: Yes, building a successful AI company takes more than AI talent. We focus so much on the technology because it’s advancing so quickly, but building a strong AI team often needs a portfolio of different skills ranging from the tech, to the business strategy, to product, to marketing, to business development. Our role is building full stack teams that are able to build concrete business verticals. The technology is super important, but a startup is much more than technology.


pages: 561 words: 157,589

WTF?: What's the Future and Why It's Up to Us by Tim O'Reilly

"Friedman doctrine" OR "shareholder theory", 4chan, Affordable Care Act / Obamacare, Airbnb, AlphaGo, Alvin Roth, Amazon Mechanical Turk, Amazon Robotics, Amazon Web Services, AOL-Time Warner, artificial general intelligence, augmented reality, autonomous vehicles, barriers to entry, basic income, behavioural economics, benefit corporation, Bernie Madoff, Bernie Sanders, Bill Joy: nanobots, bitcoin, Blitzscaling, blockchain, book value, Bretton Woods, Brewster Kahle, British Empire, business process, call centre, Capital in the Twenty-First Century by Thomas Piketty, Captain Sullenberger Hudson, carbon tax, Carl Icahn, Chuck Templeton: OpenTable:, Clayton Christensen, clean water, cloud computing, cognitive dissonance, collateralized debt obligation, commoditize, computer vision, congestion pricing, corporate governance, corporate raider, creative destruction, CRISPR, crowdsourcing, Danny Hillis, data acquisition, data science, deep learning, DeepMind, Demis Hassabis, Dennis Ritchie, deskilling, DevOps, Didi Chuxing, digital capitalism, disinformation, do well by doing good, Donald Davies, Donald Trump, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, fake news, Filter Bubble, Firefox, Flash crash, Free Software Foundation, fulfillment center, full employment, future of work, George Akerlof, gig economy, glass ceiling, Glass-Steagall Act, Goodhart's law, Google Glasses, Gordon Gekko, gravity well, greed is good, Greyball, Guido van Rossum, High speed trading, hiring and firing, Home mortgage interest deduction, Hyperloop, income inequality, independent contractor, index fund, informal economy, information asymmetry, Internet Archive, Internet of things, invention of movable type, invisible hand, iterative process, Jaron Lanier, Jeff Bezos, jitney, job automation, job satisfaction, John Bogle, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Zimmer (Lyft cofounder), Kaizen: continuous improvement, Ken Thompson, Kevin Kelly, Khan Academy, Kickstarter, Kim Stanley Robinson, knowledge worker, Kodak vs Instagram, Lao Tzu, Larry Ellison, Larry Wall, Lean Startup, Leonard Kleinrock, Lyft, machine readable, machine translation, Marc Andreessen, Mark Zuckerberg, market fundamentalism, Marshall McLuhan, McMansion, microbiome, microservices, minimum viable product, mortgage tax deduction, move fast and break things, Network effects, new economy, Nicholas Carr, Nick Bostrom, obamacare, Oculus Rift, OpenAI, OSI model, Overton Window, packet switching, PageRank, pattern recognition, Paul Buchheit, peer-to-peer, peer-to-peer model, Ponzi scheme, post-truth, race to the bottom, Ralph Nader, randomized controlled trial, RFC: Request For Comment, Richard Feynman, Richard Stallman, ride hailing / ride sharing, Robert Gordon, Robert Metcalfe, Ronald Coase, Rutger Bregman, Salesforce, Sam Altman, school choice, Second Machine Age, secular stagnation, self-driving car, SETI@home, shareholder value, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart contracts, Snapchat, Social Responsibility of Business Is to Increase Its Profits, social web, software as a service, software patent, spectrum auction, speech recognition, Stephen Hawking, Steve Ballmer, Steve Jobs, Steven Levy, Stewart Brand, stock buybacks, strong AI, synthetic biology, TaskRabbit, telepresence, the built environment, the Cathedral and the Bazaar, The future is already here, The Future of Employment, the map is not the territory, The Nature of the Firm, The Rise 
and Fall of American Growth, The Wealth of Nations by Adam Smith, Thomas Davenport, Tony Fadell, Tragedy of the Commons, transaction costs, transcontinental railway, transportation-network company, Travis Kalanick, trickle-down economics, two-pizza team, Uber and Lyft, Uber for X, uber lyft, ubercab, universal basic income, US Airways Flight 1549, VA Linux, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, We are the 99%, web application, Whole Earth Catalog, winner-take-all economy, women in the workforce, Y Combinator, yellow journalism, zero-sum game, Zipcar

As computational neuroscientist and AI entrepreneur Beau Cronin puts it, “In many cases, Google has succeeded by reducing problems that were previously assumed to require strong AI—that is, reasoning and problem-solving abilities generally associated with human intelligence—into narrow AI, solvable by matching new inputs against vast repositories of previously encountered examples.” Enough narrow AI infused with the data thrown off by billions of humans starts to look suspiciously like strong AI. In short, these are systems of collective intelligence that use algorithms to aggregate the collective knowledge and decisions of millions of individual humans.
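The “matching new inputs against vast repositories of previously encountered examples” that Cronin describes can be pictured with a minimal nearest-neighbour sketch in Python (a toy with invented data, not a description of Google's actual systems): the program stores labelled examples and answers a new query with the label of the closest stored one.

```python
# Toy illustration of "matching new inputs against a repository of
# previously encountered examples" via nearest-neighbour lookup.
# The repository and feature values below are invented for illustration.

from math import dist  # Euclidean distance, available since Python 3.8

# A tiny "repository" of previously encountered, labelled examples.
repository = [
    ((0.9, 0.1), "spam"),
    ((0.8, 0.2), "spam"),
    ((0.1, 0.9), "not spam"),
    ((0.2, 0.7), "not spam"),
]

def classify(new_input):
    """Return the label of the stored example closest to new_input."""
    _, label = min(repository, key=lambda example: dist(example[0], new_input))
    return label

print(classify((0.85, 0.15)))  # -> spam
print(classify((0.15, 0.80)))  # -> not spam
```

Scaled up to billions of stored examples and learned similarity measures, this simple "answer with the closest past case" pattern is how narrow systems can mimic behaviour once assumed to require strong AI.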

And then we need to understand how financial markets (often colloquially, and inaccurately, referred to simply as “Wall Street”) have become a machine that its creators no longer fully understand, and how the goals and operation of that machine have become radically disconnected from the market of real goods and services that it was originally created to support. THREE TYPES OF ARTIFICIAL INTELLIGENCE As we’ve seen, when experts talk about artificial intelligence, they distinguish between “narrow artificial intelligence” and “general artificial intelligence,” also referred to as “weak AI” and “strong AI.” Narrow AI burst into the public debate in 2011. That was the year that IBM’s Watson soundly trounced the best human Jeopardy players in a nationally televised match in February. In October of that same year, Apple introduced Siri, its personal agent, able to answer common questions spoken aloud in plain language.

Rather than spelling out every procedure, a base program such as an image recognizer or categorizer is built, and then trained by feeding it large amounts of data labeled by humans until it can recognize patterns in the data on its own. We teach the program what success looks like, and it learns to copy us. This leads to the fear that these programs will become increasingly independent of their creators. Artificial general intelligence (also sometimes referred to as “strong AI”) is still the stuff of science fiction. It is the product of a hypothetical future in which an artificial intelligence isn’t just trained to be smart about a specific task, but to learn entirely on its own, and can effectively apply its intelligence to any problem that comes its way. The fear is that an artificial general intelligence will develop its own goals and, because of its ability to learn on its own at superhuman speeds, will improve itself at a rate that soon leaves humans far behind.
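The training loop sketched in this passage (build a base categorizer, feed it human-labelled examples until it picks up the pattern, then let it label new inputs on its own) can be illustrated with a toy perceptron; this is an illustrative sketch with made-up data, not the model any particular system uses.

```python
# Toy supervised learner: a perceptron trained on human-labelled examples.
# Real image recognizers use deep networks and far more data, but the
# pattern is the same: show labelled examples, adjust, then let the
# model label new inputs on its own.

training_data = [          # (features, label) pairs labelled by a human
    ((1.0, 0.2), 1),
    ((0.9, 0.4), 1),
    ((0.2, 0.9), 0),
    ((0.1, 0.8), 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation >= 0 else 0

# "Feed it labelled data until it can recognize patterns on its own."
for _ in range(20):                      # a few passes over the data
    for x, label in training_data:
        error = label - predict(x)       # how wrong was the guess?
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print(predict((0.95, 0.30)))  # -> 1
print(predict((0.15, 0.85)))  # -> 0
```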


pages: 247 words: 43,430

Think Complexity by Allen B. Downey

Benoit Mandelbrot, cellular automata, Conway's Game of Life, Craig Reynolds: boids flock, discrete time, en.wikipedia.org, Frank Gehry, Gini coefficient, Guggenheim Bilbao, Laplace demon, mandelbrot fractal, Occupy movement, Paul Erdős, peer-to-peer, Pierre-Simon Laplace, power law, seminal paper, sorting algorithm, stochastic process, strong AI, Thomas Kuhn: the structure of scientific revolutions, Turing complete, Turing machine, Vilfredo Pareto, We are the 99%

One of the strongest challenges to compatibilism is the consequence argument. What is the consequence argument? What response can you give to the consequence argument based on what you have read in this book? Example 10-7. In the philosophy of mind, Strong AI is the position that an appropriately programmed computer could have a mind in the same sense that humans have minds. John Searle presented a thought experiment called The Chinese Room, intended to show that Strong AI is false. You can read about it at http://en.wikipedia.org/wiki/Chinese_room. What is the system reply to the Chinese Room argument? How does what you have learned about complexity science influence your reaction to the system response?


pages: 372 words: 101,174

How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Albert Michelson, anesthesia awareness, anthropic principle, brain emulation, cellular automata, Charles Babbage, Claude Shannon: information theory, cloud computing, computer age, Computing Machinery and Intelligence, Dean Kamen, discovery of DNA, double helix, driverless car, en.wikipedia.org, epigenetics, George Gilder, Google Earth, Hans Moravec, Isaac Newton, iterative process, Jacquard loom, Jeff Hawkins, John von Neumann, Law of Accelerating Returns, linear programming, Loebner Prize, mandelbrot fractal, Nick Bostrom, Norbert Wiener, optical character recognition, PalmPilot, pattern recognition, Peter Thiel, Ralph Waldo Emerson, random walk, Ray Kurzweil, reversible computing, selective serotonin reuptake inhibitor (SSRI), self-driving car, speech recognition, Steven Pinker, strong AI, the scientific method, theory of mind, Turing complete, Turing machine, Turing test, Wall-E, Watson beat the top human players on Jeopardy!, X Prize

The Google self-driving cars learn from their own driving experience as well as from data from Google cars driven by human drivers; Watson learned most of its knowledge by reading on its own. It is interesting to note that the methods deployed today in AI have evolved to be mathematically very similar to the mechanisms in the neocortex. Another objection to the feasibility of “strong AI” (artificial intelligence at human levels and beyond) that is often raised is that the human brain makes extensive use of analog computing, whereas digital methods inherently cannot replicate the gradations of value that analog representations can embody. It is true that one bit is either on or off, but multiple-bit words easily represent multiple gradations and can do so to any desired degree of accuracy.
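Kurzweil's reply to the analog objection is easy to check numerically: quantizing an analog value in [0, 1] with an n-bit word leaves a worst-case error of roughly 1/2^(n+1), so adding bits yields arbitrarily fine gradations. A small sketch with illustrative values only:

```python
# Quantize an "analog" value in [0, 1] to an n-bit digital word and
# measure the error; more bits -> finer gradations, smaller error.

def quantize(value, bits):
    levels = 2 ** bits                   # number of representable gradations
    code = round(value * (levels - 1))   # integer code stored in n bits
    return code / (levels - 1)           # reconstructed value

analog_value = 0.637429
for bits in (4, 8, 16, 24):
    digital = quantize(analog_value, bits)
    print(f"{bits:2d} bits: {digital:.8f}  error = {abs(digital - analog_value):.2e}")
```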

“IBM Unveils Cognitive Computing Chips,” IBM news release, August 18, 2011, http://www-03.ibm.com/press/us/en/pressrelease/35251.wss. 8. “Japan’s K Computer Tops 10 Petaflop/s to Stay Atop TOP500 List.” Chapter 9: Thought Experiments on the Mind 1. John R. Searle, “I Married a Computer,” in Jay W. Richards, ed., Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI (Seattle: Discovery Institute, 2002). 2. Stuart Hameroff, Ultimate Computing: Biomolecular Consciousness and Nanotechnology (Amsterdam: Elsevier Science, 1987). 3. P. S. Sebel et al., “The Incidence of Awareness during Anesthesia: A Multicenter United States Study,” Anesthesia and Analgesia 99 (2004): 833–39. 4.

., “Cognitive Computing,” Communications of the ACM 54, no. 8 (2011): 62–71, http://cacm.acm.org/magazines/2011/8/114944-cognitive-computing/fulltext. 9. Kurzweil, The Singularity Is Near, chapter 9, section titled “The Criticism from Ontology: Can a Computer Be Conscious?” (pp. 458–69). 10. Michael Denton, “Organism and Machine: The Flawed Analogy,” in Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI (Seattle: Discovery Institute, 2002). 11. Hans Moravec, Mind Children (Cambridge, MA: Harvard University Press, 1988). Epilogue 1. “In U.S., Optimism about Future for Youth Reaches All-Time Low,” Gallup Politics, May 2, 2011, http://www.gallup.com/poll/147350/optimism-future-youth-reaches-time-low.aspx. 2.


Work in the Future: The Automation Revolution (Palgrave Macmillan, 2019) by Robert Skidelsky and Nan Craig

3D printing, Airbnb, algorithmic trading, AlphaGo, Alvin Toffler, Amazon Web Services, anti-work, antiwork, artificial general intelligence, asset light, autonomous vehicles, basic income, behavioural economics, business cycle, cloud computing, collective bargaining, Computing Machinery and Intelligence, correlation does not imply causation, creative destruction, data is the new oil, data science, David Graeber, David Ricardo: comparative advantage, deep learning, DeepMind, deindustrialization, Demis Hassabis, deskilling, disintermediation, do what you love, Donald Trump, driverless car, Erik Brynjolfsson, fake news, feminist movement, Ford Model T, Frederick Winslow Taylor, future of work, Future Shock, general purpose technology, gig economy, global supply chain, income inequality, independent contractor, informal economy, Internet of things, Jarndyce and Jarndyce, Jarndyce and Jarndyce, job automation, job polarisation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Joseph Schumpeter, knowledge economy, Loebner Prize, low skilled workers, Lyft, Mark Zuckerberg, means of production, moral panic, Network effects, new economy, Nick Bostrom, off grid, pattern recognition, post-work, Ronald Coase, scientific management, Second Machine Age, self-driving car, sharing economy, SoftBank, Steve Jobs, strong AI, tacit knowledge, technological determinism, technoutopianism, TED Talk, The Chicago School, The Future of Employment, the market place, The Nature of the Firm, The Wealth of Nations by Adam Smith, Thorstein Veblen, Turing test, Uber for X, uber lyft, universal basic income, wealth creators, working poor

These theorists claim that a suitably programmed computer could imitate conscious mental states, such as self-awareness, understanding or love, but could never actually experience them—it could never be conscious, and hence it could never be self-aware and would never actually understand or love anything. Proponents of Strong AI believe the opposite. They claim that a computer could, given the right programming, possess consciousness and thereby experience conscious mental states. This title is inspired by Dreyfus’s (1992) ‘What Computers Still Can’t Do’.

Indeed, there is no example to be found of an entity producing its opposite by itself.2 Therefore, it is unreasonable to suppose that the brain would be able to do so: an entirely physical thing (brain/computer) could not produce something that is entirely non-physical (consciousness). Note that although the above examples admit of the possibility of something leading to its opposite if combined with something else, the argument of Strong AI proponents such as Kurzweil entails no such non-physical enabling substance. Such theorists suggest that a physical thing, and that physical thing alone, can produce consciousness. That is precisely the claim I am rejecting. 2 Or if there is, let the reader pen it in writing as an objection to this premise. Finally, let us consider an objection from Turing (1950: 445–447).


pages: 677 words: 206,548

Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It by Marc Goodman

23andMe, 3D printing, active measures, additive manufacturing, Affordable Care Act / Obamacare, Airbnb, airport security, Albert Einstein, algorithmic trading, Alvin Toffler, Apollo 11, Apollo 13, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, augmented reality, autonomous vehicles, Baxter: Rethink Robotics, Bill Joy: nanobots, bitcoin, Black Swan, blockchain, borderless world, Boston Dynamics, Brian Krebs, business process, butterfly effect, call centre, Charles Lindbergh, Chelsea Manning, Citizen Lab, cloud computing, Cody Wilson, cognitive dissonance, computer vision, connected car, corporate governance, crowdsourcing, cryptocurrency, data acquisition, data is the new oil, data science, Dean Kamen, deep learning, DeepMind, digital rights, disinformation, disintermediation, Dogecoin, don't be evil, double helix, Downton Abbey, driverless car, drone strike, Edward Snowden, Elon Musk, Erik Brynjolfsson, Evgeny Morozov, Filter Bubble, Firefox, Flash crash, Free Software Foundation, future of work, game design, gamification, global pandemic, Google Chrome, Google Earth, Google Glasses, Gordon Gekko, Hacker News, high net worth, High speed trading, hive mind, Howard Rheingold, hypertext link, illegal immigration, impulse control, industrial robot, information security, Intergovernmental Panel on Climate Change (IPCC), Internet of things, Jaron Lanier, Jeff Bezos, job automation, John Harrison: Longitude, John Markoff, Joi Ito, Jony Ive, Julian Assange, Kevin Kelly, Khan Academy, Kickstarter, Kiva Systems, knowledge worker, Kuwabatake Sanjuro: assassination market, Large Hadron Collider, Larry Ellison, Laura Poitras, Law of Accelerating Returns, Lean Startup, license plate recognition, lifelogging, litecoin, low earth orbit, M-Pesa, machine translation, Mark Zuckerberg, Marshall McLuhan, Menlo Park, Metcalfe’s law, MITM: man-in-the-middle, mobile money, more computing power than Apollo, move fast and break things, Nate Silver, national security letter, natural language processing, Nick Bostrom, obamacare, Occupy movement, Oculus Rift, off grid, off-the-grid, offshore financial centre, operational security, optical character recognition, Parag Khanna, pattern recognition, peer-to-peer, personalized medicine, Peter H. Diamandis: Planetary Resources, Peter Thiel, pre–internet, printed gun, RAND corporation, ransomware, Ray Kurzweil, Recombinant DNA, refrigerator car, RFID, ride hailing / ride sharing, Rodney Brooks, Ross Ulbricht, Russell Brand, Salesforce, Satoshi Nakamoto, Second Machine Age, security theater, self-driving car, shareholder value, Sheryl Sandberg, Silicon Valley, Silicon Valley startup, SimCity, Skype, smart cities, smart grid, smart meter, Snapchat, social graph, SoftBank, software as a service, speech recognition, stealth mode startup, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, Stuxnet, subscription business, supply-chain management, synthetic biology, tech worker, technological singularity, TED Talk, telepresence, telepresence robot, Tesla Model S, The future is already here, The Future of Employment, the long tail, The Wisdom of Crowds, Tim Cook: Apple, trade route, uranium enrichment, Virgin Galactic, Wall-E, warehouse robotics, Watson beat the top human players on Jeopardy!, Wave and Pay, We are Anonymous. We are Legion, web application, Westphalian system, WikiLeaks, Y Combinator, you are the product, zero day

Somehow, the impossible always seems to become the possible. In the world of artificial intelligence, that next phase of development is called artificial general intelligence (AGI), or strong AI. In contrast to narrow AI, which cleverly performs a specific limited task, such as machine translation or auto navigation, strong AI refers to “thinking machines” that might perform any intellectual task that a human being could. Characteristics of a strong AI would include the ability to reason, make judgments, plan, learn, communicate, and unify these skills toward achieving common goals across a variety of domains, and commercial interest is growing.

To resolve the contradiction in his program, he attempts to kill the crew. As narrow AI becomes more powerful, robots grow more autonomous, and AGI looms large, we need to ensure that the algorithms of tomorrow are better equipped to resolve programming conflicts and moral judgments than was HAL. It’s not that any strong AI would necessarily be “evil” and attempt to destroy humanity, but in pursuit of its primary goal as programmed, an AGI might not stop until it had achieved its mission at all costs, even if that meant competing with or harming human beings, seizing our resources, or damaging our environment. As the perceived risks from AGI have grown, numerous nonprofit institutes have been formed to address and study them, including Oxford’s Future of Humanity Institute, the Machine Intelligence Research Institute, the Future of Life Institute, and the Cambridge Centre for the Study of Existential Risk.


pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence by Jacob Turner

"World Economic Forum" Davos, Ada Lovelace, Affordable Care Act / Obamacare, AI winter, algorithmic bias, algorithmic trading, AlphaGo, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, autonomous vehicles, backpropagation, Basel III, bitcoin, Black Monday: stock market crash in 1987, blockchain, brain emulation, Brexit referendum, Cambridge Analytica, Charles Babbage, Clapham omnibus, cognitive dissonance, Computing Machinery and Intelligence, corporate governance, corporate social responsibility, correlation does not imply causation, crowdsourcing, data science, deep learning, DeepMind, Demis Hassabis, distributed ledger, don't be evil, Donald Trump, driverless car, easy for humans, difficult for computers, effective altruism, Elon Musk, financial exclusion, financial innovation, friendly fire, future of work, hallucination problem, hive mind, Internet of things, iterative process, job automation, John Markoff, John von Neumann, Loebner Prize, machine readable, machine translation, medical malpractice, Nate Silver, natural language processing, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, nudge unit, obamacare, off grid, OpenAI, paperclip maximiser, pattern recognition, Peace of Westphalia, Philippa Foot, race to the bottom, Ray Kurzweil, Recombinant DNA, Rodney Brooks, self-driving car, Silicon Valley, Stanislav Petrov, Stephen Hawking, Steve Wozniak, strong AI, technological singularity, Tesla Model S, The Coming Technological Singularity, The Future of Employment, The Signal and the Noise by Nate Silver, trolley problem, Turing test, Vernor Vinge

These limited goals might include natural language processing functions like translation, or navigating through an unfamiliar physical environment. A narrow AI system is suited only to the task for which it is designed. The great majority of AI systems in the world today are closer to this narrow and limited type. General (or “strong”) AI is the ability to achieve an unlimited range of goals, and even to set new goals independently, including in situations of uncertainty or vagueness. This encompasses many of the attributes we think of as intelligence in humans. Indeed, general AI is what we see portrayed in the robots and AI of popular culture discussed above.

What about if 20%, 50% or 80% of their mental functioning was the result of computer processing powers? On one view, the answer would be the same—a human should not lose rights just because they have added to their mental functioning. However, consistent with his view that no artificial process can produce “strong” AI which resembles human intelligence, the philosopher John Searle argues that replacement would gradually remove conscious experience.118 Replacement or augmentation of human physical functions with artificial ones does not render someone less deserving of rights.119 Someone who loses an arm and has it replaced with a mechanical version is not considered less human.

See Kill Switch OpenAI Open Roboethics Institute (ORI) Organisation for Economic Co-operation and Development (OECD) P Paris Climate Agreement Partnership on AI to benefit People and Society Positivism Posthumanism Private Law Product liability EU Product Liability Directive US Restatement (Third) of Torts–Products Liability Professions, The Public International Law Q Qualia R Random Darknet Shopper Rawls, John Red Flag Law Rousseau, Jean Jacques S Safe interruptibility Sandbox Saudi Arabia Self-driving cars. See Autonomous vehicles Sexbots Sex Robots. See Sexbots Singularity, The “Shut Down” Problem Slavery Space Law Outer Space Treaty 1967 Stochastic Gradient Descent Strict liability Strong AI. See General AI Subsidiarity Superintelligence Symbolic AI T TD-gammon Teleological Principle TenCent TensorFlow Transhumanism. See Posthumanism Transparency See alsoExplanation, Black Box Problem Trolley Problem Turing Test U UAE, The UK, The UK Financial Conduct Authority (FCA) sandbox Uncanny Valley The US US National Institute of Standards and Technology (NIST) V Vicarious liability Villani Report W Warnock Inquiry Weak AI.


pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict by Kenneth Payne

Abraham Maslow, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, AlphaGo, anti-communist, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, Asperger Syndrome, augmented reality, Automated Insights, autonomous vehicles, backpropagation, Black Lives Matter, Bletchley Park, Boston Dynamics, classic study, combinatorial explosion, computer age, computer vision, Computing Machinery and Intelligence, coronavirus, COVID-19, CRISPR, cuban missile crisis, data science, deep learning, deepfake, DeepMind, delayed gratification, Demis Hassabis, disinformation, driverless car, drone strike, dual-use technology, Elon Musk, functional programming, Geoffrey Hinton, Google X / Alphabet X, Internet of things, job automation, John Nash: game theory, John von Neumann, Kickstarter, language acquisition, loss aversion, machine translation, military-industrial complex, move 37, mutually assured destruction, Nash equilibrium, natural language processing, Nick Bostrom, Norbert Wiener, nuclear taboo, nuclear winter, OpenAI, paperclip maximiser, pattern recognition, RAND corporation, ransomware, risk tolerance, Ronald Reagan, self-driving car, semantic web, side project, Silicon Valley, South China Sea, speech recognition, Stanislav Petrov, stem cell, Stephen Hawking, Steve Jobs, strong AI, Stuxnet, technological determinism, TED Talk, theory of mind, TikTok, Turing machine, Turing test, uranium enrichment, urban sprawl, V2 rocket, Von Neumann architecture, Wall-E, zero-sum game

These are the sorts of skills that might persuade us it had human-like intelligence. Conversely, perhaps those are the wrong sort of skills to be thinking about when gauging intelligence. We certainly have a tendency to judge by human-centric standards. Perhaps we shouldn’t. A tell-tale is the term ‘strong AI’, used by aficionados to mean AI that can perform like a human—flexibly, socially, emotionally. But that’s certainly not the only yardstick for intelligence. After all, we may be very good at those things, but there are plenty of other things we are poor at. We are, for example, weak at statistical reasoning, and our memory is patchy and selective.

A-10 Warthog abacuses Abbottabad, Pakistan Able Archer (1983) acoustic decoys acoustic torpedoes Adams, Douglas Aegis combat system Aerostatic Corps affective empathy Affecto Afghanistan agency aircraft see also dogfighting; drones aircraft carriers algorithms algorithm creation Alpha biases choreography deep fakes DeepMind, see DeepMind emotion recognition F-117 Nighthawk facial recognition genetic selection imagery analysis meta-learning natural language processing object recognition predictive policing alien hand syndrome Aliens (1986 film) Alpha AlphaGo Altered Carbon (television series) Amazon Amnesty International amygdala Andropov, Yuri Anduril Ghost anti-personnel mines ants Apple Aristotle armour arms races Army Research Lab Army Signal Corps Arnalds, Ólafur ARPA Art of War, The (Sun Tzu) art Artificial Intelligence agency and architecture autonomy and as ‘brittle’ connectionism definition of decision-making technology expert systems and feedback loops fuzzy logic innateness intelligence analysis meta-learning as ‘narrow’ needle-in-a-haystack problems neural networks reinforcement learning ‘strong AI’ symbolic logic and unsupervised learning ‘winters’ artificial neural networks Ashby, William Ross Asimov, Isaac Asperger syndrome Astute class boats Atari Breakout (1976) Montezuma’s Revenge (1984) Space Invaders (1978) Athens ATLAS robots augmented intelligence Austin Powers (1997 film) Australia authoritarianism autonomous vehicles see also drones autonomy B-21 Raider B-52 Stratofortress B2 Spirit Baby X BAE Systems Baghdad, Iraq Baidu balloons ban, campaigns for Banks, Iain Battle of Britain (1940) Battle of Fleurus (1794) Battle of Midway (1942) Battle of Sedan (1940) batwing design BBN Beautiful Mind, A (2001 film) beetles Bell Laboratories Bengio, Yoshua Berlin Crisis (1961) biases big data Bin Laden, Osama binary code biological weapons biotechnology bipolarity bits Black Lives Matter Black Mirror (television series) Blade Runner (1982 film) Blade Runner 2049 (2017 film) Bletchley Park, Buckinghamshire blindness Blunt, Emily board games, see under games boats Boden, Margaret bodies Boeing MQ-25 Stingray Orca submarines Boolean logic Boston Dynamics Bostrom, Nick Boyd, John brain amygdala bodies and chunking dopamine emotion and genetic engineering and language and mind merge and morality and plasticity prediction and subroutines umwelts and Breakout (1976 game) breathing control brittleness brute force Buck Rogers (television series) Campaign against Killer Robots Carlsen, Magnus Carnegie Mellon University Casino Royale (2006 film) Castro, Fidel cat detector centaur combination Central Intelligence Agency (CIA) centre of gravity chaff Challenger Space Shuttle disaster (1986) Chauvet cave, France chemical weapons Chernobyl nuclear disaster (1986) chess centaur teams combinatorial explosion and creativity in Deep Blue game theory and MuZero as toy universe chicken (game) chimeras chimpanzees China aircraft carriers Baidu COVID-19 pandemic (2019–21) D-21 in genetic engineering in GJ-11 Sharp Sword nuclear weapons surveillance in Thucydides trap and US Navy drone seizure (2016) China Lake, California Chomsky, Noam choreography chunking Cicero civilians Clarke, Arthur Charles von Clausewitz, Carl on character on culmination on defence on genius on grammar of war on materiel on nature on poker on willpower on wrestling codebreaking cognitive empathy Cold War (1947–9) arms race Berlin Crisis (1961) Cuban Missile Crisis (1962) F-117 Nighthawk Iran-Iraq War (1980–88) joint action 
Korean War (1950–53) nuclear weapons research and SR-71 Blackbird U2 incident (1960) Vienna Summit (1961) Vietnam War (1955–75) VRYAN Cole, August combinatorial creativity combinatorial explosion combined arms common sense computers creativity cyber security games graphics processing unit (GPU) mice Moore’s Law symbolic logic viruses VRYAN confirmation bias connectionism consequentialism conservatism Convention on Conventional Weapons ConvNets copying Cormorant cortical interfaces cost-benefit analysis counterfactual regret minimization counterinsurgency doctrine courageous restraint COVID-19 pandemic (2019–21) creativity combinatorial exploratory genetic engineering and mental disorders and transformational criminal law CRISPR, crows Cruise, Thomas Cuban Missile Crisis (1962) culmination Culture novels (Banks) cyber security cybernetics cyborgs Cyc cystic fibrosis D-21 drones Damasio, Antonio dance DARPA autonomous vehicle research battlespace manager codebreaking research cortical interface research cyborg beetle Deep Green expert system programme funding game theory research LongShot programme Mayhem Ng’s helicopter Shakey understanding and reason research unmanned aerial combat research Dartmouth workshop (1956) Dassault data DDoS (distributed denial-of-service) dead hand system decision-making technology Deep Blue deep fakes Deep Green DeepMind AlphaGo Atari playing meta-learning research MuZero object recognition research Quake III competition (2019) deep networks defence industrial complex Defence Innovation Unit Defence Science and Technology Laboratory defence delayed gratification demons deontological approach depth charges Dionysus DNA (deoxyribonucleic acid) dodos dogfighting Alpha domains dot-matrix tongue Dota II (2013 game) double effect drones Cormorant D-21 GJ-11 Sharp Sword Global Hawk Gorgon Stare kamikaze loitering munitions nEUROn operators Predator Reaper reconnaissance RQ-170 Sentinel S-70 Okhotnik surveillance swarms Taranis wingman role X-37 X-47b dual use technology Eagleman, David early warning systems Echelon economics Edge of Tomorrow (2014 film) Eisenhower, Dwight Ellsberg, Daniel embodied cognition emotion empathy encryption entropy environmental niches epilepsy epistemic community escalation ethics Asimov’s rules brain and consequentialism deep brain stimulation and deontological approach facial recognition and genetic engineering and golden rule honour hunter-gatherer bands and identity just war post-conflict reciprocity regulation surveillance and European Union (EU) Ex Machina (2014 film) expert systems exploratory creativity extra limbs Eye in the Sky (2015 film) F-105 Thunderchief F-117 Nighthawk F-16 Fighting Falcon F-22 Raptor F-35 Lightning F/A-18 Hornet Facebook facial recognition feedback loops fighting power fire and forget firmware 5G cellular networks flow fog of war Ford forever wars FOXP2 gene Frahm, Nils frame problem France Fukushima nuclear disaster (2011) Future of Life Institute fuzzy logic gait recognition game theory games Breakout (1976) chess, see chess chicken Dota II (2013) Go, see Go Montezuma’s Revenge (1984) poker Quake III (1999) Space Invaders (1978) StarCraft II (2010) toy universes zero sum games gannets ‘garbage in, garbage out’ Garland, Alexander Gates, William ‘Bill’ Gattaca (1997 film) Gavotti, Giulio Geertz, Clifford generalised intelligence measure Generative Adversarial Networks genetic engineering genetic selection algorithms genetically modified crops genius Germany Berlin Crisis (1961) Nuremburg Trials (1945–6) 
Russian hacking operation (2015) World War I (1914–18) World War II (1939–45) Ghost in the Shell (comic book) GJ-11 Sharp Sword Gladwell, Malcolm Global Hawk drone global positioning system (GPS) global workspace Go (game) AlphaGo Gödel, Kurt von Goethe, Johann golden rule golf Good Judgment Project Google BERT Brain codebreaking research DeepMind, see DeepMind Project Maven (2017–) Gordievsky, Oleg Gorgon Stare GPT series grammar of war Grand Challenge aerial combat autonomous vehicles codebreaking graphics processing unit (GPU) Greece, ancient grooming standard Groundhog Day (1993 film) groupthink guerilla warfare Gulf War First (1990–91) Second (2003–11) hacking hallucinogenic drugs handwriting recognition haptic vest hardware Harpy Hawke, Ethan Hawking, Stephen heat-seeking missiles Hebrew Testament helicopters Hellfire missiles Her (2013 film) Hero-30 loitering munitions Heron Systems Hinton, Geoffrey Hitchhiker’s Guide to the Galaxy, The (Adams) HIV (human immunodeficiency viruses) Hoffman, Frank ‘Holeshot’ (Cole) Hollywood homeostasis Homer homosexuality Hongdu GJ-11 Sharp Sword honour Hughes human in the loop human resources human-machine teaming art cyborgs emotion games King Midas problem prediction strategy hunter-gatherer bands Huntingdon’s disease Hurricane fighter aircraft hydraulics hypersonic engines I Robot (Asimov) IARPA IBM identity Iliad (Homer) image analysis image recognition cat detector imagination Improbotics nformation dominance information warfare innateness intelligence analysts International Atomic Energy Agency International Criminal Court international humanitarian law internet of things Internet IQ (intelligence quotient) Iran Aegis attack (1988) Iraq War (1980–88) nuclear weapons Stuxnet attack (2010) Iraq Gulf War I (1990–91) Gulf War II (2003–11) Iran War (1980–88) Iron Dome Israel Italo-Turkish War (1911–12) Jaguar Land Rover Japan jazz JDAM (joint directed attack munition) Jeopardy Jobs, Steven Johansson, Scarlett Johnson, Lyndon Joint Artificial Intelligence Center (JAIC) de Jomini, Antoine jus ad bellum jus in bello jus post bellum just war Kalibr cruise missiles kamikaze drones Kasparov, Garry Kellogg Briand Pact (1928) Kennedy, John Fitzgerald KGB (Komitet Gosudarstvennoy Bezopasnosti) Khrushchev, Nikita kill chain King Midas problem Kissinger, Henry Kittyhawk Knight Rider (television series) know your enemy know yourself Korean War (1950–53) Kratos XQ-58 Valkyrie Kubrick, Stanley Kumar, Vijay Kuwait language connectionism and genetic engineering and natural language processing pattern recognition and semantic webs translation universal grammar Law, Jude LeCun, Yann Lenat, Douglas Les, Jason Libratus lip reading Litvinenko, Alexander locked-in patients Lockheed dogfighting trials F-117 Nighthawk F-22 Raptor F-35 Lightning SR-71 Blackbird logic loitering munitions LongShot programme Lord of the Rings (2001–3 film trilogy) LSD (lysergic acid diethylamide) Luftwaffe madman theory Main Battle Tanks malum in se Manhattan Project (1942–6) Marcus, Gary Maslow, Abraham Massachusetts Institute of Technology (MIT) Matrix, The (1999 film) Mayhem McCulloch, Warren McGregor, Wayne McNamara, Robert McNaughton, John Me109 fighter aircraft medical field memory Merkel, Angela Microsoft military industrial complex Mill, John Stuart Milrem mimicry mind merge mind-shifting minimax regret strategy Minority Report (2002 film) Minsky, Marvin Miramar air base, San Diego missiles Aegis combat system agency and anti-missile gunnery heat-seeking Hellfire missiles 
intercontinental Kalibr cruise missiles nuclear warheads Patriot missile interceptor Pershing II missiles Scud missiles Tomahawk cruise missiles V1 rockets V2 rockets mission command mixed strategy Montezuma’s Revenge (1984 game) Moore’s Law mosaic warfare Mueller inquiry (2017–19) music Musk, Elon Mutually Assured Destruction (MAD) MuZero Nagel, Thomas Napoleon I, Emperor of the French Napoleonic France (1804–15) narrowness Nash equilibrium Nash, John National Aeronautics and Space Administration (NASA) National Security Agency (NSA) National War College natural language processing natural selection Nature navigation computers Nazi Germany (1933–45) needle-in-a-haystack problems Netflix network enabled warfare von Neumann, John neural networks neurodiversity nEUROn drone neuroplasticity Ng, Andrew Nixon, Richard normal accident theory North Atlantic Treaty Organization (NATO) North Korea nuclear weapons Cuban Missile Crisis (1962) dead hand system early warning systems F-105 Thunderchief and game theory and Hiroshima and Nagasaki bombings (1945) Manhattan Project (1942–6) missiles Mutually Assured Destruction (MAD) second strike capability submarines and VRYAN and in WarGames (1983 film) Nuremburg Trials (1945–6) Obama, Barack object recognition Observe Orient Decide and Act (OODA) offence-defence balance Office for Naval Research Olympic Games On War (Clausewitz), see Clausewitz, Carl OpenAI optogenetics Orca submarines Ottoman Empire (1299–1922) pain Pakistan Palantir Palmer, Arnold Pandemonium Panoramic Research Papert, Seymour Parkinson’s disease Patriot missile interceptors pattern recognition Pearl Harbor attack (1941) Peloponnesian War (431–404 BCE) Pentagon autonomous vehicle research codebreaking research computer mouse development Deep Green Defence Innovation Unit Ellsberg leaks (1971) expert system programme funding ‘garbage in, garbage out’ story intelligence analysts Project Maven (2017–) Shakey unmanned aerial combat research Vietnam War (1955–75) perceptrons Perdix Pershing II missiles Petrov, Stanislav Phalanx system phrenology pilot’s associate Pitts, Walter platform neutrality Pluribus poker policing polygeneity Portsmouth, Hampshire Portuguese Man o’ War post-traumatic stress disorder (PTSD) Predator drones prediction centaur teams ‘garbage in, garbage out’ story policing toy universes VRYAN Prescience principles of war prisoners Project Improbable Project Maven (2017–) prosthetic arms proximity fuses Prussia (1701–1918) psychology psychopathy punishment Putin, Vladimir Pyeongchang Olympics (2018) Qinetiq Quake III (1999 game) radar Rafael RAND Corporation rational actor model Rawls, John Re:member (Arnalds) Ready Player One (Cline) Reagan, Ronald Reaper drones reciprocal punishment reciprocity reconnaissance regulation ban, campaigns for defection self-regulation reinforcement learning remotely piloted air vehicles (RPAVs) revenge porn revolution in military affairs Rid, Thomas Robinson, William Heath Robocop (1987 film) Robotics Challenge robots Asimov’s rules ATLAS Boston Dynamics homeostatic Shakey symbolic logic and Rome Air Defense Center Rome, ancient Rosenblatt, Frank Royal Air Force (RAF) Royal Navy RQ-170 Sentinel Russell, Stuart Russian Federation German hacking operation (2015) Litvinenko murder (2006) S-70 Okhotnik Skripal poisoning (2018) Ukraine War (2014–) US election interference (2016) S-70 Okhotnik SAGE Said and Done’ (Frahm) satellite navigation satellites Saudi Arabia Schelling, Thomas schizophrenia Schwartz, Jack Sea Hunter security dilemma Sedol, 
Lee self-actualisation self-awareness self-driving cars Selfridge, Oliver semantic webs Shakey Shanahan, Murray Shannon, Claude Shogi Silicon Valley Simon, Herbert Single Integrated Operations Plan (SIOP) singularity Siri situational awareness situationalist intelligence Skripal, Sergei and Yulia Slaughterbots (2017 video) Slovic, Paul smartphones Smith, Willard social environments software Sophia Sorcerer’s Apprentice, The (Goethe) South China Sea Soviet Union (1922–91) aircraft Berlin Crisis (1961) Chernobyl nuclear disaster (1986) Cold War (1947–9), see Cold War collapse (1991) Cuban Missile Crisis (1962) early warning systems Iran-Iraq War (1980–88) Korean War (1950–53) nuclear weapons radar technology U2 incident (1960) Vienna Summit (1961) Vietnam War (1955–75) VRYAN World War II (1939–45) Space Invaders (1978 game) SpaceX Sparta Spike Firefly loitering munitions Spitfire fighter aircraft Spotify Stanford University Stanley Star Trek (television series) StarCraft II (2010 game) stealth strategic bombing strategic computing programme strategic culture Strategy Robot strategy Strava Stuxnet sub-units submarines acoustic decoys nuclear Orca South China Sea incident (2016) subroutines Sukhoi Sun Tzu superforecasting surveillance swarms symbolic logic synaesthesia synthetic operation environment Syria Taliban tanks Taranis drone technological determinism Tempest Terminator franchise Tesla Tetlock, Philip theory of mind Threshold Logic Unit Thucydides TikTok Tomahawk cruise missiles tongue Top Gun (1986 film) Top Gun: Maverick (2021 film) torpedoes toy universes trade-offs transformational creativity translation Trivers, Robert Trump, Donald tumours Turing, Alan Twitter 2001: A Space Odyssey (1968 film) Type-X Robotic Combat Vehicle U2 incident (1960) Uber Uexküll, Jacob Ukraine ultraviolet light spectrum umwelts uncanny valley unidentified flying objects (UFOs) United Kingdom AI weapons policy armed force, size of Battle of Britain (1940) Bletchley Park codebreaking Blitz (1940–41) Cold War (1947–9) COVID-19 pandemic (2019–21) DeepMind, see DeepMind F-35 programme fighting power human rights legislation in Litvinenko murder (2006) nuclear weapons principles of war Project Improbable Qinetiq radar technology Royal Air Force Royal Navy Skripal poisoning (2018) swarm research wingman concept World War I (1914–18) United Nations United States Afghanistan War (2001–14) Air Force Army Research Lab Army Signal Corps Battle of Midway (1942) Berlin Crisis (1961) Bin Laden assassination (2011) Black Lives Matter protests (2020) centaur team research Central Intelligence Agency (CIA) Challenger Space Shuttle disaster (1986) Cold War (1947–9), see Cold War COVID-19 pandemic (2019–21) Cuban Missile Crisis (1962) culture cyber security DARPA, see DARPA Defense Department drones early warning systems F-35 programme Gulf War I (1990–91) Gulf War II (2003–11) IARPA Iran Air shoot-down (1988) Korean War (1950–53) Manhattan Project (1942–6) Marines Mueller inquiry (2017–19) National Security Agency National War College Navy nuclear weapons Office for Naval Research Patriot missile interceptor Pearl Harbor attack (1941) Pentagon, see Pentagon Project Maven (2017–) Rome Air Defense Center Silicon Valley strategic computing programme U2 incident (1960) Vienna Summit (1961) Vietnam War (1955–75) universal grammar Universal Schelling Machine (USM) unmanned aerial vehicles (UAVs), see drones unsupervised learning utilitarianism UVision V1 rockets V2 rockets Vacanti mouse Valkyries Van Gogh, Vincent Vietnam War 
(1955–75) Vigen, Tyler Vincennes, USS voice assistants VRYAN Wall-e (2008 film) WannaCry ransomware War College, see National War College WarGames (1983 film) warrior ethos Watson weapon systems WhatsApp Wiener, Norbert Wikipedia wingman role Wittgenstein, Ludwig World War I (1914–18) World War II (1939–45) Battle of Britain (1940) Battle of Midway (1942) Battle of Sedan (1940) Bletchley Park codebreaking Blitz (1940–41) Hiroshima and Nagasaki bombings (1945) Pearl Harbor attack (1941) radar technology V1 rockets V2 rockets VRYAN and Wrangham, Richard Wright brothers WS-43 loitering munitions Wuhan, China X-37 drone X-drone X-rays YouTube zero sum games


pages: 846 words: 232,630

Darwin's Dangerous Idea: Evolution and the Meanings of Life by Daniel C. Dennett

Albert Einstein, Alfred Russel Wallace, anthropic principle, assortative mating, buy low sell high, cellular automata, Charles Babbage, classic study, combinatorial explosion, complexity theory, computer age, Computing Machinery and Intelligence, conceptual framework, Conway's Game of Life, Danny Hillis, double helix, Douglas Hofstadter, Drosophila, finite state, Garrett Hardin, Gregor Mendel, Gödel, Escher, Bach, heat death of the universe, In Cold Blood by Truman Capote, invention of writing, Isaac Newton, Johann Wolfgang von Goethe, John von Neumann, junk bonds, language acquisition, Murray Gell-Mann, New Journalism, non-fiction novel, Peter Singer: altruism, phenotype, price mechanism, prisoner's dilemma, QWERTY keyboard, random walk, Recombinant DNA, Richard Feynman, Rodney Brooks, Schrödinger's Cat, selection bias, Stephen Hawking, Steven Pinker, strong AI, Stuart Kauffman, the scientific method, theory of mind, Thomas Malthus, Tragedy of the Commons, Turing machine, Turing test

Simpler survival machines — plants, for instance — never achieve the heights of self-redefinition made possible by the complexities of your robot; considering them just as survival machines for their comatose inhabitants leaves no patterns in their behavior unexplained. If you pursue this avenue, which of course I recommend, then you must abandon Searle's and Fodor's "principled" objection to "strong AI." The imagined robot, however difficult or unlikely an engineering feat, is not an impossibility — nor do they claim it to be. They concede the possibility of such a robot, but just dispute its "metaphysical status"; however adroitly it managed its affairs, they say, its intentionality would not be the real thing.

Certainly everybody in AI has always known about Gödel's Theorem, and they have all continued, unworried, with their labors. In fact, Hofstadter's classic Gödel, Escher, Bach (1979) can be read as the demonstration that Gödel is an unwilling champion of AI, providing essential insights about the paths to follow to strong AI, not showing the futility of the field. But Roger Penrose, Rouse Ball Professor of Mathematics at Oxford, and one of the world's leading mathematical physicists, thinks otherwise. His challenge has to be taken seriously, even if, as I and others in AI are convinced, he is making a fairly simple mistake.

As a product of biological design processes (both genetic and individual), it is almost certainly one of those algorithms that are somewhere or other in the Vast space of interesting algorithms, full of typographical errors or "bugs," but good enough to bet your life on — so far. Penrose sees this as a "far-fetched" possibility, but if that is all he can say against it, he has not yet come to grips with the best version of "strong AI." 3. THE PHANTOM QUANTUM-GRAVITY COMPUTER: LESSONS FROM LAPLAND I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have


pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, Anthropocene, anti-communist, artificial general intelligence, autism spectrum disorder, autonomous vehicles, backpropagation, barriers to entry, Bayesian statistics, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, Computing Machinery and Intelligence, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, Demis Hassabis, demographic transition, different worldview, Donald Knuth, Douglas Hofstadter, driverless car, Drosophila, Elon Musk, en.wikipedia.org, endogenous growth, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, general purpose technology, Geoffrey Hinton, Gödel, Escher, Bach, hallucination problem, Hans Moravec, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John Markoff, John von Neumann, knowledge worker, Large Hadron Collider, longitudinal study, machine translation, megaproject, Menlo Park, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Nick Bostrom, Norbert Wiener, NP-complete, nuclear winter, operational security, optical character recognition, paperclip maximiser, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, search costs, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, Strategic Defense Initiative, strong AI, superintelligent machines, supervolcano, synthetic biology, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, time dilation, Tragedy of the Commons, transaction costs, trolley problem, Turing machine, Vernor Vinge, WarGames: Global Thermonuclear War, Watson beat the top human players on Jeopardy!, World Values Survey, zero-sum game

One result of this conservatism has been increased concentration on “weak AI”—the variety devoted to providing aids to human thought—and away from “strong AI”—the variety that attempts to mechanize human-level intelligence.73 Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.74 The last few years have seen a resurgence of interest in AI, which might yet spill over into renewed efforts towards artificial general intelligence (what Nilsson calls “strong AI”). In addition to faster hardware, a contemporary project would benefit from the great strides that have been made in the many subfields of AI, in software engineering more generally, and in neighboring fields such as computational neuroscience.

13 K Kasparov, Garry 12 Kepler, Johannes 14 Knuth, Donald 14, 264 Kurzweil, Ray 2, 261, 269 L Lenat, Douglas 12, 263 Logic Theorist (system) 6 logicist paradigm, see Good Old-Fashioned Artificial Intelligence (GOFAI) Logistello 12 M machine intelligence; see also artificial intelligence human-level (HLMI) 4, 19–21, 27–35, 73–74, 207, 243, 264, 267 revolution, see intelligence explosion machine learning 8–18, 28, 121, 152, 188, 274, 290 machine translation 15 macro-structural development accelerator 233–235 malignant failure 123–126, 149, 196 Malthusian condition 163–165, 252 Manhattan Project 75, 80–87, 276 McCarthy, John 5–18 McCulloch–Pitts neuron 237 MegaEarth 56 memory capacity 7–9, 60, 71 memory sharing 61 Mill, John Stuart 210 mind crime 125–126, 153, 201–208, 213, 226, 297 Minsky, Marvin 18, 261, 262, 282 Monte Carlo method 9–13 Moore’s law 24–25, 73–77, 274, 286; see also computing power moral growth 214 moral permissibility (MP)218–220, 297 moral rightness (MR)217–220.296, 297 moral status 125–126, 166–169, 173, 202–205, 268, 288, 296 Moravec, Hans 24, 265, 288 motivation selection 29, 127–129, 138–144, 147, 158, 168, 180–191, 222 definition 138 motivational scaffolding 191, 207 multipolar scenarios 90, 132, 159–184, 243–254, 301 mutational load 41 N nanotechnology 53, 94–98, 103, 113, 177, 231, 239, 276, 277, 299, 300 natural language 14 neural networks 5–9, 28, 46, 173, 237, 262, 274 neurocomputational modeling 25–30, 35, 61, 301; see also whole brain emulation (WBE) and neuromorphic AI neuromorphic AI 28, 34, 47, 237–245, 267, 300, 301 Newton, Isaac 56 Nilsson, Nils 18–20, 264 nootropics 36–44, 66–67, 201, 267 Norvig, Peter 19, 264, 282 O observation selection theory, see anthropics Oliphant, Mark 85 O’Neill, Gerard 101 ontological crisis 146, 197 optimality notions 10, 186, 194, 291–293 Bayesian agent 9–11 value learner (AI-VL) 194 observation-utility-maximizer (AI-OUM) 194 reinforcement learner (AI-RL) 194 optimization power 24, 62–75, 83, 92–96, 227, 274 definition 65 oracle AI 141–158, 222–226, 285, 286 definition 146 orthogonality thesis 105–109, 115, 279, 280 P paperclip AI 107–108, 123–125, 132–135, 153, 212, 243 Parfit, Derek 279 Pascal’s mugging 223, 298 Pascal’s wager 223 person-affecting perspective 228, 245–246, 301 perverse instantiation 120–124, 153, 190–196 poker 13 principal–agent problem 127–128, 184 Principle of Epistemic Deference 211, 221 Proverb (program) 12 Q qualia, see consciousness quality superintelligence 51–58, 72, 243, 272 definition 56 R race dynamic, see technology race rate of growth, see growth ratification 222–225 Rawls, John 150 Reagan, Ronald 86–87 reasons-based goal 220 recalcitrance 62–77, 92, 241, 274 definition 65 recursive self-improvement 29, 75, 96, 142, 259; see also seed AI reinforcement learning 12, 28, 188–189, 194–196, 207, 237, 277, 282, 290 resource acquisition 113–116, 123, 193 reward signal 71, 121–122, 188, 194, 207 Riemann hypothesis catastrophe 123, 141 robotics 9–19, 94–97, 117–118, 139, 238, 276, 290 Roosevelt, Franklin D.85 RSA encryption scheme 80 Russell, Bertrand 6, 87, 139, 277 S Samuel, Arthur 12 Sandberg, Anders 265, 267, 272, 274 scanning, see whole brain emulation (WBE) Schaeffer, Jonathan 12 scheduling 15 Schelling point 147, 183, 296 Scrabble 13 second transition 176–178, 238, 243–245, 252 second-guessing (arguments) 238–239 seed AI 23–29, 36, 75, 83, 92–96, 107, 116–120, 142, 151, 189–198, 201–217, 224–225, 240–241, 266, 274, 275, 282 self-limiting goal 123 Shakey (robot) 6 SHRDLU (program) 6 Shulman, Carl 
178–180, 265, 287, 300, 302, 304 simulation hypothesis 134–135, 143, 278, 288, 292 singleton 78–90, 95–104, 112–114, 115–126, 136, 159, 176–184, 242, 275, 276, 279, 281, 287, 299, 301, 303 definition 78, 100 singularity 1, 2, 49, 75, 261, 274; see also intelligence explosion social signaling 110 somatic gene therapy 42 sovereign AI 148–158, 187, 226, 285 speech recognition 15–16, 46 speed superintelligence 52–58, 75, 270, 271 definition 53 Strategic Defense Initiative (“Star Wars”) 86 strong AI 18 stunting 135–137, 143 sub-symbolic processing, see connectionism superintelligence; see also collective superintelligence, quality superintelligence and speed superintelligence definition 22, 52 forms 52, 59 paths to 22, 50 predicting the behavior of 108, 155, 302 superorganisms 178–180 superpowers 52–56, 80, 86–87, 91–104, 119, 133, 148, 277, 279, 296 types 94 surveillance 15, 49, 64, 82–85, 94, 117, 132, 181, 232, 253, 276, 294, 299 Szilárd, Leó 85 T TD-Gammon 12 Technological Completion Conjecture 112–113, 229 technology race 80–82, 86–90 203–205, 231, 246–252, 302 teleological threads 110 Tesauro, Gerry 12 TextRunner (system) 71 theorem prover 15, 266 three laws of robotics 139, 284 Thrun, Sebastian 19 tool-AI 151–158 definition 151 treacherous turn 116–119, 128 Tribolium castaneum 154 tripwires 137–143 Truman, Harry 85 Turing, Alan 4, 23, 29, 44, 225, 265, 271, 272 U unemployment 65, 159–180, 287 United Nations 87–89, 252–253 universal accelerator 233 unmanned vehicle, see drone uploading, see whole brain emulation (WBE) utility function 10–11, 88, 100, 110, 119, 124–125, 133–134, 172, 185–187, 192–208, 290, 292, 293, 303 V value learning 191–198, 208, 293 value-accretion 189–190, 207 value-loading 185–208, 293, 294 veil of ignorance 150, 156, 253, 285 Vinge, Vernor 2, 49, 270 virtual reality 30, 31, 53, 113, 166, 171, 198, 204, 300 von Neumann probe 100–101, 113 von Neumann, John 44, 87, 114, 261, 277, 281 W wages 65, 69, 160–169 Watson (IBM) 13, 71 WBE, see whole brain emulation (WBE) Whitehead, Alfred N.6 whole brain emulation (WBE) 28–36, 50, 60, 68–73, 77, 84–85, 108, 172, 198, 201–202, 236–245, 252, 266, 267, 274, 299, 300, 301 Wigner, Eugene 85 windfall clause 254, 303 Winston, Patrick 18 wire-heading 122–123, 133, 189, 194, 207, 282, 291 wise-singleton sustainability threshold 100–104, 279 world economy 2–3, 63, 74, 83, 159–184, 274, 277, 285 Y Yudkowsky, Eliezer 70, 92, 98, 106, 197, 211–216, 266, 273, 282, 286, 291, 299


pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by Amy Webb

"Friedman doctrine" OR "shareholder theory", Ada Lovelace, AI winter, air gap, Airbnb, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic bias, AlphaGo, Andy Rubin, artificial general intelligence, Asilomar, autonomous vehicles, backpropagation, Bayesian statistics, behavioural economics, Bernie Sanders, Big Tech, bioinformatics, Black Lives Matter, blockchain, Bretton Woods, business intelligence, Cambridge Analytica, Cass Sunstein, Charles Babbage, Claude Shannon: information theory, cloud computing, cognitive bias, complexity theory, computer vision, Computing Machinery and Intelligence, CRISPR, cross-border payments, crowdsourcing, cryptocurrency, Daniel Kahneman / Amos Tversky, data science, deep learning, DeepMind, Demis Hassabis, Deng Xiaoping, disinformation, distributed ledger, don't be evil, Donald Trump, Elon Musk, fail fast, fake news, Filter Bubble, Flynn Effect, Geoffrey Hinton, gig economy, Google Glasses, Grace Hopper, Gödel, Escher, Bach, Herman Kahn, high-speed rail, Inbox Zero, Internet of things, Jacques de Vaucanson, Jeff Bezos, Joan Didion, job automation, John von Neumann, knowledge worker, Lyft, machine translation, Mark Zuckerberg, Menlo Park, move fast and break things, Mustafa Suleyman, natural language processing, New Urbanism, Nick Bostrom, one-China policy, optical character recognition, packet switching, paperclip maximiser, pattern recognition, personalized medicine, RAND corporation, Ray Kurzweil, Recombinant DNA, ride hailing / ride sharing, Rodney Brooks, Rubik’s Cube, Salesforce, Sand Hill Road, Second Machine Age, self-driving car, seminal paper, SETI@home, side project, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart cities, South China Sea, sovereign wealth fund, speech recognition, Stephen Hawking, strong AI, superintelligent machines, surveillance capitalism, technological singularity, The Coming Technological Singularity, the long tail, theory of mind, Tim Cook: Apple, trade route, Turing machine, Turing test, uber lyft, Von Neumann architecture, Watson beat the top human players on Jeopardy!, zero day

Text and signatories available online. https://futureoflife.org/ai-principles/. Gaddis, J. L. The Cold War: A New History. New York: Penguin Press, 2006. . On Grand Strategy. New York: Penguin Press, 2018. Gilder, G. F., and Ray Kurzweil. Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI. edited by Jay Wesley Richards. Seattle: Discovery Institute Press, 2001. Goertzel, B., and C. Pennachin, eds. Artificial General Intelligence. Cognitive Technologies Series. Berlin: Springer, 2007. doi:10.1007/978-3-540-68677-4. Gold, E. M. “Language Identification in the Limit.” Information and Control 10, no. 5 (1967): 447–474.

Deciding is a computational activity, something that can be programmed. Choice, however, is the product of judgment, not calculation. It is the capacity to choose that ultimately makes us human. University of California, Berkeley, philosopher John Searle, in his paper “Minds, Brains, and Programs,” argued against the plausibility of general, or what he called “strong,” AI. Searle said a program cannot give a computer a “mind,” “understanding,” or “consciousness,” regardless of how humanlike the program might behave. 34. Jonathan Schaeffer, Robert Lake, Paul Lu, and Martin Bryant, “CHINOOK: The World Man-Machine Checkers Champion,” AI Magazine 17, no. 1 (Spring 1996): 21–29, https://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1208/1109.pdf. 35.


pages: 345 words: 104,404

Pandora's Brain by Calum Chace

AI winter, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, Bletchley Park, brain emulation, Extropian, friendly AI, hive mind, lateral thinking, machine translation, mega-rich, Nick Bostrom, precautionary principle, Ray Kurzweil, self-driving car, Silicon Valley, Singularitarianism, Skype, speech recognition, stealth mode startup, Stephen Hawking, strong AI, technological singularity, theory of mind, Turing test, Wall-E

‘This all sounds like an argument for stopping people working on strong AI?’ asked Matt. ‘Although I guess that would be hard to do. There are too many people working in the field, and as you say, a lot of them show no sign of understanding the danger.’ ‘You’re right,’ Ivan agreed, ‘we’re on a runaway train that cannot be stopped. Some science fiction novels feature a powerful police force – the Turing Police – that keeps watch to ensure that no-one creates a human-level artificial intelligence. But that’s hopelessly unrealistic. The prize – both intellectual and material – for owning an AGI is too great. Strong AI is coming, whether we like it or not.’


Robot Futures by Illah Reza Nourbakhsh

3D printing, autonomous vehicles, Burning Man, business logic, commoditize, computer vision, digital divide, Mars Rover, Menlo Park, phenotype, Skype, social intelligence, software as a service, stealth mode startup, strong AI, telepresence, telepresence robot, Therac-25, Turing test, Vernor Vinge

Eye tracking A skill enabling a robot to visually examine the scene before it, identify the faces in the scene, mark the location of the eyes on each face, and then find the irises so that the gaze directions of the humans are known. Humans are particularly good at this even when we face other people at acute angles. Hard AI Also known as strong AI, this embodies the AI goal of going all the way toward human equivalence: matching natural intelligence along every possible axis so that artificial beings and natural humans are, at least from a cognitive point of view, indistinguishable. Laser cutting A rapid-prototyping technique in which flat material such as plastic or metal lays on a table and a high-power laser is able to rapidly cut a complex two-dimensional shape out of the raw material.
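The glossary entry above describes eye tracking as a pipeline: find faces, locate the eyes on each face, then localise the irises to infer gaze. A minimal sketch of that pipeline is shown below, using OpenCV's bundled Haar cascades; the input file name, the function names, and the crude "eye-box centre" stand-in for iris localisation are illustrative assumptions, not anything taken from Nourbakhsh's book.

```python
# Sketch of a face -> eyes -> (rough) gaze pipeline using OpenCV Haar cascades.
# Real systems localise the iris precisely and map its offset to a gaze vector.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def rough_gaze_points(bgr_image):
    """Return a list of (face_box, eye_centres) tuples for one frame."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    results = []
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[fy:fy + fh, fx:fx + fw]
        eyes = []
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            # Use the eye-box centre as a crude stand-in for the iris; a real
            # system would fit the iris (e.g. a Hough circle) and map its
            # offset within the eye box to a gaze direction.
            eyes.append((fx + ex + ew // 2, fy + ey + eh // 2))
        results.append(((fx, fy, fw, fh), eyes))
    return results

if __name__ == "__main__":
    frame = cv2.imread("people.jpg")  # hypothetical input image
    if frame is not None:
        for face, eyes in rough_gaze_points(frame):
            print("face at", face, "eye centres:", eyes)
```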


pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence by George Zarkadakis

3D printing, Ada Lovelace, agricultural Revolution, Airbnb, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, animal electricity, anthropic principle, Asperger Syndrome, autonomous vehicles, barriers to entry, battle of ideas, Berlin Wall, bioinformatics, Bletchley Park, British Empire, business process, carbon-based life, cellular automata, Charles Babbage, Claude Shannon: information theory, combinatorial explosion, complexity theory, Computing Machinery and Intelligence, continuous integration, Conway's Game of Life, cosmological principle, dark matter, data science, deep learning, DeepMind, dematerialisation, double helix, Douglas Hofstadter, driverless car, Edward Snowden, epigenetics, Flash crash, Google Glasses, Gödel, Escher, Bach, Hans Moravec, income inequality, index card, industrial robot, intentional community, Internet of things, invention of agriculture, invention of the steam engine, invisible hand, Isaac Newton, Jacquard loom, Jacques de Vaucanson, James Watt: steam engine, job automation, John von Neumann, Joseph-Marie Jacquard, Kickstarter, liberal capitalism, lifelogging, machine translation, millennium bug, mirror neurons, Moravec's paradox, natural language processing, Nick Bostrom, Norbert Wiener, off grid, On the Economy of Machinery and Manufactures, packet switching, pattern recognition, Paul Erdős, Plato's cave, post-industrial society, power law, precautionary principle, prediction markets, Ray Kurzweil, Recombinant DNA, Rodney Brooks, Second Machine Age, self-driving car, seminal paper, Silicon Valley, social intelligence, speech recognition, stem cell, Stephen Hawking, Steven Pinker, Strategic Defense Initiative, strong AI, Stuart Kauffman, synthetic biology, systems thinking, technological singularity, The Coming Technological Singularity, The Future of Employment, the scientific method, theory of mind, Turing complete, Turing machine, Turing test, Tyler Cowen, Tyler Cowen: Great Stagnation, Vernor Vinge, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K

Thirdly, that intelligence, from its simplest manifestation in a squirming worm to self-awareness and consciousness in sophisticated cappuccino-sipping humans, is a purely material, indeed biological, phenomenon. Finally, that if a material object called ‘brain’ can be conscious then it is theoretically feasible that another material object, made of some other material stuff, can also be conscious. Based on those four propositions, empiricism tells us that ‘strong AI’ is possible. And that’s because, for empiricists, a brain is an information-processing machine, not metaphorically but literally. We have several billion cells in our body.27 If we adopt an empirical perspective, the scientific problem of intelligence – or consciousness, natural or artificial – can be (re)defined as a simple question: how can several billion unconscious nanorobots arrive at consciousness?

And although they produced some very capable systems, none of them could arguably be called intelligent. Of course, how one defines intelligence is also crucial. For the pioneers of AI, ‘artificial intelligence’ was nothing less than the artificial equivalent of human intelligence, a position nowadays referred to as ‘strong AI’. An intelligent machine ought to be one that possessed general intelligence, just like a human. This meant that the machine ought to be able to solve any problem using first principles and experience derived from learning. Early models of general problem-solving were built, but could not scale up. Systems could solve one general problem but not any general problem.6 Algorithms that searched data in order to make general inferences failed quickly because of something called ‘combinatorial explosion’: there were simply too many interrelated parameters and variables to calculate after a number of steps.
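The ‘combinatorial explosion’ named in the passage is easy to make concrete: with N interacting binary parameters, an exhaustive reasoner must consider 2^N assignments, so the search space outgrows any computer after a handful of additional parameters. The toy sketch below is illustrative only; the function name and parameter counts are assumptions, not anything from Zarkadakis's text.

```python
# Toy illustration of combinatorial explosion: the number of candidate
# states an exhaustive inference procedure must examine doubles with every
# additional binary parameter it tracks.
from itertools import product

def candidate_states(num_parameters: int) -> int:
    """Number of assignments an exhaustive search would have to examine."""
    return 2 ** num_parameters

for n in (10, 20, 30, 40, 50):
    print(f"{n:>2} parameters -> {candidate_states(n):,} states to search")

# Even enumerating a "small" space is already a million-item loop:
count = sum(1 for _ in product([0, 1], repeat=20))
print("enumerated", count, "states for just 20 parameters")
```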


Succeeding With AI: How to Make AI Work for Your Business by Veljko Krunic

AI winter, Albert Einstein, algorithmic trading, AlphaGo, Amazon Web Services, anti-fragile, anti-pattern, artificial general intelligence, autonomous vehicles, Bayesian statistics, bioinformatics, Black Swan, Boeing 737 MAX, business process, cloud computing, commoditize, computer vision, correlation coefficient, data is the new oil, data science, deep learning, DeepMind, en.wikipedia.org, fail fast, Gini coefficient, high net worth, information retrieval, Internet of things, iterative process, job automation, Lean Startup, license plate recognition, minimum viable product, natural language processing, recommendation engine, self-driving car, sentiment analysis, Silicon Valley, six sigma, smart cities, speech recognition, statistical model, strong AI, tail risk, The Design of Experiments, the scientific method, web application, zero-sum game

While the logic from the sidebar “Imagine that you’re a CEO” applies to businesses such as Google, Baidu, or Microsoft, there’s an unfortunate tendency for many enterprises to emulate these companies without understanding the rationale behind their actions. Yes, the biggest players make significant money with their AI efforts. They also invest a lot in AI research. Before you start emulating their AI research efforts, ask yourself, “Am I in the same business?” If your company were to invent something important for strong AI/AGI [76], would you know how to monetize it? Suppose you’re a large brick-and-mortar retailer. Could you take full advantage of that discovery? Probably not—the retailer’s business is different from Google’s. Almost certainly, your company would benefit more from AI technology if you used it to solve your own concrete business problems.

AI will make actuarial mistakes that an average human, uninformed about AI, will see as malicious. Juries, whether in court or in the court of public opinion, are made up of humans. WARNING There’s no way to know if AI will ever develop common sense. It may not for quite a while; maybe not even until we get strong AI/Artificial General Intelligence [76]. Accounting for AI’s actuarial view is a part of your problem domain and part of why understanding your domain is crucial. It’s difficult to account for the differences between the actuarial view AI takes and human social expectations. Accounting for those differences is not an engineering problem and something that you should pass on to the engineering team to solve.


pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence by John Brockman

Adam Curtis, agricultural Revolution, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic trading, Anthropocene, artificial general intelligence, augmented reality, autism spectrum disorder, autonomous vehicles, backpropagation, basic income, behavioural economics, bitcoin, blockchain, bread and circuses, Charles Babbage, clean water, cognitive dissonance, Colonization of Mars, complexity theory, computer age, computer vision, constrained optimization, corporate personhood, cosmological principle, cryptocurrency, cuban missile crisis, Danny Hillis, dark matter, data science, deep learning, DeepMind, Demis Hassabis, digital capitalism, digital divide, digital rights, discrete time, Douglas Engelbart, driverless car, Elon Musk, Emanuel Derman, endowment effect, epigenetics, Ernest Rutherford, experimental economics, financial engineering, Flash crash, friendly AI, functional fixedness, global pandemic, Google Glasses, Great Leap Forward, Hans Moravec, hive mind, Ian Bogost, income inequality, information trail, Internet of things, invention of writing, iterative process, James Webb Space Telescope, Jaron Lanier, job automation, Johannes Kepler, John Markoff, John von Neumann, Kevin Kelly, knowledge worker, Large Hadron Collider, lolcat, loose coupling, machine translation, microbiome, mirror neurons, Moneyball by Michael Lewis explains big data, Mustafa Suleyman, natural language processing, Network effects, Nick Bostrom, Norbert Wiener, paperclip maximiser, pattern recognition, Peter Singer: altruism, phenotype, planetary scale, Ray Kurzweil, Recombinant DNA, recommendation engine, Republic of Letters, RFID, Richard Thaler, Rory Sutherland, Satyajit Das, Search for Extraterrestrial Intelligence, self-driving car, sharing economy, Silicon Valley, Skype, smart contracts, social intelligence, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, superintelligent machines, supervolcano, synthetic biology, systems thinking, tacit knowledge, TED Talk, the scientific method, The Wisdom of Crowds, theory of mind, Thorstein Veblen, too big to fail, Turing machine, Turing test, Von Neumann architecture, Watson beat the top human players on Jeopardy!, We are as Gods, Y2K

Learning to detect a cat in full frontal position after 10 million frames drawn from Internet videos is a long way from understanding what a cat is, and anybody who thinks that we’ve “solved” AI doesn’t realize the limitations of the current technology. To be sure, there have been exponential advances in narrow-engineering applications of artificial intelligence, such as playing chess, calculating travel routes, or translating texts in rough fashion, but there’s been scarcely more than linear progress in five decades of working toward strong AI. For example, the different flavors of intelligent personal assistants available on your smartphone are only modestly better than Eliza, an early example of primitive natural-language-processing from the mid-1960s. We still have no machine that can, for instance, read all that the Web has to say about war and plot a decent campaign, nor do we even have an open-ended AI system that can figure out how to write an essay to pass a freshman composition class or an eighth-grade science exam.
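The comparison to Eliza in the passage above can be made concrete with a minimal ELIZA-style responder: a few keyword patterns plus canned templates, with no parsing or understanding at all. The rules below are invented for illustration and are not Weizenbaum's original script.

```python
# Minimal ELIZA-style pattern matching: find a keyword pattern, echo part of
# the user's input back inside a canned template. Surface manipulation only.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please tell me more."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about strong AI"))  # How long have you been worried about strong AI?
print(respond("I need a holiday"))              # Why do you need a holiday?
print(respond("The weather is nice"))           # Please tell me more.
```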

AI can easily look like the real thing but still be a million miles away from being the real thing—like kissing through a pane of glass: It looks like a kiss but is only a faint shadow of the actual concept. I concede to AI proponents all of the semantic prowess of Shakespeare, the symbol juggling they do perfectly. Missing is the direct relationship with the ideas the symbols represent. Much of what is certain to come soon would have belonged in the old-school “Strong AI” territory. Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases . . . here in a couple of decades.


pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots by John Markoff

A Declaration of the Independence of Cyberspace, AI winter, airport security, Andy Rubin, Apollo 11, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, basic income, Baxter: Rethink Robotics, Bill Atkinson, Bill Duvall, bioinformatics, Boston Dynamics, Brewster Kahle, Burning Man, call centre, cellular automata, Charles Babbage, Chris Urmson, Claude Shannon: information theory, Clayton Christensen, clean water, cloud computing, cognitive load, collective bargaining, computer age, Computer Lib, computer vision, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data acquisition, Dean Kamen, deep learning, DeepMind, deskilling, Do you want to sell sugared water for the rest of your life?, don't be evil, Douglas Engelbart, Douglas Engelbart, Douglas Hofstadter, Dr. Strangelove, driverless car, dual-use technology, Dynabook, Edward Snowden, Elon Musk, Erik Brynjolfsson, Evgeny Morozov, factory automation, Fairchild Semiconductor, Fillmore Auditorium, San Francisco, From Mathematics to the Technologies of Life and Death, future of work, Galaxy Zoo, General Magic , Geoffrey Hinton, Google Glasses, Google X / Alphabet X, Grace Hopper, Gunnar Myrdal, Gödel, Escher, Bach, Hacker Ethic, Hans Moravec, haute couture, Herbert Marcuse, hive mind, hype cycle, hypertext link, indoor plumbing, industrial robot, information retrieval, Internet Archive, Internet of things, invention of the wheel, Ivan Sutherland, Jacques de Vaucanson, Jaron Lanier, Jeff Bezos, Jeff Hawkins, job automation, John Conway, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, John von Neumann, Kaizen: continuous improvement, Kevin Kelly, Kiva Systems, knowledge worker, Kodak vs Instagram, labor-force participation, loose coupling, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, medical residency, Menlo Park, military-industrial complex, Mitch Kapor, Mother of all demos, natural language processing, Neil Armstrong, new economy, Norbert Wiener, PageRank, PalmPilot, pattern recognition, Philippa Foot, pre–internet, RAND corporation, Ray Kurzweil, reality distortion field, Recombinant DNA, Richard Stallman, Robert Gordon, Robert Solow, Rodney Brooks, Sand Hill Road, Second Machine Age, self-driving car, semantic web, Seymour Hersh, shareholder value, side project, Silicon Valley, Silicon Valley startup, Singularitarianism, skunkworks, Skype, social software, speech recognition, stealth mode startup, Stephen Hawking, Steve Ballmer, Steve Jobs, Steve Wozniak, Steven Levy, Stewart Brand, Strategic Defense Initiative, strong AI, superintelligent machines, tech worker, technological singularity, Ted Nelson, TED Talk, telemarketer, telepresence, telepresence robot, Tenerife airport disaster, The Coming Technological Singularity, the medium is the message, Thorstein Veblen, Tony Fadell, trolley problem, Turing test, Vannevar Bush, Vernor Vinge, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, We are as Gods, Whole Earth Catalog, William Shockley: the traitorous eight, zero-sum game

Some want to replace humans with machines; some are resigned to the inevitability—“I for one, welcome our insect overlords” (later “robot overlords”) was a meme that was popularized by The Simpsons—and some of them just as passionately want to build machines to extend the reach of humans. The question of whether true artificial intelligence—the concept known as “Strong AI” or Artificial General Intelligence—will emerge, and whether machines can do more than mimic humans, has also been debated for decades. Today there is a growing chorus of scientists and technologists raising new alarms about the possibility of the emergence of self-aware machines and their consequences.

Whether or not Google is on the trail of a genuine artificial “brain” has become increasingly controversial. There is certainly no question that the deep learning techniques are paying off in a wealth of increasingly powerful AI achievements in vision and speech. And there remains in Silicon Valley a growing group of engineers and scientists who believe they are once again closing in on “Strong AI”—the creation of a self-aware machine with human or greater intelligence. Ray Kurzweil, the artificial intelligence researcher and barnstorming advocate for technologically induced immortality, joined Google in 2013 to take over the brain work from Ng, shortly after publishing How to Create a Mind, a book that purported to offer a recipe for creating a working AI.


pages: 170 words: 49,193

The People vs Tech: How the Internet Is Killing Democracy (And How We Save It) by Jamie Bartlett

Ada Lovelace, Airbnb, AlphaGo, Amazon Mechanical Turk, Andrew Keen, autonomous vehicles, barriers to entry, basic income, Bernie Sanders, Big Tech, bitcoin, Black Lives Matter, blockchain, Boris Johnson, Californian Ideology, Cambridge Analytica, central bank independence, Chelsea Manning, cloud computing, computer vision, creative destruction, cryptocurrency, Daniel Kahneman / Amos Tversky, data science, deep learning, DeepMind, disinformation, Dominic Cummings, Donald Trump, driverless car, Edward Snowden, Elon Musk, Evgeny Morozov, fake news, Filter Bubble, future of work, general purpose technology, gig economy, global village, Google bus, Hans Moravec, hive mind, Howard Rheingold, information retrieval, initial coin offering, Internet of things, Jeff Bezos, Jeremy Corbyn, job automation, John Gilmore, John Maynard Keynes: technological unemployment, John Perry Barlow, Julian Assange, manufacturing employment, Mark Zuckerberg, Marshall McLuhan, Menlo Park, meta-analysis, mittelstand, move fast and break things, Network effects, Nicholas Carr, Nick Bostrom, off grid, Panopticon Jeremy Bentham, payday loans, Peter Thiel, post-truth, prediction markets, QR code, ransomware, Ray Kurzweil, recommendation engine, Renaissance Technologies, ride hailing / ride sharing, Robert Mercer, Ross Ulbricht, Sam Altman, Satoshi Nakamoto, Second Machine Age, sharing economy, Silicon Valley, Silicon Valley billionaire, Silicon Valley ideology, Silicon Valley startup, smart cities, smart contracts, smart meter, Snapchat, Stanford prison experiment, Steve Bannon, Steve Jobs, Steven Levy, strong AI, surveillance capitalism, TaskRabbit, tech worker, technological singularity, technoutopianism, Ted Kaczynski, TED Talk, the long tail, the medium is the message, the scientific method, The Spirit Level, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, theory of mind, too big to fail, ultimatum game, universal basic income, WikiLeaks, World Values Survey, Y Combinator, you are the product

The General Data Protection Regulation (GDPR) which is due to come into law across Europe shortly after this book goes to print, is a good example and must be enforced with vigour.* SAFE AI FOR GOOD Artificial Intelligence must not become a proprietary operating system owned and run by a single winner-takes-all company. However, we cannot fall behind in the international race to develop strong AI. Non-democracies must not get an edge on us. We should encourage the sector, but it must be subject to democratic control and, above all, tough regulation to ensure it works in the public interest and is not subject to being hacked or misused.2 Just as the inventors of the atomic bomb realised the power of their creation and so dedicated themselves to creating arms control and nuclear reactor safety, so AI inventors should take similar responsibility.


pages: 573 words: 157,767

From Bacteria to Bach and Back: The Evolution of Minds by Daniel C. Dennett

Ada Lovelace, adjacent possible, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, AlphaGo, Andrew Wiles, Bayesian statistics, bioinformatics, bitcoin, Bletchley Park, Build a better mousetrap, Claude Shannon: information theory, computer age, computer vision, Computing Machinery and Intelligence, CRISPR, deep learning, disinformation, double entry bookkeeping, double helix, Douglas Hofstadter, Elon Musk, epigenetics, experimental subject, Fermat's Last Theorem, Gödel, Escher, Bach, Higgs boson, information asymmetry, information retrieval, invention of writing, Isaac Newton, iterative process, John von Neumann, language acquisition, megaproject, Menlo Park, Murray Gell-Mann, Necker cube, Norbert Wiener, pattern recognition, phenotype, Richard Feynman, Rodney Brooks, self-driving car, social intelligence, sorting algorithm, speech recognition, Stephen Hawking, Steven Pinker, strong AI, Stuart Kauffman, TED Talk, The Wealth of Nations by Adam Smith, theory of mind, Thomas Bayes, trickle-down economics, Turing machine, Turing test, Watson beat the top human players on Jeopardy!, Y2K

There is a long tradition of hype in AI, going back to the earliest days, and many of us have a well-developed habit of discounting the latest “revolutionary breakthrough” by, say, 70% or more, but when such high-tech mavens as Elon Musk and such world-class scientists as Sir Martin Rees and Stephen Hawking start ringing alarm bells about how AI could soon lead to a cataclysmic dissolution of human civilization in one way or another, it is time to rein in one’s habits and reexamine one’s suspicions. Having done so, my verdict is unchanged but more tentative than it used to be. I have always affirmed that “strong AI” is “possible in principle”—but I viewed it as a negligible practical possibility, because it would cost too much and not give us anything we really needed. Domingos and others have shown me that there may be feasible pathways (technically and economically feasible) that I had underestimated, but I still think the task is orders of magnitude larger and more difficult than the cheerleaders have claimed, for the reasons presented in this chapter, and in chapter 8 (the example of Newyorkabot, p. 164).

I discuss the prospects of such a powerful theory or model of an intelligent agent, and point out a key ambiguity in the original Turing Test, in an interview with Jimmy So about the implications of Her, in “Can Robots Fall in Love” (2013), The Daily Beast, http://www.thedailybeast.com/articles/2013/12/31/can-robots-fall-in-love-and-why-would-they.html. 400 a negligible practical possibility. When explaining why I thought strong AI was possible in principle but practically impossible, I have often compared it to the task of making a robotic bird that weighed no more than a robin, could catch insects on the fly, and land on a twig. No cosmic mystery, I averred, in such a bird, but the engineering required to bring it to reality would cost more than a dozen Manhattan Projects, and to what end?


pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future by Luke Dormehl

"World Economic Forum" Davos, Ada Lovelace, agricultural Revolution, AI winter, Albert Einstein, Alexey Pajitnov wrote Tetris, algorithmic management, algorithmic trading, AlphaGo, Amazon Mechanical Turk, Apple II, artificial general intelligence, Automated Insights, autonomous vehicles, backpropagation, Bletchley Park, book scanning, borderless world, call centre, cellular automata, Charles Babbage, Claude Shannon: information theory, cloud computing, computer vision, Computing Machinery and Intelligence, correlation does not imply causation, crowdsourcing, deep learning, DeepMind, driverless car, drone strike, Elon Musk, Flash crash, Ford Model T, friendly AI, game design, Geoffrey Hinton, global village, Google X / Alphabet X, Hans Moravec, hive mind, industrial robot, information retrieval, Internet of things, iterative process, Jaron Lanier, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, life extension, Loebner Prize, machine translation, Marc Andreessen, Mark Zuckerberg, Menlo Park, Mustafa Suleyman, natural language processing, Nick Bostrom, Norbert Wiener, out of africa, PageRank, paperclip maximiser, pattern recognition, radical life extension, Ray Kurzweil, recommendation engine, remote working, RFID, scientific management, self-driving car, Silicon Valley, Skype, smart cities, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia, social intelligence, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, tech billionaire, technological singularity, The Coming Technological Singularity, The Future of Employment, Tim Cook: Apple, Tony Fadell, too big to fail, traumatic brain injury, Turing machine, Turing test, Vernor Vinge, warehouse robotics, Watson beat the top human players on Jeopardy!

As Alan Turing pointed out with his Turing Test, the question of whether or not a machine can think is ‘meaningless’ in the sense that it is virtually impossible to assess with any certainty. As we saw in the last chapter, the idea that consciousness is some emergent byproduct of faster and faster computers is overly simplistic. Consider the difficulty in distinguishing between ‘weak’ and ‘strong’ AI. Some people mistakenly suggest that, in the former, an AI’s outcome has been pre-programmed and it is therefore the result of an algorithm carrying out a specific series of steps to achieve a knowable outcome. This means an AI has little to no chance of generating an unpredictable outcome, provided that the training process is properly carried out.


pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, Avi Goldfarb

Abraham Wald, Ada Lovelace, AI winter, Air France Flight 447, Airbus A320, algorithmic bias, AlphaGo, Amazon Picking Challenge, artificial general intelligence, autonomous vehicles, backpropagation, basic income, Bayesian statistics, Black Swan, blockchain, call centre, Capital in the Twenty-First Century by Thomas Piketty, Captain Sullenberger Hudson, carbon tax, Charles Babbage, classic study, collateralized debt obligation, computer age, creative destruction, Daniel Kahneman / Amos Tversky, data acquisition, data is the new oil, data science, deep learning, DeepMind, deskilling, disruptive innovation, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, everywhere but in the productivity statistics, financial engineering, fulfillment center, general purpose technology, Geoffrey Hinton, Google Glasses, high net worth, ImageNet competition, income inequality, information retrieval, inventory management, invisible hand, Jeff Hawkins, job automation, John Markoff, Joseph Schumpeter, Kevin Kelly, Lyft, Minecraft, Mitch Kapor, Moneyball by Michael Lewis explains big data, Nate Silver, new economy, Nick Bostrom, On the Economy of Machinery and Manufactures, OpenAI, paperclip maximiser, pattern recognition, performance metric, profit maximization, QWERTY keyboard, race to the bottom, randomized controlled trial, Ray Kurzweil, ride hailing / ride sharing, Robert Solow, Salesforce, Second Machine Age, self-driving car, shareholder value, Silicon Valley, statistical model, Stephen Hawking, Steve Jobs, Steve Jurvetson, Steven Levy, strong AI, The Future of Employment, the long tail, The Signal and the Noise by Nate Silver, Tim Cook: Apple, trolley problem, Turing test, Uber and Lyft, uber lyft, US Airways Flight 1549, Vernor Vinge, vertical integration, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, William Langewiesche, Y Combinator, zero-sum game

Finally, the school would adjust other elements of the work flow to take advantage of being able to provide instantaneous school admission decisions. 13 Decomposing Decisions Today’s AI tools are far from the machines with human-like intelligence of science fiction (often referred to as “artificial general intelligence” or AGI, or “strong AI”). The current generation of AI provides tools for prediction and little else. This view of AI does not diminish it. As Steve Jobs once remarked, “One of the things that really separates us from the high primates is that we’re tool builders.” He used the example of the bicycle as a tool that had given people superpowers in locomotion above every other animal.


Toast by Stross, Charles

anthropic principle, Buckminster Fuller, cosmological principle, dark matter, disinformation, double helix, Ernest Rutherford, Extropian, Fairchild Semiconductor, flag carrier, Francis Fukuyama: the end of history, Free Software Foundation, Future Shock, Gary Kildall, glass ceiling, gravity well, Great Leap Forward, Hans Moravec, Higgs boson, hydroponic farming, It's morning again in America, junk bonds, Khyber Pass, launch on warning, Mars Rover, Mikhail Gorbachev, military-industrial complex, Neil Armstrong, NP-complete, oil shale / tar sands, peak oil, performance metric, phenotype, plutocrats, punch-card reader, Recombinant DNA, Ronald Reagan, Silicon Valley, slashdot, speech recognition, strong AI, traveling salesman, Turing test, urban renewal, Vernor Vinge, Whole Earth Review, Y2K

“Way I see it, we’ve been fighting a losing battle here. Maybe if we hadn’t put a spike in Babbage’s gears he’d have developed computing technology on an ad-hoc basis and we might have been able to finesse the mathematicians into ignoring it as being beneath them—brute engineering—but I’m not optimistic. Immunizing a civilization against developing strong AI is one of those difficult problems that no algorithm exists to solve. The way I see it, once a civilization develops the theory of the general purpose computer, and once someone comes up with the goal of artificial intelligence, the foundations are rotten and the dam is leaking. You might as well take off and nuke them from orbit; it can’t do any more damage.”


pages: 329 words: 95,309

Digital Bank: Strategies for Launching or Becoming a Digital Bank by Chris Skinner

algorithmic trading, AltaVista, Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, augmented reality, bank run, Basel III, bitcoin, Bitcoin Ponzi scheme, business cycle, business intelligence, business process, business process outsourcing, buy and hold, call centre, cashless society, clean water, cloud computing, corporate social responsibility, credit crunch, cross-border payments, crowdsourcing, cryptocurrency, demand response, disintermediation, don't be evil, en.wikipedia.org, fault tolerance, fiat currency, financial innovation, gamification, Google Glasses, high net worth, informal economy, information security, Infrastructure as a Service, Internet of things, Jeff Bezos, Kevin Kelly, Kickstarter, M-Pesa, margin call, mass affluent, MITM: man-in-the-middle, mobile money, Mohammed Bouazizi, new economy, Northern Rock, Occupy movement, Pingit, platform as a service, Ponzi scheme, prediction markets, pre–internet, QR code, quantitative easing, ransomware, reserve currency, RFID, Salesforce, Satoshi Nakamoto, Silicon Valley, smart cities, social intelligence, software as a service, Steve Jobs, strong AI, Stuxnet, the long tail, trade route, unbanked and underbanked, underbanked, upwardly mobile, vertical integration, We are the 99%, web application, WikiLeaks, Y2K

Social media is creating new currencies and new economic models, and this will be very big and very important in the two to three years downstream from now. The question for the banks is how will they position in this new world of peer-to-peer currencies in social media. That is going to be a key question for banks in innovation for the next few years. The other area is what I call strong AI. This is a modern way of looking at AI. The old way was mechanical and thought of this as expert systems. Today, we have this enormous computational power in our hands now, and we should make a big splash around this for the next four or five years. So social data, social media, alternative currencies and peer-to-peer payments will dominate for the near term, and then big data and AI in four or five years from now.


pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity by Byron Reese

"World Economic Forum" Davos, agricultural Revolution, AI winter, Apollo 11, artificial general intelligence, basic income, bread and circuses, Buckminster Fuller, business cycle, business process, Charles Babbage, Claude Shannon: information theory, clean water, cognitive bias, computer age, CRISPR, crowdsourcing, dark matter, DeepMind, Edward Jenner, Elon Musk, Eratosthenes, estate planning, financial independence, first square of the chessboard, first square of the chessboard / second half of the chessboard, flying shuttle, full employment, Hans Moravec, Hans Rosling, income inequality, invention of agriculture, invention of movable type, invention of the printing press, invention of writing, Isaac Newton, Islamic Golden Age, James Hargreaves, job automation, Johannes Kepler, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, lateral thinking, life extension, Louis Pasteur, low interest rates, low skilled workers, manufacturing employment, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, Mary Lou Jepsen, Moravec's paradox, Nick Bostrom, On the Revolutions of the Heavenly Spheres, OpenAI, pattern recognition, profit motive, quantum entanglement, radical life extension, Ray Kurzweil, recommendation engine, Rodney Brooks, Sam Altman, self-driving car, seminal paper, Silicon Valley, Skype, spinning jenny, Stephen Hawking, Steve Wozniak, Steven Pinker, strong AI, technological singularity, TED Talk, telepresence, telepresence robot, The Future of Employment, the scientific method, Timothy McVeigh, Turing machine, Turing test, universal basic income, Von Neumann architecture, Wall-E, warehouse robotics, Watson beat the top human players on Jeopardy!, women in the workforce, working poor, Works Progress Administration, Y Combinator

The kind of AI we have today is narrow AI, also known as weak AI. It is the only kind of AI we know how to build, and it is incredibly useful. Narrow AI is the ability for a computer to solve a specific kind of problem or perform a specific task. The other kind of AI is referred to by three different names: general AI, strong AI, or artificial general intelligence (AGI). Although the terms are interchangeable, I will use AGI from this point forward to refer to an artificial intelligence as smart and versatile as you or me. A Roomba vacuum cleaner, Siri, and a self-driving car are powered by narrow AI. A hypothetical robot that can unload the dishwasher would be powered by narrow AI.


pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee

"World Economic Forum" Davos, AI winter, Airbnb, Albert Einstein, algorithmic bias, algorithmic trading, Alignment Problem, AlphaGo, artificial general intelligence, autonomous vehicles, barriers to entry, basic income, bike sharing, business cycle, Cambridge Analytica, cloud computing, commoditize, computer vision, corporate social responsibility, cotton gin, creative destruction, crony capitalism, data science, deep learning, DeepMind, Demis Hassabis, Deng Xiaoping, deskilling, Didi Chuxing, Donald Trump, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, fake news, full employment, future of work, general purpose technology, Geoffrey Hinton, gig economy, Google Chrome, Hans Moravec, happiness index / gross national happiness, high-speed rail, if you build it, they will come, ImageNet competition, impact investing, income inequality, informal economy, Internet of things, invention of the telegraph, Jeff Bezos, job automation, John Markoff, Kickstarter, knowledge worker, Lean Startup, low skilled workers, Lyft, machine translation, mandatory minimum, Mark Zuckerberg, Menlo Park, minimum viable product, natural language processing, Neil Armstrong, new economy, Nick Bostrom, OpenAI, pattern recognition, pirate software, profit maximization, QR code, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, risk tolerance, Robert Mercer, Rodney Brooks, Rubik’s Cube, Sam Altman, Second Machine Age, self-driving car, sentiment analysis, sharing economy, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, Skype, SoftBank, Solyndra, special economic zone, speech recognition, Stephen Hawking, Steve Jobs, strong AI, TED Talk, The Future of Employment, Travis Kalanick, Uber and Lyft, uber lyft, universal basic income, urban planning, vertical integration, Vision Fund, warehouse robotics, Y Combinator

THE AI WORLD ORDER Inequality will not be contained within national borders. China and the United States have already jumped out to an enormous lead over all other countries in artificial intelligence, setting the stage for a new kind of bipolar world order. Several other countries—the United Kingdom, France, and Canada, to name a few—have strong AI research labs staffed with great talent, but they lack the venture-capital ecosystem and large user bases to generate the data that will be key to the age of implementation. As AI companies in the United States and China accumulate more data and talent, the virtuous cycle of data-driven improvements is widening their lead to a point where it will become insurmountable.


Noam Chomsky: A Life of Dissent by Robert F. Barsky

Albert Einstein, anti-communist, centre right, feminist movement, Herbert Marcuse, Howard Zinn, information retrieval, language acquisition, machine translation, means of production, military-industrial complex, Murray Bookchin, Norman Mailer, profit motive, public intellectual, Ralph Nader, Ronald Reagan, strong AI, The Bell Curve by Richard Herrnstein and Charles Murray, theory of mind, Yom Kippur War

Chomsky, who in fact only attended the conference briefly, preferring to spend his time engaged in the subject in the context of "talks to popular audiences," insists that the Times misrepresented what had occurred at the meetings: "There was scientific interest, but it had nothing whatsoever to do with language translation (MT) and artificial intelligence (AI). MT is a very low level engineering project, and so-called classic strong AI is largely vacuous, dismissed by most serious scientists and lacking any results, as its leading exponents concede" (31 Mar. 1995). Entire research projects, on language acquisition and other topics, were now being conducted with the aim of either establishing or disproving Chomsky's theories. Chomsky himself fuelled these enterprises by maintaining a high level of productivity: he published Reflections on Language (1975), Essays on Form and Interpretation (1977), Rules and Representations (1980), and Modular Approaches to the Study of the Mind (1984).


pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines by Thomas H. Davenport, Julia Kirby

"World Economic Forum" Davos, AI winter, Amazon Robotics, Andy Kessler, Apollo Guidance Computer, artificial general intelligence, asset allocation, Automated Insights, autonomous vehicles, basic income, Baxter: Rethink Robotics, behavioural economics, business intelligence, business process, call centre, carbon-based life, Clayton Christensen, clockwork universe, commoditize, conceptual framework, content marketing, dark matter, data science, David Brooks, deep learning, deliberate practice, deskilling, digital map, disruptive innovation, Douglas Engelbart, driverless car, Edward Lloyd's coffeehouse, Elon Musk, Erik Brynjolfsson, estate planning, financial engineering, fixed income, flying shuttle, follow your passion, Frank Levy and Richard Murnane: The New Division of Labor, Freestyle chess, game design, general-purpose programming language, global pandemic, Google Glasses, Hans Lippershey, haute cuisine, income inequality, independent contractor, index fund, industrial robot, information retrieval, intermodal, Internet of things, inventory management, Isaac Newton, job automation, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Joi Ito, Khan Academy, Kiva Systems, knowledge worker, labor-force participation, lifelogging, longitudinal study, loss aversion, machine translation, Mark Zuckerberg, Narrative Science, natural language processing, Nick Bostrom, Norbert Wiener, nuclear winter, off-the-grid, pattern recognition, performance metric, Peter Thiel, precariat, quantitative trading / quantitative finance, Ray Kurzweil, Richard Feynman, risk tolerance, Robert Shiller, robo advisor, robotic process automation, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, six sigma, Skype, social intelligence, speech recognition, spinning jenny, statistical model, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, superintelligent machines, supply-chain management, tacit knowledge, tech worker, TED Talk, the long tail, transaction costs, Tyler Cowen, Tyler Cowen: Great Stagnation, Watson beat the top human players on Jeopardy!, Works Progress Administration, Zipcar

Deloitte is working with companies like IBM and Cognitive Scale to create not just a single application, but a broad “Intelligent Automation Platform.” Even when progress is made on these types of integration, the result will still fall short of the all-knowing “artificial general intelligence” or “strong AI” that we discussed in Chapter 2. That may well be coming, but not anytime soon. Still, these short-term combinations of tools and methods may well make automation solutions much more useful. Broadening Application of the Same Tools —In addition to employing broader types of technology, organizations that are stepping forward are using their existing technology to address different industries and business functions.


Falter: Has the Human Game Begun to Play Itself Out? by Bill McKibben

"Hurricane Katrina" Superdome, 23andMe, Affordable Care Act / Obamacare, Airbnb, Alan Greenspan, American Legislative Exchange Council, An Inconvenient Truth, Anne Wojcicki, Anthropocene, Apollo 11, artificial general intelligence, Bernie Sanders, Bill Joy: nanobots, biodiversity loss, Burning Man, call centre, Cambridge Analytica, carbon footprint, carbon tax, Charles Lindbergh, clean water, Colonization of Mars, computer vision, CRISPR, David Attenborough, deep learning, DeepMind, degrowth, disinformation, Donald Trump, double helix, driverless car, Easter island, Edward Snowden, Elon Musk, ending welfare as we know it, energy transition, Extinction Rebellion, Flynn Effect, gigafactory, Google Earth, Great Leap Forward, green new deal, Greta Thunberg, Hyperloop, impulse control, income inequality, Intergovernmental Panel on Climate Change (IPCC), James Bridle, Jane Jacobs, Jaron Lanier, Jeff Bezos, job automation, Kim Stanley Robinson, life extension, light touch regulation, Mark Zuckerberg, mass immigration, megacity, Menlo Park, moral hazard, Naomi Klein, Neil Armstrong, Nelson Mandela, Nick Bostrom, obamacare, ocean acidification, off grid, oil shale / tar sands, paperclip maximiser, Paris climate accords, pattern recognition, Peter Thiel, plutocrats, profit motive, Ralph Waldo Emerson, Ray Kurzweil, Robert Mercer, Ronald Reagan, Sam Altman, San Francisco homelessness, self-driving car, Silicon Valley, Silicon Valley startup, smart meter, Snapchat, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, supervolcano, tech baron, tech billionaire, technoutopianism, TED Talk, The Wealth of Nations by Adam Smith, traffic fines, Tragedy of the Commons, Travis Kalanick, Tyler Cowen, urban sprawl, Virgin Galactic, Watson beat the top human players on Jeopardy!, Y Combinator, Y2K, yield curve

You’ll be able to drink IPAs for hours at your local tavern, and the self-driving car will take you home—and it may well be able to recommend precisely which IPAs you’d like best. But it won’t be able to carry on an interesting discussion about whether this is the best course for your life. That next step up is artificial general intelligence, sometimes referred to as “strong AI.” That’s a computer “as smart as a human across the board, a machine that can perform any intellectual task a human being can,” in Urban’s description. This kind of intelligence would require “the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”9 Five years ago a pair of researchers asked hundreds of AI experts at a series of conferences when we’d reach this milestone—more precisely, they asked them to name a “median optimistic year,” when there was a 10 percent chance we’d get there; a median realistic year, a 50 percent chance; and a “pessimistic” year, in which there was a 90 percent chance.


pages: 385 words: 111,113

Augmented: Life in the Smart Lane by Brett King

23andMe, 3D printing, additive manufacturing, Affordable Care Act / Obamacare, agricultural Revolution, Airbnb, Albert Einstein, Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, Apollo 11, Apollo Guidance Computer, Apple II, artificial general intelligence, asset allocation, augmented reality, autonomous vehicles, barriers to entry, bitcoin, Bletchley Park, blockchain, Boston Dynamics, business intelligence, business process, call centre, chief data officer, Chris Urmson, Clayton Christensen, clean water, Computing Machinery and Intelligence, congestion charging, CRISPR, crowdsourcing, cryptocurrency, data science, deep learning, DeepMind, deskilling, different worldview, disruptive innovation, distributed generation, distributed ledger, double helix, drone strike, electricity market, Elon Musk, Erik Brynjolfsson, Fellow of the Royal Society, fiat currency, financial exclusion, Flash crash, Flynn Effect, Ford Model T, future of work, gamification, Geoffrey Hinton, gig economy, gigafactory, Google Glasses, Google X / Alphabet X, Hans Lippershey, high-speed rail, Hyperloop, income inequality, industrial robot, information asymmetry, Internet of things, invention of movable type, invention of the printing press, invention of the telephone, invention of the wheel, James Dyson, Jeff Bezos, job automation, job-hopping, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, Kim Stanley Robinson, Kiva Systems, Kodak vs Instagram, Leonard Kleinrock, lifelogging, low earth orbit, low skilled workers, Lyft, M-Pesa, Mark Zuckerberg, Marshall McLuhan, megacity, Metcalfe’s law, Minecraft, mobile money, money market fund, more computing power than Apollo, Neal Stephenson, Neil Armstrong, Network effects, new economy, Nick Bostrom, obamacare, Occupy movement, Oculus Rift, off grid, off-the-grid, packet switching, pattern recognition, peer-to-peer, Ray Kurzweil, retail therapy, RFID, ride hailing / ride sharing, Robert Metcalfe, Salesforce, Satoshi Nakamoto, Second Machine Age, selective serotonin reuptake inhibitor (SSRI), self-driving car, sharing economy, Shoshana Zuboff, Silicon Valley, Silicon Valley startup, Skype, smart cities, smart grid, smart transportation, Snapchat, Snow Crash, social graph, software as a service, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, synthetic biology, systems thinking, TaskRabbit, technological singularity, TED Talk, telemarketer, telepresence, telepresence robot, Tesla Model S, The future is already here, The Future of Employment, Tim Cook: Apple, trade route, Travis Kalanick, TSMC, Turing complete, Turing test, Twitter Arab Spring, uber lyft, undersea cable, urban sprawl, V2 rocket, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, white picket fence, WikiLeaks, yottabyte

Automated UAVs, autonomous emergency vehicles and robots, and sensor nets will give feedback loops to the right algorithms or AIs to dispatch those resources. Artificial intelligence will not only be an underpinning of smart cities, it will also be necessary simply to process all of the sensor data coming into smart city operations centres. Humans would only slow down the process. Strong AI running smart cities is closer to two decades away. Within 20 to 30 years, we will see smart governance at the hands of AI—coded laws and enforcement, resource allocation, budgeting and optimal decision-making made by algorithms that run independent of human committees and voting. The manual counting of votes for elections will be a thing of the past, as citizens will BYOD (bring your own device) to the challenge of casting their votes.


pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford

3D printing, additive manufacturing, Affordable Care Act / Obamacare, AI winter, algorithmic management, algorithmic trading, Amazon Mechanical Turk, artificial general intelligence, assortative mating, autonomous vehicles, banking crisis, basic income, Baxter: Rethink Robotics, Bernie Madoff, Bill Joy: nanobots, bond market vigilante , business cycle, call centre, Capital in the Twenty-First Century by Thomas Piketty, carbon tax, Charles Babbage, Chris Urmson, Clayton Christensen, clean water, cloud computing, collateralized debt obligation, commoditize, computer age, creative destruction, data science, debt deflation, deep learning, deskilling, digital divide, disruptive innovation, diversified portfolio, driverless car, Erik Brynjolfsson, factory automation, financial innovation, Flash crash, Ford Model T, Fractional reserve banking, Freestyle chess, full employment, general purpose technology, Geoffrey Hinton, Goldman Sachs: Vampire Squid, Gunnar Myrdal, High speed trading, income inequality, indoor plumbing, industrial robot, informal economy, iterative process, Jaron Lanier, job automation, John Markoff, John Maynard Keynes: technological unemployment, John von Neumann, Kenneth Arrow, Khan Academy, Kiva Systems, knowledge worker, labor-force participation, large language model, liquidity trap, low interest rates, low skilled workers, low-wage service sector, Lyft, machine readable, machine translation, manufacturing employment, Marc Andreessen, McJob, moral hazard, Narrative Science, Network effects, new economy, Nicholas Carr, Norbert Wiener, obamacare, optical character recognition, passive income, Paul Samuelson, performance metric, Peter Thiel, plutocrats, post scarcity, precision agriculture, price mechanism, public intellectual, Ray Kurzweil, rent control, rent-seeking, reshoring, RFID, Richard Feynman, Robert Solow, Rodney Brooks, Salesforce, Sam Peltzman, secular stagnation, self-driving car, Silicon Valley, Silicon Valley billionaire, Silicon Valley startup, single-payer health, software is eating the world, sovereign wealth fund, speech recognition, Spread Networks laid a new fibre optics cable between New York and Chicago, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Steven Pinker, strong AI, Stuxnet, technological singularity, telepresence, telepresence robot, The Bell Curve by Richard Herrnstein and Charles Murray, The Coming Technological Singularity, The Future of Employment, the long tail, Thomas L Friedman, too big to fail, Tragedy of the Commons, Tyler Cowen, Tyler Cowen: Great Stagnation, uber lyft, union organizing, Vernor Vinge, very high income, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, women in the workforce

MIT physicist Max Tegmark, one of the co-authors of the Hawking article, told The Atlantic’s James Hamblin that “this is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”11 Others view a thinking machine as fundamentally possible, but much further out. Gary Marcus, for example, thinks strong AI will take at least twice as long as Kurzweil predicts, but that “it’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.”12 In recent years, speculation about human-level AI has shifted increasingly away from a top-down programming approach and, instead, toward an emphasis on reverse engineering and then simulating the human brain.


pages: 379 words: 108,129

An Optimist's Tour of the Future by Mark Stevenson

23andMe, Albert Einstein, Alvin Toffler, Andy Kessler, Apollo 11, augmented reality, bank run, Boston Dynamics, carbon credits, carbon footprint, carbon-based life, clean water, computer age, decarbonisation, double helix, Douglas Hofstadter, Dr. Strangelove, Elon Musk, flex fuel, Ford Model T, Future Shock, Great Leap Forward, Gregor Mendel, Gödel, Escher, Bach, Hans Moravec, Hans Rosling, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invention of agriculture, Isaac Newton, Jeff Bezos, Kevin Kelly, Law of Accelerating Returns, Leonard Kleinrock, life extension, Louis Pasteur, low earth orbit, mutually assured destruction, Naomi Klein, Nick Bostrom, off grid, packet switching, peak oil, pre–internet, private spaceflight, radical life extension, Ray Kurzweil, Richard Feynman, Rodney Brooks, Scaled Composites, self-driving car, Silicon Valley, smart cities, social intelligence, SpaceShipOne, stem cell, Stephen Hawking, Steven Pinker, Stewart Brand, strong AI, synthetic biology, TED Talk, the scientific method, Virgin Galactic, Wall-E, X Prize

This proposed necessity of having to raise robots might lead you to the conclusion that truly intelligent robots will be few and far between. But the thing about robots is you can replicate them. Once we’ve got one intelligent robot brain, we can copy it to another machine, and another, and another. The robots have finally arrived, bringing an explosion of ‘strong AI’. Of course, it may not just be us (the humans) doing the copying, it might be the robots themselves. And because technology improves at a startling rate (way faster than biological evolution), one has to consider the possibility that things won’t stop there. Once we achieve a robot with human-level (if not human-like) intelligence, it won’t be very long until robot cognition outstrips the human mind – marrying the human-like intelligence with instant recall, flawless memory and the number-crunching ability of Deep Blue.


pages: 419 words: 109,241

A World Without Work: Technology, Automation, and How We Should Respond by Daniel Susskind

"World Economic Forum" Davos, 3D printing, agricultural Revolution, AI winter, Airbnb, Albert Einstein, algorithmic trading, AlphaGo, artificial general intelligence, autonomous vehicles, basic income, Bertrand Russell: In Praise of Idleness, Big Tech, blue-collar work, Boston Dynamics, British Empire, Capital in the Twenty-First Century by Thomas Piketty, cloud computing, computer age, computer vision, computerized trading, creative destruction, David Graeber, David Ricardo: comparative advantage, deep learning, DeepMind, Demis Hassabis, demographic transition, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, driverless car, drone strike, Edward Glaeser, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, fake news, financial innovation, flying shuttle, Ford Model T, fulfillment center, future of work, gig economy, Gini coefficient, Google Glasses, Gödel, Escher, Bach, Hans Moravec, income inequality, income per capita, industrial robot, interchangeable parts, invisible hand, Isaac Newton, Jacques de Vaucanson, James Hargreaves, job automation, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Joi Ito, Joseph Schumpeter, Kenneth Arrow, Kevin Roose, Khan Academy, Kickstarter, Larry Ellison, low skilled workers, lump of labour, machine translation, Marc Andreessen, Mark Zuckerberg, means of production, Metcalfe’s law, natural language processing, Neil Armstrong, Network effects, Nick Bostrom, Occupy movement, offshore financial centre, Paul Samuelson, Peter Thiel, pink-collar, precariat, purchasing power parity, Ray Kurzweil, ride hailing / ride sharing, road to serfdom, Robert Gordon, Sam Altman, Second Machine Age, self-driving car, shareholder value, sharing economy, Silicon Valley, Snapchat, social intelligence, software is eating the world, sovereign wealth fund, spinning jenny, Stephen Hawking, Steve Jobs, strong AI, tacit knowledge, technological solutionism, TED Talk, telemarketer, The Future of Employment, The Rise and Fall of American Growth, the scientific method, The Theory of the Leisure Class by Thorstein Veblen, The Wealth of Nations by Adam Smith, Thorstein Veblen, Travis Kalanick, Turing test, Two Sigma, Tyler Cowen, Tyler Cowen: Great Stagnation, universal basic income, upwardly mobile, warehouse robotics, Watson beat the top human players on Jeopardy!, We are the 99%, wealth creators, working poor, working-age population, Y Combinator

Charles Darwin, On the Origin of Species (London: Penguin Books, 2009), p. 427. 20.  See Isaiah Berlin, The Hedgehog and the Fox (New York: Simon & Schuster, 1953). 21.  The distinction between AGI and ANI is often conflated with another one made by John Searle, who speaks of the difference between “strong” AI and “weak” AI. But the two are not the same thing at all. AGI and ANI reflect the breadth of a machine’s capability, while Searle’s terms describe whether a machine thinks like a human being (“strong”) or unlike one (“weak”). 22.  Nick Bostrom and Eliezer Yudkowsky, “The Ethics of Artificial Intelligence” in William Ramsey and Keith Frankish, eds., Cambridge Handbook of Artificial Intelligence (Cambridge: Cambridge University Press, 2011). 23.  


pages: 463 words: 118,936

Darwin Among the Machines by George Dyson

Ada Lovelace, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anti-communist, backpropagation, Bletchley Park, British Empire, carbon-based life, cellular automata, Charles Babbage, Claude Shannon: information theory, combinatorial explosion, computer age, Computing Machinery and Intelligence, Danny Hillis, Donald Davies, fault tolerance, Fellow of the Royal Society, finite state, IFF: identification friend or foe, independent contractor, invention of the telescope, invisible hand, Isaac Newton, Jacquard loom, James Watt: steam engine, John Nash: game theory, John von Neumann, launch on warning, low earth orbit, machine readable, Menlo Park, Nash equilibrium, Norbert Wiener, On the Economy of Machinery and Manufactures, packet switching, pattern recognition, phenotype, RAND corporation, Richard Feynman, spectrum auction, strong AI, synthetic biology, the scientific method, The Wealth of Nations by Adam Smith, Turing machine, Von Neumann architecture, zero-sum game

The argument over where to draw this distinction has been going on for a long time. Can machines calculate? Can machines think? Can machines become conscious? Can machines have souls? Although Leibniz believed that the process of thought could be arithmetized and that mechanism could perform the requisite arithmetic, he disagreed with the “strong AI” of Hobbes that reduced everything to mechanism, even our own consciousness or the existence (and corporeal mortality) of a soul. “Whatever is performed in the body of man and of every animal is no less mechanical than what is performed in a watch,” wrote Leibniz to Samuel Clarke.51 But, in the Monadology, Leibniz argued that “perception, and that which depends upon it, are inexplicable by mechanical causes,” and he presented a thought experiment to support his views: “Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill.


pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma by Mustafa Suleyman

"World Economic Forum" Davos, 23andMe, 3D printing, active measures, Ada Lovelace, additive manufacturing, agricultural Revolution, AI winter, air gap, Airbnb, Alan Greenspan, algorithmic bias, Alignment Problem, AlphaGo, Alvin Toffler, Amazon Web Services, Anthropocene, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, ASML, autonomous vehicles, backpropagation, barriers to entry, basic income, benefit corporation, Big Tech, biodiversity loss, bioinformatics, Bletchley Park, Blitzscaling, Boston Dynamics, business process, business process outsourcing, call centre, Capital in the Twenty-First Century by Thomas Piketty, ChatGPT, choice architecture, circular economy, classic study, clean tech, cloud computing, commoditize, computer vision, coronavirus, corporate governance, correlation does not imply causation, COVID-19, creative destruction, CRISPR, critical race theory, crowdsourcing, cryptocurrency, cuban missile crisis, data science, decarbonisation, deep learning, deepfake, DeepMind, deindustrialization, dematerialisation, Demis Hassabis, disinformation, drone strike, drop ship, dual-use technology, Easter island, Edward Snowden, effective altruism, energy transition, epigenetics, Erik Brynjolfsson, Ernest Rutherford, Extinction Rebellion, facts on the ground, failed state, Fairchild Semiconductor, fear of failure, flying shuttle, Ford Model T, future of work, general purpose technology, Geoffrey Hinton, global pandemic, GPT-3, GPT-4, hallucination problem, hive mind, hype cycle, Intergovernmental Panel on Climate Change (IPCC), Internet Archive, Internet of things, invention of the wheel, job automation, John Maynard Keynes: technological unemployment, John von Neumann, Joi Ito, Joseph Schumpeter, Kickstarter, lab leak, large language model, Law of Accelerating Returns, Lewis Mumford, license plate recognition, lockdown, machine readable, Marc Andreessen, meta-analysis, microcredit, move 37, Mustafa Suleyman, mutually assured destruction, new economy, Nick Bostrom, Nikolai Kondratiev, off grid, OpenAI, paperclip maximiser, personalized medicine, Peter Thiel, planetary scale, plutocrats, precautionary principle, profit motive, prompt engineering, QAnon, quantum entanglement, ransomware, Ray Kurzweil, Recombinant DNA, Richard Feynman, Robert Gordon, Ronald Reagan, Sam Altman, Sand Hill Road, satellite internet, Silicon Valley, smart cities, South China Sea, space junk, SpaceX Starlink, stealth mode startup, stem cell, Stephen Fry, Steven Levy, strong AI, synthetic biology, tacit knowledge, tail risk, techlash, techno-determinism, technoutopianism, Ted Kaczynski, the long tail, The Rise and Fall of American Growth, Thomas Malthus, TikTok, TSMC, Turing test, Tyler Cowen, Tyler Cowen: Great Stagnation, universal basic income, uranium enrichment, warehouse robotics, William MacAskill, working-age population, world market for maybe five computers, zero day

So, where does AI go next as the wave fully breaks? Today we have narrow or weak AI: limited and specific versions. GPT-4 can spit out virtuoso texts, but it can’t turn around tomorrow and drive a car, as other AI programs do. Existing AI systems still operate in relatively narrow lanes. What is yet to come is a truly general or strong AI capable of human-level performance across a wide range of complex tasks—able to seamlessly shift among them. But this is exactly what the scaling hypothesis predicts is coming and what we see the first signs of in today’s systems. AI is still in an early phase. It may look smart to claim that AI doesn’t live up to the hype, and it’ll earn you some Twitter followers.


System Error by Rob Reich

"Friedman doctrine" OR "shareholder theory", "World Economic Forum" Davos, 2021 United States Capitol attack, A Declaration of the Independence of Cyberspace, Aaron Swartz, AI winter, Airbnb, airport security, Alan Greenspan, Albert Einstein, algorithmic bias, AlphaGo, AltaVista, artificial general intelligence, Automated Insights, autonomous vehicles, basic income, Ben Horowitz, Berlin Wall, Bernie Madoff, Big Tech, bitcoin, Blitzscaling, Cambridge Analytica, Cass Sunstein, clean water, cloud computing, computer vision, contact tracing, contact tracing app, coronavirus, corporate governance, COVID-19, creative destruction, CRISPR, crowdsourcing, data is the new oil, data science, decentralized internet, deep learning, deepfake, DeepMind, deplatforming, digital rights, disinformation, disruptive innovation, Donald Knuth, Donald Trump, driverless car, dual-use technology, Edward Snowden, Elon Musk, en.wikipedia.org, end-to-end encryption, Fairchild Semiconductor, fake news, Fall of the Berlin Wall, Filter Bubble, financial engineering, financial innovation, fulfillment center, future of work, gentrification, Geoffrey Hinton, George Floyd, gig economy, Goodhart's law, GPT-3, Hacker News, hockey-stick growth, income inequality, independent contractor, informal economy, information security, Jaron Lanier, Jeff Bezos, Jim Simons, jimmy wales, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, Lean Startup, linear programming, Lyft, Marc Andreessen, Mark Zuckerberg, meta-analysis, minimum wage unemployment, Monkeys Reject Unequal Pay, move fast and break things, Myron Scholes, Network effects, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, NP-complete, Oculus Rift, OpenAI, Panopticon Jeremy Bentham, Parler "social media", pattern recognition, personalized medicine, Peter Thiel, Philippa Foot, premature optimization, profit motive, quantitative hedge fund, race to the bottom, randomized controlled trial, recommendation engine, Renaissance Technologies, Richard Thaler, ride hailing / ride sharing, Ronald Reagan, Sam Altman, Sand Hill Road, scientific management, self-driving car, shareholder value, Sheryl Sandberg, Shoshana Zuboff, side project, Silicon Valley, Snapchat, social distancing, Social Responsibility of Business Is to Increase Its Profits, software is eating the world, spectrum auction, speech recognition, stem cell, Steve Jobs, Steven Levy, strong AI, superintelligent machines, surveillance capitalism, Susan Wojcicki, tech billionaire, tech worker, techlash, technoutopianism, Telecommunications Act of 1996, telemarketer, The Future of Employment, TikTok, Tim Cook: Apple, traveling salesman, Triangle Shirtwaist Factory, trolley problem, Turing test, two-sided market, Uber and Lyft, uber lyft, ultimatum game, union organizing, universal basic income, washing machines reduced drudgery, Watson beat the top human players on Jeopardy!, When a measure becomes a target, winner-take-all economy, Y Combinator, you are the product

Until machines are capable of defining their own goals, the choices of the problems we want to solve with these technologies—what goals are worthy to pursue—are still ours. There is an outer frontier of AI that occupies the fantasies of some technologists: the idea of artificial general intelligence (AGI). Whereas today’s AI progress is marked by a computer’s ability to complete specific narrow tasks (“weak AI”), the aspiration to create AGI (“strong AI”) involves developing machines that can set their own goals in addition to accomplishing the goals set by humans. Although few believe that AGI is on the near horizon, some enthusiasts claim that the exponential growth in computing power and the astonishing advances in AI in just the past decade make AGI a possibility in our lifetimes.


pages: 451 words: 125,201

What We Owe the Future: A Million-Year View by William MacAskill

Ada Lovelace, agricultural Revolution, Albert Einstein, Alignment Problem, AlphaGo, artificial general intelligence, Bartolomé de las Casas, Bletchley Park, British Empire, Brownian motion, carbon footprint, carbon tax, charter city, clean tech, coronavirus, COVID-19, cuban missile crisis, decarbonisation, deep learning, DeepMind, Deng Xiaoping, different worldview, effective altruism, endogenous growth, European colonialism, experimental subject, feminist movement, framing effect, friendly AI, global pandemic, GPT-3, hedonic treadmill, Higgs boson, income inequality, income per capita, Indoor air pollution, Intergovernmental Panel on Climate Change (IPCC), Isaac Newton, Islamic Golden Age, iterative process, Jeff Bezos, job satisfaction, lab leak, Lao Tzu, Large Hadron Collider, life extension, lockdown, long peace, low skilled workers, machine translation, Mars Rover, negative emissions, Nick Bostrom, nuclear winter, OpenAI, Peter Singer: altruism, Peter Thiel, QWERTY keyboard, Robert Gordon, Rutger Bregman, Sam Altman, seminal paper, Shenzhen special economic zone , Shenzhen was a fishing village, Silicon Valley, special economic zone, speech recognition, Stanislav Petrov, stem cell, Steven Pinker, strong AI, synthetic biology, total factor productivity, transatlantic slave trade, Tyler Cowen, William MacAskill, women in the workforce, working-age population, World Values Survey, Y Combinator

.), superintelligence (Bostrom 1998, 2014a), ultraintelligent machines (Good 1966), advanced AI (Center for the Governance of AI, n.d.), high-level machine intelligence (Grace et al. 2018; and, using a slightly different definition, V. C. Müller and Bostrom 2016), comprehensive AI services (Drexler 2019), strong AI (J. R. Searle 1980, but since used in a variety of different ways), and human-level AI (AI Impacts, n.d.-c). I’m using the term “AGI” simply because it is probably the most widely used one, and its definition is easy to understand. However, in this chapter, I am interested in any way in which AI could enable permanent value lock-in, and by using “AGI” as opposed to any of the other terms mentioned previously, I do not intend to exclude any possibility for how this could happen.


pages: 492 words: 141,544

Red Moon by Kim Stanley Robinson

artificial general intelligence, basic income, blockchain, Brownian motion, correlation does not imply causation, cryptocurrency, deep learning, Deng Xiaoping, gig economy, Great Leap Forward, Hyperloop, illegal immigration, income inequality, invisible hand, Ken Thompson, Kim Stanley Robinson, low earth orbit, machine translation, Magellanic Cloud, megacity, Neil Armstrong, precariat, quantum entanglement, Schrödinger's Cat, seigniorage, strong AI, Turing machine, universal basic income, zero-sum game

It has a very low bit rate because it’s so hard to detect neutrinos, but his people have a way to send a real flood of them, and the ice flooring this crater is just enough to catch a signal strength that is about the equal of the first telegraphs. So he keeps his messages brief.” “Seems like a lot of trouble for a telegraph,” John Semple observed. Anna nodded. “Just a toy, at least for now. The real power here is the quantum computer, down there in that building you see in the ice. That thing is a monster.” “Strong AI?” Ta Shu asked. “I don’t know what you mean by that, but definitely a lot of AI. Not strong in the philosophical sense, but, you know—fast. Yottaflops fast.” “Yottaflops,” Ta Shu repeated. “I like that word. That means very fast?” “Very fast. Not so much strong, in my opinion, because of how lame we are at programming.


pages: 561 words: 167,631

2312 by Kim Stanley Robinson

agricultural Revolution, Anthropocene, caloric restriction, caloric restriction, clean tech, double helix, full employment, higher-order functions, hive mind, if you see hoof prints, think horses—not zebras, Jevons paradox, Kim Stanley Robinson, Kuiper Belt, late capitalism, Late Heavy Bombardment, mutually assured destruction, Nelson Mandela, Neolithic agricultural revolution, off-the-grid, offshore financial centre, orbital mechanics / astrodynamics, pattern recognition, phenotype, post scarcity, precariat, quantum entanglement, retrograde motion, rewilding, Skinner box, stem cell, strong AI, synthetic biology, the built environment, the High Line, Tragedy of the Commons, Turing machine, Turing test, Winter of Discontent

In these years all the bad trends converged in “perfect storm” fashion, leading to a rise in average global temperature of five K, and sea level rise of five meters—and as a result, in the 2120s, food shortages, mass riots, catastrophic death on all continents, and an immense spike in the extinction rate of other species. Early lunar bases, scientific stations on Mars. The Turnaround: 2130 to 2160. Verteswandel (Shortback’s famous “mutation of values”), followed by revolutions; strong AI; self-replicating factories; terraforming of Mars begun; fusion power; strong synthetic biology; climate modification efforts, including the disastrous Little Ice Age of 2142–54; space elevators on Earth and Mars; fast space propulsion; the space diaspora begun; the Mondragon Accord signed. And thus: The Accelerando: 2160 to 2220.


pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values by Brian Christian

Albert Einstein, algorithmic bias, Alignment Problem, AlphaGo, Amazon Mechanical Turk, artificial general intelligence, augmented reality, autonomous vehicles, backpropagation, butterfly effect, Cambridge Analytica, Cass Sunstein, Claude Shannon: information theory, computer vision, Computing Machinery and Intelligence, data science, deep learning, DeepMind, Donald Knuth, Douglas Hofstadter, effective altruism, Elaine Herzberg, Elon Musk, Frances Oldham Kelsey, game design, gamification, Geoffrey Hinton, Goodhart's law, Google Chrome, Google Glasses, Google X / Alphabet X, Gödel, Escher, Bach, Hans Moravec, hedonic treadmill, ImageNet competition, industrial robot, Internet Archive, John von Neumann, Joi Ito, Kenneth Arrow, language acquisition, longitudinal study, machine translation, mandatory minimum, mass incarceration, multi-armed bandit, natural language processing, Nick Bostrom, Norbert Wiener, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, OpenAI, Panopticon Jeremy Bentham, pattern recognition, Peter Singer: altruism, Peter Thiel, precautionary principle, premature optimization, RAND corporation, recommendation engine, Richard Feynman, Rodney Brooks, Saturday Night Live, selection bias, self-driving car, seminal paper, side project, Silicon Valley, Skinner box, sparse data, speech recognition, Stanislav Petrov, statistical model, Steve Jobs, strong AI, the map is not the territory, theory of mind, Tim Cook: Apple, W. E. B. Du Bois, Wayback Machine, zero-sum game

It’s worth noting that handing an object to another person is itself a surprisingly subtle and complex action that includes making inferences about how the other person will want to take hold of the object, how to signal to them that you are intending for them to take it, etc. See, e.g., Strabala et al., “Toward Seamless Human-Robot Handovers.” 40. Hadfield-Menell et al., “Cooperative Inverse Reinforcement Learning.” (“CIRL” is pronounced with a soft c, homophonous with the last name of strong AI skeptic John Searle (no relation). I have agitated within the community that a hard c “curl” pronunciation makes more sense, given that “cooperative” uses a hard c, but it appears the die is cast.) 41. Dylan Hadfield-Menell, personal interview, March 15, 2018. 42. Russell, Human Compatible. 43.


Global Catastrophic Risks by Nick Bostrom, Milan M. Cirkovic

affirmative action, agricultural Revolution, Albert Einstein, American Society of Civil Engineers: Report Card, anthropic principle, artificial general intelligence, Asilomar, availability heuristic, backpropagation, behavioural economics, Bill Joy: nanobots, Black Swan, carbon tax, carbon-based life, Charles Babbage, classic study, cognitive bias, complexity theory, computer age, coronavirus, corporate governance, cosmic microwave background, cosmological constant, cosmological principle, cuban missile crisis, dark matter, death of newspapers, demographic transition, Deng Xiaoping, distributed generation, Doomsday Clock, Drosophila, endogenous growth, Ernest Rutherford, failed state, false flag, feminist movement, framing effect, friendly AI, Georg Cantor, global pandemic, global village, Great Leap Forward, Gödel, Escher, Bach, Hans Moravec, heat death of the universe, hindsight bias, information security, Intergovernmental Panel on Climate Change (IPCC), invention of agriculture, Kevin Kelly, Kuiper Belt, Large Hadron Collider, launch on warning, Law of Accelerating Returns, life extension, means of production, meta-analysis, Mikhail Gorbachev, millennium bug, mutually assured destruction, Nick Bostrom, nuclear winter, ocean acidification, off-the-grid, Oklahoma City bombing, P = NP, peak oil, phenotype, planetary scale, Ponzi scheme, power law, precautionary principle, prediction markets, RAND corporation, Ray Kurzweil, Recombinant DNA, reversible computing, Richard Feynman, Ronald Reagan, scientific worldview, Singularitarianism, social intelligence, South China Sea, strong AI, superintelligent machines, supervolcano, synthetic biology, technological singularity, technoutopianism, The Coming Technological Singularity, the long tail, The Turner Diaries, Tunguska event, twin studies, Tyler Cowen, uranium enrichment, Vernor Vinge, War on Poverty, Westphalian system, Y2K

The catastrophic scenario that stems from underestimating the power of intelligence is that someone builds a button, and does not care enough what the button does, because they do not think the button is powerful enough to hurt them. Or the wider field of AI researchers will not pay enough attention to risks of strong AI, and therefore good tools and firm foundations for friendliness will not be available when it becomes possible to build strong intelligences. And one should not fail to mention - for it also impacts upon existential risk - that AI could be the powerful solution to other existential risks, and by mistake we will ignore our best hope of survival.


pages: 1,152 words: 266,246

Why the West Rules--For Now: The Patterns of History, and What They Reveal About the Future by Ian Morris

addicted to oil, Admiral Zheng, agricultural Revolution, Albert Einstein, anti-communist, Apollo 11, Arthur Eddington, Atahualpa, Berlin Wall, British Empire, classic study, Columbian Exchange, conceptual framework, cotton gin, cuban missile crisis, defense in depth, demographic transition, Deng Xiaoping, discovery of the americas, Doomsday Clock, Eddington experiment, en.wikipedia.org, falling living standards, Flynn Effect, Ford Model T, Francisco Pizarro, global village, God and Mammon, Great Leap Forward, hiring and firing, indoor plumbing, Intergovernmental Panel on Climate Change (IPCC), invention of agriculture, Isaac Newton, It's morning again in America, James Watt: steam engine, Kickstarter, Kitchen Debate, knowledge economy, market bubble, mass immigration, Medieval Warm Period, Menlo Park, Mikhail Gorbachev, military-industrial complex, mutually assured destruction, New Journalism, out of africa, Peter Thiel, phenotype, pink-collar, place-making, purchasing power parity, RAND corporation, Ray Kurzweil, Ronald Reagan, Scientific racism, sexual politics, Silicon Valley, Sinatra Doctrine, South China Sea, special economic zone, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, Suez canal 1869, The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, The Wealth of Nations by Adam Smith, Thomas Kuhn: the structure of scientific revolutions, Thomas L Friedman, Thomas Malthus, trade route, upwardly mobile, wage slave, washing machines reduced drudgery

Becoming Human: Innovation in Prehistoric Material and Spiritual Culture. Cambridge, UK: Cambridge University Press, 2009. Reynolds, David. One World Divisible: A Global History Since 1945. New York: Norton, 2000. Richards, Jay, et al. Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong A.I. Seattle: Discovery Institute, 2002. Richards, John. Unending Frontier: An Environmental History of the Early Modern World. Berkeley: University of California Press, 2003. Richardson, Lewis Fry. Statistics of Deadly Quarrels. Pacific Grove, CA: Boxwood Press, 1960. Richerson, Peter, Robert Boyd, and Robert Bettinger.


pages: 1,737 words: 491,616

Rationality: From AI to Zombies by Eliezer Yudkowsky

Albert Einstein, Alfred Russel Wallace, anthropic principle, anti-pattern, anti-work, antiwork, Arthur Eddington, artificial general intelligence, availability heuristic, backpropagation, Bayesian statistics, behavioural economics, Berlin Wall, Boeing 747, Build a better mousetrap, Cass Sunstein, cellular automata, Charles Babbage, cognitive bias, cognitive dissonance, correlation does not imply causation, cosmological constant, creative destruction, Daniel Kahneman / Amos Tversky, dematerialisation, different worldview, discovery of DNA, disinformation, Douglas Hofstadter, Drosophila, Eddington experiment, effective altruism, experimental subject, Extropian, friendly AI, fundamental attribution error, Great Leap Forward, Gödel, Escher, Bach, Hacker News, hindsight bias, index card, index fund, Isaac Newton, John Conway, John von Neumann, Large Hadron Collider, Long Term Capital Management, Louis Pasteur, mental accounting, meta-analysis, mirror neurons, money market fund, Monty Hall problem, Nash equilibrium, Necker cube, Nick Bostrom, NP-complete, One Laptop per Child (OLPC), P = NP, paperclip maximiser, pattern recognition, Paul Graham, peak-end rule, Peter Thiel, Pierre-Simon Laplace, placebo effect, planetary scale, prediction markets, random walk, Ray Kurzweil, reversible computing, Richard Feynman, risk tolerance, Rubik’s Cube, Saturday Night Live, Schrödinger's Cat, scientific mainstream, scientific worldview, sensible shoes, Silicon Valley, Silicon Valley startup, Singularitarianism, SpaceShipOne, speech recognition, statistical model, Steve Jurvetson, Steven Pinker, strong AI, sunk-cost fallacy, technological singularity, The Bell Curve by Richard Herrnstein and Charles Murray, the map is not the territory, the scientific method, Turing complete, Turing machine, Tyler Cowen, ultimatum game, X Prize, Y Combinator, zero-sum game

Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example.” Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies. So now you perceive, I hope, why, if you wanted to teach someone to do fundamental work on strong AI—bearing in mind that this is demonstrably a very difficult art, which is not learned by a supermajority of students who are just taught existing reductions such as search trees—then you might go on for some length about such matters as the fine art of reductionism, about playing rationalist’s Taboo to excise problematic words and replace them with their referents, about anthropomorphism, and, of course, about early stopping on mysterious answers to mysterious questions