strong AI

53 results


pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil

additive manufacturing, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, Benoit Mandelbrot, Bill Joy: nanobots, bioinformatics, brain emulation, Brewster Kahle, Brownian motion, business cycle, business intelligence, c2.com, call centre, carbon-based life, cellular automata, Claude Shannon: information theory, complexity theory, conceptual framework, Conway's Game of Life, coronavirus, cosmological constant, cosmological principle, cuban missile crisis, data acquisition, Dava Sobel, David Brooks, Dean Kamen, disintermediation, double helix, Douglas Hofstadter, en.wikipedia.org, epigenetics, factory automation, friendly AI, George Gilder, Gödel, Escher, Bach, informal economy, information retrieval, invention of the telephone, invention of the telescope, invention of writing, iterative process, Jaron Lanier, Jeff Bezos, job automation, job satisfaction, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, lifelogging, linked data, Loebner Prize, Louis Pasteur, mandelbrot fractal, Marshall McLuhan, Mikhail Gorbachev, Mitch Kapor, mouse model, Murray Gell-Mann, mutually assured destruction, natural language processing, Network effects, new economy, Norbert Wiener, oil shale / tar sands, optical character recognition, pattern recognition, phenotype, premature optimization, randomized controlled trial, Ray Kurzweil, remote working, reversible computing, Richard Feynman, Robert Metcalfe, Rodney Brooks, scientific worldview, Search for Extraterrestrial Intelligence, selection bias, semantic web, Silicon Valley, Singularitarianism, speech recognition, statistical model, stem cell, Stephen Hawking, Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Kaczynski, telepresence, The Coming Technological Singularity, Thomas Bayes, transaction costs, Turing machine, Turing test, Vernor Vinge, Y2K, Yogi Berra

For these reasons, once a computer is able to match the subtlety and range of human intelligence, it will necessarily soar past it and then continue its double-exponential ascent. A key question regarding the Singularity is whether the "chicken" (strong AI) or the "egg" (nanotechnology) will come first. In other words, will strong AI lead to full nanotechnology (molecular-manufacturing assemblers that can turn information into physical products), or will full nanotechnology lead to strong AI? The logic of the first premise is that strong AI would imply superhuman AI for the reasons just cited, and superhuman AI would be in a position to solve any remaining design problems required to implement full nanotechnology. The second premise is based on the realization that the hardware requirements for strong AI will be met by nanotechnology-based computation. Likewise the software requirements will be facilitated by nanobots that could create highly detailed scans of human brain functioning and thereby achieve the completion of reverse engineering the human brain.

The reality is that progress in both areas will necessarily use our most advanced tools, so advances in each field will simultaneously facilitate the other. However, I do expect that full MNT will emerge prior to strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI). As revolutionary as nanotechnology will be, strong AI will have far more profound consequences. Nanotechnology is powerful but not necessarily intelligent. We can devise ways of at least trying to manage the enormous powers of nanotechnology, but superintelligence innately cannot be controlled. Runaway AI. Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely.

Such robots may make great assistants, but who's to say that we can count on them to remain reliably friendly to mere biological humans? Strong AI. Strong AI promises to continue the exponential gains of human civilization. (As I discussed earlier, I include the nonbiological intelligence derived from our human civilization as still human.) But the dangers it presents are also profound precisely because of its amplification of intelligence. Intelligence is inherently impossible to control, so the various strategies that have been devised to control nanotechnology (for example, the "broadcast architecture" described below) won't work for strong AI. There have been discussions and proposals to guide AI development toward what Eliezer Yudkowsky calls "friendly AI"30 (see the section "Protection from 'Unfriendly' Strong AI," p. 420). These are useful for discussion, but it is infeasible today to devise strategies that will absolutely ensure that future AI embodies human ethics and values.


The Book of Why: The New Science of Cause and Effect by Judea Pearl, Dana Mackenzie

affirmative action, Albert Einstein, Asilomar, Bayesian statistics, computer age, computer vision, correlation coefficient, correlation does not imply causation, Daniel Kahneman / Amos Tversky, Edmond Halley, Elon Musk, en.wikipedia.org, experimental subject, Isaac Newton, iterative process, John Snow's cholera map, Loebner Prize, loose coupling, Louis Pasteur, Menlo Park, pattern recognition, Paul Erdős, personalized medicine, Pierre-Simon Laplace, placebo effect, prisoner's dilemma, probability theory / Blaise Pascal / Pierre de Fermat, randomized controlled trial, selection bias, self-driving car, Silicon Valley, speech recognition, statistical model, Stephen Hawking, Steve Jobs, strong AI, The Design of Experiments, the scientific method, Thomas Bayes, Turing test

They lack the understanding that the observed shadows are mere projections of three-dimensional objects moving in a three-dimensional space. Strong AI requires this understanding. Deep-learning researchers are not unaware of these basic limitations. For example, economists using machine learning have noted that their methods do not answer key questions of interest, such as estimating the impact of untried policies and actions. Typical examples are introducing new price structures or subsidies or changing the minimum wage. In technical terms, machine-learning methods today provide us with an efficient way of going from finite sample estimates to probability distributions, and we still need to get from distributions to cause-effect relations. When we start talking about strong AI, causal models move from a luxury to a necessity. To me, a strong AI should be a machine that can reflect on its actions and learn from past mistakes.

It has not addressed the truly difficult questions that continue to prevent us from achieving humanlike AI. As a result the public believes that “strong AI,” machines that think like humans, is just around the corner or maybe even here already. In reality, nothing could be farther from the truth. I fully agree with Gary Marcus, a neuroscientist at New York University, who recently wrote in the New York Times that the field of artificial intelligence is “bursting with microdiscoveries”—the sort of things that make good press releases—but machines are still disappointingly far from humanlike cognition. My colleague in computer science at the University of California, Los Angeles, Adnan Darwiche, has titled a position paper “Human-Level Intelligence or Animal-Like Abilities?” which I think frames the question in just the right way. The goal of strong AI is to produce machines with humanlike intelligence, able to converse with and guide humans.

We watch what happens, repeat the process, and keep a record of how good our intention generator is. Finally, when we start to adjust our own software, that is when we begin to take moral responsibility for our actions. This responsibility may be an illusion at the level of neural activation but not at the level of self-awareness software. Encouraged by these possibilities, I believe that strong AI with causal understanding and agency capabilities is a realizable promise, and this raises the question that science fiction writers have been asking since the 1950s: Should we be worried? Is strong AI a Pandora’s box that we should not open? Recently public figures like Elon Musk and Stephen Hawking have gone on record saying that we should be worried. On Twitter, Musk said that AIs were “potentially more dangerous than nukes.” In 2015, John Brockman’s website Edge.org posed as its annual question, “What do you think about machines that think?”


pages: 261 words: 10,785

The Lights in the Tunnel by Martin Ford

"Robert Solow", Albert Einstein, Bill Joy: nanobots, Black-Scholes formula, business cycle, call centre, cloud computing, collateralized debt obligation, commoditize, creative destruction, credit crunch, double helix, en.wikipedia.org, factory automation, full employment, income inequality, index card, industrial robot, inventory management, invisible hand, Isaac Newton, job automation, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, knowledge worker, low skilled workers, mass immigration, Mitch Kapor, moral hazard, pattern recognition, prediction markets, Productivity paradox, Ray Kurzweil, Search for Extraterrestrial Intelligence, Silicon Valley, Stephen Hawking, strong AI, technological singularity, Thomas L Friedman, Turing test, Vernor Vinge, War on Poverty

While narrow AI is increasingly deployed to solve real world problems and attracts most of the current commercial interest, the Holy Grail of artificial intelligence is, of course, strong AI—the construction of a truly intelligent machine. The realization of strong AI would mean the existence of a machine that is genuinely competitive with, or perhaps even superior to, a human being in its ability to reason and conceive ideas. The arguments I have made in this book do not depend on strong AI, but it is worth noting that if truly intelligent machines were built and became affordable, the trends I have predicted here would likely be amplified, and the economic impact would certainly be dramatic and might unfold in an accelerating fashion. Research into strong AI has suffered because of some overly optimistic predictions and expectations back in the 1980s—long before computer hardware was fast enough to make true machine intelligence feasible.

Research into strong AI has suffered because of some overly optimistic predictions and expectations back in the 1980s—long before computer hardware was fast enough to make true machine intelligence feasible. When reality fell far short of the projections, focus and financial backing shifted away from research into strong AI. Nonetheless, there is evidence that the vastly superior performance and affordability of today’s processors is helping to revitalize the field. Research into strong AI can be roughly divided into two main approaches. The direct computational approach attempts to extend traditional, algorithmic computing into the realm of true intelligence. This involves the development of sophisticated software applications that exhibit general reasoning. A second approach begins by attempting to understand and then simulate the human brain. The Blue Brain Project,56 a collaboration between Switzerland’s EPFL (one of Europe’s top technical universities) and IBM, is one such effort to simulate the workings of the brain.

The Blue Brain Project,56 a collaboration between Switzerland’s EPFL (one of Europe’s top technical universities) and IBM, is one such effort to simulate the workings of the brain. Once researchers gain an understanding of the basic operating principles of the brain, it may be possible to build an artificial intelligence based on that framework. This would not be an exact replication of a human brain; instead, it would be something completely new, but based on a similar architecture. When might strong AI become reality—if ever? I suspect that if you were to survey the top experts working in the field, you would get a fairly wide range of estimates. Optimists might say it will happen within the next 20 to 30 years. A more cautious group would place it 50 or more years in the future, and some might argue that it will never happen. True machine intelligence is an idea that, in many ways, intrudes into the realm of philosophy, and for some people, perhaps even religion.


pages: 797 words: 227,399

Wired for War: The Robotics Revolution and Conflict in the 21st Century by P. W. Singer

agricultural Revolution, Albert Einstein, Any sufficiently advanced technology is indistinguishable from magic, Atahualpa, barriers to entry, Berlin Wall, Bill Joy: nanobots, blue-collar work, borderless world, Charles Lindbergh, clean water, Craig Reynolds: boids flock, cuban missile crisis, digital map, en.wikipedia.org, Ernest Rutherford, failed state, Fall of the Berlin Wall, Firefox, Francisco Pizarro, Frank Gehry, friendly fire, game design, George Gilder, Google Earth, Grace Hopper, I think there is a world market for maybe five computers, if you build it, they will come, illegal immigration, industrial robot, interchangeable parts, Intergovernmental Panel on Climate Change (IPCC), invention of gunpowder, invention of movable type, invention of the steam engine, Isaac Newton, Jacques de Vaucanson, job automation, Johann Wolfgang von Goethe, Law of Accelerating Returns, Mars Rover, Menlo Park, New Urbanism, pattern recognition, private military company, RAND corporation, Ray Kurzweil, RFID, robot derives from the Czech word robota, meaning slave, Rodney Brooks, Ronald Reagan, Schrödinger's Cat, Silicon Valley, social intelligence, speech recognition, Stephen Hawking, strong AI, technological singularity, The Coming Technological Singularity, The Wisdom of Crowds, Turing test, Vernor Vinge, Wall-E, Yogi Berra

A machine takeover is generally imagined as following a path of evolution to revolution. Computers eventually develop to the equivalent of human intelligence (“strong AI”) and then rapidly push past any attempts at human control. Ray Kurzweil explains how this would work. “As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI, but takes less time than the cycle before it as is the nature of technological evolution. The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating super-intelligence.” Or as the AI Agent Smith says to his human adversary in The Matrix, “Evolution, Morpheus, evolution, like the dinosaur.”
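Kurzweil's claim that each cycle "takes less time than the cycle before it" is, at bottom, a geometric series: if every cycle takes a fixed fraction of the previous one, infinitely many cycles fit inside a finite time horizon while capability grows without bound. A minimal toy model (the parameter names and values are illustrative assumptions, not figures from the text):

```python
# Toy model of the "runaway AI" cycle quoted above: each self-improvement
# cycle multiplies capability by `gain` and takes `shrink` times as long as
# the previous cycle, so total elapsed time converges to t0 / (1 - shrink)
# even as capability diverges.

def runaway(t0=12.0, shrink=0.5, gain=2.0, cycles=20):
    """Return (elapsed_time, capability) after `cycles` improvement cycles."""
    elapsed, capability, step = 0.0, 1.0, t0
    for _ in range(cycles):
        elapsed += step       # this cycle's duration
        capability *= gain    # this cycle's capability gain
        step *= shrink        # the next cycle is faster
    return elapsed, capability

elapsed, capability = runaway()
# With t0=12 and shrink=0.5, elapsed time approaches the geometric limit
# 12 / (1 - 0.5) = 24, while capability reaches 2**20 after 20 cycles.
```

This is only a sketch of the argument's arithmetic: the "singularity" intuition rests on the shrink factor staying below 1 forever, which is precisely the premise critics dispute.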

Despite all the robots having the same initial software, the researchers are seeing the emergence of “good” robots that cooperate and “bad” robots that constantly attack each other. There was even one robot that became the equivalent of artificially stupid or suicidal, that is, a robot that evolved to constantly make the worst possible decision. This idea of robots one day being able to problem-solve, create, and even develop personalities past what their human designers intended is what some call “strong AI.” That is, the computer might learn so much that, at a certain point, it is not just mimicking human capabilities but has finally equaled, and even surpassed, its creators’ human intelligence. This is the essence of the so-called Turing test. Alan Turing was one of the pioneers of AI, who worked on the early computers like Colossus that helped crack the German codes during World War II. His test is now encapsulated in a real-world prize that will go to the first designer of a computer intelligent enough to trick human experts into thinking that it is human.

Wireless capacity doubles every nine months. Optical capacity doubles every twelve months. The cost/performance ratio of Internet service providers is doubling every twelve months. Internet bandwidth backbone is doubling roughly every twelve months. The number of human genes mapped per year doubles every eighteen months. The resolution of brain scans (a key to understanding how the brain works, an important part of creating strong AI) doubles every twelve months. And, as a by-product, the number of personal and service robots has so far doubled every nine months. The darker side of these trends has been exponential change in our capability not merely to create, but also to destroy. The modern-day bomber jet has roughly half a million times the killing capacity of the Roman legionnaire carrying a sword in hand. Even within the twentieth century, the range and effectiveness of artillery fire increased by a factor of twenty, antitank fire by a factor of sixty.
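The doubling periods listed above can all be put on a common footing as annual growth factors, since a quantity that doubles every m months grows by 2^(12/m) per year. A small sketch of that conversion (the periods come from the passage; the helper name is mine):

```python
# Convert the doubling periods quoted above into annual growth factors.
# A quantity that doubles every `doubling_months` months grows by
# 2 ** (12 / doubling_months) per year.

def annual_growth(doubling_months):
    return 2.0 ** (12.0 / doubling_months)

trends = {
    "wireless capacity": 9,        # doubles every nine months
    "optical capacity": 12,        # doubles every twelve months
    "genes mapped per year": 18,   # doubles every eighteen months
}

for name, months in trends.items():
    print(f"{name}: x{annual_growth(months):.2f} per year")
```

So a nine-month doubling time corresponds to roughly 2.5x per year, which is what makes the compounding in these trend lists feel so steep.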


pages: 185 words: 43,609

Zero to One: Notes on Startups, or How to Build the Future by Peter Thiel, Blake Masters

Airbnb, Albert Einstein, Andrew Wiles, Andy Kessler, Berlin Wall, cleantech, cloud computing, crony capitalism, discounted cash flows, diversified portfolio, don't be evil, Elon Musk, eurozone crisis, income inequality, Jeff Bezos, Lean Startup, life extension, lone genius, Long Term Capital Management, Lyft, Marc Andreessen, Mark Zuckerberg, minimum viable product, Nate Silver, Network effects, new economy, paypal mafia, Peter Thiel, pets.com, profit motive, Ralph Waldo Emerson, Ray Kurzweil, self-driving car, shareholder value, Silicon Valley, Silicon Valley startup, Singularitarianism, software is eating the world, Steve Jobs, strong AI, Ted Kaczynski, Tesla Model S, uber lyft, Vilfredo Pareto, working poor

It’s become conventional to see ever-smarter anthropomorphized robot intelligences like Siri and Watson as harbingers of things to come; once computers can answer all our questions, perhaps they’ll ask why they should remain subservient to us at all. The logical endpoint to this substitutionist thinking is called “strong AI”: computers that eclipse humans on every important dimension. Of course, the Luddites are terrified by the possibility. It even makes the futurists a little uneasy; it’s not clear whether strong AI would save humanity or doom it. Technology is supposed to increase our mastery over nature and reduce the role of chance in our lives; building smarter-than-human computers could actually bring chance back with a vengeance. Strong AI is like a cosmic lottery ticket: if we win, we get utopia; if we lose, Skynet substitutes us out of existence. But even if strong AI is a real possibility rather than an imponderable mystery, it won’t happen anytime soon: replacement by computers is a worry for the 22nd century.

Kaczynski, Ted Karim, Jawed Karp, Alex, 11.1, 12.1 Kasparov, Garry Katrina, Hurricane Kennedy, Anthony Kesey, Ken Kessler, Andy Kurzweil, Ray last mover, 11.1, 13.1 last mover advantage lean startup, 2.1, 6.1, 6.2 Levchin, Max, 4.1, 10.1, 12.1, 14.1 Levie, Aaron lifespan life tables LinkedIn, 5.1, 10.1, 12.1 Loiseau, Bernard Long-Term Capital Management (LTCM) Lord of the Rings (Tolkien) luck, 6.1, 6.2, 6.3, 6.4 Lucretius Lyft MacBook machine learning Madison, James Madrigal, Alexis Manhattan Project Manson, Charles manufacturing marginal cost marketing Marx, Karl, 4.1, 6.1, 6.2, 6.3 Masters, Blake, prf.1, 11.1 Mayer, Marissa Medicare Mercedes-Benz MiaSolé, 13.1, 13.2 Michelin Microsoft, 3.1, 3.2, 3.3, 4.1, 5.1, 14.1 mobile computing mobile credit card readers Mogadishu monopoly, monopolies, 3.1, 3.2, 3.3, 5.1, 7.1, 8.1 building of characteristics of in cleantech creative dynamism of new lies of profits of progress and sales and of Tesla Morrison, Jim Mosaic browser music recording industry Musk, Elon, 4.1, 6.1, 11.1, 13.1, 13.2, 13.3 Napster, 5.1, 14.1 NASA, 6.1, 11.1 NASDAQ, 2.1, 13.1 National Security Agency (NSA) natural gas natural secrets Navigator browser Netflix Netscape NetSecure network effects, 5.1, 5.2 New Economy, 2.1, 2.2 New York Times, 13.1, 14.1 New York Times Nietzsche, Friedrich Nokia nonprofits, 13.1, 13.2 Nosek, Luke, 9.1, 14.1 Nozick, Robert nutrition Oedipus, 14.1, 14.2 OfficeJet OmniBook online pet store market Oracle Outliers (Gladwell) ownership Packard, Dave Page, Larry Palantir, prf.1, 7.1, 10.1, 11.1, 12.1 PalmPilots, 2.1, 5.1, 11.1 Pan, Yu Panama Canal Pareto, Vilfredo Pareto principle Parker, Sean, 5.1, 14.1 Part-time employees patents path dependence PayPal, prf.1, 2.1, 3.1, 4.1, 4.2, 4.3, 5.1, 5.2, 5.3, 8.1, 9.1, 9.2, 10.1, 10.2, 10.3, 10.4, 11.1, 11.2, 12.1, 12.2, 14.1 founders of, 14.1 future cash flows of investors in “PayPal Mafia” PCs Pearce, Dave penicillin perfect competition, 3.1, 3.2 equilibrium of Perkins, Tom perk war 
Perot, Ross, 2.1, 12.1, 12.2 pessimism Petopia.com Pets.com, 4.1, 4.2 PetStore.com pharmaceutical companies philanthropy philosophy, indefinite physics planning, 2.1, 6.1, 6.2 progress without Plato politics, 6.1, 11.1 indefinite polling pollsters pollution portfolio, diversified possession power law, 7.1, 7.2, 7.3 of distribution of venture capital Power Sellers (eBay) Presley, Elvis Priceline.com Prince Procter & Gamble profits, 2.1, 3.1, 3.2, 3.3 progress, 6.1, 6.2 future of without planning proprietary technology, 5.1, 5.2, 13.1 public opinion public relations Pythagoras Q-Cells Rand, Ayn Rawls, John, 6.1, 6.2 Reber, John recession, of mid-1990 recruiting, 10.1, 12.1 recurrent collapse, bm1.1, bm1.2 renewable energy industrial index research and development resources, 12.1, bm1.1 restaurants, 3.1, 3.2, 5.1 risk risk aversion Romeo and Juliet (Shakespeare) Romulus and Remus Roosevelt, Theodore Royal Society Russia Sacks, David sales, 2.1, 11.1, 13.1 complex as hidden to non-customers personal Sandberg, Sheryl San Francisco Bay Area savings scale, economies of Scalia, Antonin scaling up scapegoats Schmidt, Eric search engines, prf.1, 3.1, 5.1 secrets, 8.1, 13.1 about people case for finding of looking for using self-driving cars service businesses service economy Shakespeare, William, 4.1, 7.1 Shark Tank Sharma, Suvi Shatner, William Siebel, Tom Siebel Systems Silicon Valley, 1.1, 2.1, 2.2, 2.3, 5.1, 5.2, 6.1, 7.1, 10.1, 11.1 Silver, Nate Simmons, Russel, 10.1, 14.1 singularity smartphones, 1.1, 12.1 social entrepreneurship Social Network, The social networks, prf.1, 5.1 Social Security software engineers software startups, 5.1, 6.1 solar energy, 13.1, 13.2, 13.3, 13.4 Solaria Solyndra, 13.1, 13.2, 13.3, 13.4, 13.5 South Korea space shuttle SpaceX, prf.1, 10.1, 11.1 Spears, Britney SpectraWatt, 13.1, 13.2 Spencer, Herbert, 6.1, 6.2 Square, 4.1, 6.1 Stanford Sleep Clinic startups, prf.1, 1.1, 5.1, 6.1, 6.2, 7.1 assigning responsibilities in cash flow at as cults 
disruption by during dot-com mania economies of scale and foundations of founder’s paradox in lessons of dot-com mania for power law in public relations in sales and staff of target market for uniform of venture capital and steam engine Stoppelman, Jeremy string theory strong AI substitution, complementarity vs. Suez Canal tablet computing technological advance technology, prf.1, 1.1, 1.2, 2.1, 2.2, 2.3 American fear of complementarity and globalization and proprietary technology companies terrorism Tesla Motors, 10.1, 13.1, 13.2 Thailand Theory of Justice, A (Rawls) Timberlake, Justin Time magazine Tolkien, J.R.R. Tolstoy, Leo Tom Sawyer (char.) Toyota Tumblr 27 Club Twitter, 5.1, 6.1 Uber Unabomber VCs, rules of “veil of ignorance” venture capital power law in venture fund, J-curve of successful, 7.1 vertical progress viral marketing Virgin Atlantic Airways Virgin Group Virgin Records Wagner Wall Street Journal Warby Parker Watson web browsers Western Union White, Phil Wiles, Andrew Wilson, Andrew Winehouse, Amy World Wide Web Xanadu X.com Yahoo!


pages: 268 words: 109,447

The Cultural Logic of Computation by David Golumbia

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, American ideology, Benoit Mandelbrot, borderless world, business process, cellular automata, citizen journalism, Claude Shannon: information theory, computer age, corporate governance, creative destruction, en.wikipedia.org, finite state, future of work, Google Earth, Howard Zinn, IBM and the Holocaust, iterative process, Jaron Lanier, jimmy wales, John von Neumann, Joseph Schumpeter, late capitalism, means of production, natural language processing, Norbert Wiener, packet switching, RAND corporation, Ray Kurzweil, RFID, Richard Stallman, semantic web, Shoshana Zuboff, Slavoj Žižek, social web, stem cell, Stephen Hawking, Steve Ballmer, Stewart Brand, strong AI, supply-chain management, supply-chain management software, Ted Nelson, telemarketer, The Wisdom of Crowds, theory of mind, Turing machine, Turing test, Vannevar Bush, web application

But when we look closer we see something much more reflective of our world and its political history in this technology than we might think at first. The “strong AI” movements of the late 1960s and 1970s, for example, represent and even implement powerful gender ideologies (Adam 1998). In what turns out in retrospect to be a field of study devoted to a mistaken metaphor (see especially Dreyfus 1992), according to which the brain primarily computes like any other Turing machine, we see advocates unusually invested in the idea that they might be creating something like life—in resuscitating the Frankenstein story that may, in fact, be inapplicable to the world of computing itself. Adam reminds us of the degree to which both Cyc and Soar, the two most fully articulated of the strong AI projects, were reliant on the particular rationalist models of human cognition found in any number of conservative intellectual traditions.

Perhaps because language per se is a much more objective part of the social world than is the abstraction called “thinking,” however, the history of computational linguistics reveals a particular dynamism with regard to the data it takes as its object— exaggerated claims, that is, are frequently met with material tests that confirm or disconfirm theses. Accordingly, CL can claim more practical successes than can the program of Strong AI, but at the same time demonstrates with particular clarity where ideology meets material constraints. Computers invite us to view languages on their terms: on the terms by which computers use formal systems that we have recently decided to call languages—that is, programming languages. But these closed systems, subject to univocal, correct, “activating” interpretations, look little like human language practices, which seem not just to allow but to thrive on ambiguity, context, and polysemy.

(Even in Turing’s original statement of the Test, the interlocutors are supposed to be passing dialogue back and forth in written form, because Turing sees the obvious inability of machines to adequately mimic human speech as a separate question from whether computers can process language.) By focusing on written exemplars, CL and NLP have pursued a program that has much in common with the “Strong AI” programs of the 1960s and 1970s that Hubert Dreyfus (1992), John Haugeland (1985), John Searle (1984, 1992), and others have so effectively critiqued. This program has two distinct aspects, which although they are joined intellectually, are often pursued with apparent independence from each other—yet at the same time, the mere presence of the phrase “computational linguistics” in a title is often not at all enough to distinguish which program the researcher has in mind.


pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

Ada Lovelace, AI winter, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, artificial general intelligence, autonomous vehicles, Bernie Sanders, Claude Shannon: information theory, cognitive dissonance, computer age, computer vision, dark matter, Douglas Hofstadter, Elon Musk, en.wikipedia.org, Gödel, Escher, Bach, I think there is a world market for maybe five computers, ImageNet competition, Jaron Lanier, job automation, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, license plate recognition, Mark Zuckerberg, natural language processing, Norbert Wiener, ought to be enough for anybody, pattern recognition, performance metric, RAND corporation, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Rodney Brooks, self-driving car, sentiment analysis, Silicon Valley, Singularitarianism, Skype, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, theory of mind, There's no reason for any individual to have a computer in his home - Ken Olsen, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!

While many people today use the phrase strong AI to mean “AI that can perform most tasks as well as a human” and weak AI to mean the kind of narrow AI that currently exists, Searle meant something different by these terms. For Searle, the strong AI claim would be that “the appropriately programmed digital computer does not just simulate having a mind; it literally has a mind.”13 In contrast, in Searle’s terminology, weak AI views computers as tools to simulate human intelligence and does not make any claims about them “literally” having a mind.14 We’re back to the philosophical question I was discussing with my mother: Is there a difference between “simulating a mind” and “literally having a mind”? Like my mother, Searle believes there is a fundamental difference, and he argued that strong AI is impossible even in principle.15 The Turing Test Searle’s article was spurred in part by Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” which had proposed a way to cut through the Gordian knot of “simulated” versus “actual” intelligence.

The entrepreneur and activist Mitchell Kapor advised, “Human intelligence is a marvelous, subtle, and poorly understood phenomenon. There is no danger of duplicating it anytime soon.”10 The roboticist (and former director of MIT’s AI Lab) Rodney Brooks agreed, stating that we “grossly overestimate the capabilities of machines—those of today and of the next few decades.”11 The psychologist and AI researcher Gary Marcus went so far as to assert that in the quest to create “strong AI”—that is, general human-level AI—“there has been almost no progress.”12 I could go on and on with dueling quotations. In short, what I found is that the field of AI is in turmoil. Either a huge amount of progress has been made, or almost none at all. Either we are within spitting distance of “true” AI, or it is centuries away. AI will solve all our problems, put us all out of a job, destroy the human race, or cheapen our humanity.

Pinker, “Thinking Does Not Imply Subjugating,” in What to Think About Machines That Think, ed. J. Brockman (New York: Harper Perennial, 2015), 5–8. 11.  A. M. Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433–60. 12.  J. R. Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3, no. 3 (1980): 417–24. 13.  J. R. Searle, Mind: A Brief Introduction (Oxford: Oxford University Press, 2004), 66. 14.  The terms strong AI and weak AI have also been used to mean something more like general AI and narrow AI. This is how Ray Kurzweil uses them, but this differs from Searle’s original meaning. 15.  Searle’s article is reprinted in D. R. Hofstadter and D. C. Dennett, The Mind’s I: Fantasies and Reflections on Self and Soul (New York: Basic Books, 1981), along with a cogent counterargument from Hofstadter. 16.  S.


pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI by John Brockman

AI winter, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, artificial general intelligence, Asilomar, autonomous vehicles, basic income, Benoit Mandelbrot, Bill Joy: nanobots, Buckminster Fuller, cellular automata, Claude Shannon: information theory, Daniel Kahneman / Amos Tversky, Danny Hillis, David Graeber, easy for humans, difficult for computers, Elon Musk, Eratosthenes, Ernest Rutherford, finite state, friendly AI, future of work, Geoffrey West, Santa Fe Institute, gig economy, income inequality, industrial robot, information retrieval, invention of writing, James Watt: steam engine, Johannes Kepler, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, Kickstarter, Laplace demon, Loebner Prize, market fundamentalism, Marshall McLuhan, Menlo Park, Norbert Wiener, optical character recognition, pattern recognition, personalized medicine, Picturephone, profit maximization, profit motive, RAND corporation, random walk, Ray Kurzweil, Richard Feynman, Rodney Brooks, self-driving car, sexual politics, Silicon Valley, Skype, social graph, speech recognition, statistical model, Stephen Hawking, Steven Pinker, Stewart Brand, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, telemarketer, telerobotics, the scientific method, theory of mind, Turing machine, Turing test, universal basic income, Upton Sinclair, Von Neumann architecture, Whole Earth Catalog, Y2K, zero-sum game

He wanted to argue, with John Searle and Roger Penrose, that “Strong AI” is impossible, but there are no good arguments for that conclusion. After all, everything we now know suggests that, as I have put it, we are robots made of robots made of robots . . . down to the motor proteins and their ilk, with no magical ingredients thrown in along the way. Weizenbaum’s more important and defensible message was that we should not strive to create Strong AI and should be extremely cautious about the AI systems that we can create and have already created. As one might expect, the defensible thesis is a hybrid: AI (Strong AI) is possible in principle but not desirable. The AI that’s practically possible is not necessarily evil—unless it is mistaken for Strong AI! The gap between today’s systems and the science-fictional systems dominating the popular imagination is still huge, though many folks, both lay and expert, manage to underestimate it.

In this context, we’ve discovered that some basic barriers exist, and that unless they are breached we won’t get a real human kind of intelligence no matter what we do. I believe that charting these barriers may be no less important than banging our heads against them. Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points. Such systems cannot reason about “What if?” questions and, therefore, cannot serve as the basis for Strong AI—that is, artificial intelligence that emulates human-level reasoning and competence. To achieve human-level intelligence, learning machines need the guidance of a blueprint of reality, a model—similar to a road map that guides us in driving through an unfamiliar city. To be more specific, current learning machines improve their performance by optimizing parameters for a stream of sensory inputs received from the environment.
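Pearl’s “model-blind” point can be made concrete with a toy sketch (hypothetical code, not from the book; all variable names and numbers are invented). A confounder Z drives both X and Y, while X has no causal effect on Y at all. A least-squares fit to the observed data cloud finds a strong X–Y association, but only the structural model behind the data can answer the “What if we set X ourselves?” question:

```python
import random

# Toy structural model: Z confounds X and Y; X never enters the equation for Y.
random.seed(0)

xs, ys = [], []
for _ in range(10_000):
    z = random.gauss(0, 1)
    xs.append(z + random.gauss(0, 0.1))  # X := Z + noise
    ys.append(z + random.gauss(0, 0.1))  # Y := Z + noise (no X anywhere)

# "Model-blind" mode: least-squares fit of y = a*x + b to the data cloud.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(f"fitted slope: {a:.2f}")  # strong association: slope close to 1

# Causal mode: the structural equations answer the interventional question.
# Intervening, do(X=x0), cuts the Z -> X arrow; Y still depends only on Z,
# so x0 is deliberately ignored below -- that is the whole point.
def expected_y_do_x(x0, trials=10_000):
    return sum(random.gauss(0, 1) + random.gauss(0, 0.1)
               for _ in range(trials)) / trials

print(f"E[Y | do(X=2)] ~ {expected_y_do_x(2.0):.2f}")  # near 0, not near 2
```

The regression, however well it fits, predicts that forcing X to 2 moves Y to about 2; the model says Y stays near 0. No amount of curve fitting on the same observational data can tell the two apart, which is the sense in which such systems cannot reason about “What if?”.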

Yet science favored the creative-speculative strategy of the Greek astronomers, which was wild with metaphorical imagery: circular tubes full of fire, small holes through which celestial fire was visible as stars, and a hemispherical Earth riding on a turtle’s back. It was this wild modeling strategy, not Babylonian extrapolation, that jolted Eratosthenes (276–194 BC) to perform one of the most creative experiments in the ancient world and calculate the circumference of the Earth. Such an experiment would never have occurred to a Babylonian data fitter. Model-blind approaches impose intrinsic limitations on the cognitive tasks that Strong AI can perform. My general conclusion is that human-level AI cannot emerge solely from model-blind learning machines; it requires the symbiotic collaboration of data and models. Data science is a science only to the extent that it facilitates the interpretation of data—a two-body problem, connecting data to reality. Data alone are hardly a science, no matter how “big” they get and how skillfully they are manipulated.


pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders by Mariya Yao, Adelyn Zhou, Marlene Jia

Airbnb, Amazon Web Services, artificial general intelligence, autonomous vehicles, business intelligence, business process, call centre, chief data officer, computer vision, conceptual framework, en.wikipedia.org, future of work, industrial robot, Internet of things, iterative process, Jeff Bezos, job automation, Marc Andreessen, natural language processing, new economy, pattern recognition, performance metric, price discrimination, randomized controlled trial, recommendation engine, self-driving car, sentiment analysis, Silicon Valley, skunkworks, software is eating the world, source of truth, speech recognition, statistical model, strong AI, technological singularity

AGI

Artificial intelligence, also known as AI, has been misused in pop culture to describe almost any kind of computerized analysis or automation. To avoid confusion, technical experts in the field of AI prefer to use the term Artificial General Intelligence (AGI) to refer to machines with human-level or higher intelligence, capable of abstracting concepts from limited experience and transferring knowledge between domains. AGI is also called “Strong AI” to differentiate it from “Weak AI” or “Narrow AI,” which refers to systems designed for one specific task and whose capabilities are not easily transferable to other systems. We go into more detail about the distinction between AI and AGI in our Machine Intelligence Continuum in Chapter 2. Though Deep Blue, which beat the world champion in chess in 1997, and AlphaGo, which did the same for the game of Go in 2016, have achieved impressive results, all of the AI systems we have today are “Weak AI.”

To this end, you can partner with universities or research departments and sponsor conferences to build your brand reputation. You can also host competitions on Kaggle or similar platforms. Provide a problem, a dataset, and a prize purse to attract competitors. This is a good way to get international talent to work on your problem and will also build your reputation as a company that supports AI. As with any industry, like attracts like. Dominant tech companies build strong AI departments by hiring superstar leaders. Google and Facebook attracted university professors and AI research pioneers such as Geoffrey Hinton, Fei-Fei Li, and Yann LeCun with plum appointments and endless resources. These professors either take a sabbatical from their universities or split their time between academia and industry. Effective Alternatives to Hiring Despite your best efforts, hiring new AI talent may prove to be slow or impossible.

To meet business needs in the short-term, consider evaluating third-party solutions built by vendors who specialize in applying AI to enterprise functions.(58) Both startups and established enterprise vendors offer solutions to address common pain points for all departments, including sales and marketing, finance, operations and back-office, customer support, and even HR and recruiting. Emphasize Your Company’s Unique Advantages At the end of an interview cycle, a strong AI candidate will have multiple offers in hand. In order to close the candidate, you’ll need to differentiate your company from others. In addition to compensation, culture, and other general fit criteria, AI talent tends to evaluate offers on the following areas: Availability of Data Candidates want to be able to train their models with as much data as possible. The data should go back many years, if possible, and be real rather than inferred data.


pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence by Calum Chace

"Robert Solow", 3D printing, Ada Lovelace, AI winter, Airbnb, artificial general intelligence, augmented reality, barriers to entry, basic income, bitcoin, blockchain, brain emulation, Buckminster Fuller, cloud computing, computer age, computer vision, correlation does not imply causation, credit crunch, cryptocurrency, cuban missile crisis, dematerialisation, discovery of the americas, disintermediation, don't be evil, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, everywhere but in the productivity statistics, Flash crash, friendly AI, Google Glasses, hedonic treadmill, industrial robot, Internet of things, invention of agriculture, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, life extension, low skilled workers, Mahatma Gandhi, means of production, mutually assured destruction, Nicholas Carr, pattern recognition, peer-to-peer, peer-to-peer model, Peter Thiel, Ray Kurzweil, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, Silicon Valley ideology, Skype, South Sea Bubble, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Jobs, strong AI, technological singularity, The Future of Employment, theory of mind, Turing machine, Turing test, universal basic income, Vernor Vinge, wage slave, Wall-E, zero-sum game

The down-to-earth clarity of Chace’s style will help take humanity into what could be a very violent, “Transcendence” movie-like, real-life phase four. If you want to survive this coming fourth phase in the next few decades and prepare for it, you cannot afford NOT to read Chace’s book. Prof. Dr. Hugo de Garis, author of The Artilect War, former director of the Artificial Brain Lab, Xiamen University, China. Advances in AI are set to affect progress in all other areas in the coming decades. If this momentum leads to the achievement of strong AI within the century, then in the words of one field leader it would be “the biggest event in human history”. Now is therefore a perfect time for the thoughtful discussion of challenges and opportunities that Chace provides. Surviving AI is an exceptionally clear, well-researched and balanced introduction to a complex and controversial topic, and is a compelling read to boot. Seán Ó hÉigeartaigh, executive director, Cambridge Centre for the Study of Existential Risk CALUM writes fiction and non-fiction, primarily on the subject of artificial intelligence.

Whether intelligence resides in the machine or in the software is analogous to the question of whether it resides in the neurons in your brain or in the electrochemical signals that they transmit and receive. Fortunately we don’t need to answer that question here. ANI and AGI We do need to discriminate between two very different types of artificial intelligence: artificial narrow intelligence (ANI) and artificial general intelligence (AGI (4)), which are also known as weak AI and strong AI, and as ordinary AI and full AI. The easiest way to do this is to say that artificial general intelligence, or AGI, is an AI which can carry out any cognitive function that a human can. We have long had computers which can add up much better than any human, and computers which can play chess better than the best human chess grandmaster. However, no computer can yet beat humans at every intellectual endeavour.

But it is daft to dismiss as failures today’s best pattern recognition systems, self-driving cars, and machines which can beat any human at many games of skill. Informed scepticism about near-term AGI We should take more seriously the arguments of very experienced AI researchers who claim that although the AGI undertaking is possible, it won’t be achieved for a very long time. Rodney Brooks, a veteran AI researcher and robot builder, says “I think it is a mistake to be worrying about us developing [strong] AI any time in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.” Andrew Ng at Baidu and Yann LeCun at Facebook are of a similar mind, as we saw in the last chapter. Less sceptical experts However there are also plenty of veteran AI researchers who think AGI may arrive soon.


pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006 by Ben Goertzel, Pei Wang

AI winter, artificial general intelligence, bioinformatics, brain emulation, combinatorial explosion, complexity theory, computer vision, conceptual framework, correlation coefficient, epigenetics, friendly AI, G4S, information retrieval, Isaac Newton, John Conway, Loebner Prize, Menlo Park, natural language processing, Occam's razor, p-value, pattern recognition, performance metric, Ray Kurzweil, Rodney Brooks, semantic web, statistical model, strong AI, theory of mind, traveling salesman, Turing machine, Turing test, Von Neumann architecture, Y2K

Introduction

Early AI researchers aimed at what was later called “strong AI,” the simulation of human-level intelligence. One of AI’s founders, Herbert Simon, claimed (circa 1957) that “… there are now in the world machines that think, that learn and that create.” He went on to predict that within 10 years a computer would beat a grandmaster at chess, would prove an “important new mathematical theorem,” and would write music of “considerable aesthetic value.” Science fiction writer Arthur C. Clarke predicted that “[AI] technology will become sufficiently advanced that it will be indistinguishable from magic” [1]. AI research had as its goal the simulation of human-like intelligence. Within a decade or so, it became abundantly clear that the problems AI had to overcome for this “strong AI” to become a reality were immense, perhaps intractable.

The next major step in this direction was the May 2006 AGIRI Workshop, of which this volume is essentially a proceedings. The term AGI, artificial general intelligence, was introduced as a modern successor to the earlier strong AI.

Artificial General Intelligence

What is artificial general intelligence? The AGIRI website lists several features, describing machines
• with human-level, and even superhuman, intelligence.
• that generalize their knowledge across different domains.
• that reflect on themselves.
• and that create fundamental innovations and insights.
Even strong AI never aimed at an intelligence this capable and this general. Can there be such an artificial general intelligence? I think there can be, but that it can’t be done with a brain in a vat, with humans providing input and utilizing computational output.

Machine learning algorithms may be applied quite broadly in a variety of contexts, but the breadth and generality in this case is supplied largely by the human user of the algorithm; any particular machine learning program, considered as a holistic system taking in inputs and producing outputs without detailed human intervention, can solve only problems of a very specialized sort. Specified in this way, what we call AGI is similar to some other terms that have been used by other authors, such as “strong AI” [7], “human-level AI” [8], “true synthetic intelligence” [9], “general intelligent system” [10], and even “thinking machine” [11]. Though no term is perfect, we chose to use “AGI” because it correctly stresses the general nature of the research goal and scope, without committing too much to any theory or technique. We will also refer in this chapter to “AGI projects.” We use this term to refer to an AI research project that satisfies all the following criteria: 1.


pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

AI winter, AltaVista, Amazon Web Services, artificial general intelligence, Asilomar, Automated Insights, Bayesian statistics, Bernie Madoff, Bill Joy: nanobots, brain emulation, cellular automata, Chuck Templeton: OpenTable:, cloud computing, cognitive bias, commoditize, computer vision, cuban missile crisis, Daniel Kahneman / Amos Tversky, Danny Hillis, data acquisition, don't be evil, drone strike, Extropian, finite state, Flash crash, friendly AI, friendly fire, Google Glasses, Google X / Alphabet X, Isaac Newton, Jaron Lanier, John Markoff, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, Loebner Prize, lone genius, mutually assured destruction, natural language processing, Nicholas Carr, optical character recognition, PageRank, pattern recognition, Peter Thiel, prisoner's dilemma, Ray Kurzweil, Rodney Brooks, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, smart grid, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, Stuxnet, superintelligent machines, technological singularity, The Coming Technological Singularity, Thomas Bayes, traveling salesman, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, zero day

“Chapter eight is the deeply intertwined promise and peril in GNR [genetics, nanotechnology, and robotics] and I go into pretty graphic detail on the downsides of those three areas of technology. And the downside of robotics, which really refers to AI, is the most profound because intelligence is the most important phenomenon in the world. Inherently there is no absolute protection against strong AI.” Kurzweil’s book does underline the dangers of genetic engineering and nanotechnology, but it gives only a couple of anemic pages to strong AI, the old name for AGI. And in that chapter he also argues that relinquishment, or turning our backs on some technologies because they’re too dangerous, as advocated by Bill Joy and others, isn’t just a bad idea, but an immoral one. I agree relinquishment is unworkable. But immoral? “Relinquishment is immoral because it would deprive us of profound benefits.

* * * So far we’ve explored three drives that Omohundro argues will motivate self-aware, self-improving systems: efficiency, self-protection, and resource acquisition. We’ve seen how all of these drives will lead to very bad outcomes without extremely careful planning and programming. And we’re compelled to ask ourselves, are we capable of such careful work? Do you, like me, look around the world at expensive and lethal accidents and wonder how we’ll get it right the first time with very strong AI? Three-Mile Island, Chernobyl, Fukushima—in these nuclear power plant catastrophes, weren’t highly qualified designers and administrators trying their best to avoid the disasters that befell them? The 1986 Chernobyl meltdown occurred during a safety test. All three disasters are what organizational theorist Charles Perrow would call “normal accidents.” In his seminal book Normal Accidents: Living with High-Risk Technologies, Perrow proposes that accidents, even catastrophes, are “normal” features of systems with complex infrastructures.

Yet the analogy doesn’t fit—advanced AI isn’t at all like fire, or any other technology. It will be capable of thinking, planning, and gaming its makers. No other tool does anything like that. Kurzweil believes that a way to limit the dangerous aspects of AI, especially ASI, is to pair it with humans through intelligence augmentation—IA. From his uncomfortable metal chair the optimist said, “As I have pointed out, strong AI is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such it will reflect our values because it will be us.” And so, the argument goes, it will be as “safe” as we are. But, as I told Kurzweil, Homo sapiens are not known to be particularly harmless when in contact with one another, other animals, or the environment.


pages: 283 words: 81,376

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe by William Poundstone

Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, Arthur Eddington, Bayesian statistics, Benoit Mandelbrot, Berlin Wall, bitcoin, Black Swan, conceptual framework, cosmic microwave background, cosmological constant, cosmological principle, cuban missile crisis, dark matter, digital map, discounted cash flows, Donald Trump, Doomsday Clock, double helix, Elon Musk, Gerolamo Cardano, index fund, Isaac Newton, Jaron Lanier, Jeff Bezos, John Markoff, John von Neumann, mandelbrot fractal, Mark Zuckerberg, Mars Rover, Peter Thiel, Pierre-Simon Laplace, probability theory / Blaise Pascal / Pierre de Fermat, RAND corporation, random walk, Richard Feynman, ride hailing / ride sharing, Rodney Brooks, Ronald Reagan, Ronald Reagan: Tear down this wall, Sam Altman, Schrödinger's Cat, Search for Extraterrestrial Intelligence, self-driving car, Silicon Valley, Skype, Stanislav Petrov, Stephen Hawking, strong AI, Thomas Bayes, Thomas Malthus, time value of money, Turing test

This view is known as “strong AI.” Searle is among a dissenting faction of philosophers, and regular folk, who are not so sure about that. Almost all contemporary philosophers agree in principle that code could pass the Turing test, that it could be programmed to insist on having private moods and emotions, and that it could narrate a stream of consciousness as convincing as any human’s. But this might be all on the surface. Inside, the AI-bot could be empty, what philosophers call a zombie. It would have no soul, no subjectivity, no inner spark of whatever it is that makes us what we are. Bostrom’s trilemma takes strong AI as a given. Maybe it should be called a quadrilemma, with strong AI as the fourth leg of the stool. But for most of those following what Bostrom is saying, strong AI is taken for granted.


pages: 219 words: 63,495

50 Future Ideas You Really Need to Know by Richard Watson

23andMe, 3D printing, access to a mobile phone, Albert Einstein, artificial general intelligence, augmented reality, autonomous vehicles, BRICs, Buckminster Fuller, call centre, clean water, cloud computing, collaborative consumption, computer age, computer vision, crowdsourcing, dark matter, dematerialisation, digital Maoism, digital map, Elon Musk, energy security, failed state, future of work, Geoffrey West, Santa Fe Institute, germ theory of disease, global pandemic, happiness index / gross national happiness, hive mind, hydrogen economy, Internet of things, Jaron Lanier, life extension, Mark Shuttleworth, Marshall McLuhan, megacity, natural language processing, Network effects, new economy, oil shale / tar sands, pattern recognition, peak oil, personalized medicine, phenotype, precision agriculture, profit maximization, RAND corporation, Ray Kurzweil, RFID, Richard Florida, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Skype, smart cities, smart meter, smart transportation, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, supervolcano, telepresence, The Wisdom of Crowds, Thomas Malthus, Turing test, urban decay, Vernor Vinge, Watson beat the top human players on Jeopardy!, web application, women in the workforce, working-age population, young professional

By around 2040 machine brains should, in theory, be able to handle around 100 trillion instructions per second. That’s about the same as a human brain. So what happens when machine intelligence starts to rival that of its human designers? Before we descend down this rabbit hole we should first split AI in two. “Strong AI” is the term generally used to describe true thinking machines. “Weak AI” (sometimes known as “Narrow AI”) is intelligence intended to supplement rather than exceed human intelligence. So far most machines are preprogrammed or taught logical courses of action. But in the future, machines with strong AI will be able to learn as they go and respond to unexpected events. The implications? Think of automated disease diagnosis and surgery, military planning and battle command, customer-service avatars, artificial creativity and autonomous robots that predict then respond to crime (a “Department of Future Crime”—see also Chapter 32 and Biocriminology).

Sumner Redstone, chairman, Viacom and CBS 2002 “There is no doubt that Saddam Hussein has weapons of mass destruction.” Dick Cheney

Glossary

3D printer A way to produce 3D objects from digital instructions and layered materials dispersed or sprayed on via a printer.
Affective computing Machines and systems that recognize or simulate human affects or emotions.
AGI Artificial general intelligence, a term usually used to describe strong AI (the opposite of narrow or weak AI). It is machine intelligence that is equivalent to, or exceeds, human intelligence and it’s usually regarded as the long-term goal of AI research and development.
Ambient intelligence Electronic or artificial environments that recognize the presence of other machines or people and respond to their needs.
Artificial photosynthesis The artificial replication of natural photosynthesis to create or store solar fuels.


pages: 742 words: 137,937

The Future of the Professions: How Technology Will Transform the Work of Human Experts by Richard Susskind, Daniel Susskind

23andMe, 3D printing, additive manufacturing, AI winter, Albert Einstein, Amazon Mechanical Turk, Amazon Web Services, Andrew Keen, Atul Gawande, Automated Insights, autonomous vehicles, Big bang: deregulation of the City of London, big data - Walmart - Pop Tarts, Bill Joy: nanobots, business process, business process outsourcing, Cass Sunstein, Checklist Manifesto, Clapham omnibus, Clayton Christensen, clean water, cloud computing, commoditize, computer age, Computer Numeric Control, computer vision, conceptual framework, corporate governance, creative destruction, crowdsourcing, Daniel Kahneman / Amos Tversky, death of newspapers, disintermediation, Douglas Hofstadter, en.wikipedia.org, Erik Brynjolfsson, Filter Bubble, full employment, future of work, Google Glasses, Google X / Alphabet X, Hacker Ethic, industrial robot, informal economy, information retrieval, interchangeable parts, Internet of things, Isaac Newton, James Hargreaves, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Joseph Schumpeter, Khan Academy, knowledge economy, lifelogging, lump of labour, Marshall McLuhan, Metcalfe’s law, Narrative Science, natural language processing, Network effects, optical character recognition, Paul Samuelson, personalized medicine, pre–internet, Ray Kurzweil, Richard Feynman, Second Machine Age, self-driving car, semantic web, Shoshana Zuboff, Skype, social web, speech recognition, spinning jenny, strong AI, supply-chain management, telepresence, The Future of Employment, the market place, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, transaction costs, Turing test, Watson beat the top human players on Jeopardy!, WikiLeaks, young professional

Like Watson, although vastly less ambitious, ours was a non-thinking, high-performing system. In the language of some AI scientists and philosophers of the 1980s, these systems would be labelled, perhaps a little pejoratively, as ‘weak AI’ rather than ‘strong AI’.8 Broadly speaking, ‘weak AI’ is a term applied to systems that appear, behaviourally, to engage in intelligent human-like thought but in fact enjoy no form of consciousness; whereas systems that exhibit ‘strong AI’ are those that, it is maintained, do have thoughts and cognitive states. On this latter view, the brain is often equated with the digital computer. Today, fascination with ‘strong AI’ is perhaps more intense than ever, even though really big questions remain unanswered and unanswerable. How can we know if machines are conscious in the way that human beings are? How, for that matter, do we know that consciousness feels the same for all of us as human beings?

Undeterred by these philosophical challenges, books and projects abound on building brains and creating minds.9 In the 1980s, in our speeches, we used to joke about the claim of one of the fathers of AI, Marvin Minsky, who reportedly said that ‘the next generation of computers will be so intelligent, we will be lucky if they keep us around as household pets’.10 Today, it is no longer laugh-worthy or science-fictional11 to contemplate a future in which our computers are vastly more intelligent than us—this prospect is discussed at length in Superintelligence by Nick Bostrom, who runs the Future of Humanity Institute at the Oxford Martin School at the University of Oxford.12 Ironically, this growth in confidence in the possibility of ‘strong AI’, at least in part, has been fuelled by the success of Watson itself. The irony here is that Watson in fact belongs in the category of ‘weak AI’, and it is precisely because it cannot meaningfully be said to think that the system is not deemed very interesting by some AI scientists, psychologists, and philosophers. For pragmatists (like us) rather than purists, whether Watson is an example of ‘weak’ or ‘strong’ AI is of little moment. Pragmatists are interested in high-performing systems, whether or not they can think. Watson did not need to be able to think to win. Nor does a computer need to be able to think or be conscious to pass the celebrated ‘Turing Test’.


pages: 245 words: 64,288

Robots Will Steal Your Job, But That's OK: How to Survive the Economic Collapse and Be Happy by Pistono, Federico

3D printing, Albert Einstein, autonomous vehicles, bioinformatics, Buckminster Fuller, cloud computing, computer vision, correlation does not imply causation, en.wikipedia.org, epigenetics, Erik Brynjolfsson, Firefox, future of work, George Santayana, global village, Google Chrome, happiness index / gross national happiness, hedonic treadmill, illegal immigration, income inequality, information retrieval, Internet of things, invention of the printing press, jimmy wales, job automation, John Markoff, Kevin Kelly, Khan Academy, Kickstarter, knowledge worker, labor-force participation, Lao Tzu, Law of Accelerating Returns, life extension, Loebner Prize, longitudinal study, means of production, Narrative Science, natural language processing, new economy, Occupy movement, patent troll, pattern recognition, peak oil, post scarcity, QR code, race to the bottom, Ray Kurzweil, recommendation engine, RFID, Rodney Brooks, selection bias, self-driving car, slashdot, smart cities, software as a service, software is eating the world, speech recognition, Steven Pinker, strong AI, technological singularity, Turing test, Vernor Vinge, women in the workforce

Whatever the flavour, the main idea is clear: conversations through natural language to determine if you are human or not. A machine able to pass the Turing test is said to have achieved human-level intelligence, or at least perceived intelligence (whether we consider that to be true intelligence or not is irrelevant for the purpose of the argument). Some people call this Strong Artificial Intelligence (Strong AI), and many see Strong AI as an unachievable myth, because the brain is mysterious, and so much more than the sum of its individual components. They claim that the brain operates using unknown, possibly unintelligible quantum mechanical processes, and any effort to reach or even surpass it using mechanical machines is pure fantasy. Others claim that the brain is just a biological machine, not much different from any other machine, and that it is merely a matter of time before we can surpass it using our artificial creations.

This is certainly a fascinating topic, one that would require a thorough examination. Perhaps I will explore it in another book. For now, let us concentrate on the present, on what we know for sure, and on the upcoming future. As we will see, there is no need for machines to achieve Strong AI in order to change the nature of the economy, employment, and our lives, forever. We will start by looking at what intelligence is, how it can be useful, and whether machines have become intelligent, perhaps even more so than us. Chapter 5: Intelligence. There is a great deal of confusion regarding the meaning of the word intelligence, mainly because nobody really knows what it is. There are attempts to define this word, but they fall short when confronted with some logic and informed questions.


pages: 281 words: 71,242

World Without Mind: The Existential Threat of Big Tech by Franklin Foer

artificial general intelligence, back-to-the-land, Berlin Wall, big data - Walmart - Pop Tarts, big-box store, Buckminster Fuller, citizen journalism, Colonization of Mars, computer age, creative destruction, crowdsourcing, data is the new oil, don't be evil, Donald Trump, Double Irish / Dutch Sandwich, Douglas Engelbart, Edward Snowden, Electric Kool-Aid Acid Test, Elon Musk, Fall of the Berlin Wall, Filter Bubble, global village, Google Glasses, Haight Ashbury, hive mind, income inequality, intangible asset, Jeff Bezos, job automation, John Markoff, Kevin Kelly, knowledge economy, Law of Accelerating Returns, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, means of production, move fast and break things, new economy, New Journalism, Norbert Wiener, offshore financial centre, PageRank, Peace of Westphalia, Peter Thiel, planetary scale, Ray Kurzweil, self-driving car, Silicon Valley, Singularitarianism, software is eating the world, Steve Jobs, Steven Levy, Stewart Brand, strong AI, supply-chain management, the medium is the message, the scientific method, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, Thomas L Friedman, Thorstein Veblen, Upton Sinclair, Vernor Vinge, Whole Earth Catalog, yellow journalism

The singularity refers to a rupture in the time-space continuum—it describes the moment when the finite becomes infinite. In Kurzweil’s telling, the singularity is when artificial intelligence becomes all-powerful, when computers are capable of designing and building other computers. This superintelligence will, of course, create a superintelligence even more powerful than itself—and so on, down the posthuman generations. At that point, all bets are off—“strong AI and nanotechnology can create any product, any situation, any environment that we can imagine at will.” As a scientist, Kurzweil believes in precision. When he makes predictions, he doesn’t chuck darts; he extrapolates data. In fact, he’s loaded everything we know about the history of human technology onto his computer and run the numbers. Technological progress, he has concluded, isn’t a matter of linear growth; it’s a never-ending exponential explosion.
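The gap between linear and exponential extrapolation is easy to see with a toy calculation; the growth rates below are invented for illustration and are not Kurzweil's actual data:

```python
# Toy comparison of linear vs. exponential growth over 30 years,
# starting from 1 unit of "capability". Rates are illustrative only.

def linear(start, step, years):
    return start + step * years

def exponential(start, doubling_years, years):
    return start * 2 ** (years / doubling_years)

# Linear: add 1 unit per year. Exponential: double every 2 years.
print(linear(1, 1, 30))        # 31 units after 30 years
print(exponential(1, 2, 30))   # 2**15 = 32768 units after 30 years
```

On a linear plot the exponential curve looks like a hockey stick; on the logarithmic plots Kurzweil favours it is a straight line, which is what makes extrapolating it forward so tempting.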

There’s a school of incrementalists, who cherish everything that has been accomplished to date—victories like the PageRank algorithm or the software that allows ATMs to read the scrawled writing on checks. This school holds out little to no hope that computers will ever acquire anything approximating human consciousness. Then there are the revolutionaries who gravitate toward Kurzweil and the singularitarian view. They aim to build computers with either “artificial general intelligence” or “strong AI.” For most of Google’s history, it trained its efforts on incremental improvements. During that earlier era, the company was run by Eric Schmidt—an older, experienced manager, whom Google’s investors forced Page and Brin to accept as their “adult” supervisor. That’s not to say that Schmidt was timid. Those years witnessed Google’s plot to upload every book on the planet and the creation of products that are now commonplace utilities, like Gmail, Google Docs, and Google Maps.

His parents, Viennese Jews, fled on the eve of the Anschluss: Ray Kurzweil, Ask Ray blog, “My Trip to Brussels, Zurich, Warsaw, and Vienna,” December 14, 2010.
he made an appearance on Steve Allen’s game show, I’ve Got a Secret: Ray Kurzweil, “I’ve Got a Secret,” 1965, https://www.youtube.com/watch?v=X4Neivqp2K4.
“to invent things so that the blind could see”: Steve Rabinowitz quoted in Transcendent Man, directed by Barry Ptolemy, 2011.
“profoundly sad, lonely feeling that I really can’t bear it”: Transcendent Man.
“strong AI and nanotechnology can create any product”: Ray Kurzweil, The Singularity Is Near (Viking Penguin, 2005), 299.
“Each epoch of evolution has progressed more rapidly”: Kurzweil, Singularity, 40.
“version 1.0 biological bodies”: Kurzweil, Singularity, 9.
“We will be software, not hardware”: Ray Kurzweil, The Age of Spiritual Machines (Viking Penguin, 1999), 129.
“What, after all, is the difference between a human”: Kurzweil, Spiritual Machines, 148.


pages: 586 words: 186,548

Architects of Intelligence by Martin Ford

3D printing, agricultural Revolution, AI winter, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, barriers to entry, basic income, Baxter: Rethink Robotics, Bayesian statistics, bitcoin, business intelligence, business process, call centre, cloud computing, cognitive bias, Colonization of Mars, computer vision, correlation does not imply causation, crowdsourcing, DARPA: Urban Challenge, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, Fellow of the Royal Society, Flash crash, future of work, gig economy, Google X / Alphabet X, Gödel, Escher, Bach, Hans Rosling, ImageNet competition, income inequality, industrial robot, information retrieval, job automation, John von Neumann, Law of Accelerating Returns, life extension, Loebner Prize, Mark Zuckerberg, Mars Rover, means of production, Mitch Kapor, natural language processing, new economy, optical character recognition, pattern recognition, phenotype, Productivity paradox, Ray Kurzweil, recommendation engine, Robert Gordon, Rodney Brooks, Sam Altman, self-driving car, sensor fusion, sentiment analysis, Silicon Valley, smart cities, social intelligence, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, Ted Kaczynski, The Rise and Fall of American Growth, theory of mind, Thomas Bayes, Travis Kalanick, Turing test, universal basic income, Wall-E, Watson beat the top human players on Jeopardy!, women in the workforce, working-age population, zero-sum game, Zipcar

MARTIN FORD: So, you believe that the capability to think causally is critical to achieving what you’d call strong AI or AGI, artificial general intelligence? JUDEA PEARL: I have no doubt that it is essential. Whether it is sufficient, I’m not sure. However, causal reasoning doesn’t solve every problem of general AI. It doesn’t solve the object recognition problem, and it doesn’t solve the language understanding problem. We basically solved the cause-effect puzzle, and we can learn a lot from these solutions so that we can help the other tasks circumvent their obstacles. MARTIN FORD: Do you think that strong AI or AGI is feasible? Is that something you think will happen someday? JUDEA PEARL: I have no doubt that it is feasible. But what does it mean for me to say no doubt? It means that I am strongly convinced it can be done because I haven’t seen any theoretical impediment to strong AI. MARTIN FORD: You said that way back around 1961, when you were at RCA, people were already thinking about this.

However, it is also one of the most difficult challenges facing the field. A breakthrough that allowed machines to efficiently learn in a truly unsupervised way would likely be considered one of the biggest events in AI so far, and an important waypoint on the road to human-level AI. ARTIFICIAL GENERAL INTELLIGENCE (AGI) refers to a true thinking machine. AGI is typically considered to be more or less synonymous with the terms HUMAN-LEVEL AI or STRONG AI. You’ve likely seen several examples of AGI—but they have all been in the realm of science fiction. HAL from 2001: A Space Odyssey, the Enterprise’s main computer (or Mr. Data) from Star Trek, C-3PO from Star Wars, and Agent Smith from The Matrix are all examples of AGI. Each of these fictional systems would be capable of passing the TURING TEST—in other words, these AI systems could carry out a conversation so that they would be indistinguishable from a human being.

When people join us, we’re very happy to share this long list of ideas with them to see which ones fit. MARTIN FORD: It sounds like your strategy is to attract AI talent in part by offering the opportunity and infrastructure to found a startup venture. ANDREW NG: Yes, building a successful AI company takes more than AI talent. We focus so much on the technology because it’s advancing so quickly, but building a strong AI team often needs a portfolio of different skills ranging from the tech, to the business strategy, to product, to marketing, to business development. Our role is building full stack teams that are able to build concrete business verticals. The technology is super important, but a startup is much more than technology. MARTIN FORD: So far, it seems that any AI startup that demonstrates real potential gets acquired by one of the huge tech firms.


pages: 561 words: 157,589

WTF?: What's the Future and Why It's Up to Us by Tim O'Reilly

4chan, Affordable Care Act / Obamacare, Airbnb, Alvin Roth, Amazon Mechanical Turk, Amazon Web Services, artificial general intelligence, augmented reality, autonomous vehicles, barriers to entry, basic income, Bernie Madoff, Bernie Sanders, Bill Joy: nanobots, bitcoin, blockchain, Bretton Woods, Brewster Kahle, British Empire, business process, call centre, Capital in the Twenty-First Century by Thomas Piketty, Captain Sullenberger Hudson, Chuck Templeton: OpenTable:, Clayton Christensen, clean water, cloud computing, cognitive dissonance, collateralized debt obligation, commoditize, computer vision, corporate governance, corporate raider, creative destruction, crowdsourcing, Danny Hillis, data acquisition, deskilling, DevOps, Donald Davies, Donald Trump, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Filter Bubble, Firefox, Flash crash, full employment, future of work, George Akerlof, gig economy, glass ceiling, Google Glasses, Gordon Gekko, gravity well, greed is good, Guido van Rossum, High speed trading, hiring and firing, Home mortgage interest deduction, Hyperloop, income inequality, index fund, informal economy, information asymmetry, Internet Archive, Internet of things, invention of movable type, invisible hand, iterative process, Jaron Lanier, Jeff Bezos, jitney, job automation, job satisfaction, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Kevin Kelly, Khan Academy, Kickstarter, knowledge worker, Kodak vs Instagram, Lao Tzu, Larry Wall, Lean Startup, Leonard Kleinrock, Lyft, Marc Andreessen, Mark Zuckerberg, market fundamentalism, Marshall McLuhan, McMansion, microbiome, microservices, minimum viable product, mortgage tax deduction, move fast and break things, Network effects, new economy, Nicholas Carr, obamacare, Oculus Rift, packet switching, PageRank, pattern recognition, Paul Buchheit, peer-to-peer, peer-to-peer model, Ponzi scheme, race to the
bottom, Ralph Nader, randomized controlled trial, RFC: Request For Comment, Richard Feynman, Richard Stallman, ride hailing / ride sharing, Robert Gordon, Robert Metcalfe, Ronald Coase, Sam Altman, school choice, Second Machine Age, secular stagnation, self-driving car, SETI@home, shareholder value, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart contracts, Snapchat, Social Responsibility of Business Is to Increase Its Profits, social web, software as a service, software patent, spectrum auction, speech recognition, Stephen Hawking, Steve Ballmer, Steve Jobs, Steven Levy, Stewart Brand, strong AI, TaskRabbit, telepresence, the built environment, The Future of Employment, the map is not the territory, The Nature of the Firm, The Rise and Fall of American Growth, The Wealth of Nations by Adam Smith, Thomas Davenport, transaction costs, transcontinental railway, transportation-network company, Travis Kalanick, trickle-down economics, Uber and Lyft, Uber for X, uber lyft, ubercab, universal basic income, US Airways Flight 1549, VA Linux, Watson beat the top human players on Jeopardy!, We are the 99%, web application, Whole Earth Catalog, winner-take-all economy, women in the workforce, Y Combinator, yellow journalism, zero-sum game, Zipcar

Google Search, financial markets, and social media platforms like Facebook and Twitter gather data from trillions of human interactions, distilling that data into collective intelligence that can be acted on by narrow AI algorithms. As computational neuroscientist and AI entrepreneur Beau Cronin puts it, “In many cases, Google has succeeded by reducing problems that were previously assumed to require strong AI—that is, reasoning and problem-solving abilities generally associated with human intelligence—into narrow AI, solvable by matching new inputs against vast repositories of previously encountered examples.” Enough narrow AI infused with the data thrown off by billions of humans starts to look suspiciously like strong AI. In short, these are systems of collective intelligence that use algorithms to aggregate the collective knowledge and decisions of millions of individual humans. And that, of course, is also the classical conception of “the market”—the system by which, without any central coordination, prices of goods and labor are set, buyers and sellers are found for all of the fruits of the earth and the products of human ingenuity, guided as if, as Adam Smith famously noted, “by an invisible hand.”
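Cronin's "matching new inputs against vast repositories of previously encountered examples" is, at its core, nearest-neighbour lookup. A minimal sketch, in which the stored examples, labels, and squared-Euclidean distance are all invented for illustration:

```python
# Minimal nearest-neighbour matcher: classify a new input by finding
# the most similar previously encountered example. Data is illustrative.

def nearest_example(query, examples):
    """Return the label of the stored example closest to `query`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(examples, key=lambda ex: distance(query, ex[0]))
    return best[1]

# Stored (feature-vector, label) pairs gathered from past interactions.
repository = [
    ((0.9, 0.1), "spam"),
    ((0.1, 0.8), "ham"),
    ((0.8, 0.2), "spam"),
]

print(nearest_example((0.85, 0.15), repository))  # closest example is "spam"
```

Scale the repository from three hand-written examples to billions of logged human interactions and this same mechanical matching starts to look suspiciously like intelligence, which is exactly the point being made above.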

And then we need to understand how financial markets (often colloquially, and inaccurately, referred to simply as “Wall Street”) have become a machine that its creators no longer fully understand, and how the goals and operation of that machine have become radically disconnected from the market of real goods and services that it was originally created to support. THREE TYPES OF ARTIFICIAL INTELLIGENCE As we’ve seen, when experts talk about artificial intelligence, they distinguish between “narrow artificial intelligence” and “general artificial intelligence,” also referred to as “weak AI” and “strong AI.” Narrow AI burst into the public debate in 2011. That was the year that IBM’s Watson soundly trounced the best human Jeopardy players in a nationally televised match in February. In October of that same year, Apple introduced Siri, its personal agent, able to answer common questions spoken aloud in plain language. Siri’s responses, in a pleasing female voice, were the stuff of science fiction.

Rather than spelling out every procedure, a base program such as an image recognizer or categorizer is built, and then trained by feeding it large amounts of data labeled by humans until it can recognize patterns in the data on its own. We teach the program what success looks like, and it learns to copy us. This leads to the fear that these programs will become increasingly independent of their creators. Artificial general intelligence (also sometimes referred to as “strong AI”) is still the stuff of science fiction. It is the product of a hypothetical future in which an artificial intelligence isn’t just trained to be smart about a specific task, but to learn entirely on its own, and can effectively apply its intelligence to any problem that comes its way. The fear is that an artificial general intelligence will develop its own goals and, because of its ability to learn on its own at superhuman speeds, will improve itself at a rate that soon leaves humans far behind.
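The train-on-labelled-data loop described above can be sketched with the simplest possible learner, a perceptron; the data and learning rate below are invented for illustration, and real image recognizers use deep networks, but the supervised pattern is the same:

```python
# Toy supervised learner: a perceptron trained on human-labelled points
# until it reproduces the labelling rule on its own. Data is illustrative.

def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when the prediction matches
            w[0] += lr * err * x1       # nudge weights toward the label
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Human-labelled examples: label is 1 when the first feature dominates.
labelled = [((1.0, 0.0), 1), ((0.0, 1.0), 0), ((0.9, 0.2), 1), ((0.2, 0.9), 0)]
w, b = train_perceptron(labelled)
print(predict(w, b, (0.8, 0.1)))
```

The program is never told the rule "label is 1 when the first feature exceeds the second"; it infers it from the human-supplied labels, which is the sense in which "it learns to copy us."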


pages: 247 words: 43,430

Think Complexity by Allen B. Downey

Benoit Mandelbrot, cellular automata, Conway's Game of Life, Craig Reynolds: boids flock, discrete time, en.wikipedia.org, Frank Gehry, Gini coefficient, Guggenheim Bilbao, Laplace demon, mandelbrot fractal, Occupy movement, Paul Erdős, peer-to-peer, Pierre-Simon Laplace, sorting algorithm, stochastic process, strong AI, Thomas Kuhn: the structure of scientific revolutions, Turing complete, Turing machine, Vilfredo Pareto, We are the 99%

The view that free will is compatible with determinism is called compatibilism. One of the strongest challenges to compatibilism is the consequence argument. What is the consequence argument? What response can you give to the consequence argument based on what you have read in this book?

Example 10-7. In the philosophy of mind, Strong AI is the position that an appropriately programmed computer could have a mind in the same sense that humans have minds. John Searle presented a thought experiment called The Chinese Room, intended to show that Strong AI is false. You can read about it at http://en.wikipedia.org/wiki/Chinese_room. What is the system reply to the Chinese Room argument? How does what you have learned about complexity science influence your reaction to the system response?

Chapter 11. Case Study: Sugarscape, by Dan Kearney, Natalie Mattison, and Theo Thompson.

The Original Sugarscape: Sugarscape is an agent-based model developed by Joshua M.


pages: 372 words: 101,174

How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Albert Michelson, anesthesia awareness, anthropic principle, brain emulation, cellular automata, Claude Shannon: information theory, cloud computing, computer age, Dean Kamen, discovery of DNA, double helix, en.wikipedia.org, epigenetics, George Gilder, Google Earth, Isaac Newton, iterative process, Jacquard loom, John von Neumann, Law of Accelerating Returns, linear programming, Loebner Prize, mandelbrot fractal, Norbert Wiener, optical character recognition, pattern recognition, Peter Thiel, Ralph Waldo Emerson, random walk, Ray Kurzweil, reversible computing, selective serotonin reuptake inhibitor (SSRI), self-driving car, speech recognition, Steven Pinker, strong AI, the scientific method, theory of mind, Turing complete, Turing machine, Turing test, Wall-E, Watson beat the top human players on Jeopardy!, X Prize

The current state of the art in AI does in fact enable systems to also learn from their own experience. The Google self-driving cars learn from their own driving experience as well as from data from Google cars driven by human drivers; Watson learned most of its knowledge by reading on its own. It is interesting to note that the methods deployed today in AI have evolved to be mathematically very similar to the mechanisms in the neocortex. Another objection to the feasibility of “strong AI” (artificial intelligence at human levels and beyond) that is often raised is that the human brain makes extensive use of analog computing, whereas digital methods inherently cannot replicate the gradations of value that analog representations can embody. It is true that one bit is either on or off, but multiple-bit words easily represent multiple gradations and can do so to any desired degree of accuracy.
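Kurzweil's point about multi-bit words is uniform quantization: each extra bit doubles the number of representable gradations and halves the worst-case error. A small sketch, where the sample value and bit widths are arbitrary:

```python
# Quantize an "analog" value in [0, 1] onto an n-bit digital word.
# Each added bit doubles the number of representable gradations,
# so the approximation error shrinks exponentially with word length.

def quantize(x, bits):
    levels = 2 ** bits              # number of representable gradations
    return round(x * (levels - 1)) / (levels - 1)

x = 0.7137                          # arbitrary analog sample
for bits in (2, 4, 8, 16):
    print(bits, abs(x - quantize(x, bits)))
```

With 16-bit words the error is already below one part in 65,535, illustrating why "any desired degree of accuracy" is achievable in principle with enough bits.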

Carver Mead, Analog VLSI and Neural Systems (Reading, MA: Addison-Wesley, 1986).
7. “IBM Unveils Cognitive Computing Chips,” IBM news release, August 18, 2011, http://www-03.ibm.com/press/us/en/pressrelease/35251.wss.
8. “Japan’s K Computer Tops 10 Petaflop/s to Stay Atop TOP500 List.”
Chapter 9: Thought Experiments on the Mind
1. John R. Searle, “I Married a Computer,” in Jay W. Richards, ed., Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI (Seattle: Discovery Institute, 2002).
2. Stuart Hameroff, Ultimate Computing: Biomolecular Consciousness and Nanotechnology (Amsterdam: Elsevier Science, 1987).
3. P. S. Sebel et al., “The Incidence of Awareness during Anesthesia: A Multicenter United States Study,” Anesthesia and Analgesia 99 (2004): 833–39.
4. Stuart Sutherland, The International Dictionary of Psychology (New York: Macmillan, 1990). 5.

., “Cognitive Computing,” Communications of the ACM 54, no. 8 (2011): 62–71, http://cacm.acm.org/magazines/2011/8/114944-cognitive-computing/fulltext.
9. Kurzweil, The Singularity Is Near, chapter 9, section titled “The Criticism from Ontology: Can a Computer Be Conscious?” (pp. 458–69).
10. Michael Denton, “Organism and Machine: The Flawed Analogy,” in Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI (Seattle: Discovery Institute, 2002).
11. Hans Moravec, Mind Children (Cambridge, MA: Harvard University Press, 1988).
Epilogue
1. “In U.S., Optimism about Future for Youth Reaches All-Time Low,” Gallup Politics, May 2, 2011, http://www.gallup.com/poll/147350/optimism-future-youth-reaches-time-low.aspx.
2. James C. Riley, Rising Life Expectancy: A Global History (Cambridge: Cambridge University Press, 2001). 3.


Work in the Future: The Automation Revolution by Robert Skidelsky and Nan Craig (Palgrave Macmillan, 2019)

3D printing, Airbnb, algorithmic trading, Amazon Web Services, anti-work, artificial general intelligence, autonomous vehicles, basic income, business cycle, cloud computing, collective bargaining, correlation does not imply causation, creative destruction, data is the new oil, David Graeber, David Ricardo: comparative advantage, deindustrialization, deskilling, disintermediation, Donald Trump, Erik Brynjolfsson, feminist movement, Frederick Winslow Taylor, future of work, gig economy, global supply chain, income inequality, informal economy, Internet of things, Jarndyce and Jarndyce, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Joseph Schumpeter, knowledge economy, Loebner Prize, low skilled workers, Lyft, Mark Zuckerberg, means of production, moral panic, Network effects, new economy, off grid, pattern recognition, post-work, Ronald Coase, Second Machine Age, self-driving car, sharing economy, Steve Jobs, strong AI, technoutopianism, The Chicago School, The Future of Employment, the market place, The Nature of the Firm, The Wealth of Nations by Adam Smith, Thorstein Veblen, Turing test, Uber for X, uber lyft, universal basic income, wealth creators, working poor

Namely: consciousness. Were they correct? Not according to proponents of Weak AI. These theorists claim that a suitably programmed computer could imitate conscious mental states, such as self-awareness, understanding or love, but could never actually experience them—it could never be conscious, and hence it could never be self-aware and would never actually understand or love anything. Proponents of Strong AI believe the opposite. They claim that a computer could, given the right programming, possess consciousness and thereby experience conscious mental states. This title is inspired by Dreyfus’s (1992) ‘What Computers Still Can’t Do’. T. Tozer, Centre for Global Studies, London, UK © The Author(s) 2020 R. Skidelsky, N. Craig (eds.), Work in the Future, https://doi.org/10.1007/978-3-030-21134-9_11

Indeed, there is no example to be found of an entity producing its opposite by itself.2 Therefore, it is unreasonable to suppose that the brain would be able to do so: an entirely physical thing (brain/computer) could not produce something that is entirely non-physical (consciousness). Note that although the above examples admit of the possibility of something leading to its opposite if combined with something else, the argument of Strong AI proponents such as Kurzweil entails no such non-physical enabling substance. Such theorists suggest that a physical thing, and that physical thing alone, can produce consciousness. That is precisely the claim I am rejecting. 2 Or if there is, let the reader pen it in writing as an objection to this premise. Finally, let us consider an objection from Turing (1950: 445–447).


pages: 677 words: 206,548

Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It by Marc Goodman

23andMe, 3D printing, active measures, additive manufacturing, Affordable Care Act / Obamacare, Airbnb, airport security, Albert Einstein, algorithmic trading, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, augmented reality, autonomous vehicles, Baxter: Rethink Robotics, Bill Joy: nanobots, bitcoin, Black Swan, blockchain, borderless world, Brian Krebs, business process, butterfly effect, call centre, Charles Lindbergh, Chelsea Manning, cloud computing, cognitive dissonance, computer vision, connected car, corporate governance, crowdsourcing, cryptocurrency, data acquisition, data is the new oil, Dean Kamen, disintermediation, don't be evil, double helix, Downton Abbey, drone strike, Edward Snowden, Elon Musk, Erik Brynjolfsson, Filter Bubble, Firefox, Flash crash, future of work, game design, global pandemic, Google Chrome, Google Earth, Google Glasses, Gordon Gekko, high net worth, High speed trading, hive mind, Howard Rheingold, hypertext link, illegal immigration, impulse control, industrial robot, Intergovernmental Panel on Climate Change (IPCC), Internet of things, Jaron Lanier, Jeff Bezos, job automation, John Harrison: Longitude, John Markoff, Joi Ito, Jony Ive, Julian Assange, Kevin Kelly, Khan Academy, Kickstarter, knowledge worker, Kuwabatake Sanjuro: assassination market, Law of Accelerating Returns, Lean Startup, license plate recognition, lifelogging, litecoin, low earth orbit, M-Pesa, Mark Zuckerberg, Marshall McLuhan, Menlo Park, Metcalfe’s law, MITM: man-in-the-middle, mobile money, more computing power than Apollo, move fast and break things, Nate Silver, national security letter, natural language processing, obamacare, Occupy movement, Oculus Rift, off grid, offshore financial centre, optical character recognition, Parag Khanna, pattern recognition, peer-to-peer, personalized medicine, Peter H.
Diamandis: Planetary Resources, Peter Thiel, pre–internet, RAND corporation, ransomware, Ray Kurzweil, refrigerator car, RFID, ride hailing / ride sharing, Rodney Brooks, Ross Ulbricht, Satoshi Nakamoto, Second Machine Age, security theater, self-driving car, shareholder value, Silicon Valley, Silicon Valley startup, Skype, smart cities, smart grid, smart meter, Snapchat, social graph, software as a service, speech recognition, stealth mode startup, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, Stuxnet, supply-chain management, technological singularity, telepresence, telepresence robot, Tesla Model S, The Future of Employment, The Wisdom of Crowds, Tim Cook: Apple, trade route, uranium enrichment, Wall-E, Watson beat the top human players on Jeopardy!, Wave and Pay, We are Anonymous. We are Legion, web application, Westphalian system, WikiLeaks, Y Combinator, zero day

The device is inherently of no value to us (internal memo at Western Union, 1878). Somehow, the impossible always seems to become the possible. In the world of artificial intelligence, that next phase of development is called artificial general intelligence (AGI), or strong AI. In contrast to narrow AI, which cleverly performs a specific limited task, such as machine translation or auto navigation, strong AI refers to “thinking machines” that might perform any intellectual task that a human being could. Characteristics of a strong AI would include the ability to reason, make judgments, plan, learn, communicate, and unify these skills toward achieving common goals across a variety of domains, and commercial interest is growing. In 2014, Google purchased DeepMind Technologies for more than $500 million in order to strengthen its already strong capabilities in deep learning AI.

His algorithmic programming requires him to complete the vessel’s mission near Jupiter, but for national security reasons he cannot disclose the true purpose of the voyage to the crew. To resolve the contradiction in his program, he attempts to kill the crew. As narrow AI becomes more powerful, robots grow more autonomous, and AGI looms large, we need to ensure that the algorithms of tomorrow are better equipped to resolve programming conflicts and moral judgments than was HAL. It’s not that any strong AI would necessarily be “evil” and attempt to destroy humanity, but in pursuit of its primary goal as programmed, an AGI might not stop until it had achieved its mission at all costs, even if that meant competing with or harming human beings, seizing our resources, or damaging our environment. As the perceived risks from AGI have grown, numerous nonprofit institutes have been formed to address and study them, including Oxford’s Future of Humanity Institute, the Machine Intelligence Research Institute, the Future of Life Institute, and the Cambridge Centre for the Study of Existential Risk.


pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence by Jacob Turner

Ada Lovelace, Affordable Care Act / Obamacare, AI winter, algorithmic trading, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, autonomous vehicles, Basel III, bitcoin, blockchain, brain emulation, Clapham omnibus, cognitive dissonance, corporate governance, corporate social responsibility, correlation does not imply causation, crowdsourcing, distributed ledger, don't be evil, Donald Trump, easy for humans, difficult for computers, effective altruism, Elon Musk, financial exclusion, financial innovation, friendly fire, future of work, hive mind, Internet of things, iterative process, job automation, John Markoff, John von Neumann, Loebner Prize, medical malpractice, Nate Silver, natural language processing, nudge unit, obamacare, off grid, pattern recognition, Peace of Westphalia, race to the bottom, Ray Kurzweil, Rodney Brooks, self-driving car, Silicon Valley, Stanislav Petrov, Stephen Hawking, Steve Wozniak, strong AI, technological singularity, Tesla Model S, The Coming Technological Singularity, The Future of Employment, The Signal and the Noise by Nate Silver, Turing test, Vernor Vinge

Therefore, before it is possible to demonstrate the spreading influence of AI or the need for legal controls, we must first set out what we mean by this term. 2 Narrow and General AI It is helpful at the outset to distinguish two classifications for AI: narrow and general.18 Narrow (sometimes referred to as “weak”) AI denotes the ability of a system to achieve a certain stipulated goal or set of goals, in a manner or using techniques which qualify as intelligent (the meaning of “intelligence” is addressed below). These limited goals might include natural language processing functions like translation, or navigating through an unfamiliar physical environment. A narrow AI system is suited only to the task for which it is designed. The great majority of AI systems in the world today are closer to this narrow and limited type. General (or “strong”) AI is the ability to achieve an unlimited range of goals, and even to set new goals independently, including in situations of uncertainty or vagueness. This encompasses many of the attributes we think of as intelligence in humans. Indeed, general AI is what we see portrayed in the robots and AI of popular culture discussed above. As yet, general AI approaching the level of human capabilities does not exist and some have even cast doubt on whether it is possible.19 Narrow and general AI are not hermetically sealed from each other.

This raises similar problems to the Greek historian Plutarch’s “Ship of Theseus Paradox”: The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.117 This paradox, which questions the nature of continuous identity through shifting physical components, can be applied to combinations of humanity and AI. We would not deny someone their human rights if they were 1% augmented by AI. What about if 20%, 50% or 80% of their mental functioning was the result of computer processing powers? On one view, the answer would be the same—a human should not lose rights just because they have added to their mental functioning. However, consistent with his view that no artificial process can produce “strong” AI which resembles human intelligence, the philosopher John Searle argues that replacement would gradually remove conscious experience.118 Replacement or augmentation of human physical functions with artificial ones does not render someone less deserving of rights.119 Someone who loses an arm and has it replaced with a mechanical version is not considered less human. The same argument might be made in the future, for instance if someone suffers a brain injury causing persistent amnesia and undergoes surgery to fit a processor replacing this mental function.

See Kill Switch OpenAI Open Roboethics Institute (ORI) Organisation for Economic Co-operation and Development (OECD) P Paris Climate Agreement Partnership on AI to benefit People and Society Positivism Posthumanism Private Law Product liability EU Product Liability Directive US Restatement (Third) of Torts–Products Liability Professions, The Public International Law Q Qualia R Random Darknet Shopper Rawls, John Red Flag Law Rousseau, Jean Jacques S Safe interruptibility Sandbox Saudi Arabia Self-driving cars. See Autonomous vehicles Sexbots Sex Robots. See Sexbots Singularity, The “Shut Down” Problem Slavery Space Law Outer Space Treaty 1967 Stochastic Gradient Descent Strict liability Strong AI. See General AI Subsidiarity Superintelligence Symbolic AI T TD-gammon Teleological Principle TenCent TensorFlow Transhumanism. See Posthumanism Transparency See also Explanation, Black Box Problem Trolley Problem Turing Test U UAE, The UK, The UK Financial Conduct Authority (FCA) sandbox Uncanny Valley The US US National Institute of Standards and Technology (NIST) V Vicarious liability Villani Report W Warnock Inquiry Weak AI.


pages: 846 words: 232,630

Darwin's Dangerous Idea: Evolution and the Meanings of Life by Daniel C. Dennett

Albert Einstein, Alfred Russel Wallace, anthropic principle, assortative mating, buy low sell high, cellular automata, combinatorial explosion, complexity theory, computer age, conceptual framework, Conway's Game of Life, Danny Hillis, double helix, Douglas Hofstadter, Drosophila, finite state, Gödel, Escher, Bach, In Cold Blood by Truman Capote, invention of writing, Isaac Newton, Johann Wolfgang von Goethe, John von Neumann, Murray Gell-Mann, New Journalism, non-fiction novel, Peter Singer: altruism, phenotype, price mechanism, prisoner's dilemma, QWERTY keyboard, random walk, Richard Feynman, Rodney Brooks, Schrödinger's Cat, selection bias, Stephen Hawking, Steven Pinker, strong AI, the scientific method, theory of mind, Thomas Malthus, Turing machine, Turing test

Simpler survival machines — plants, for instance — never achieve the heights of self-redefinition made possible by the complexities of your robot; considering them just as survival machines for their comatose inhabitants leaves no patterns in their behavior unexplained. If you pursue this avenue, which of course I recommend, then you must abandon Searle's and Fodor's "principled" objection to "strong AI." The imagined robot, however difficult or unlikely an engineering feat, is not an impossibility — nor do they claim it to be. They concede the possibility of such a robot, but just dispute its "metaphysical status"; however adroitly it managed its affairs, they say, its intentionality would not be the real thing. That's cutting it mighty fine. I recommend abandoning such a forlorn disclaimer and acknowledging that the meaning such a robot would discover in its world, and exploit in its own communications with others, would be exactly as real as the meaning you enjoy.

This difficulty had been widely seen as systematically blocking any argument from Gödel's Theorem to the impossibility of AI. Certainly everybody in AI has always known about Gödel's Theorem, and they have all continued, unworried, with their labors. In fact, Hofstadter's classic Gödel, Escher, Bach (1979) can be read as the demonstration that Gödel is an unwilling champion of AI, providing essential insights about the paths to follow to strong AI, not showing the futility of the field. But Roger Penrose, Rouse Ball Professor of Mathematics at Oxford, and one of the world's leading mathematical physicists, thinks otherwise. His challenge has to be taken seriously, even if, as I and others in AI are convinced, he is making a fairly simple mistake. When Penrose's book appeared, I pointed out the problem in a review: his argument is highly convoluted, and bristling with details of physics and mathematics, and it is unlikely that such an enterprise would succumb to a single, crashing oversight on the part of its creator — that the argument could be 'refuted' by any simple observation.

As a product of biological design processes (both genetic and individual), it is almost certainly one of those algorithms that are somewhere or other in the Vast space of interesting algorithms, full of typographical errors or "bugs," but good enough to bet your life on — so far. Penrose sees this as a "far-fetched" possibility, but if that is all he can say against it, he has not yet come to grips with the best version of "strong AI." {444} 3. THE PHANTOM QUANTUM-GRAVITY COMPUTER: LESSONS FROM LAPLAND I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have. — ROGER PENROSE 1989, p. 414 I don't think the brain came in the Darwinian manner.


pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, anti-communist, artificial general intelligence, autonomous vehicles, barriers to entry, Bayesian statistics, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, demographic transition, different worldview, Donald Knuth, Douglas Hofstadter, Drosophila, Elon Musk, en.wikipedia.org, endogenous growth, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, Gödel, Escher, Bach, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John Markoff, John von Neumann, knowledge worker, longitudinal study, Menlo Park, meta analysis, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Norbert Wiener, NP-complete, nuclear winter, optical character recognition, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, transaction costs, Turing machine, Vernor Vinge, Watson beat the top human players on Jeopardy!, World Values Survey, zero-sum game

Now that we have made solid progress, let us not risk losing our respectability.” One result of this conservatism has been increased concentration on “weak AI”—the variety devoted to providing aids to human thought—and away from “strong AI”—the variety that attempts to mechanize human-level intelligence.73 Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.74 The last few years have seen a resurgence of interest in AI, which might yet spill over into renewed efforts towards artificial general intelligence (what Nilsson calls “strong AI”). In addition to faster hardware, a contemporary project would benefit from the great strides that have been made in the many subfields of AI, in software engineering more generally, and in neighboring fields such as computational neuroscience.

13 K Kasparov, Garry 12 Kepler, Johannes 14 Knuth, Donald 14, 264 Kurzweil, Ray 2, 261, 269 L Lenat, Douglas 12, 263 Logic Theorist (system) 6 logicist paradigm, see Good Old-Fashioned Artificial Intelligence (GOFAI) Logistello 12 M machine intelligence; see also artificial intelligence human-level (HLMI) 4, 19–21, 27–35, 73–74, 207, 243, 264, 267 revolution, see intelligence explosion machine learning 8–18, 28, 121, 152, 188, 274, 290 machine translation 15 macro-structural development accelerator 233–235 malignant failure 123–126, 149, 196 Malthusian condition 163–165, 252 Manhattan Project 75, 80–87, 276 McCarthy, John 5–18 McCulloch–Pitts neuron 237 MegaEarth 56 memory capacity 7–9, 60, 71 memory sharing 61 Mill, John Stuart 210 mind crime 125–126, 153, 201–208, 213, 226, 297 Minsky, Marvin 18, 261, 262, 282 Monte Carlo method 9–13 Moore’s law 24–25, 73–77, 274, 286; see also computing power moral growth 214 moral permissibility (MP) 218–220, 297 moral rightness (MR) 217–220, 296, 297 moral status 125–126, 166–169, 173, 202–205, 268, 288, 296 Moravec, Hans 24, 265, 288 motivation selection 29, 127–129, 138–144, 147, 158, 168, 180–191, 222 definition 138 motivational scaffolding 191, 207 multipolar scenarios 90, 132, 159–184, 243–254, 301 mutational load 41 N nanotechnology 53, 94–98, 103, 113, 177, 231, 239, 276, 277, 299, 300 natural language 14 neural networks 5–9, 28, 46, 173, 237, 262, 274 neurocomputational modeling 25–30, 35, 61, 301; see also whole brain emulation (WBE) and neuromorphic AI neuromorphic AI 28, 34, 47, 237–245, 267, 300, 301 Newton, Isaac 56 Nilsson, Nils 18–20, 264 nootropics 36–44, 66–67, 201, 267 Norvig, Peter 19, 264, 282 O observation selection theory, see anthropics Oliphant, Mark 85 O’Neill, Gerard 101 ontological crisis 146, 197 optimality notions 10, 186, 194, 291–293 Bayesian agent 9–11 value learner (AI-VL) 194 observation-utility-maximizer (AI-OUM) 194 reinforcement learner (AI-RL) 194 optimization power 24, 62–75, 83, 92–96, 227,
274 definition 65 oracle AI 141–158, 222–226, 285, 286 definition 146 orthogonality thesis 105–109, 115, 279, 280 P paperclip AI 107–108, 123–125, 132–135, 153, 212, 243 Parfit, Derek 279 Pascal’s mugging 223, 298 Pascal’s wager 223 person-affecting perspective 228, 245–246, 301 perverse instantiation 120–124, 153, 190–196 poker 13 principal–agent problem 127–128, 184 Principle of Epistemic Deference 211, 221 Proverb (program) 12 Q qualia, see consciousness quality superintelligence 51–58, 72, 243, 272 definition 56 R race dynamic, see technology race rate of growth, see growth ratification 222–225 Rawls, John 150 Reagan, Ronald 86–87 reasons-based goal 220 recalcitrance 62–77, 92, 241, 274 definition 65 recursive self-improvement 29, 75, 96, 142, 259; see also seed AI reinforcement learning 12, 28, 188–189, 194–196, 207, 237, 277, 282, 290 resource acquisition 113–116, 123, 193 reward signal 71, 121–122, 188, 194, 207 Riemann hypothesis catastrophe 123, 141 robotics 9–19, 94–97, 117–118, 139, 238, 276, 290 Roosevelt, Franklin D. 85 RSA encryption scheme 80 Russell, Bertrand 6, 87, 139, 277 S Samuel, Arthur 12 Sandberg, Anders 265, 267, 272, 274 scanning, see whole brain emulation (WBE) Schaeffer, Jonathan 12 scheduling 15 Schelling point 147, 183, 296 Scrabble 13 second transition 176–178, 238, 243–245, 252 second-guessing (arguments) 238–239 seed AI 23–29, 36, 75, 83, 92–96, 107, 116–120, 142, 151, 189–198, 201–217, 224–225, 240–241, 266, 274, 275, 282 self-limiting goal 123 Shakey (robot) 6 SHRDLU (program) 6 Shulman, Carl 178–180, 265, 287, 300, 302, 304 simulation hypothesis 134–135, 143, 278, 288, 292 singleton 78–90, 95–104, 112–114, 115–126, 136, 159, 176–184, 242, 275, 276, 279, 281, 287, 299, 301, 303 definition 78, 100 singularity 1, 2, 49, 75, 261, 274; see also intelligence explosion social signaling 110 somatic gene therapy 42 sovereign AI 148–158, 187, 226, 285 speech recognition 15–16, 46 speed superintelligence 52–58, 75, 270, 271 definition 53
Strategic Defense Initiative (“Star Wars”) 86 strong AI 18 stunting 135–137, 143 sub-symbolic processing, see connectionism superintelligence; see also collective superintelligence, quality superintelligence and speed superintelligence definition 22, 52 forms 52, 59 paths to 22, 50 predicting the behavior of 108, 155, 302 superorganisms 178–180 superpowers 52–56, 80, 86–87, 91–104, 119, 133, 148, 277, 279, 296 types 94 surveillance 15, 49, 64, 82–85, 94, 117, 132, 181, 232, 253, 276, 294, 299 Szilárd, Leó 85 T TD-Gammon 12 Technological Completion Conjecture 112–113, 229 technology race 80–82, 86–90, 203–205, 231, 246–252, 302 teleological threads 110 Tesauro, Gerry 12 TextRunner (system) 71 theorem prover 15, 266 three laws of robotics 139, 284 Thrun, Sebastian 19 tool-AI 151–158 definition 151 treacherous turn 116–119, 128 Tribolium castaneum 154 tripwires 137–143 Truman, Harry 85 Turing, Alan 4, 23, 29, 44, 225, 265, 271, 272 U unemployment 65, 159–180, 287 United Nations 87–89, 252–253 universal accelerator 233 unmanned vehicle, see drone uploading, see whole brain emulation (WBE) utility function 10–11, 88, 100, 110, 119, 124–125, 133–134, 172, 185–187, 192–208, 290, 292, 293, 303 V value learning 191–198, 208, 293 value-accretion 189–190, 207 value-loading 185–208, 293, 294 veil of ignorance 150, 156, 253, 285 Vinge, Vernor 2, 49, 270 virtual reality 30, 31, 53, 113, 166, 171, 198, 204, 300 von Neumann probe 100–101, 113 von Neumann, John 44, 87, 114, 261, 277, 281 W wages 65, 69, 160–169 Watson (IBM) 13, 71 WBE, see whole brain emulation (WBE) Whitehead, Alfred N. 6 whole brain emulation (WBE) 28–36, 50, 60, 68–73, 77, 84–85, 108, 172, 198, 201–202, 236–245, 252, 266, 267, 274, 299, 300, 301 Wigner, Eugene 85 windfall clause 254, 303 Winston, Patrick 18 wire-heading 122–123, 133, 189, 194, 207, 282, 291 wise-singleton sustainability threshold 100–104, 279 world economy 2–3, 63, 74, 83, 159–184, 274, 277, 285 Y Yudkowsky, Eliezer 70, 92, 98, 106, 197, 211–216,
266, 273, 282, 286, 291, 299


pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by Amy Webb

Ada Lovelace, AI winter, Airbnb, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, artificial general intelligence, Asilomar, autonomous vehicles, Bayesian statistics, Bernie Sanders, bioinformatics, blockchain, Bretton Woods, business intelligence, Cass Sunstein, Claude Shannon: information theory, cloud computing, cognitive bias, complexity theory, computer vision, crowdsourcing, cryptocurrency, Daniel Kahneman / Amos Tversky, Deng Xiaoping, distributed ledger, don't be evil, Donald Trump, Elon Musk, Filter Bubble, Flynn Effect, gig economy, Google Glasses, Grace Hopper, Gödel, Escher, Bach, Inbox Zero, Internet of things, Jacques de Vaucanson, Jeff Bezos, Joan Didion, job automation, John von Neumann, knowledge worker, Lyft, Mark Zuckerberg, Menlo Park, move fast and break things, move fast and break things, natural language processing, New Urbanism, one-China policy, optical character recognition, packet switching, pattern recognition, personalized medicine, RAND corporation, Ray Kurzweil, ride hailing / ride sharing, Rodney Brooks, Rubik’s Cube, Sand Hill Road, Second Machine Age, self-driving car, SETI@home, side project, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart cities, South China Sea, sovereign wealth fund, speech recognition, Stephen Hawking, strong AI, superintelligent machines, technological singularity, The Coming Technological Singularity, theory of mind, Tim Cook: Apple, trade route, Turing machine, Turing test, uber lyft, Von Neumann architecture, Watson beat the top human players on Jeopardy!, zero day

Midnight in Peking: How the Murder of a Young Englishwoman Haunted the Last Days of Old China. Rev. ed. New York: Penguin Books, 2012. Future of Life Institute. “Asilomar AI Principles.” Text and signatories available online. https://futureoflife.org/ai-principles/. Gaddis, J. L. The Cold War: A New History. New York: Penguin Press, 2006. ———. On Grand Strategy. New York: Penguin Press, 2018. Gilder, G. F., and Ray Kurzweil. Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI. Edited by Jay Wesley Richards. Seattle: Discovery Institute Press, 2001. Goertzel, B., and C. Pennachin, eds. Artificial General Intelligence. Cognitive Technologies Series. Berlin: Springer, 2007. doi:10.1007/978-3-540-68677-4. Gold, E. M. “Language Identification in the Limit.” Information and Control 10, no. 5 (1967): 447–474. Good, I. J. “Ethical Machines.” Intelligent Systems. In vol. 10 of Machine Intelligence, edited by J.

Weizenbaum makes the crucial distinction between deciding and choosing. Deciding is a computational activity, something that can be programmed. Choice, however, is the product of judgment, not calculation. It is the capacity to choose that ultimately makes us human. University of California, Berkeley, philosopher John Searle, in his paper “Minds, Brains, and Programs,” argued against the plausibility of general, or what he called “strong,” AI. Searle said a program cannot give a computer a “mind,” “understanding,” or “consciousness,” regardless of how humanlike the program might behave. 34. Jonathan Schaeffer, Robert Lake, Paul Lu, and Martin Bryant, “CHINOOK: The World Man-Machine Checkers Champion,” AI Magazine 17, no. 1 (Spring 1996): 21–29, https://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1208/1109.pdf. 35. Avi Goldfarb and Daniel Trefler, “AI and International Trade,” The National Bureau of Economic Research, January 2018, http://www.nber.org/papers/w24254.pdf. 36.


pages: 345 words: 104,404

Pandora's Brain by Calum Chace

AI winter, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, brain emulation, Extropian, friendly AI, hive mind, lateral thinking, mega-rich, Ray Kurzweil, self-driving car, Silicon Valley, Singularitarianism, Skype, speech recognition, stealth mode startup, Stephen Hawking, strong AI, technological singularity, theory of mind, Turing test, Wall-E

And there is nothing to prevent an AI’s cognitive capability being expanded simply by increasing its hardware capacity.’ ‘This all sounds like an argument for stopping people working on strong AI?’ asked Matt. ‘Although I guess that would be hard to do. There are too many people working in the field, and as you say, a lot of them show no sign of understanding the danger.’ ‘You’re right,’ Ivan agreed, ‘we’re on a runaway train that cannot be stopped. Some science fiction novels feature a powerful police force – the Turing Police – that keeps watch to ensure that no-one creates a human-level artificial intelligence. But that’s hopelessly unrealistic. The prize – both intellectual and material – for owning an AGI is too great. Strong AI is coming, whether we like it or not.’ TEN ‘But surely, if you’re right about all this,’ Leo protested, sounding genuinely concerned, ‘people – governments, voters – will wake up when it gets closer, and slow it down or stop it?’


Robot Futures by Illah Reza Nourbakhsh

3D printing, autonomous vehicles, Burning Man, commoditize, computer vision, Mars Rover, Menlo Park, phenotype, Skype, social intelligence, software as a service, stealth mode startup, strong AI, telepresence, telepresence robot, Therac-25, Turing test, Vernor Vinge

Astronomical sky surveys are the stereotypical example of big data that must be mined to extract discoveries regarding new asteroids or new planets from indirect data. Eye tracking A skill enabling a robot to visually examine the scene before it, identify the faces in the scene, mark the location of the eyes on each face, and then find the irises so that the gaze directions of the humans are known. Humans are particularly good at this even when we face other people at acute angles. Hard AI Also known as strong AI, this embodies the AI goal of going all the way toward human equivalence: matching natural intelligence along every possible axis so that artificial beings and natural humans are, at least from a cognitive point of view, indistinguishable. Laser cutting A rapid-prototyping technique in which flat material such as plastic or metal lays on a table and a high-power laser is able to rapidly cut a complex two-dimensional shape out of the raw material.


pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence by George Zarkadakis

3D printing, Ada Lovelace, agricultural Revolution, Airbnb, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, animal electricity, anthropic principle, Asperger Syndrome, autonomous vehicles, barriers to entry, battle of ideas, Berlin Wall, bioinformatics, British Empire, business process, carbon-based life, cellular automata, Claude Shannon: information theory, combinatorial explosion, complexity theory, continuous integration, Conway's Game of Life, cosmological principle, dark matter, dematerialisation, double helix, Douglas Hofstadter, Edward Snowden, epigenetics, Flash crash, Google Glasses, Gödel, Escher, Bach, income inequality, index card, industrial robot, Internet of things, invention of agriculture, invention of the steam engine, invisible hand, Isaac Newton, Jacquard loom, Jacques de Vaucanson, James Watt: steam engine, job automation, John von Neumann, Joseph-Marie Jacquard, Kickstarter, liberal capitalism, lifelogging, millennium bug, Moravec's paradox, natural language processing, Norbert Wiener, off grid, On the Economy of Machinery and Manufactures, packet switching, pattern recognition, Paul Erdős, post-industrial society, prediction markets, Ray Kurzweil, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, social intelligence, speech recognition, stem cell, Stephen Hawking, Steven Pinker, strong AI, technological singularity, The Coming Technological Singularity, The Future of Employment, the scientific method, theory of mind, Turing complete, Turing machine, Turing test, Tyler Cowen: Great Stagnation, Vernor Vinge, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K

Thirdly, that intelligence, from its simplest manifestation in a squirming worm to self-awareness and consciousness in sophisticated cappuccino-sipping humans, is a purely material, indeed biological, phenomenon. Finally, that if a material object called ‘brain’ can be conscious then it is theoretically feasible that another material object, made of some other material stuff, can also be conscious. Based on those four propositions, empiricism tells us that ‘strong AI’ is possible. And that’s because, for empiricists, a brain is an information-processing machine, not metaphorically but literally. We have several billion cells in our body.27 If we adopt an empirical perspective, the scientific problem of intelligence – or consciousness, natural or artificial – can be (re)defined as a simple question: how can several billion unconscious nanorobots arrive at consciousness?

The pioneers of AI explored many ideas including using algorithms for solving general logical problems, or simulating parts of the brain using artificial neural nets. And although they produced some very capable systems, none of them could arguably be called intelligent. Of course, how one defines intelligence is also crucial. For the pioneers of AI, ‘artificial intelligence’ was nothing less than the artificial equivalent of human intelligence, a position nowadays referred to as ‘strong AI’. An intelligent machine ought to be one that possessed general intelligence, just like a human. This meant that the machine ought to be able to solve any problem using first principles and experience derived from learning. Early models of general problem-solving were built, but could not scale up. Systems could solve one general problem but not any general problem.6 Algorithms that searched data in order to make general inferences failed quickly because of something called ‘combinatorial explosion’: there were simply too many interrelated parameters and variables to calculate after a number of steps.


pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots by John Markoff

"Robert Solow", A Declaration of the Independence of Cyberspace, AI winter, airport security, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, basic income, Baxter: Rethink Robotics, Bill Duvall, bioinformatics, Brewster Kahle, Burning Man, call centre, cellular automata, Chris Urmson, Claude Shannon: information theory, Clayton Christensen, clean water, cloud computing, collective bargaining, computer age, computer vision, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data acquisition, Dean Kamen, deskilling, don't be evil, Douglas Engelbart, Douglas Engelbart, Douglas Hofstadter, Dynabook, Edward Snowden, Elon Musk, Erik Brynjolfsson, factory automation, From Mathematics to the Technologies of Life and Death, future of work, Galaxy Zoo, Google Glasses, Google X / Alphabet X, Grace Hopper, Gunnar Myrdal, Gödel, Escher, Bach, Hacker Ethic, haute couture, hive mind, hypertext link, indoor plumbing, industrial robot, information retrieval, Internet Archive, Internet of things, invention of the wheel, Jacques de Vaucanson, Jaron Lanier, Jeff Bezos, job automation, John Conway, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, knowledge worker, Kodak vs Instagram, labor-force participation, loose coupling, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, medical residency, Menlo Park, Mitch Kapor, Mother of all demos, natural language processing, new economy, Norbert Wiener, PageRank, pattern recognition, pre–internet, RAND corporation, Ray Kurzweil, Richard Stallman, Robert Gordon, Rodney Brooks, Sand Hill Road, Second Machine Age, self-driving car, semantic web, shareholder value, side project, Silicon Valley, Silicon Valley startup, Singularitarianism, skunkworks, Skype, social software, speech recognition, stealth mode startup, Stephen Hawking, Steve Ballmer, Steve Jobs, Steve Wozniak, Steven Levy, 
Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Nelson, telemarketer, telepresence, telepresence robot, Tenerife airport disaster, The Coming Technological Singularity, the medium is the message, Thorstein Veblen, Turing test, Vannevar Bush, Vernor Vinge, Watson beat the top human players on Jeopardy!, Whole Earth Catalog, William Shockley: the traitorous eight, zero-sum game

When pressed, the computer scientists, roboticists, and technologists offer conflicting views. Some want to replace humans with machines; some are resigned to the inevitability—“I for one, welcome our insect overlords” (later “robot overlords”) was a meme that was popularized by The Simpsons—and some of them just as passionately want to build machines to extend the reach of humans. The question of whether true artificial intelligence—the concept known as “Strong AI” or Artificial General Intelligence—will emerge, and whether machines can do more than mimic humans, has also been debated for decades. Today there is a growing chorus of scientists and technologists raising new alarms about the possibility of the emergence of self-aware machines and their consequences. Discussions about the state of AI technology today veer into the realm of science fiction or perhaps religion.

The experiment was made possible by Google’s immense computing resources that allowed the researchers to turn loose a cluster of sixteen thousand processors on the problem—which of course is still a tiny fraction of the brain’s billions of neurons, a huge portion of which are devoted to vision. Whether or not Google is on the trail of a genuine artificial “brain” has become increasingly controversial. There is certainly no question that the deep learning techniques are paying off in a wealth of increasingly powerful AI achievements in vision and speech. And there remains in Silicon Valley a growing group of engineers and scientists who believe they are once again closing in on “Strong AI”—the creation of a self-aware machine with human or greater intelligence. Ray Kurzweil, the artificial intelligence researcher and barnstorming advocate for technologically induced immortality, joined Google in 2013 to take over the brain work from Ng, shortly after publishing How to Create a Mind, a book that purported to offer a recipe for creating a working AI. Kurzweil, of course, has all along been one of the most eloquent backers of the idea of a singularity.


pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence by John Brockman

agricultural Revolution, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic trading, artificial general intelligence, augmented reality, autonomous vehicles, basic income, bitcoin, blockchain, clean water, cognitive dissonance, Colonization of Mars, complexity theory, computer age, computer vision, constrained optimization, corporate personhood, cosmological principle, cryptocurrency, cuban missile crisis, Danny Hillis, dark matter, discrete time, Douglas Engelbart, Elon Musk, Emanuel Derman, endowment effect, epigenetics, Ernest Rutherford, experimental economics, Flash crash, friendly AI, functional fixedness, global pandemic, Google Glasses, hive mind, income inequality, information trail, Internet of things, invention of writing, iterative process, Jaron Lanier, job automation, Johannes Kepler, John Markoff, John von Neumann, Kevin Kelly, knowledge worker, loose coupling, microbiome, Moneyball by Michael Lewis explains big data, natural language processing, Network effects, Norbert Wiener, pattern recognition, Peter Singer: altruism, phenotype, planetary scale, Ray Kurzweil, recommendation engine, Republic of Letters, RFID, Richard Thaler, Rory Sutherland, Satyajit Das, Search for Extraterrestrial Intelligence, self-driving car, sharing economy, Silicon Valley, Skype, smart contracts, social intelligence, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, superintelligent machines, supervolcano, the scientific method, The Wisdom of Crowds, theory of mind, Thorstein Veblen, too big to fail, Turing machine, Turing test, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K

Learning to detect a cat in full frontal position after 10 million frames drawn from Internet videos is a long way from understanding what a cat is, and anybody who thinks that we’ve “solved” AI doesn’t realize the limitations of the current technology. To be sure, there have been exponential advances in narrow-engineering applications of artificial intelligence, such as playing chess, calculating travel routes, or translating texts in rough fashion, but there’s been scarcely more than linear progress in five decades of working toward strong AI. For example, the different flavors of intelligent personal assistants available on your smartphone are only modestly better than Eliza, an early example of primitive natural-language processing from the mid-1960s. We still have no machine that can, for instance, read all that the Web has to say about war and plot a decent campaign, nor do we even have an open-ended AI system that can figure out how to write an essay to pass a freshman composition class or an eighth-grade science exam.

AI can easily look like the real thing but still be a million miles away from being the real thing—like kissing through a pane of glass: It looks like a kiss but is only a faint shadow of the actual concept. I concede to AI proponents all of the semantic prowess of Shakespeare, the symbol juggling they do perfectly. Missing is the direct relationship with the ideas the symbols represent. Much of what is certain to come soon would have belonged in the old-school “Strong AI” territory. Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases . . . here in a couple of decades. But it is not all “iterative.” There’s a huge gap between that and the level of conscious understanding that truly deserves to be called Strong, as in “Alive AI.”


pages: 170 words: 49,193

The People vs Tech: How the Internet Is Killing Democracy (And How We Save It) by Jamie Bartlett

Ada Lovelace, Airbnb, Amazon Mechanical Turk, Andrew Keen, autonomous vehicles, barriers to entry, basic income, Bernie Sanders, bitcoin, blockchain, Boris Johnson, central bank independence, Chelsea Manning, cloud computing, computer vision, creative destruction, cryptocurrency, Daniel Kahneman / Amos Tversky, Dominic Cummings, Donald Trump, Edward Snowden, Elon Musk, Filter Bubble, future of work, gig economy, global village, Google bus, hive mind, Howard Rheingold, information retrieval, Internet of things, Jeff Bezos, job automation, John Maynard Keynes: technological unemployment, Julian Assange, manufacturing employment, Mark Zuckerberg, Marshall McLuhan, Menlo Park, meta analysis, meta-analysis, mittelstand, move fast and break things, Network effects, Nicholas Carr, off grid, Panopticon Jeremy Bentham, payday loans, Peter Thiel, prediction markets, QR code, ransomware, Ray Kurzweil, recommendation engine, Renaissance Technologies, ride hailing / ride sharing, Robert Mercer, Ross Ulbricht, Sam Altman, Satoshi Nakamoto, Second Machine Age, sharing economy, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, smart cities, smart contracts, smart meter, Snapchat, Stanford prison experiment, Steve Jobs, Steven Levy, strong AI, TaskRabbit, technological singularity, technoutopianism, Ted Kaczynski, the medium is the message, the scientific method, The Spirit Level, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, theory of mind, too big to fail, ultimatum game, universal basic income, WikiLeaks, World Values Survey, Y Combinator

The General Data Protection Regulation (GDPR), which is due to come into law across Europe shortly after this book goes to print, is a good example and must be enforced with vigour.*

SAFE AI FOR GOOD

Artificial Intelligence must not become a proprietary operating system owned and run by a single winner-takes-all company. However, we cannot fall behind in the international race to develop strong AI. Non-democracies must not get an edge on us. We should encourage the sector, but it must be subject to democratic control and, above all, tough regulation to ensure it works in the public interest and is not subject to being hacked or misused.2 Just as the inventors of the atomic bomb realised the power of their creation and so dedicated themselves to creating arms control and nuclear reactor safety, so AI inventors should take similar responsibility.


pages: 573 words: 157,767

From Bacteria to Bach and Back: The Evolution of Minds by Daniel C. Dennett

Ada Lovelace, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Andrew Wiles, Bayesian statistics, bioinformatics, bitcoin, Build a better mousetrap, Claude Shannon: information theory, computer age, computer vision, double entry bookkeeping, double helix, Douglas Hofstadter, Elon Musk, epigenetics, experimental subject, Fermat's Last Theorem, Gödel, Escher, Bach, information asymmetry, information retrieval, invention of writing, Isaac Newton, iterative process, John von Neumann, Menlo Park, Murray Gell-Mann, Necker cube, Norbert Wiener, pattern recognition, phenotype, Richard Feynman, Rodney Brooks, self-driving car, social intelligence, sorting algorithm, speech recognition, Stephen Hawking, Steven Pinker, strong AI, The Wealth of Nations by Adam Smith, theory of mind, Thomas Bayes, trickle-down economics, Turing machine, Turing test, Watson beat the top human players on Jeopardy!, Y2K

There is a long tradition of hype in AI, going back to the earliest days, and many of us have a well-developed habit of discounting the latest “revolutionary breakthrough” by, say, 70% or more, but when such high-tech mavens as Elon Musk and such world-class scientists as Sir Martin Rees and Stephen Hawking start ringing alarm bells about how AI could soon lead to a cataclysmic dissolution of human civilization in one way or another, it is time to rein in one’s habits and reexamine one’s suspicions. Having done so, my verdict is unchanged but more tentative than it used to be. I have always affirmed that “strong AI” is “possible in principle”—but I viewed it as a negligible practical possibility, because it would cost too much and not give us anything we really needed. Domingos and others have shown me that there may be feasible pathways (technically and economically feasible) that I had underestimated, but I still think the task is orders of magnitude larger and more difficult than the cheerleaders have claimed, for the reasons presented in this chapter, and in chapter 8 (the example of Newyorkabot, p. 164).

I discuss the prospects of such a powerful theory or model of an intelligent agent, and point out a key ambiguity in the original Turing Test, in an interview with Jimmy So about the implications of Her, in “Can Robots Fall in Love” (2013), The Daily Beast, http://www.thedailybeast.com/articles/2013/12/31/can-robots-fall-in-love-and-why-would-they.html.

“a negligible practical possibility”: When explaining why I thought strong AI was possible in principle but practically impossible, I have often compared it to the task of making a robotic bird that weighed no more than a robin, could catch insects on the fly, and land on a twig. No cosmic mystery, I averred, in such a bird, but the engineering required to bring it to reality would cost more than a dozen Manhattan Projects, and to what end? We can learn all we need to know about the principles of flight, and even bird flight, by making simpler models on which to test our theories, at a tiny fraction of the cost.


pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, Avi Goldfarb

"Robert Solow", Ada Lovelace, AI winter, Air France Flight 447, Airbus A320, artificial general intelligence, autonomous vehicles, basic income, Bayesian statistics, Black Swan, blockchain, call centre, Capital in the Twenty-First Century by Thomas Piketty, Captain Sullenberger Hudson, collateralized debt obligation, computer age, creative destruction, Daniel Kahneman / Amos Tversky, data acquisition, data is the new oil, deskilling, disruptive innovation, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, everywhere but in the productivity statistics, Google Glasses, high net worth, ImageNet competition, income inequality, information retrieval, inventory management, invisible hand, job automation, John Markoff, Joseph Schumpeter, Kevin Kelly, Lyft, Minecraft, Mitch Kapor, Moneyball by Michael Lewis explains big data, Nate Silver, new economy, On the Economy of Machinery and Manufactures, pattern recognition, performance metric, profit maximization, QWERTY keyboard, race to the bottom, randomized controlled trial, Ray Kurzweil, ride hailing / ride sharing, Second Machine Age, self-driving car, shareholder value, Silicon Valley, statistical model, Stephen Hawking, Steve Jobs, Steven Levy, strong AI, The Future of Employment, The Signal and the Noise by Nate Silver, Tim Cook: Apple, Turing test, Uber and Lyft, uber lyft, US Airways Flight 1549, Vernor Vinge, Watson beat the top human players on Jeopardy!, William Langewiesche, Y Combinator, zero-sum game

The school would modify the task of offering incentives like scholarships and financial aid due to increased certainty about who will succeed. Finally, the school would adjust other elements of the work flow to take advantage of being able to provide instantaneous school admission decisions.

13 Decomposing Decisions

Today’s AI tools are far from the machines with human-like intelligence of science fiction (often referred to as “artificial general intelligence” or AGI, or “strong AI”). The current generation of AI provides tools for prediction and little else. This view of AI does not diminish it. As Steve Jobs once remarked, “One of the things that really separates us from the high primates is that we’re tool builders.” He used the example of the bicycle as a tool that had given people superpowers in locomotion above every other animal. And he felt the same about computers: “What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.”1 Today, AI tools predict the intention of speech (Amazon’s Echo), predict command context (Apple’s Siri), predict what you want to buy (Amazon’s recommendations), predict which links will connect you to the information you want to find (Google search), predict when to apply the brakes to avoid danger (Tesla’s Autopilot), and predict the news you will want to read (Facebook’s newsfeed).


pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future by Luke Dormehl

Ada Lovelace, agricultural Revolution, AI winter, Albert Einstein, Alexey Pajitnov wrote Tetris, algorithmic trading, Amazon Mechanical Turk, Apple II, artificial general intelligence, Automated Insights, autonomous vehicles, book scanning, borderless world, call centre, cellular automata, Claude Shannon: information theory, cloud computing, computer vision, correlation does not imply causation, crowdsourcing, drone strike, Elon Musk, Flash crash, friendly AI, game design, global village, Google X / Alphabet X, hive mind, industrial robot, information retrieval, Internet of things, iterative process, Jaron Lanier, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, life extension, Loebner Prize, Marc Andreessen, Mark Zuckerberg, Menlo Park, natural language processing, Norbert Wiener, out of africa, PageRank, pattern recognition, Ray Kurzweil, recommendation engine, remote working, RFID, self-driving car, Silicon Valley, Skype, smart cities, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia, social intelligence, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, technological singularity, The Coming Technological Singularity, The Future of Employment, Tim Cook: Apple, too big to fail, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!

But not everyone is so convinced that the Singularity will be, well, quite so singular. As Alan Turing pointed out with his Turing Test, the question of whether or not a machine can think is ‘meaningless’ in the sense that it is virtually impossible to assess with any certainty. As we saw in the last chapter, the idea that consciousness is some emergent byproduct of faster and faster computers is overly simplistic. Consider the difficulty in distinguishing between ‘weak’ and ‘strong’ AI. Some people mistakenly suggest that, in the former, an AI’s outcome has been pre-programmed and it is therefore the result of an algorithm carrying out a specific series of steps to achieve a knowable outcome. This means an AI has little to no chance of generating an unpredictable outcome, provided that the training process is properly carried out. As noted in chapter six, however, genetic algorithms can generate solutions that we may not necessarily expect.


Toast by Charles Stross

anthropic principle, Buckminster Fuller, cosmological principle, dark matter, double helix, Ernest Rutherford, Extropian, Francis Fukuyama: the end of history, glass ceiling, gravity well, Khyber Pass, Mars Rover, Mikhail Gorbachev, NP-complete, oil shale / tar sands, peak oil, performance metric, phenotype, plutocrats, Plutocrats, Ronald Reagan, Silicon Valley, slashdot, speech recognition, strong AI, traveling salesman, Turing test, urban renewal, Vernor Vinge, Whole Earth Review, Y2K

Suicide by the numbers.” A glass appeared by my right hand. “Way I see it, we’ve been fighting a losing battle here. Maybe if we hadn’t put a spike in Babbage’s gears he’d have developed computing technology on an ad-hoc basis and we might have been able to finesse the mathematicians into ignoring it as being beneath them—brute engineering—but I’m not optimistic. Immunizing a civilization against developing strong AI is one of those difficult problems that no algorithm exists to solve. The way I see it, once a civilization develops the theory of the general purpose computer, and once someone comes up with the goal of artificial intelligence, the foundations are rotten and the dam is leaking. You might as well take off and nuke them from orbit; it can’t do any more damage.” “You remind me of the story of the little Dutch boy.”


pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee

AI winter, Airbnb, Albert Einstein, algorithmic trading, artificial general intelligence, autonomous vehicles, barriers to entry, basic income, business cycle, cloud computing, commoditize, computer vision, corporate social responsibility, creative destruction, crony capitalism, Deng Xiaoping, deskilling, Donald Trump, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, full employment, future of work, gig economy, Google Chrome, happiness index / gross national happiness, if you build it, they will come, ImageNet competition, income inequality, informal economy, Internet of things, invention of the telegraph, Jeff Bezos, job automation, John Markoff, Kickstarter, knowledge worker, Lean Startup, low skilled workers, Lyft, mandatory minimum, Mark Zuckerberg, Menlo Park, minimum viable product, natural language processing, new economy, pattern recognition, pirate software, profit maximization, QR code, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, risk tolerance, Robert Mercer, Rodney Brooks, Rubik’s Cube, Sam Altman, Second Machine Age, self-driving car, sentiment analysis, sharing economy, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, Skype, special economic zone, speech recognition, Stephen Hawking, Steve Jobs, strong AI, The Future of Employment, Travis Kalanick, Uber and Lyft, uber lyft, universal basic income, urban planning, Y Combinator

Instead of a dispersion of industry profits across different companies and regions, we will begin to see greater and greater concentration of these astronomical sums in the hands of a few, all while unemployment lines grow longer.

THE AI WORLD ORDER

Inequality will not be contained within national borders. China and the United States have already jumped out to an enormous lead over all other countries in artificial intelligence, setting the stage for a new kind of bipolar world order. Several other countries—the United Kingdom, France, and Canada, to name a few—have strong AI research labs staffed with great talent, but they lack the venture-capital ecosystem and large user bases to generate the data that will be key to the age of implementation. As AI companies in the United States and China accumulate more data and talent, the virtuous cycle of data-driven improvements is widening their lead to a point where it will become insurmountable. China and the United States are currently incubating the AI giants that will dominate global markets and extract wealth from consumers around the globe.


pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity by Byron Reese

agricultural Revolution, AI winter, artificial general intelligence, basic income, Buckminster Fuller, business cycle, business process, Claude Shannon: information theory, clean water, cognitive bias, computer age, crowdsourcing, dark matter, Elon Musk, Eratosthenes, estate planning, financial independence, first square of the chessboard, first square of the chessboard / second half of the chessboard, full employment, Hans Rosling, income inequality, invention of agriculture, invention of movable type, invention of the printing press, invention of writing, Isaac Newton, Islamic Golden Age, James Hargreaves, job automation, Johannes Kepler, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, lateral thinking, life extension, Louis Pasteur, low skilled workers, manufacturing employment, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, Mary Lou Jepsen, Moravec's paradox, On the Revolutions of the Heavenly Spheres, pattern recognition, profit motive, Ray Kurzweil, recommendation engine, Rodney Brooks, Sam Altman, self-driving car, Silicon Valley, Skype, spinning jenny, Stephen Hawking, Steve Wozniak, Steven Pinker, strong AI, technological singularity, telepresence, telepresence robot, The Future of Employment, the scientific method, Turing machine, Turing test, universal basic income, Von Neumann architecture, Wall-E, Watson beat the top human players on Jeopardy!, women in the workforce, working poor, Works Progress Administration, Y Combinator

But—and this is really important—there are two completely different things people mean today when they talk about artificial intelligence. There is “narrow AI” and there is “general AI.” The kind of AI we have today is narrow AI, also known as weak AI. It is the only kind of AI we know how to build, and it is incredibly useful. Narrow AI is the ability for a computer to solve a specific kind of problem or perform a specific task. The other kind of AI is referred to by three different names: general AI, strong AI, or artificial general intelligence (AGI). Although the terms are interchangeable, I will use AGI from this point forward to refer to an artificial intelligence as smart and versatile as you or me. A Roomba vacuum cleaner, Siri, and a self-driving car are powered by narrow AI. A hypothetical robot that can unload the dishwasher would be powered by narrow AI. But if you wanted a robot MacGyver, that would require AGI, because MacGyver has to respond to situations that he has not previously considered.


pages: 329 words: 95,309

Digital Bank: Strategies for Launching or Becoming a Digital Bank by Chris Skinner

algorithmic trading, AltaVista, Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, augmented reality, bank run, Basel III, bitcoin, business cycle, business intelligence, business process, business process outsourcing, buy and hold, call centre, cashless society, clean water, cloud computing, corporate social responsibility, credit crunch, crowdsourcing, cryptocurrency, demand response, disintermediation, don't be evil, en.wikipedia.org, fault tolerance, fiat currency, financial innovation, Google Glasses, high net worth, informal economy, Infrastructure as a Service, Internet of things, Jeff Bezos, Kevin Kelly, Kickstarter, M-Pesa, margin call, mass affluent, MITM: man-in-the-middle, mobile money, Mohammed Bouazizi, new economy, Northern Rock, Occupy movement, Pingit, platform as a service, Ponzi scheme, prediction markets, pre–internet, QR code, quantitative easing, ransomware, reserve currency, RFID, Satoshi Nakamoto, Silicon Valley, smart cities, social intelligence, software as a service, Steve Jobs, strong AI, Stuxnet, trade route, unbanked and underbanked, underbanked, upwardly mobile, We are the 99%, web application, WikiLeaks, Y2K

I know for a fact that the new economies and new values that we discuss within Innotribe are driven by social media. Social media is creating new currencies and new economic models, and this will be very big and very important in the two to three years downstream from now. The question for the banks is how will they position in this new world of peer-to-peer currencies in social media. That is going to be a key question for banks in innovation for the next few years. The other area is what I call strong AI. This is a modern way of looking at AI. The old way was mechanical and thought of this as expert systems. Today, we have this enormous computational power in our hands now, and we should make a big splash around this for the next four or five years. So social data, social media, alternative currencies and peer-to-peer payments will dominate for the near term, and then big data and AI in four or five years from now.


Noam Chomsky: A Life of Dissent by Robert F. Barsky

Albert Einstein, anti-communist, centre right, feminist movement, Howard Zinn, information retrieval, means of production, Norman Mailer, profit motive, Ralph Nader, Ronald Reagan, strong AI, The Bell Curve by Richard Herrnstein and Charles Murray, theory of mind, Yom Kippur War

Chomsky, who in fact only attended the conference briefly, preferring to spend his time engaged in the subject in the context of "talks to popular audiences," insists that the Times misrepresented what had occurred at the meetings: "There was scientific interest, but it had nothing whatsoever to do with language translation (MT) and artificial intelligence (AI). MT is a very low level engineering project, and so-called classic strong AI is largely vacuous, dismissed by most serious scientists and lacking any results, as its leading exponents concede" (31 Mar. 1995). Entire research projects, on language acquisition and other topics, were now being conducted with the aim of either establishing or disproving Chomsky's theories. Chomsky himself fuelled these enterprises by maintaining a high level of productivity: he published Reflections on Language (1975), Essays on Form and Interpretation (1977), Rules and Representations (1980), and Modular Approaches to the Study of the Mind (1984).


pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines by Thomas H. Davenport, Julia Kirby

AI winter, Andy Kessler, artificial general intelligence, asset allocation, Automated Insights, autonomous vehicles, basic income, Baxter: Rethink Robotics, business intelligence, business process, call centre, carbon-based life, Clayton Christensen, clockwork universe, commoditize, conceptual framework, dark matter, David Brooks, deliberate practice, deskilling, digital map, disruptive innovation, Douglas Engelbart, Edward Lloyd's coffeehouse, Elon Musk, Erik Brynjolfsson, estate planning, fixed income, follow your passion, Frank Levy and Richard Murnane: The New Division of Labor, Freestyle chess, game design, general-purpose programming language, global pandemic, Google Glasses, Hans Lippershey, haute cuisine, income inequality, index fund, industrial robot, information retrieval, intermodal, Internet of things, inventory management, Isaac Newton, job automation, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Joi Ito, Khan Academy, knowledge worker, labor-force participation, lifelogging, longitudinal study, loss aversion, Mark Zuckerberg, Narrative Science, natural language processing, Norbert Wiener, nuclear winter, pattern recognition, performance metric, Peter Thiel, precariat, quantitative trading / quantitative finance, Ray Kurzweil, Richard Feynman, risk tolerance, Robert Shiller, Robert Shiller, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, six sigma, Skype, social intelligence, speech recognition, spinning jenny, statistical model, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, superintelligent machines, supply-chain management, transaction costs, Tyler Cowen: Great Stagnation, Watson beat the top human players on Jeopardy!, Works Progress Administration, Zipcar

Vendors like IBM, Cognitive Scale, SAS, and Tibco are adding new cognitive functions and integrating them into solutions. Deloitte is working with companies like IBM and Cognitive Scale to create not just a single application, but a broad “Intelligent Automation Platform.” Even when progress is made on these types of integration, the result will still fall short of the all-knowing “artificial general intelligence” or “strong AI” that we discussed in Chapter 2. That may well be coming, but not anytime soon. Still, these short-term combinations of tools and methods may well make automation solutions much more useful. Broadening Application of the Same Tools —In addition to employing broader types of technology, organizations that are stepping forward are using their existing technology to address different industries and business functions.


Falter: Has the Human Game Begun to Play Itself Out? by Bill McKibben

23andMe, Affordable Care Act / Obamacare, Airbnb, American Legislative Exchange Council, Anne Wojcicki, artificial general intelligence, Bernie Sanders, Bill Joy: nanobots, Burning Man, call centre, carbon footprint, Charles Lindbergh, clean water, Colonization of Mars, computer vision, David Attenborough, Donald Trump, double helix, Edward Snowden, Elon Musk, ending welfare as we know it, energy transition, Flynn Effect, Google Earth, Hyperloop, impulse control, income inequality, Intergovernmental Panel on Climate Change (IPCC), Jane Jacobs, Jaron Lanier, Jeff Bezos, job automation, life extension, light touch regulation, Mark Zuckerberg, mass immigration, megacity, Menlo Park, moral hazard, Naomi Klein, Nelson Mandela, obamacare, off grid, oil shale / tar sands, pattern recognition, Peter Thiel, plutocrats, Plutocrats, profit motive, Ralph Waldo Emerson, Ray Kurzweil, Robert Mercer, Ronald Reagan, Sam Altman, self-driving car, Silicon Valley, Silicon Valley startup, smart meter, Snapchat, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, supervolcano, technoutopianism, The Wealth of Nations by Adam Smith, traffic fines, Travis Kalanick, urban sprawl, Watson beat the top human players on Jeopardy!, Y Combinator, Y2K, yield curve

When the fully self-driving car finally arrives in your driveway, that will be weak AI to the max: thousands of sensors deployed to perform a specific task better than you can do it. You’ll be able to drink IPAs for hours at your local tavern, and the self-driving car will take you home—and it may well be able to recommend precisely which IPAs you’d like best. But it won’t be able to carry on an interesting discussion about whether this is the best course for your life. That next step up is artificial general intelligence, sometimes referred to as “strong AI.” That’s a computer “as smart as a human across the board, a machine that can perform any intellectual task a human being can,” in Urban’s description. This kind of intelligence would require “the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”9 Five years ago a pair of researchers asked hundreds of AI experts at a series of conferences when we’d reach this milestone—more precisely, they asked them to name a “median optimistic year,” when there was a 10 percent chance we’d get there; a median realistic year, a 50 percent chance; and a “pessimistic” year, in which there was a 90 percent chance.


pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford

"Robert Solow", 3D printing, additive manufacturing, Affordable Care Act / Obamacare, AI winter, algorithmic trading, Amazon Mechanical Turk, artificial general intelligence, assortative mating, autonomous vehicles, banking crisis, basic income, Baxter: Rethink Robotics, Bernie Madoff, Bill Joy: nanobots, business cycle, call centre, Capital in the Twenty-First Century by Thomas Piketty, Chris Urmson, Clayton Christensen, clean water, cloud computing, collateralized debt obligation, commoditize, computer age, creative destruction, debt deflation, deskilling, disruptive innovation, diversified portfolio, Erik Brynjolfsson, factory automation, financial innovation, Flash crash, Fractional reserve banking, Freestyle chess, full employment, Goldman Sachs: Vampire Squid, Gunnar Myrdal, High speed trading, income inequality, indoor plumbing, industrial robot, informal economy, iterative process, Jaron Lanier, job automation, John Markoff, John Maynard Keynes: technological unemployment, John von Neumann, Kenneth Arrow, Khan Academy, knowledge worker, labor-force participation, liquidity trap, low skilled workers, low-wage service sector, Lyft, manufacturing employment, Marc Andreessen, McJob, moral hazard, Narrative Science, Network effects, new economy, Nicholas Carr, Norbert Wiener, obamacare, optical character recognition, passive income, Paul Samuelson, performance metric, Peter Thiel, plutocrats, Plutocrats, post scarcity, precision agriculture, price mechanism, Ray Kurzweil, rent control, rent-seeking, reshoring, RFID, Richard Feynman, Rodney Brooks, Sam Peltzman, secular stagnation, self-driving car, Silicon Valley, Silicon Valley startup, single-payer health, software is eating the world, sovereign wealth fund, speech recognition, Spread Networks laid a new fibre optics cable between New York and Chicago, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Steven Pinker, strong AI, Stuxnet, technological singularity, telepresence, 
telepresence robot, The Bell Curve by Richard Herrnstein and Charles Murray, The Coming Technological Singularity, The Future of Employment, Thomas L Friedman, too big to fail, Tyler Cowen: Great Stagnation, uber lyft, union organizing, Vernor Vinge, very high income, Watson beat the top human players on Jeopardy!, women in the workforce

The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible.”9 Gordon Moore, whose name seems destined to be forever associated with exponentially advancing technology, is likewise skeptical that anything like the Singularity will ever occur.10 Kurzweil’s timeframe for the arrival of human-level artificial intelligence has plenty of defenders, however. MIT physicist Max Tegmark, one of the co-authors of the Hawking article, told The Atlantic’s James Hamblin that “this is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”11 Others view a thinking machine as fundamentally possible, but much further out. Gary Marcus, for example, thinks strong AI will take at least twice as long as Kurzweil predicts, but that “it’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.”12 In recent years, speculation about human-level AI has shifted increasingly away from a top-down programming approach and, instead, toward an emphasis on reverse engineering and then simulating the human brain.


pages: 385 words: 111,113

Augmented: Life in the Smart Lane by Brett King

23andMe, 3D printing, additive manufacturing, Affordable Care Act / Obamacare, agricultural Revolution, Airbnb, Albert Einstein, Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, Apple II, artificial general intelligence, asset allocation, augmented reality, autonomous vehicles, barriers to entry, bitcoin, blockchain, business intelligence, business process, call centre, chief data officer, Chris Urmson, Clayton Christensen, clean water, congestion charging, crowdsourcing, cryptocurrency, deskilling, different worldview, disruptive innovation, distributed generation, distributed ledger, double helix, drone strike, Elon Musk, Erik Brynjolfsson, Fellow of the Royal Society, fiat currency, financial exclusion, Flash crash, Flynn Effect, future of work, gig economy, Google Glasses, Google X / Alphabet X, Hans Lippershey, Hyperloop, income inequality, industrial robot, information asymmetry, Internet of things, invention of movable type, invention of the printing press, invention of the telephone, invention of the wheel, James Dyson, Jeff Bezos, job automation, job-hopping, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, Kodak vs Instagram, Leonard Kleinrock, lifelogging, low earth orbit, low skilled workers, Lyft, M-Pesa, Mark Zuckerberg, Marshall McLuhan, megacity, Metcalfe’s law, Minecraft, mobile money, money market fund, more computing power than Apollo, Network effects, new economy, obamacare, Occupy movement, Oculus Rift, off grid, packet switching, pattern recognition, peer-to-peer, Ray Kurzweil, RFID, ride hailing / ride sharing, Robert Metcalfe, Satoshi Nakamoto, Second Machine Age, selective serotonin reuptake inhibitor (SSRI), self-driving car, sharing economy, Shoshana Zuboff, Silicon Valley, Silicon Valley startup, Skype, smart cities, smart grid, smart transportation, Snapchat, social graph, software as a service, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steve 
Wozniak, strong AI, TaskRabbit, technological singularity, telemarketer, telepresence, telepresence robot, Tesla Model S, The Future of Employment, Tim Cook: Apple, trade route, Travis Kalanick, Turing complete, Turing test, uber lyft, undersea cable, urban sprawl, V2 rocket, Watson beat the top human players on Jeopardy!, white picket fence, WikiLeaks

Central to this will be infrastructure that starts to run itself, responding in real time: automated UAVs, autonomous emergency vehicles and robots, and sensor nets feeding the right algorithms or AIs, which dispatch those resources. Artificial intelligence will not only be an underpinning of smart cities; it will also be necessary simply to process all of the sensor data coming into smart city operations centres, where humans would only slow the process down. Strong AI running smart cities is closer to two decades away. Within 20 to 30 years, we will see smart governance at the hands of AI—coded laws and enforcement, resource allocation, budgeting and optimal decision-making by algorithms that run independent of human committees and voting. The manual counting of votes in elections will be a thing of the past, as citizens will BYOD (bring your own device) to the challenge of casting their votes.
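The sensor-to-dispatch feedback loop the passage describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not anything from the book: all names (`SensorEvent`, `Resource`, `dispatch`) are invented for the example, and the "algorithm" is just nearest-idle-resource selection on a grid.

```python
# Hypothetical sketch of a smart-city feedback loop: a sensor event comes in,
# and an algorithm dispatches the nearest idle resource, with no human in the loop.
from dataclasses import dataclass


@dataclass
class SensorEvent:
    location: tuple  # (x, y) grid coordinates of the incident
    severity: int    # 1 (minor) .. 5 (critical)


@dataclass
class Resource:
    name: str
    location: tuple
    busy: bool = False


def distance(a, b):
    # Manhattan distance is enough for a grid-city sketch.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])


def dispatch(event, resources):
    """Send the nearest idle resource to the event; return it, or None if all busy."""
    idle = [r for r in resources if not r.busy]
    if not idle:
        return None
    chosen = min(idle, key=lambda r: distance(r.location, event.location))
    chosen.busy = True
    return chosen


fleet = [Resource("drone-1", (0, 0)), Resource("ambulance-1", (5, 5))]
event = SensorEvent(location=(1, 2), severity=4)
print(dispatch(event, fleet).name)  # drone-1: it is 3 blocks away, the ambulance 7
```

The point of the passage is that loops like this, run at city scale over millions of events per second, leave no useful role for a human operator in the inner loop.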


pages: 379 words: 108,129

An Optimist's Tour of the Future by Mark Stevenson

23andMe, Albert Einstein, Andy Kessler, augmented reality, bank run, carbon footprint, carbon-based life, clean water, computer age, decarbonisation, double helix, Douglas Hofstadter, Elon Musk, flex fuel, Gödel, Escher, Bach, Hans Rosling, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invention of agriculture, Isaac Newton, Jeff Bezos, Kevin Kelly, Law of Accelerating Returns, Leonard Kleinrock, life extension, Louis Pasteur, low earth orbit, mutually assured destruction, Naomi Klein, off grid, packet switching, peak oil, pre–internet, Ray Kurzweil, Richard Feynman, Rodney Brooks, self-driving car, Silicon Valley, smart cities, social intelligence, stem cell, Stephen Hawking, Steven Pinker, Stewart Brand, strong AI, the scientific method, Wall-E, X Prize

If this were then subjected to an appropriate course of education one would obtain the adult brain.’ This proposed necessity of having to raise robots might lead you to the conclusion that truly intelligent robots will be few and far between. But the thing about robots is you can replicate them. Once we’ve got one intelligent robot brain, we can copy it to another machine, and another, and another. The robots have finally arrived, bringing an explosion of ‘strong AI’. Of course, it may not just be us (the humans) doing the copying, it might be the robots themselves. And because technology improves at a startling rate (way faster than biological evolution), one has to consider the possibility that things won’t stop there. Once we achieve a robot with human-level (if not human-like) intelligence, it won’t be very long until robot cognition outstrips the human mind – marrying the human-like intelligence with instant recall, flawless memory and the number-crunching ability of Deep Blue.


pages: 419 words: 109,241

A World Without Work: Technology, Automation, and How We Should Respond by Daniel Susskind

3D printing, agricultural Revolution, AI winter, Airbnb, Albert Einstein, algorithmic trading, artificial general intelligence, autonomous vehicles, basic income, Bertrand Russell: In Praise of Idleness, blue-collar work, British Empire, Capital in the Twenty-First Century by Thomas Piketty, cloud computing, computer age, computer vision, computerized trading, creative destruction, David Graeber, David Ricardo: comparative advantage, demographic transition, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, drone strike, Edward Glaeser, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, financial innovation, future of work, gig economy, Gini coefficient, Google Glasses, Gödel, Escher, Bach, income inequality, income per capita, industrial robot, interchangeable parts, invisible hand, Isaac Newton, Jacques de Vaucanson, James Hargreaves, job automation, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Joi Ito, Joseph Schumpeter, Kenneth Arrow, Khan Academy, Kickstarter, low skilled workers, lump of labour, Marc Andreessen, Mark Zuckerberg, means of production, Metcalfe’s law, natural language processing, Network effects, Occupy movement, offshore financial centre, Paul Samuelson, Peter Thiel, pink-collar, precariat, purchasing power parity, Ray Kurzweil, ride hailing / ride sharing, road to serfdom, Robert Gordon, Sam Altman, Second Machine Age, self-driving car, shareholder value, sharing economy, Silicon Valley, Snapchat, social intelligence, software is eating the world, sovereign wealth fund, spinning jenny, Stephen Hawking, Steve Jobs, strong AI, telemarketer, The Future of Employment, The Rise and Fall of American Growth, the scientific method, The Wealth of Nations by Adam Smith, Thorstein Veblen, Travis Kalanick, Turing test, Tyler Cowen: Great Stagnation, universal basic income, upwardly mobile, Watson beat the top human players on Jeopardy!, We 
are the 99%, wealth creators, working poor, working-age population, Y Combinator

Kasparov, “The Chess Master and the Computer.”

18.  See Dennett, From Bacteria to Bach and Back, p. 36.

19.  Charles Darwin, On the Origin of Species (London: Penguin Books, 2009), p. 427.

20.  See Isaiah Berlin, The Hedgehog and the Fox (New York: Simon & Schuster, 1953).

21.  The distinction between AGI and ANI is often conflated with another one made by John Searle, who speaks of the difference between “strong” AI and “weak” AI. But the two are not the same thing at all. AGI and ANI reflect the breadth of a machine’s capability, while Searle’s terms describe whether a machine thinks like a human being (“strong”) or unlike one (“weak”).

22.  Nick Bostrom and Eliezer Yudkowsky, “The Ethics of Artificial Intelligence” in William Ramsey and Keith Frankish, eds., Cambridge Handbook of Artificial Intelligence (Cambridge: Cambridge University Press, 2011).

23.  


pages: 463 words: 118,936

Darwin Among the Machines by George Dyson

Ada Lovelace, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anti-communist, British Empire, carbon-based life, cellular automata, Claude Shannon: information theory, combinatorial explosion, computer age, Danny Hillis, Donald Davies, fault tolerance, Fellow of the Royal Society, finite state, IFF: identification friend or foe, invention of the telescope, invisible hand, Isaac Newton, Jacquard loom, James Watt: steam engine, John Nash: game theory, John von Neumann, low earth orbit, Menlo Park, Nash equilibrium, Norbert Wiener, On the Economy of Machinery and Manufactures, packet switching, pattern recognition, phenotype, RAND corporation, Richard Feynman, spectrum auction, strong AI, the scientific method, The Wealth of Nations by Adam Smith, Turing machine, Von Neumann architecture, zero-sum game

Gödel’s second incompleteness theorem—showing that no formal system can prove its own consistency—has been construed as limiting the ability of mechanical processes to comprehend levels of meaning that are accessible to our minds. The argument over where to draw this distinction has been going on for a long time. Can machines calculate? Can machines think? Can machines become conscious? Can machines have souls? Although Leibniz believed that the process of thought could be arithmetized and that mechanism could perform the requisite arithmetic, he disagreed with the “strong AI” of Hobbes that reduced everything to mechanism, even our own consciousness or the existence (and corporeal mortality) of a soul. “Whatever is performed in the body of man and of every animal is no less mechanical than what is performed in a watch,” wrote Leibniz to Samuel Clarke.51 But, in the Monadology, Leibniz argued that “perception, and that which depends upon it, are inexplicable by mechanical causes,” and he presented a thought experiment to support his views: “Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill.


pages: 492 words: 141,544

Red Moon by Kim Stanley Robinson

artificial general intelligence, basic income, blockchain, Brownian motion, correlation does not imply causation, cryptocurrency, Deng Xiaoping, gig economy, Hyperloop, illegal immigration, income inequality, invisible hand, low earth orbit, Magellanic Cloud, megacity, precariat, Schrödinger's Cat, seigniorage, strong AI, Turing machine, universal basic income, zero-sum game

It has a very low bit rate because it’s so hard to detect neutrinos, but his people have a way to send a real flood of them, and the ice flooring this crater is just enough to catch a signal strength that is about the equal of the first telegraphs. So he keeps his messages brief.” “Seems like a lot of trouble for a telegraph,” John Semple observed. Anna nodded. “Just a toy, at least for now. The real power here is the quantum computer, down there in that building you see in the ice. That thing is a monster.” “Strong AI?” Ta Shu asked. “I don’t know what you mean by that, but definitely a lot of AI. Not strong in the philosophical sense, but, you know—fast. Yottaflops fast.” “Yottaflops,” Ta Shu repeated. “I like that word. That means very fast?” “Very fast. Not so much strong, in my opinion, because of how lame we are at programming. But fast for sure.” Anna then introduced a few of the free crater residents around the table, and invited the visitors to sit down.


pages: 561 words: 167,631

2312 by Kim Stanley Robinson

agricultural Revolution, double helix, full employment, hive mind, if you see hoof prints, think horses—not zebras, Kuiper Belt, late capitalism, mutually assured destruction, Nelson Mandela, offshore financial centre, orbital mechanics / astrodynamics, pattern recognition, phenotype, post scarcity, precariat, retrograde motion, stem cell, strong AI, the built environment, the High Line, Turing machine, Turing test, Winter of Discontent

In these years all the bad trends converged in “perfect storm” fashion, leading to a rise in average global temperature of five K, and sea level rise of five meters—and as a result, in the 2120s, food shortages, mass riots, catastrophic death on all continents, and an immense spike in the extinction rate of other species. Early lunar bases, scientific stations on Mars. The Turnaround: 2130 to 2160. Verteswandel (Shortback’s famous “mutation of values”), followed by revolutions; strong AI; self-replicating factories; terraforming of Mars begun; fusion power; strong synthetic biology; climate modification efforts, including the disastrous Little Ice Age of 2142–54; space elevators on Earth and Mars; fast space propulsion; the space diaspora begun; the Mondragon Accord signed. And thus: The Accelerando: 2160 to 2220. Full application of all the new technological powers, including human longevity increases; terraforming of Mars and subsequent Martian revolution; full diaspora into solar system; hollowing of the terraria; start of the terraforming of Venus; the construction of Terminator; and Mars joining the Mondragon Accord.


Global Catastrophic Risks by Nick Bostrom, Milan M. Cirkovic

affirmative action, agricultural Revolution, Albert Einstein, American Society of Civil Engineers: Report Card, anthropic principle, artificial general intelligence, Asilomar, availability heuristic, Bill Joy: nanobots, Black Swan, carbon-based life, cognitive bias, complexity theory, computer age, coronavirus, corporate governance, cosmic microwave background, cosmological constant, cosmological principle, cuban missile crisis, dark matter, death of newspapers, demographic transition, Deng Xiaoping, distributed generation, Doomsday Clock, Drosophila, endogenous growth, Ernest Rutherford, failed state, feminist movement, framing effect, friendly AI, Georg Cantor, global pandemic, global village, Gödel, Escher, Bach, hindsight bias, Intergovernmental Panel on Climate Change (IPCC), invention of agriculture, Kevin Kelly, Kuiper Belt, Law of Accelerating Returns, life extension, means of production, meta analysis, meta-analysis, Mikhail Gorbachev, millennium bug, mutually assured destruction, nuclear winter, P = NP, peak oil, phenotype, planetary scale, Ponzi scheme, prediction markets, RAND corporation, Ray Kurzweil, reversible computing, Richard Feynman, Ronald Reagan, scientific worldview, Singularitarianism, social intelligence, South China Sea, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, The Coming Technological Singularity, Tunguska event, twin studies, uranium enrichment, Vernor Vinge, War on Poverty, Westphalian system, Y2K

Humanity did not rise to prominence on Earth by holding its breath longer than other species. The catastrophic scenario that stems from underestimating the power of intelligence is that someone builds a button, and does not care enough what the button does, because they do not think the button is powerful enough to hurt them. Or the wider field of AI researchers will not pay enough attention to risks of strong AI, and therefore good tools and firm foundations for friendliness will not be available when it becomes possible to build strong intelligences. And one should not fail to mention—for it also impacts upon existential risk—that AI could be the powerful solution to other existential risks, and by mistake we will ignore our best hope of survival. The point about underestimating the potential impact of AI is symmetrical around potential good impacts and potential bad impacts.


pages: 1,152 words: 266,246

Why the West Rules--For Now: The Patterns of History, and What They Reveal About the Future by Ian Morris

addicted to oil, Admiral Zheng, agricultural Revolution, Albert Einstein, anti-communist, Arthur Eddington, Atahualpa, Berlin Wall, British Empire, Columbian Exchange, conceptual framework, cuban missile crisis, defense in depth, demographic transition, Deng Xiaoping, discovery of the americas, Doomsday Clock, en.wikipedia.org, falling living standards, Flynn Effect, Francisco Pizarro, global village, God and Mammon, hiring and firing, indoor plumbing, Intergovernmental Panel on Climate Change (IPCC), invention of agriculture, Isaac Newton, James Watt: steam engine, Kickstarter, Kitchen Debate, knowledge economy, market bubble, mass immigration, Menlo Park, Mikhail Gorbachev, mutually assured destruction, New Journalism, out of africa, Peter Thiel, phenotype, pink-collar, place-making, purchasing power parity, RAND corporation, Ray Kurzweil, Ronald Reagan, Scientific racism, sexual politics, Silicon Valley, Sinatra Doctrine, South China Sea, special economic zone, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, The Wealth of Nations by Adam Smith, Thomas Kuhn: the structure of scientific revolutions, Thomas L Friedman, Thomas Malthus, trade route, upwardly mobile, wage slave, washing machines reduced drudgery

Archaeogenetics. Cambridge, UK: Cambridge University Press, 2000. Renfrew, Colin, and Iain Morley, eds. Becoming Human: Innovation in Prehistoric Material and Spiritual Culture. Cambridge, UK: Cambridge University Press, 2009. Reynolds, David. One World Divisible: A Global History Since 1945. New York: Norton, 2000. Richards, Jay, et al. Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong A.I. Seattle: Discovery Institute, 2002. Richards, John. Unending Frontier: An Environmental History of the Early Modern World. Berkeley: University of California Press, 2003. Richardson, Lewis Fry. Statistics of Deadly Quarrels. Pacific Grove, CA: Boxwood Press, 1960. Richerson, Peter, Robert Boyd, and Robert Bettinger. “Was Agriculture Impossible During the Pleistocene but Mandatory During the Holocene?”


pages: 1,737 words: 491,616

Rationality: From AI to Zombies by Eliezer Yudkowsky

Albert Einstein, Alfred Russel Wallace, anthropic principle, anti-pattern, anti-work, Arthur Eddington, artificial general intelligence, availability heuristic, Bayesian statistics, Berlin Wall, Build a better mousetrap, Cass Sunstein, cellular automata, cognitive bias, cognitive dissonance, correlation does not imply causation, cosmological constant, creative destruction, Daniel Kahneman / Amos Tversky, dematerialisation, different worldview, discovery of DNA, Douglas Hofstadter, Drosophila, effective altruism, experimental subject, Extropian, friendly AI, fundamental attribution error, Gödel, Escher, Bach, hindsight bias, index card, index fund, Isaac Newton, John Conway, John von Neumann, Long Term Capital Management, Louis Pasteur, mental accounting, meta analysis, meta-analysis, money market fund, Nash equilibrium, Necker cube, NP-complete, P = NP, pattern recognition, Paul Graham, Peter Thiel, Pierre-Simon Laplace, placebo effect, planetary scale, prediction markets, random walk, Ray Kurzweil, reversible computing, Richard Feynman, risk tolerance, Rubik’s Cube, Saturday Night Live, Schrödinger's Cat, scientific mainstream, scientific worldview, sensible shoes, Silicon Valley, Silicon Valley startup, Singularitarianism, Solar eclipse in 1919, speech recognition, statistical model, Steven Pinker, strong AI, technological singularity, The Bell Curve by Richard Herrnstein and Charles Murray, the map is not the territory, the scientific method, Turing complete, Turing machine, ultimatum game, X Prize, Y Combinator, zero-sum game

Say to them rather: “I’m sorry, I’ve never seen a human brain, or any other intelligence, and I have no reason as yet to believe that any such thing can exist. Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example.” Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies. So now you perceive, I hope, why, if you wanted to teach someone to do fundamental work on strong AI—bearing in mind that this is demonstrably a very difficult art, which is not learned by a supermajority of students who are just taught existing reductions such as search trees—then you might go on for some length about such matters as the fine art of reductionism, about playing rationalist’s Taboo to excise problematic words and replace them with their referents, about anthropomorphism, and, of course, about early stopping on mysterious answers to mysterious questions