strong AI

25 results


The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

additive manufacturing, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, augmented reality, autonomous vehicles, Benoit Mandelbrot, Bill Joy: nanobots, bioinformatics, brain emulation, Brewster Kahle, Brownian motion, business intelligence, c2.com, call centre, carbon-based life, cellular automata, Claude Shannon: information theory, complexity theory, conceptual framework, Conway's Game of Life, cosmological constant, cosmological principle, cuban missile crisis, data acquisition, Dava Sobel, David Brooks, Dean Kamen, disintermediation, double helix, Douglas Hofstadter, en.wikipedia.org, epigenetics, factory automation, friendly AI, George Gilder, Gödel, Escher, Bach, informal economy, information retrieval, invention of the telephone, invention of the telescope, invention of writing, Isaac Newton, iterative process, Jaron Lanier, Jeff Bezos, job automation, job satisfaction, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, linked data, Loebner Prize, Louis Pasteur, mandelbrot fractal, Mikhail Gorbachev, mouse model, Murray Gell-Mann, mutually assured destruction, natural language processing, Network effects, new economy, Norbert Wiener, oil shale / tar sands, optical character recognition, pattern recognition, phenotype, premature optimization, randomized controlled trial, Ray Kurzweil, remote working, reversible computing, Richard Feynman, Richard Feynman, Rodney Brooks, Search for Extraterrestrial Intelligence, semantic web, Silicon Valley, Singularitarianism, speech recognition, statistical model, stem cell, Stephen Hawking, Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Kaczynski, telepresence, The Coming Technological Singularity, transaction costs, Turing machine, Turing test, Vernor Vinge, Y2K, Yogi Berra

For these reasons, once a computer is able to match the subtlety and range of human intelligence, it will necessarily soar past it and then continue its double-exponential ascent. A key question regarding the Singularity is whether the "chicken" (strong AI) or the "egg" (nanotechnology) will come first. In other words, will strong AI lead to full nanotechnology (molecular-manufacturing assemblers that can turn information into physical products), or will full nanotechnology lead to strong AI? The logic of the first premise is that strong AI would imply superhuman AI for the reasons just cited, and superhuman AI would be in a position to solve any remaining design problems required to implement full nanotechnology. The second premise is based on the realization that the hardware requirements for strong AI will be met by nanotechnology-based computation. Likewise, the software requirements will be facilitated by nanobots that could create highly detailed scans of human brain functioning and thereby achieve the completion of reverse engineering the human brain.

The reality is that progress in both areas will necessarily use our most advanced tools, so advances in each field will simultaneously facilitate the other. However, I do expect that full MNT (molecular nanotechnology) will emerge prior to strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI). As revolutionary as nanotechnology will be, strong AI will have far more profound consequences. Nanotechnology is powerful but not necessarily intelligent. We can devise ways of at least trying to manage the enormous powers of nanotechnology, but superintelligence innately cannot be controlled. Runaway AI. Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely.

Such robots may make great assistants, but who's to say that we can count on them to remain reliably friendly to mere biological humans? Strong AI. Strong AI promises to continue the exponential gains of human civilization. (As I discussed earlier, I include the nonbiological intelligence derived from our human civilization as still human.) But the dangers it presents are also profound precisely because of its amplification of intelligence. Intelligence is inherently impossible to control, so the various strategies that have been devised to control nanotechnology (for example, the "broadcast architecture" described below) won't work for strong AI. There have been discussions and proposals to guide AI development toward what Eliezer Yudkowsky calls "friendly AI"30 (see the section "Protection from 'Unfriendly' Strong AI," p. 420). These are useful for discussion, but it is infeasible today to devise strategies that will absolutely ensure that future AI embodies human ethics and values.

 

pages: 261 words: 10,785

The Lights in the Tunnel by Martin Ford

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, Bill Joy: nanobots, Black-Scholes formula, call centre, cloud computing, collateralized debt obligation, credit crunch, double helix, en.wikipedia.org, factory automation, full employment, income inequality, index card, industrial robot, inventory management, invisible hand, Isaac Newton, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, knowledge worker, low skilled workers, moral hazard, pattern recognition, prediction markets, Productivity paradox, Ray Kurzweil, Search for Extraterrestrial Intelligence, Silicon Valley, Stephen Hawking, strong AI, superintelligent machines, technological singularity, Thomas L Friedman, Turing test, Vernor Vinge, War on Poverty

While narrow AI is increasingly deployed to solve real world problems and attracts most of the current commercial interest, the Holy Grail of artificial intelligence is, of course, strong AI—the construction of a truly intelligent machine. The realization of strong AI would mean the existence of a machine that is genuinely competitive with, or perhaps even superior to, a human being in its ability to reason and conceive ideas. The arguments I have made in this book do not depend on strong AI, but it is worth noting that if truly intelligent machines were built and became affordable, the trends I have predicted here would likely be amplified, and the economic impact would certainly be dramatic and might unfold in an accelerating fashion.

Research into strong AI has suffered because of some overly optimistic predictions and expectations back in the 1980s—long before computer hardware was fast enough to make true machine intelligence feasible. When reality fell far short of the projections, focus and financial backing shifted away from research into strong AI. Nonetheless, there is evidence that the vastly superior performance and affordability of today’s processors is helping to revitalize the field. Research into strong AI can be roughly divided into two main approaches. The direct computational approach attempts to extend traditional, algorithmic computing into the realm of true intelligence. This involves the development of sophisticated software applications that exhibit general reasoning. A second approach begins by attempting to understand and then simulate the human brain. The Blue Brain Project,57 a collaboration between Switzerland’s EPFL (one of Europe’s top technical universities) and IBM, is one such effort to simulate the workings of the brain.

Once researchers gain an understanding of the basic operating principles of the brain, it may be possible to build an artificial intelligence based on that framework. This would not be an exact replication of a human brain; instead, it would be something completely new, but based on a similar architecture. When might strong AI become reality—if ever? I suspect that if you were to survey the top experts working in the field, you would get a fairly wide range of estimates. Optimists might say it will happen within the next 20 to 30 years. A more cautious group would place it 50 or more years in the future, and some might argue that it will never happen. True machine intelligence is an idea that, in many ways, intrudes into the realm of philosophy, and for some people, perhaps even religion.

 

pages: 797 words: 227,399

Robotics Revolution and Conflict in the 21st Century by P. W. Singer

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

agricultural Revolution, Albert Einstein, Any sufficiently advanced technology is indistinguishable from magic, Atahualpa, barriers to entry, Berlin Wall, Bill Joy: nanobots, blue-collar work, borderless world, clean water, Craig Reynolds: boids flock, cuban missile crisis, en.wikipedia.org, Ernest Rutherford, failed state, Fall of the Berlin Wall, Firefox, Francisco Pizarro, Frank Gehry, friendly fire, game design, George Gilder, Google Earth, Grace Hopper, I think there is a world market for maybe five computers, if you build it, they will come, illegal immigration, industrial robot, interchangeable parts, invention of gunpowder, invention of movable type, invention of the steam engine, Isaac Newton, Jacques de Vaucanson, job automation, Johann Wolfgang von Goethe, Law of Accelerating Returns, Mars Rover, Menlo Park, New Urbanism, pattern recognition, private military company, RAND corporation, Ray Kurzweil, RFID, robot derives from the Czech word robota Czech, meaning slave, Rodney Brooks, Ronald Reagan, Schrödinger's Cat, Silicon Valley, speech recognition, Stephen Hawking, strong AI, technological singularity, The Coming Technological Singularity, The Wisdom of Crowds, Turing test, Vernor Vinge, Wall-E, Yogi Berra

A machine takeover is generally imagined as following a path of evolution to revolution. Computers eventually develop to the equivalent of human intelligence (“strong AI”) and then rapidly push past any attempts at human control. Ray Kurzweil explains how this would work. “As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI, but takes less time than the cycle before it as is the nature of technological evolution. The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating super-intelligence.” Or as the AI Agent Smith says to his human adversary in The Matrix, “Evolution, Morpheus, evolution, like the dinosaur.
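The arithmetic behind "takes less time than the cycle before it" can be made concrete with a toy calculation (a sketch with my own illustrative numbers, not figures from the book): if every self-improvement cycle multiplies capability by a fixed factor while lasting only a fixed fraction of the previous cycle, total elapsed time is a convergent geometric series even as capability grows without bound.

```python
# Toy model of a runaway self-improvement cycle (illustrative numbers only).
def runaway(gain=2.0, shrink=0.8, first_cycle_years=2.0, cycles=50):
    """Each cycle multiplies capability by `gain` and lasts `shrink` times
    as long as the previous cycle. Returns capability, elapsed time, and the
    finite bound that total elapsed time can never exceed."""
    capability, elapsed, cycle_time = 1.0, 0.0, first_cycle_years
    for _ in range(cycles):
        elapsed += cycle_time
        capability *= gain
        cycle_time *= shrink
    time_limit = first_cycle_years / (1.0 - shrink)  # geometric series bound
    return capability, elapsed, time_limit

cap, years, limit = runaway()
print(f"after 50 cycles: capability x{cap:.2e}, {years:.1f} years elapsed (bound: {limit:.1f})")
```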

Despite all the robots having the same initial software, the researchers are seeing the emergence of “good” robots that cooperate and “bad” robots that constantly attack each other. There was even one robot that became the equivalent of artificially stupid or suicidal, that is, a robot that evolved to constantly make the worst possible decision. This idea of robots, one day being able to problem-solve, create, and even develop personalities past what their human designers intended is what some call “strong AI.” That is, the computer might learn so much that, at a certain point, it is not just mimicking human capabilities but has finally equaled, and even surpassed, its creators’ human intelligence. This is the essence of the so-called Turing test. Alan Turing was one of the pioneers of AI, who worked on the early computers like Colossus that helped crack the German codes during World War II. His test is now encapsulated in a real-world prize that will go to the first designer of a computer intelligent enough to trick human experts into thinking that it is human.

Wireless capacity doubles every nine months. Optical capacity doubles every twelve months. The cost/performance ratio of Internet service providers is doubling every twelve months. Internet bandwidth backbone is doubling roughly every twelve months. The number of human genes mapped per year doubles every eighteen months. The resolution of brain scans (a key to understanding how the brain works, an important part of creating strong AI) doubles every twelve months. And, as a by-product, the number of personal and service robots has so far doubled every nine months. The darker side of these trends has been exponential change in our capability not merely to create, but also to destroy. The modern-day bomber jet has roughly half a million times the killing capacity of the Roman legionnaire carrying a sword in hand. Even within the twentieth century, the range and effectiveness of artillery fire increased by a factor of twenty, antitank fire by a factor of sixty.
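A minimal sketch of what such doubling times imply, using only the doubling periods named in the passage (the ten-year horizon is my own choice of illustration):

```python
# How much a quantity grows if it doubles every `period` months (sketch).
def growth_factor(months: float, period: float) -> float:
    return 2 ** (months / period)

horizon = 120  # ten years, in months
for name, period in [("wireless capacity", 9),
                     ("optical capacity", 12),
                     ("brain-scan resolution", 12),
                     ("personal/service robots", 9)]:
    print(f"{name}: ~{growth_factor(horizon, period):,.0f}x over ten years")
```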

 

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006 by Ben Goertzel, Pei Wang

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

AI winter, artificial general intelligence, bioinformatics, brain emulation, combinatorial explosion, complexity theory, computer vision, conceptual framework, correlation coefficient, epigenetics, friendly AI, information retrieval, Isaac Newton, John Conway, Loebner Prize, Menlo Park, natural language processing, Occam's razor, p-value, pattern recognition, performance metric, Ray Kurzweil, Rodney Brooks, semantic web, statistical model, strong AI, theory of mind, traveling salesman, Turing machine, Turing test, Von Neumann architecture, Y2K

Introduction Early AI researchers aimed at what was later called "strong AI," the simulation of human level intelligence. One of AI's founders, Herbert Simon, claimed (circa 1957) that "… there are now in the world machines that think, that learn and that create." He went on to predict that within 10 years a computer would beat a grandmaster at chess, would prove an "important new mathematical theorem," and would write music of "considerable aesthetic value." Science fiction writer Arthur C. Clarke predicted that "[AI] technology will become sufficiently advanced that it will be indistinguishable from magic" [1]. AI research had as its goal the simulation of human-like intelligence. Within a decade or so, it became abundantly clear that the problems AI had to overcome for this "strong AI" to become a reality were immense, perhaps intractable.

The next major step in this direction was the May 2006 AGIRI Workshop, of which this volume is essentially a proceedings. The term AGI, artificial general intelligence, was introduced as a modern successor to the earlier strong AI. Artificial General Intelligence What is artificial general intelligence? The AGIRI website lists several features, describing machines with human-level, and even superhuman, intelligence; that generalize their knowledge across different domains; that reflect on themselves; and that create fundamental innovations and insights. Even strong AI wouldn't push for this much, and this general, an intelligence. Can there be such an artificial general intelligence? I think there can be, but that it can't be done with a brain in a vat, with humans providing input and utilizing computational output.

Machine learning algorithms may be applied quite broadly in a variety of contexts, but the breadth and generality in this case is supplied largely by the human user of the algorithm; any particular machine learning program, considered as a holistic system taking in inputs and producing outputs without detailed human intervention, can solve only problems of a very specialized sort. Specified in this way, what we call AGI is similar to some other terms that have been used by other authors, such as “strong AI” [7], “human-level AI” [8], “true synthetic intelligence” [9], “general intelligent system” [10], and even “thinking machine” [11]. Though no term is perfect, we chose to use “AGI” because it correctly stresses the general nature of the research goal and scope, without committing too much to any theory or technique. We will also refer in this chapter to “AGI projects.” We use this term to refer to an AI research project that satisfies all the following criteria: 1.

 

pages: 219 words: 63,495

50 Future Ideas You Really Need to Know by Richard Watson

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

23andMe, 3D printing, access to a mobile phone, Albert Einstein, artificial general intelligence, augmented reality, autonomous vehicles, BRICs, Buckminster Fuller, call centre, clean water, cloud computing, collaborative consumption, computer age, computer vision, crowdsourcing, dark matter, dematerialisation, digital Maoism, Elon Musk, energy security, failed state, future of work, Geoffrey West, Santa Fe Institute, germ theory of disease, happiness index / gross national happiness, hive mind, hydrogen economy, Internet of things, Jaron Lanier, life extension, Marshall McLuhan, megacity, natural language processing, Network effects, new economy, oil shale / tar sands, pattern recognition, peak oil, personalized medicine, phenotype, precision agriculture, profit maximization, RAND corporation, Ray Kurzweil, RFID, Richard Florida, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Skype, smart cities, smart meter, smart transportation, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, supervolcano, telepresence, The Wisdom of Crowds, Thomas Malthus, Turing test, urban decay, Vernor Vinge, Watson beat the top human players on Jeopardy!, web application, women in the workforce, working-age population, young professional

By around 2040 machine brains should, in theory, be able to handle around 100 trillion instructions per second. That’s about the same as a human brain. So what happens when machine intelligence starts to rival that of its human designers? Before we descend down this rabbit hole we should first split AI in two. “Strong AI” is the term generally used to describe true thinking machines. “Weak AI” (sometimes known as “Narrow AI”) is intelligence intended to supplement rather than exceed human intelligence. So far most machines are preprogrammed or taught logical courses of action. But in the future, machines with strong AI will be able to learn as they go and respond to unexpected events. The implications? Think of automated disease diagnosis and surgery, military planning and battle command, customer-service avatars, artificial creativity and autonomous robots that predict then respond to crime (a “Department of Future Crime”—see also Chapter 32 and Biocriminology).

Sumner Redstone, chairman, Viacom and CBS, 2002. "There is no doubt that Saddam Hussein has weapons of mass destruction." Dick Cheney.

Glossary. 3D printer: A way to produce 3D objects from digital instructions and layered materials dispersed or sprayed on via a printer. Affective computing: Machines and systems that recognize or simulate human affects or emotions. AGI: Artificial general intelligence, a term usually used to describe strong AI (the opposite of narrow or weak AI). It is machine intelligence that is equivalent to, or exceeds, human intelligence and it's usually regarded as the long-term goal of AI research and development. Ambient intelligence: Electronic or artificial environments that recognize the presence of other machines or people and respond to their needs. Artificial photosynthesis: The artificial replication of natural photosynthesis to create or store solar fuels.

 

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence by Calum Chace

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

3D printing, Ada Lovelace, AI winter, Airbnb, artificial general intelligence, augmented reality, barriers to entry, bitcoin, blockchain, brain emulation, Buckminster Fuller, cloud computing, computer age, computer vision, correlation does not imply causation, credit crunch, cryptocurrency, cuban missile crisis, dematerialisation, discovery of the americas, disintermediation, don't be evil, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, everywhere but in the productivity statistics, Flash crash, friendly AI, Google Glasses, industrial robot, Internet of things, invention of agriculture, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, life extension, low skilled workers, Mahatma Gandhi, means of production, mutually assured destruction, Nicholas Carr, pattern recognition, Peter Thiel, Ray Kurzweil, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, Silicon Valley ideology, Skype, South Sea Bubble, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Jobs, strong AI, technological singularity, theory of mind, Turing machine, Turing test, universal basic income, Vernor Vinge, wage slave, Wall-E

The down-to-earth clarity of Chace's style will help take humanity into what could be a very violent, "Transcendence" movie-like, real-life, phase four. If you want to survive this coming fourth phase in the next few decades and prepare for it, you cannot afford NOT to read Chace's book. Prof. Dr. Hugo de Garis, author of The Artilect War, former director of the Artificial Brain Lab, Xiamen University, China. Advances in AI are set to affect progress in all other areas in the coming decades. If this momentum leads to the achievement of strong AI within the century, then in the words of one field leader it would be "the biggest event in human history". Now is therefore a perfect time for the thoughtful discussion of challenges and opportunities that Chace provides. Surviving AI is an exceptionally clear, well-researched and balanced introduction to a complex and controversial topic, and is a compelling read to boot. Seán Ó hÉigeartaigh, executive director, Cambridge Centre for the Study of Existential Risk. CALUM writes fiction and non-fiction, primarily on the subject of artificial intelligence.

Whether intelligence resides in the machine or in the software is analogous to the question of whether it resides in the neurons in your brain or in the electrochemical signals that they transmit and receive. Fortunately we don’t need to answer that question here. ANI and AGI We do need to discriminate between two very different types of artificial intelligence: artificial narrow intelligence (ANI) and artificial general intelligence (AGI (4)), which are also known as weak AI and strong AI, and as ordinary AI and full AI. The easiest way to do this is to say that artificial general intelligence, or AGI, is an AI which can carry out any cognitive function that a human can. We have long had computers which can add up much better than any human, and computers which can play chess better than the best human chess grandmaster. However, no computer can yet beat humans at every intellectual endeavour.

But it is daft to dismiss as failures today’s best pattern recognition systems, self-driving cars, and machines which can beat any human at many games of skill. Informed scepticism about near-term AGI We should take more seriously the arguments of very experienced AI researchers who claim that although the AGI undertaking is possible, it won’t be achieved for a very long time. Rodney Brooks, a veteran AI researcher and robot builder, says “I think it is a mistake to be worrying about us developing [strong] AI any time in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.” Andrew Ng at Baidu and Yann LeCun at Facebook are of a similar mind, as we saw in the last chapter. Less sceptical experts However there are also plenty of veteran AI researchers who think AGI may arrive soon.

 

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

3D printing, AI winter, Amazon Web Services, artificial general intelligence, Automated Insights, Bernie Madoff, Bill Joy: nanobots, brain emulation, cellular automata, cloud computing, cognitive bias, computer vision, cuban missile crisis, Daniel Kahneman / Amos Tversky, Danny Hillis, data acquisition, don't be evil, Extropian, finite state, Flash crash, friendly AI, friendly fire, Google Glasses, Google X / Alphabet X, Isaac Newton, Jaron Lanier, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, Loebner Prize, lone genius, mutually assured destruction, natural language processing, Nicholas Carr, optical character recognition, PageRank, pattern recognition, Peter Thiel, prisoner's dilemma, Ray Kurzweil, Rodney Brooks, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, smart grid, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, Stuxnet, superintelligent machines, technological singularity, The Coming Technological Singularity, traveling salesman, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, zero day

“Chapter eight is the deeply intertwined promise and peril in GNR [genetics, nanotechnology, and robotics] and I go into pretty graphic detail on the downsides of those three areas of technology. And the downside of robotics, which really refers to AI, is the most profound because intelligence is the most important phenomenon in the world. Inherently there is no absolute protection against strong AI.” Kurzweil’s book does underline the dangers of genetic engineering and nanotechnology, but it gives only a couple of anemic pages to strong AI, the old name for AGI. And in that chapter he also argues that relinquishment, or turning our backs on some technologies because they’re too dangerous, as advocated by Bill Joy and others, isn’t just a bad idea, but an immoral one. I agree relinquishment is unworkable. But immoral? “Relinquishment is immoral because it would deprive us of profound benefits.

* * * So far we’ve explored three drives that Omohundro argues will motivate self-aware, self-improving systems: efficiency, self-protection, and resource acquisition. We’ve seen how all of these drives will lead to very bad outcomes without extremely careful planning and programming. And we’re compelled to ask ourselves, are we capable of such careful work? Do you, like me, look around the world at expensive and lethal accidents and wonder how we’ll get it right the first time with very strong AI? Three-Mile Island, Chernobyl, Fukushima—in these nuclear power plant catastrophes, weren’t highly qualified designers and administrators trying their best to avoid the disasters that befell them? The 1986 Chernobyl meltdown occurred during a safety test. All three disasters are what organizational theorist Charles Perrow would call “normal accidents.” In his seminal book Normal Accidents: Living with High-Risk Technologies, Perrow proposes that accidents, even catastrophes, are “normal” features of systems with complex infrastructures.

Yet the analogy doesn’t fit—advanced AI isn’t at all like fire, or any other technology. It will be capable of thinking, planning, and gaming its makers. No other tool does anything like that. Kurzweil believes that a way to limit the dangerous aspects of AI, especially ASI, is to pair it with humans through intelligence augmentation—IA. From his uncomfortable metal chair the optimist said, “As I have pointed out, strong AI is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such it will reflect our values because it will be us.” And so, the argument goes, it will be as “safe” as we are. But, as I told Kurzweil, Homo sapiens are not known to be particularly harmless when in contact with one another, other animals, or the environment.

 

pages: 742 words: 137,937

The Future of the Professions: How Technology Will Transform the Work of Human Experts by Richard Susskind, Daniel Susskind

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

23andMe, 3D printing, additive manufacturing, AI winter, Albert Einstein, Amazon Mechanical Turk, Amazon Web Services, Andrew Keen, Atul Gawande, Automated Insights, autonomous vehicles, Big bang: deregulation of the City of London, big data - Walmart - Pop Tarts, Bill Joy: nanobots, business process, business process outsourcing, Cass Sunstein, Checklist Manifesto, Clapham omnibus, Clayton Christensen, clean water, cloud computing, computer age, computer vision, conceptual framework, corporate governance, crowdsourcing, Daniel Kahneman / Amos Tversky, death of newspapers, disintermediation, Douglas Hofstadter, en.wikipedia.org, Erik Brynjolfsson, Filter Bubble, Frank Levy and Richard Murnane: The New Division of Labor, full employment, future of work, Google Glasses, Google X / Alphabet X, Hacker Ethic, industrial robot, informal economy, information retrieval, interchangeable parts, Internet of things, Isaac Newton, James Hargreaves, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Joseph Schumpeter, Khan Academy, knowledge economy, lump of labour, Marshall McLuhan, Narrative Science, natural language processing, Network effects, optical character recognition, personalized medicine, pre–internet, Ray Kurzweil, Richard Feynman, Richard Feynman, Second Machine Age, self-driving car, semantic web, Skype, social web, speech recognition, spinning jenny, strong AI, supply-chain management, telepresence, the market place, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, transaction costs, Turing test, Watson beat the top human players on Jeopardy!, young professional

Like Watson, although vastly less ambitious, ours was a non-thinking, high-performing system. In the language of some AI scientists and philosophers of the 1980s, these systems would be labelled, perhaps a little pejoratively, as ‘weak AI’ rather than ‘strong AI’.8 Broadly speaking, ‘weak AI’ is a term applied to systems that appear, behaviourally, to engage in intelligent human-like thought but in fact enjoy no form of consciousness; whereas systems that exhibit ‘strong AI’ are those that, it is maintained, do have thoughts and cognitive states. On this latter view, the brain is often equated with the digital computer. Today, fascination with ‘strong AI’ is perhaps more intense than ever, even though really big questions remain unanswered and unanswerable. How can we know if machines are conscious in the way that human beings are? How, for that matter, do we know that consciousness feels the same for all of us as human beings?

Undeterred by these philosophical challenges, books and projects abound on building brains and creating minds.9 In the 1980s, in our speeches, we used to joke about the claim of one of the fathers of AI, Marvin Minsky, who reportedly said that 'the next generation of computers will be so intelligent, we will be lucky if they keep us around as household pets'.10 Today, it is no longer laugh-worthy or science-fictional11 to contemplate a future in which our computers are vastly more intelligent than us—this prospect is discussed at length in Superintelligence by Nick Bostrom, who runs the Future of Humanity Institute at the Oxford Martin School at the University of Oxford.12 Ironically, this growth in confidence in the possibility of 'strong AI', at least in part, has been fuelled by the success of Watson itself. The irony here is that Watson in fact belongs in the category of 'weak AI', and it is precisely because it cannot meaningfully be said to think that the system is not deemed very interesting by some AI scientists, psychologists, and philosophers. For pragmatists (like us) rather than purists, whether Watson is an example of 'weak' or 'strong' AI is of little moment. Pragmatists are interested in high-performing systems, whether or not they can think. Watson did not need to be able to think to win. Nor does a computer need to be able to think or be conscious to pass the celebrated 'Turing Test'.

 

pages: 372 words: 101,174

How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Albert Michelson, anesthesia awareness, anthropic principle, brain emulation, cellular automata, Claude Shannon: information theory, cloud computing, computer age, Dean Kamen, discovery of DNA, double helix, en.wikipedia.org, epigenetics, George Gilder, Google Earth, Isaac Newton, iterative process, Jacquard loom, Jacquard loom, John von Neumann, Law of Accelerating Returns, linear programming, Loebner Prize, mandelbrot fractal, Norbert Wiener, optical character recognition, pattern recognition, Peter Thiel, Ralph Waldo Emerson, random walk, Ray Kurzweil, reversible computing, self-driving car, speech recognition, Steven Pinker, strong AI, the scientific method, theory of mind, Turing complete, Turing machine, Turing test, Wall-E, Watson beat the top human players on Jeopardy!, X Prize

The current state of the art in AI does in fact enable systems to also learn from their own experience. The Google self-driving cars learn from their own driving experience as well as from data from Google cars driven by human drivers; Watson learned most of its knowledge by reading on its own. It is interesting to note that the methods deployed today in AI have evolved to be mathematically very similar to the mechanisms in the neocortex. Another objection to the feasibility of “strong AI” (artificial intelligence at human levels and beyond) that is often raised is that the human brain makes extensive use of analog computing, whereas digital methods inherently cannot replicate the gradations of value that analog representations can embody. It is true that one bit is either on or off, but multiple-bit words easily represent multiple gradations and can do so to any desired degree of accuracy.
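The claim about gradations is easy to check numerically. A small sketch (illustrative values of my own, not from the book): an n-bit word distinguishes 2^n levels, so the worst-case error in representing an analog value in [0, 1] shrinks roughly as 1/2^n and can be driven as low as desired by adding bits.

```python
# Approximating an "analog" value with n-bit words (illustrative sketch).
def quantize(x: float, bits: int) -> float:
    """Snap x (assumed to lie in [0, 1]) to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return round(x * levels) / levels

x = 0.7371928465  # an arbitrary analog value
for bits in (4, 8, 16, 32):
    q = quantize(x, bits)
    print(f"{bits:2d} bits: {q:.10f}  (error {abs(q - x):.1e})")
```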

Carver Mead, Analog VLSI and Neural Systems (Reading, MA: Addison-Wesley, 1986). 7. “IBM Unveils Cognitive Computing Chips,” IBM news release, August 18, 2011, http://www-03.ibm.com/press/us/en/pressrelease/35251.wss. 8. “Japan’s K Computer Tops 10 Petaflop/s to Stay Atop TOP500 List.” Chapter 9: Thought Experiments on the Mind 1. John R. Searle, “I Married a Computer,” in Jay W. Richards, ed., Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI (Seattle: Discovery Institute, 2002). 2. Stuart Hameroff, Ultimate Computing: Biomolecular Consciousness and Nanotechnology (Amsterdam: Elsevier Science, 1987). 3. P. S. Sebel et al., “The Incidence of Awareness during Anesthesia: A Multicenter United States Study,” Anesthesia and Analgesia 99 (2004): 833–39. 4. Stuart Sutherland, The International Dictionary of Psychology (New York: Macmillan, 1990). 5.

., “Cognitive Computing,” Communications of the ACM 54, no. 8 (2011): 62–71, http://cacm.acm.org/magazines/2011/8/114944-cognitive-computing/fulltext. 9. Kurzweil, The Singularity Is Near, chapter 9, section titled “The Criticism from Ontology: Can a Computer Be Conscious?” (pp. 458–69). 10. Michael Denton, “Organism and Machine: The Flawed Analogy,” in Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI (Seattle: Discovery Institute, 2002). 11. Hans Moravec, Mind Children (Cambridge, MA: Harvard University Press, 1988). Epilogue 1. “In U.S., Optimism about Future for Youth Reaches All-Time Low,” Gallup Politics, May 2, 2011, http://www.gallup.com/poll/147350/optimism-future-youth-reaches-time-low.aspx. 2. James C. Riley, Rising Life Expectancy: A Global History (Cambridge: Cambridge University Press, 2001). 3.

 

pages: 247 words: 43,430

Think Complexity by Allen B. Downey

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Benoit Mandelbrot, cellular automata, Conway's Game of Life, Craig Reynolds: boids flock, discrete time, en.wikipedia.org, Frank Gehry, Gini coefficient, Guggenheim Bilbao, mandelbrot fractal, Occupy movement, Paul Erdős, sorting algorithm, stochastic process, strong AI, Thomas Kuhn: the structure of scientific revolutions, Turing complete, Turing machine, We are the 99%

The view that free will is compatible with determinism is called compatibilism. One of the strongest challenges to compatibilism is the consequence argument. What is the consequence argument? What response can you give to the consequence argument based on what you have read in this book? Example 10-7. In the philosophy of mind, Strong AI is the position that an appropriately programmed computer could have a mind in the same sense that humans have minds. John Searle presented a thought experiment called The Chinese Room, intended to show that Strong AI is false. You can read about it at http://en.wikipedia.org/wiki/Chinese_room. What is the system reply to the Chinese Room argument? How does what you have learned about complexity science influence your reaction to the system response? Chapter 11. Case Study: Sugarscape Dan Kearney, Natalie Mattison, and Theo Thompson The Original Sugarscape Sugarscape is an agent-based model developed by Joshua M.

 

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, anti-communist, artificial general intelligence, autonomous vehicles, barriers to entry, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, demographic transition, Douglas Hofstadter, Drosophila, Elon Musk, en.wikipedia.org, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, Gödel, Escher, Bach, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John von Neumann, knowledge worker, Menlo Park, meta analysis, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Norbert Wiener, NP-complete, nuclear winter, optical character recognition, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, transaction costs, Turing machine, Vernor Vinge, Watson beat the top human players on Jeopardy!, World Values Survey

Now that we have made solid progress, let us not risk losing our respectability.” One result of this conservatism has been increased concentration on “weak AI”—the variety devoted to providing aids to human thought—and away from “strong AI”—the variety that attempts to mechanize human-level intelligence.73 Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.74 The last few years have seen a resurgence of interest in AI, which might yet spill over into renewed efforts towards artificial general intelligence (what Nilsson calls “strong AI”). In addition to faster hardware, a contemporary project would benefit from the great strides that have been made in the many subfields of AI, in software engineering more generally, and in neighboring fields such as computational neuroscience.

[Index excerpt from the book's back matter; includes the entry "strong AI 18".]

 

pages: 677 words: 206,548

Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It by Marc Goodman

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

23andMe, 3D printing, additive manufacturing, Affordable Care Act / Obamacare, Airbnb, airport security, Albert Einstein, algorithmic trading, artificial general intelligence, augmented reality, autonomous vehicles, Baxter: Rethink Robotics, Bill Joy: nanobots, bitcoin, Black Swan, blockchain, borderless world, Brian Krebs, business process, butterfly effect, call centre, Chelsea Manning, cloud computing, cognitive dissonance, computer vision, connected car, corporate governance, crowdsourcing, cryptocurrency, data acquisition, data is the new oil, Dean Kamen, disintermediation, don't be evil, double helix, Downton Abbey, Edward Snowden, Elon Musk, Erik Brynjolfsson, Filter Bubble, Firefox, Flash crash, future of work, game design, Google Chrome, Google Earth, Google Glasses, Gordon Gekko, high net worth, High speed trading, hive mind, Howard Rheingold, hypertext link, illegal immigration, impulse control, industrial robot, Internet of things, Jaron Lanier, Jeff Bezos, job automation, John Harrison: Longitude, Jony Ive, Julian Assange, Kevin Kelly, Khan Academy, Kickstarter, knowledge worker, Kuwabatake Sanjuro: assassination market, Law of Accelerating Returns, Lean Startup, license plate recognition, litecoin, M-Pesa, Mark Zuckerberg, Marshall McLuhan, Menlo Park, mobile money, more computing power than Apollo, move fast and break things, Nate Silver, national security letter, natural language processing, obamacare, Occupy movement, Oculus Rift, offshore financial centre, optical character recognition, pattern recognition, personalized medicine, Peter H. Diamandis: Planetary Resources, Peter Thiel, pre–internet, RAND corporation, ransomware, Ray Kurzweil, refrigerator car, RFID, ride hailing / ride sharing, Rodney Brooks, Satoshi Nakamoto, Second Machine Age, security theater, self-driving car, shareholder value, Silicon Valley, Silicon Valley startup, Skype, smart cities, smart grid, smart meter, Snapchat, social graph, software as a service, speech recognition, stealth mode startup, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, Stuxnet, supply-chain management, technological singularity, telepresence, telepresence robot, Tesla Model S, The Wisdom of Crowds, Tim Cook: Apple, trade route, uranium enrichment, Wall-E, Watson beat the top human players on Jeopardy!, Wave and Pay, We are Anonymous. We are Legion, web application, WikiLeaks, Y Combinator, zero day

"The device is inherently of no value to us" (internal memo at Western Union, 1878). Somehow, the impossible always seems to become the possible. In the world of artificial intelligence, that next phase of development is called artificial general intelligence (AGI), or strong AI. In contrast to narrow AI, which cleverly performs a specific limited task, such as machine translation or auto navigation, strong AI refers to "thinking machines" that might perform any intellectual task that a human being could. Characteristics of a strong AI would include the ability to reason, make judgments, plan, learn, communicate, and unify these skills toward achieving common goals across a variety of domains. Commercial interest is growing: in 2014, Google purchased DeepMind Technologies for more than $500 million in order to strengthen its already strong capabilities in deep learning AI.

His algorithmic programming requires him to complete the vessel’s mission near Jupiter, but for national security reasons he cannot disclose the true purpose of the voyage to the crew. To resolve the contradiction in his program, he attempts to kill the crew. As narrow AI becomes more powerful, robots grow more autonomous, and AGI looms large, we need to ensure that the algorithms of tomorrow are better equipped to resolve programming conflicts and moral judgments than was HAL. It’s not that any strong AI would necessarily be “evil” and attempt to destroy humanity, but in pursuit of its primary goal as programmed, an AGI might not stop until it had achieved its mission at all costs, even if that meant competing with or harming human beings, seizing our resources, or damaging our environment. As the perceived risks from AGI have grown, numerous nonprofit institutes have been formed to address and study them, including Oxford’s Future of Humanity Institute, the Machine Intelligence Research Institute, the Future of Life Institute, and the Cambridge Centre for the Study of Existential Risk.

 

pages: 846 words: 232,630

Darwin's Dangerous Idea: Evolution and the Meanings of Life by Daniel C. Dennett

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, Alfred Russel Wallace, anthropic principle, buy low sell high, cellular automata, combinatorial explosion, complexity theory, computer age, conceptual framework, Conway's Game of Life, Danny Hillis, double helix, Douglas Hofstadter, Drosophila, finite state, Gödel, Escher, Bach, In Cold Blood by Truman Capote, invention of writing, Isaac Newton, Johann Wolfgang von Goethe, John von Neumann, Murray Gell-Mann, New Journalism, non-fiction novel, Peter Singer: altruism, phenotype, price mechanism, prisoner's dilemma, QWERTY keyboard, random walk, Richard Feynman, Richard Feynman, Rodney Brooks, Schrödinger's Cat, Stephen Hawking, Steven Pinker, strong AI, the scientific method, theory of mind, Thomas Malthus, Turing machine, Turing test

Simpler survival machines — plants, for instance — never achieve the heights of self-redefinition made possible by the complexities of your robot; considering them just as survival machines for their comatose inhabitants leaves no patterns in their behavior unexplained. If you pursue this avenue, which of course I recommend, then you must abandon Searle's and Fodor's "principled" objection to "strong AI." The imagined robot, however difficult or unlikely an engineering feat, is not an impossibility — nor do they claim it to be. They concede the possibility of such a robot, but just dispute its "metaphysical status"; however adroitly it managed its affairs, they say, its intentionality would not be the real thing. That's cutting it mighty fine. I recommend abandoning such a forlorn disclaimer and acknowledging that the meaning such a robot would discover in its world, and exploit in its own communications with others, would be exactly as real as the meaning you enjoy.

This difficulty had been widely seen as systematically blocking any argument from Gödel's Theorem to the impossibility of AI. Certainly everybody in AI has always known about Gödel's Theorem, and they have all continued, unworried, with their labors. In fact, Hofstadter's classic Gödel, Escher, Bach (1979) can be read as the demonstration that Gödel is an unwilling champion of AI, providing essential insights about the paths to follow to strong AI, not showing the futility of the field. But Roger Penrose, Rouse Ball Professor of Mathematics at Oxford, and one of the world's leading mathematical physicists, thinks otherwise. His challenge has to be taken seriously, even if, as I and others in AI are convinced, he is making a fairly simple mistake. When Penrose's book appeared, I pointed out the problem in a review: his argument is highly convoluted, and bristling with details of physics and mathematics, and it is unlikely that such an enterprise would succumb to a single, crashing oversight on the part of its creator — that the argument could be 'refuted' by any simple observation.

As a product of biological design processes (both genetic and individual), it is almost certainly one of those algorithms that are somewhere or other in the Vast space of interesting algorithms, full of typographical errors or "bugs," but good enough to bet your life on — so far. Penrose sees this as a "far-fetched" possibility, but if that is all he can say against it, he has not yet come to grips with the best version of "strong AI." 3. THE PHANTOM QUANTUM-GRAVITY COMPUTER: LESSONS FROM LAPLAND I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have. — ROGER PENROSE 1989, p. 414 I don't think the brain came in the Darwinian manner.

 

Pandora's Brain by Calum Chace

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

3D printing, AI winter, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, brain emulation, Extropian, friendly AI, hive mind, Ray Kurzweil, self-driving car, Silicon Valley, Singularitarianism, Skype, speech recognition, stealth mode startup, Stephen Hawking, strong AI, technological singularity, theory of mind, Turing test, Wall-E

And there is nothing to prevent an AI’s cognitive capability being expanded simply by increasing its hardware capacity.’ ‘This all sounds like an argument for stopping people working on strong AI?’ asked Matt. ‘Although I guess that would be hard to do. There are too many people working in the field, and as you say, a lot of them show no sign of understanding the danger.’ ‘You’re right,’ Ivan agreed, ‘we’re on a runaway train that cannot be stopped. Some science fiction novels feature a powerful police force – the Turing Police – that keeps watch to ensure that no-one creates a human-level artificial intelligence. But that’s hopelessly unrealistic. The prize – both intellectual and material – for owning an AGI is too great. Strong AI is coming, whether we like it or not.’ TEN ‘But surely, if you’re right about all this,’ Leo protested, sounding genuinely concerned, ‘people – governments, voters – will wake up when it gets closer, and slow it down or stop it?’

 

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence by John Brockman

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, agricultural Revolution, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic trading, artificial general intelligence, augmented reality, autonomous vehicles, bitcoin, blockchain, clean water, cognitive dissonance, Colonization of Mars, complexity theory, computer age, computer vision, constrained optimization, corporate personhood, cosmological principle, cryptocurrency, cuban missile crisis, Danny Hillis, dark matter, discrete time, Elon Musk, Emanuel Derman, endowment effect, epigenetics, Ernest Rutherford, experimental economics, Flash crash, friendly AI, Google Glasses, hive mind, income inequality, information trail, Internet of things, invention of writing, iterative process, Jaron Lanier, job automation, John von Neumann, Kevin Kelly, knowledge worker, loose coupling, microbiome, Moneyball by Michael Lewis explains big data, natural language processing, Network effects, Norbert Wiener, pattern recognition, Peter Singer: altruism, phenotype, planetary scale, Ray Kurzweil, recommendation engine, Republic of Letters, RFID, Richard Thaler, Rory Sutherland, Search for Extraterrestrial Intelligence, self-driving car, sharing economy, Silicon Valley, Skype, smart contracts, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, superintelligent machines, supervolcano, the scientific method, The Wisdom of Crowds, theory of mind, Thorstein Veblen, too big to fail, Turing machine, Turing test, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K

Learning to detect a cat in full frontal position after 10 million frames drawn from Internet videos is a long way from understanding what a cat is, and anybody who thinks that we’ve “solved” AI doesn’t realize the limitations of the current technology. To be sure, there have been exponential advances in narrow-engineering applications of artificial intelligence, such as playing chess, calculating travel routes, or translating texts in rough fashion, but there’s been scarcely more than linear progress in five decades of working toward strong AI. For example, the different flavors of intelligent personal assistants available on your smartphone are only modestly better than Eliza, an early example of primitive natural-language-processing from the mid-1960s. We still have no machine that can, for instance, read all that the Web has to say about war and plot a decent campaign, nor do we even have an open-ended AI system that can figure out how to write an essay to pass a freshman composition class or an eighth-grade science exam.

AI can easily look like the real thing but still be a million miles away from being the real thing—like kissing through a pane of glass: It looks like a kiss but is only a faint shadow of the actual concept. I concede to AI proponents all of the semantic prowess of Shakespeare, the symbol juggling they do perfectly. Missing is the direct relationship with the ideas the symbols represent. Much of what is certain to come soon would have belonged in the old-school “Strong AI” territory. Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases . . . here in a couple of decades. But it is not all “iterative.” There’s a huge gap between that and the level of conscious understanding that truly deserves to be called Strong, as in “Alive AI.”

 

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots by John Markoff

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

A Declaration of the Independence of Cyberspace, AI winter, airport security, Apple II, artificial general intelligence, augmented reality, autonomous vehicles, Baxter: Rethink Robotics, Bill Duvall, bioinformatics, Brewster Kahle, Burning Man, call centre, cellular automata, Chris Urmson, Claude Shannon: information theory, Clayton Christensen, clean water, cloud computing, collective bargaining, computer age, computer vision, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data acquisition, Dean Kamen, deskilling, don't be evil, Douglas Engelbart, Douglas Hofstadter, Dynabook, Edward Snowden, Elon Musk, Erik Brynjolfsson, factory automation, From Mathematics to the Technologies of Life and Death, future of work, Galaxy Zoo, Google Glasses, Google X / Alphabet X, Grace Hopper, Gödel, Escher, Bach, Hacker Ethic, haute couture, hive mind, hypertext link, indoor plumbing, industrial robot, information retrieval, Internet Archive, Internet of things, invention of the wheel, Jacques de Vaucanson, Jaron Lanier, Jeff Bezos, job automation, John Conway, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, knowledge worker, Kodak vs Instagram, labor-force participation, loose coupling, Mark Zuckerberg, Marshall McLuhan, medical residency, Menlo Park, Mother of all demos, natural language processing, new economy, Norbert Wiener, PageRank, pattern recognition, pre–internet, RAND corporation, Ray Kurzweil, Richard Stallman, Robert Gordon, Rodney Brooks, Sand Hill Road, Second Machine Age, self-driving car, semantic web, shareholder value, side project, Silicon Valley, Silicon Valley startup, Singularitarianism, skunkworks, Skype, social software, speech recognition, stealth mode startup, Stephen Hawking, Steve Ballmer, Steve Jobs, Steve Wozniak, Steven Levy, Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Nelson, telemarketer, telepresence, telepresence robot, Tenerife airport disaster, The Coming Technological Singularity, the medium is the message, Thorstein Veblen, Turing test, Vannevar Bush, Vernor Vinge, Watson beat the top human players on Jeopardy!, Whole Earth Catalog, William Shockley: the traitorous eight

When pressed, the computer scientists, roboticists, and technologists offer conflicting views. Some want to replace humans with machines; some are resigned to the inevitability—“I for one, welcome our insect overlords” (later “robot overlords”) was a meme that was popularized by The Simpsons—and some of them just as passionately want to build machines to extend the reach of humans. The question of whether true artificial intelligence—the concept known as “Strong AI” or Artificial General Intelligence—will emerge, and whether machines can do more than mimic humans, has also been debated for decades. Today there is a growing chorus of scientists and technologists raising new alarms about the possibility of the emergence of self-aware machines and their consequences. Discussions about the state of AI technology today veer into the realm of science fiction or perhaps religion.

The experiment was made possible by Google’s immense computing resources that allowed the researchers to turn loose a cluster of sixteen thousand processors on the problem—which of course is still a tiny fraction of the brain’s billions of neurons, a huge portion of which are devoted to vision. Whether or not Google is on the trail of a genuine artificial “brain” has become increasingly controversial. There is certainly no question that the deep learning techniques are paying off in a wealth of increasingly powerful AI achievements in vision and speech. And there remains in Silicon Valley a growing group of engineers and scientists who believe they are once again closing in on “Strong AI”—the creation of a self-aware machine with human or greater intelligence. Ray Kurzweil, the artificial intelligence researcher and barnstorming advocate for technologically induced immortality, joined Google in 2013 to take over the brain work from Ng, shortly after publishing How to Create a Mind, a book that purported to offer a recipe for creating a working AI. Kurzweil, of course, has all along been one of the most eloquent backers of the idea of a singularity.

 

pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence by George Zarkadakis

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, Ada Lovelace, agricultural Revolution, Airbnb, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, anthropic principle, Asperger Syndrome, autonomous vehicles, barriers to entry, battle of ideas, Berlin Wall, bioinformatics, British Empire, business process, carbon-based life, cellular automata, Claude Shannon: information theory, combinatorial explosion, complexity theory, continuous integration, Conway's Game of Life, cosmological principle, dark matter, dematerialisation, double helix, Douglas Hofstadter, Edward Snowden, epigenetics, Flash crash, Google Glasses, Gödel, Escher, Bach, income inequality, index card, industrial robot, Internet of things, invention of agriculture, invention of the steam engine, invisible hand, Isaac Newton, Jacquard loom, Jacquard loom, Jacques de Vaucanson, James Watt: steam engine, job automation, John von Neumann, Joseph-Marie Jacquard, millennium bug, natural language processing, Norbert Wiener, On the Economy of Machinery and Manufactures, packet switching, pattern recognition, Paul Erdős, post-industrial society, prediction markets, Ray Kurzweil, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, speech recognition, stem cell, Stephen Hawking, Steven Pinker, strong AI, technological singularity, The Coming Technological Singularity, the scientific method, theory of mind, Turing complete, Turing machine, Turing test, Tyler Cowen: Great Stagnation, Vernor Vinge, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K

Thirdly, that intelligence, from its simplest manifestation in a squirming worm to self-awareness and consciousness in sophisticated cappuccino-sipping humans, is a purely material, indeed biological, phenomenon. Finally, that if a material object called ‘brain’ can be conscious then it is theoretically feasible that another material object, made of some other material stuff, can also be conscious. Based on those four propositions, empiricism tells us that ‘strong AI’ is possible. And that’s because, for empiricists, a brain is an information-processing machine, not metaphorically but literally. We have several billion cells in our body.27 If we adopt an empirical perspective, the scientific problem of intelligence – or consciousness, natural or artificial – can be (re)defined as a simple question: how can several billion unconscious nanorobots arrive at consciousness?

The pioneers of AI explored many ideas including using algorithms for solving general logical problems, or simulating parts of the brain using artificial neural nets. And although they produced some very capable systems, none of them could arguably be called intelligent. Of course, how one defines intelligence is also crucial. For the pioneers of AI, ‘artificial intelligence’ was nothing less than the artificial equivalent of human intelligence, a position nowadays referred to as ‘strong AI’. An intelligent machine ought to be one that possessed general intelligence, just like a human. This meant that the machine ought to be able to solve any problem using first principles and experience derived from learning. Early models of general problem-solving were built, but could not scale up. Systems could solve one general problem but not any general problem.6 Algorithms that searched data in order to make general inferences failed quickly because of something called ‘combinatorial explosion’: there were simply too many interrelated parameters and variables to calculate after a number of steps.
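The ‘combinatorial explosion’ the excerpt describes can be made concrete with a small back-of-the-envelope sketch (an editorial illustration, not from the book): if a general problem-solver had to consider every ordering of n interrelated decisions, the search space would grow factorially, which is why exhaustive first-principles search stalls after only a few steps. The function name count_orderings below is purely illustrative.

```python
# Minimal sketch of combinatorial explosion in brute-force search.
# Assumption: a toy solver that must examine every ordering of n decisions.
from math import factorial

def count_orderings(n: int) -> int:
    # Number of ways to order n interrelated decisions: n!
    return factorial(n)

if __name__ == "__main__":
    for n in (5, 10, 15, 20):
        print(f"{n} decisions -> {count_orderings(n):,} candidate orderings")
    # 20 decisions already yield roughly 2.4 quintillion orderings,
    # far more than the early general problem-solvers could enumerate.
```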

 

Toast by Stross, Charles

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

anthropic principle, Buckminster Fuller, cosmological principle, dark matter, double helix, Ernest Rutherford, Extropian, Francis Fukuyama: the end of history, glass ceiling, gravity well, Khyber Pass, Mars Rover, Mikhail Gorbachev, NP-complete, oil shale / tar sands, peak oil, performance metric, phenotype, Plutocrats, plutocrats, Ronald Reagan, Silicon Valley, slashdot, speech recognition, strong AI, traveling salesman, Turing test, urban renewal, Vernor Vinge, Whole Earth Review, Y2K

Suicide by the numbers.” A glass appeared by my right hand. “Way I see it, we’ve been fighting a losing battle here. Maybe if we hadn’t put a spike in Babbage’s gears he’d have developed computing technology on an ad-hoc basis and we might have been able to finesse the mathematicians into ignoring it as being beneath them—brute engineering—but I’m not optimistic. Immunizing a civilization against developing strong AI is one of those difficult problems that no algorithm exists to solve. The way I see it, once a civilization develops the theory of the general purpose computer, and once someone comes up with the goal of artificial intelligence, the foundations are rotten and the dam is leaking. You might as well take off and nuke them from orbit; it can’t do any more damage.” “You remind me of the story of the little Dutch boy.”

 

pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, additive manufacturing, Affordable Care Act / Obamacare, AI winter, algorithmic trading, Amazon Mechanical Turk, artificial general intelligence, autonomous vehicles, banking crisis, Baxter: Rethink Robotics, Bernie Madoff, Bill Joy: nanobots, call centre, Capital in the Twenty-First Century by Thomas Piketty, Chris Urmson, Clayton Christensen, clean water, cloud computing, collateralized debt obligation, computer age, debt deflation, deskilling, diversified portfolio, Erik Brynjolfsson, factory automation, financial innovation, Flash crash, Fractional reserve banking, Freestyle chess, full employment, Goldman Sachs: Vampire Squid, High speed trading, income inequality, indoor plumbing, industrial robot, informal economy, iterative process, Jaron Lanier, job automation, John Maynard Keynes: technological unemployment, John von Neumann, Khan Academy, knowledge worker, labor-force participation, labour mobility, liquidity trap, low skilled workers, low-wage service sector, Lyft, manufacturing employment, McJob, moral hazard, Narrative Science, Network effects, new economy, Nicholas Carr, Norbert Wiener, obamacare, optical character recognition, passive income, performance metric, Peter Thiel, Plutocrats, plutocrats, post scarcity, precision agriculture, price mechanism, Ray Kurzweil, rent control, rent-seeking, reshoring, RFID, Richard Feynman, Richard Feynman, Rodney Brooks, secular stagnation, self-driving car, Silicon Valley, Silicon Valley startup, single-payer health, software is eating the world, sovereign wealth fund, speech recognition, Spread Networks laid a new fibre optics cable between New York and Chicago, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Steven Pinker, strong AI, Stuxnet, technological singularity, telepresence, telepresence robot, The Bell Curve by Richard Herrnstein and Charles Murray, The Coming Technological Singularity, Thomas L Friedman, too big to fail, Tyler Cowen: Great Stagnation, union organizing, Vernor Vinge, very high income, Watson beat the top human players on Jeopardy!, women in the workforce

The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible.”9 Gordon Moore, whose name seems destined to be forever associated with exponentially advancing technology, is likewise skeptical that anything like the Singularity will ever occur.10 Kurzweil’s timeframe for the arrival of human-level artificial intelligence has plenty of defenders, however. MIT physicist Max Tegmark, one of the co-authors of the Hawking article, told The Atlantic’s James Hamblin that “this is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”11 Others view a thinking machine as fundamentally possible, but much further out. Gary Marcus, for example, thinks strong AI will take at least twice as long as Kurzweil predicts, but that “it’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.”12 In recent years, speculation about human-level AI has shifted increasingly away from a top-down programming approach and, instead, toward an emphasis on reverse engineering and then simulating the human brain.

 

pages: 379 words: 108,129

An Optimist's Tour of the Future by Mark Stevenson

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

23andMe, Albert Einstein, Andy Kessler, augmented reality, bank run, carbon footprint, carbon-based life, clean water, computer age, decarbonisation, double helix, Douglas Hofstadter, Elon Musk, flex fuel, Gödel, Escher, Bach, Hans Rosling, Internet of things, invention of agriculture, Isaac Newton, Jeff Bezos, Kevin Kelly, Law of Accelerating Returns, life extension, Louis Pasteur, mutually assured destruction, Naomi Klein, packet switching, peak oil, pre–internet, Ray Kurzweil, Richard Feynman, Richard Feynman, Rodney Brooks, self-driving car, Silicon Valley, smart cities, stem cell, Stephen Hawking, Steven Pinker, Stewart Brand, strong AI, the scientific method, Wall-E, X Prize

If this were then subjected to an appropriate course of education one would obtain the adult brain.’ This proposed necessity of having to raise robots might lead you to the conclusion that truly intelligent robots will be few and far between. But the thing about robots is you can replicate them. Once we’ve got one intelligent robot brain, we can copy it to another machine, and another, and another. The robots have finally arrived, bringing an explosion of ‘strong AI’. Of course, it may not just be us (the humans) doing the copying, it might be the robots themselves. And because technology improves at a startling rate (way faster than biological evolution), one has to consider the possibility that things won’t stop there. Once we achieve a robot with human-level (if not human-like) intelligence, it won’t be very long until robot cognition outstrips the human mind – marrying the human-like intelligence with instant recall, flawless memory and the number-crunching ability of Deep Blue.

 

pages: 329 words: 95,309

Digital Bank: Strategies for Launching or Becoming a Digital Bank by Chris Skinner

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

algorithmic trading, Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, augmented reality, bank run, Basel III, bitcoin, business intelligence, business process, business process outsourcing, call centre, cashless society, clean water, cloud computing, corporate social responsibility, credit crunch, crowdsourcing, cryptocurrency, demand response, disintermediation, don't be evil, en.wikipedia.org, fault tolerance, fiat currency, financial innovation, Google Glasses, high net worth, informal economy, Infrastructure as a Service, Internet of things, Jeff Bezos, Kevin Kelly, Kickstarter, M-Pesa, margin call, mass affluent, mobile money, Mohammed Bouazizi, new economy, Northern Rock, Occupy movement, platform as a service, Ponzi scheme, prediction markets, pre–internet, quantitative easing, ransomware, reserve currency, RFID, Satoshi Nakamoto, Silicon Valley, smart cities, software as a service, Steve Jobs, strong AI, Stuxnet, trade route, unbanked and underbanked, underbanked, upwardly mobile, We are the 99%, web application, Y2K

I know for a fact that the new economies and new values that we discuss within Innotribe are driven by social media. Social media is creating new currencies and new economic models, and this will be very big and very important in the two to three years downstream from now. The question for the banks is how they will position themselves in this new world of peer-to-peer currencies in social media. That is going to be a key question for banks in innovation for the next few years. The other area is what I call strong AI. This is a modern way of looking at AI. The old way was mechanical, thinking of AI as expert systems. Today, we have enormous computational power in our hands, and we should make a big splash around this for the next four or five years. So social data, social media, alternative currencies and peer-to-peer payments will dominate for the near term, and then big data and AI in four or five years from now.

 

pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines by Thomas H. Davenport, Julia Kirby

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

AI winter, Andy Kessler, artificial general intelligence, asset allocation, Automated Insights, autonomous vehicles, Baxter: Rethink Robotics, business intelligence, business process, call centre, carbon-based life, Clayton Christensen, clockwork universe, conceptual framework, dark matter, David Brooks, deliberate practice, deskilling, Edward Lloyd's coffeehouse, Elon Musk, Erik Brynjolfsson, estate planning, follow your passion, Frank Levy and Richard Murnane: The New Division of Labor, Freestyle chess, game design, general-purpose programming language, Google Glasses, Hans Lippershey, haute cuisine, income inequality, index fund, industrial robot, information retrieval, intermodal, Internet of things, inventory management, Isaac Newton, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Khan Academy, knowledge worker, labor-force participation, loss aversion, Mark Zuckerberg, Narrative Science, natural language processing, Norbert Wiener, nuclear winter, pattern recognition, performance metric, Peter Thiel, precariat, quantitative trading / quantitative finance, Ray Kurzweil, Richard Feynman, Richard Feynman, risk tolerance, Robert Shiller, Robert Shiller, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, six sigma, Skype, speech recognition, spinning jenny, statistical model, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, superintelligent machines, supply-chain management, transaction costs, Tyler Cowen: Great Stagnation, Watson beat the top human players on Jeopardy!, Works Progress Administration, Zipcar

Vendors like IBM, Cognitive Scale, SAS, and Tibco are adding new cognitive functions and integrating them into solutions. Deloitte is working with companies like IBM and Cognitive Scale to create not just a single application, but a broad “Intelligent Automation Platform.” Even when progress is made on these types of integration, the result will still fall short of the all-knowing “artificial general intelligence” or “strong AI” that we discussed in Chapter 2. That may well be coming, but not anytime soon. Still, these short-term combinations of tools and methods may well make automation solutions much more useful. Broadening Application of the Same Tools —In addition to employing broader types of technology, organizations that are stepping forward are using their existing technology to address different industries and business functions.

 

pages: 463 words: 118,936

Darwin Among the Machines by George Dyson

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Ada Lovelace, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anti-communist, British Empire, carbon-based life, cellular automata, Claude Shannon: information theory, combinatorial explosion, computer age, Danny Hillis, fault tolerance, Fellow of the Royal Society, finite state, IFF: identification friend or foe, invention of the telescope, invisible hand, Isaac Newton, Jacquard loom, Jacquard loom, James Watt: steam engine, John Nash: game theory, John von Neumann, Menlo Park, Nash equilibrium, Norbert Wiener, On the Economy of Machinery and Manufactures, packet switching, pattern recognition, phenotype, RAND corporation, Richard Feynman, Richard Feynman, spectrum auction, strong AI, the scientific method, The Wealth of Nations by Adam Smith, Turing machine, Von Neumann architecture

Gödel’s second incompleteness theorem—showing that no formal system can prove its own consistency—has been construed as limiting the ability of mechanical processes to comprehend levels of meaning that are accessible to our minds. The argument over where to draw this distinction has been going on for a long time. Can machines calculate? Can machines think? Can machines become conscious? Can machines have souls? Although Leibniz believed that the process of thought could be arithmetized and that mechanism could perform the requisite arithmetic, he disagreed with the “strong AI” of Hobbes that reduced everything to mechanism, even our own consciousness or the existence (and corporeal mortality) of a soul. “Whatever is performed in the body of man and of every animal is no less mechanical than what is performed in a watch,” wrote Leibniz to Samuel Clarke.51 But, in the Monadology, Leibniz argued that “perception, and that which depends upon it, are inexplicable by mechanical causes,” and he presented a thought experiment to support his views: “Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill.

 

pages: 561 words: 167,631

2312 by Kim Stanley Robinson

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

agricultural Revolution, double helix, full employment, hive mind, if you see hoof prints, think horses—not zebras, Kuiper Belt, late capitalism, mutually assured destruction, offshore financial centre, pattern recognition, phenotype, post scarcity, precariat, retrograde motion, stem cell, strong AI, the built environment, the High Line, Turing machine, Turing test, Winter of Discontent

In these years all the bad trends converged in “perfect storm” fashion, leading to a rise in average global temperature of five K, and sea level rise of five meters—and as a result, in the 2120s, food shortages, mass riots, catastrophic death on all continents, and an immense spike in the extinction rate of other species. Early lunar bases, scientific stations on Mars. The Turnaround: 2130 to 2160. Verteswandel (Shortback’s famous “mutation of values”), followed by revolutions; strong AI; self-replicating factories; terraforming of Mars begun; fusion power; strong synthetic biology; climate modification efforts, including the disastrous Little Ice Age of 2142–54; space elevators on Earth and Mars; fast space propulsion; the space diaspora begun; the Mondragon Accord signed. And thus: The Accelerando: 2160 to 2220. Full application of all the new technological powers, including human longevity increases; terraforming of Mars and subsequent Martian revolution; full diaspora into solar system; hollowing of the terraria; start of the terraforming of Venus; the construction of Terminator; and Mars joining the Mondragon Accord.

 

pages: 1,152 words: 266,246

Why the West Rules--For Now: The Patterns of History, and What They Reveal About the Future by Ian Morris

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Admiral Zheng, agricultural Revolution, Albert Einstein, anti-communist, Arthur Eddington, Atahualpa, Berlin Wall, British Empire, Columbian Exchange, conceptual framework, cuban missile crisis, defense in depth, demographic transition, Deng Xiaoping, discovery of the americas, Doomsday Clock, en.wikipedia.org, falling living standards, Flynn Effect, Francisco Pizarro, global village, hiring and firing, indoor plumbing, invention of agriculture, Isaac Newton, James Watt: steam engine, knowledge economy, market bubble, Menlo Park, Mikhail Gorbachev, mutually assured destruction, New Journalism, out of africa, Peter Thiel, phenotype, pink-collar, place-making, purchasing power parity, RAND corporation, Ray Kurzweil, Ronald Reagan, Scientific racism, Silicon Valley, Sinatra Doctrine, South China Sea, special economic zone, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, The Wealth of Nations by Adam Smith, Thomas Kuhn: the structure of scientific revolutions, Thomas L Friedman, Thomas Malthus, trade route, upwardly mobile, wage slave, washing machines reduced drudgery

Archaeogenetics. Cambridge, UK: Cambridge University Press, 2000. Renfrew, Colin, and Iain Morley, eds. Becoming Human: Innovation in Prehistoric Material and Spiritual Culture. Cambridge, UK: Cambridge University Press, 2009. Reynolds, David. One World Divisible: A Global History Since 1945. New York: Norton, 2000. Richards, Jay, et al. Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong A.I. Seattle: Discovery Institute, 2002. Richards, John. Unending Frontier: An Environmental History of the Early Modern World. Berkeley: University of California Press, 2003. Richardson, Lewis Fry. Statistics of Deadly Quarrels. Pacific Grove, CA: Boxwood Press, 1960. Richerson, Peter, Robert Boyd, and Robert Bettinger. “Was Agriculture Impossible During the Pleistocene but Mandatory During the Holocene?”