12 results
3D printing, AI winter, Amazon Web Services, artificial general intelligence, Automated Insights, Bernie Madoff, Bill Joy: nanobots, brain emulation, cellular automata, cloud computing, cognitive bias, computer vision, cuban missile crisis, Daniel Kahneman / Amos Tversky, Danny Hillis, data acquisition, don't be evil, Extropian, finite state, Flash crash, friendly AI, friendly fire, Google Glasses, Google X / Alphabet X, Isaac Newton, Jaron Lanier, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, Loebner Prize, lone genius, mutually assured destruction, natural language processing, Nicholas Carr, optical character recognition, PageRank, pattern recognition, Peter Thiel, prisoner's dilemma, Ray Kurzweil, Rodney Brooks, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, smart grid, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, Stuxnet, superintelligent machines, technological singularity, The Coming Technological Singularity, traveling salesman, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, zero day
When posed with this question, some of the most accomplished scientists I spoke with cited science-fiction writer Isaac Asimov’s Three Laws of Robotics. These rules, they blithely replied, would be “built in” to the AIs, so we would have nothing to fear. They spoke as if this were settled science. We’ll discuss the Three Laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking about or discussing the problem. How to make friendly intelligent machines, and what to fear from superintelligent ones, has moved beyond Asimov’s tropes. Being highly capable and accomplished in AI doesn’t inoculate you against naïveté about its perils. I’m not the first to propose that we’re on a collision course; our species is going to struggle mortally with this problem. This book explores the plausibility of losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe—levels that we cannot ourselves reach—and behaviors that probably won’t be compatible with our survival.
Thus the first ultraintelligent machine is the last invention that man need ever make … The Singularity has three well-developed definitions—Good’s, above, is the first. Good never used the term “singularity” but he got the ball rolling by positing what he thought of as an inescapable and beneficial milestone in human history—the invention of smarter-than-human machines. To paraphrase Good, if you make a superintelligent machine, it will be better than humans at everything we use our brains for, and that includes making superintelligent machines. The first machine would then set off an intelligence explosion, a rapid increase in intelligence, as it repeatedly self-improved, or simply made smarter machines. This machine or machines would leave man’s brainpower in the dust. After the intelligence explosion, man wouldn’t have to invent anything else—all his needs would be met by machines.
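Good’s feedback loop can be caricatured in a few lines of toy arithmetic. This sketch is purely illustrative: the starting level, the per-generation gain, and the linear improvement rule are all invented assumptions, not a model of real AI; it only shows why "a machine that improves the machine-improver" compounds rather than adds.

```python
# Toy caricature of Good's "intelligence explosion" feedback loop.
# All numbers are invented for illustration; nothing here models real AI.

def intelligence_explosion(start=1.0, gain=0.1, generations=10):
    """Each generation designs a successor whose improvement is
    proportional to its own capability, so capability compounds."""
    levels = [start]
    for _ in range(generations):
        current = levels[-1]
        # A smarter machine makes a proportionally larger improvement
        # to its successor -- the source of the runaway dynamic.
        levels.append(current + gain * current)
    return levels

levels = intelligence_explosion()
# Compound, not additive, growth: after 10 cycles, 1.1**10 ~ 2.6x the start.
print(levels[0], levels[-1])
```

Because each step multiplies rather than adds, the same loop with a fixed (non-proportional) improvement would grow only linearly; the proportionality is the whole point of Good’s argument.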
The last sentence of Good’s most often quoted paragraph should read in its entirety: Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control (italics mine). These two sentences tell us important things about Good’s intentions. He felt that we humans were beset by so many complex, looming problems—the nuclear arms race, pollution, war, and so on—that we could only be saved by better thinking, and that would come from superintelligent machines. The second sentence lets us know that the father of the intelligence explosion concept was acutely aware that producing superintelligent machines, however necessary for our survival, could blow up in our faces. Keeping an ultraintelligent machine under control isn’t a given, Good tells us. He doesn’t believe we will even know how to do it—the machine will have to tell us itself. Good knew a few things about machines that could save the world—he had helped build and run some of the earliest electronic computers, used at Bletchley Park to help defeat Germany.
3D printing, agricultural Revolution, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic trading, artificial general intelligence, augmented reality, autonomous vehicles, bitcoin, blockchain, clean water, cognitive dissonance, Colonization of Mars, complexity theory, computer age, computer vision, constrained optimization, corporate personhood, cosmological principle, cryptocurrency, cuban missile crisis, Danny Hillis, dark matter, discrete time, Elon Musk, Emanuel Derman, endowment effect, epigenetics, Ernest Rutherford, experimental economics, Flash crash, friendly AI, Google Glasses, hive mind, income inequality, information trail, Internet of things, invention of writing, iterative process, Jaron Lanier, job automation, John von Neumann, Kevin Kelly, knowledge worker, loose coupling, microbiome, Moneyball by Michael Lewis explains big data, natural language processing, Network effects, Norbert Wiener, pattern recognition, Peter Singer: altruism, phenotype, planetary scale, Ray Kurzweil, recommendation engine, Republic of Letters, RFID, Richard Thaler, Rory Sutherland, Search for Extraterrestrial Intelligence, self-driving car, sharing economy, Silicon Valley, Skype, smart contracts, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, superintelligent machines, supervolcano, the scientific method, The Wisdom of Crowds, theory of mind, Thorstein Veblen, too big to fail, Turing machine, Turing test, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K
But there are some kinds of foolishness that seem only to afflict the very intelligent. Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person’s Kool-Aid. This is not to say that superintelligent machines pose no danger to humanity. It’s simply that there are many other more pressing and more probable risks facing us in this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is low, it’s surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat. Not coincidentally, the problem with this argument was first identified by some of its most vocal proponents.
Computers share knowledge much more easily than humans do, and they can retain that knowledge longer, becoming wiser than humans. Many forward-thinking companies already see this writing on the wall and are luring the best computer scientists out of academia with better pay and advanced hardware. A world with superintelligent-machine-run corporations won’t be that different for humans from the world of today; it will just be better, with more advanced goods and services available at very little cost and more leisure time for those who want it. Of course, the first superintelligent machines probably won’t be corporate; they’ll be operated by governments. And that will be much more hazardous. Governments are more flexible in their actions than corporations; they create their own laws. And as we’ve seen, even the best of them can engage in torture when they think their survival is at stake.
Even if no large leaps are made in understanding intelligence algorithmically, computers will eventually be able to simulate the workings of a human brain (itself a biological machine) and attain superhuman intelligence through brute-force computation. However, although computational power is increasing exponentially, supercomputer costs and electrical-power efficiency aren’t keeping pace. The first machines capable of superhuman intelligence will be expensive and will require enormous amounts of electrical power—they’ll need to earn money to survive. The environmental playing field for superintelligent machines is already in place; in fact, the Darwinian game is afoot. The trading machines of investment banks are competing, for serious money, on the world’s exchanges, having put human day traders out of business years ago. As computers and algorithms advance beyond investing and accounting, machines will make more and more corporate decisions, including strategic ones, until they’re running the world.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, anti-communist, artificial general intelligence, autonomous vehicles, barriers to entry, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, demographic transition, Douglas Hofstadter, Drosophila, Elon Musk, en.wikipedia.org, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, Gödel, Escher, Bach, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John von Neumann, knowledge worker, Menlo Park, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Norbert Wiener, NP-complete, nuclear winter, optical character recognition, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, transaction costs, Turing machine, Vernor Vinge, Watson beat the top human players on Jeopardy!, World Values Survey
They suggest that (at least in lieu of better data or analysis) it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century, and that it has a non-trivial chance of being developed considerably sooner or much later; that it might perhaps fairly soon thereafter result in superintelligence; and that a wide range of outcomes may have a significant chance of occurring, including extremely good outcomes and outcomes that are as bad as human extinction.84 At the very least, they suggest that the topic is worth a closer look.

CHAPTER 2
Paths to superintelligence

Machines are currently far inferior to humans in general intelligence. Yet one day (we have suggested) they will be superintelligent. How do we get from here to there? This chapter explores several conceivable technological paths. We look at artificial intelligence, whole brain emulation, biological cognition, and human–machine interfaces, as well as networks and organizations. We evaluate their different degrees of plausibility as pathways to superintelligence.
Let us consider some of the capabilities that a superintelligence could have and how it could use them.

Functionalities and superpowers

It is important not to anthropomorphize superintelligence when thinking about its potential impacts. Anthropomorphic frames encourage unfounded expectations about the growth trajectory of a seed AI and about the psychology, motivations, and capabilities of a mature superintelligence. For example, a common assumption is that a superintelligent machine would be like a very clever but nerdy human being. We imagine that the AI has book smarts but lacks social savvy, or that it is logical but not intuitive and creative. This idea probably originates in observation: we look at present-day computers and see that they are good at calculation, at remembering facts, and at following the letter of instructions, while being oblivious to social contexts and subtexts, norms, emotions, and politics.
If these developments take place on digital rather than biological timescales, then the glacial humans might find themselves expropriated before they could say Jack Robinson.15 Life in an algorithmic economy Life for biological humans in a post-transition Malthusian state need not resemble any of the historical states of man (as hunter–gatherer, farmer, or office worker). Instead, the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings.16 They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable. Perhaps instead of using enhancement medicine, they would take drugs to stunt their growth and slow their metabolism in order to reduce their cost of living (fast-burners being unable to survive at the gradually declining subsistence income).
The Seventh Sense: Power, Fortune, and Survival in the Age of Networks by Joshua Cooper Ramo
Airbnb, Albert Einstein, algorithmic trading, barriers to entry, Berlin Wall, bitcoin, British Empire, cloud computing, crowdsourcing, Danny Hillis, defense in depth, Deng Xiaoping, Edward Snowden, Fall of the Berlin Wall, Firefox, Google Chrome, income inequality, Isaac Newton, Jeff Bezos, job automation, market bubble, Menlo Park, natural language processing, Network effects, Norbert Wiener, Oculus Rift, packet switching, Paul Graham, price stability, quantitative easing, RAND corporation, recommendation engine, Republic of Letters, Richard Feynman, road to serfdom, Sand Hill Road, secular stagnation, self-driving car, Silicon Valley, Skype, Snapchat, social web, sovereign wealth fund, Steve Jobs, Steve Wozniak, Stewart Brand, Stuxnet, superintelligent machines, technological singularity, The Coming Technological Singularity, The Wealth of Nations by Adam Smith, too big to fail, Vernor Vinge, zero day
It was easy enough for Vinge to see how this would end. It wouldn’t be with the sort of polite, lapdog domesticity we might hope for from artificial intelligence, but with a rottweiler of a device, alive to the meaty smell of power, violence, and greed. This puzzle has interested the Oxford philosopher Nick Bostrom, who has described the following thought experiment: Imagine a superintelligent machine programmed to do whatever is needed to make paper clips as fast as possible, a machine that is connected to every resource that task might demand. Go figure it out! might be all its human instructors tell it. As the clip-making AI becomes better and better at its task, it demands more and still more resources: more electricity, steel, manufacturing, shipping. The paper clips pile up. The machine looks around: If only I could control the power supply, it thinks.
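The logic of Bostrom’s thought experiment can be sketched as a toy optimizer. Everything here is a hypothetical illustration (the resource names and quantities are invented): the point is only that an objective which mentions nothing but clip count gives the optimizer no reason to spare anything else.

```python
# Toy illustration of the paper-clip-maximizer thought experiment.
# Resource names and amounts are invented; this is not a model of any real AI.

def maximize_clips(resources):
    """Greedily convert every available resource into paper clips.
    The objective counts only clips, so nothing in it says that
    the power grid (or anything else) should be left alone."""
    clips = 0
    for name in list(resources):
        clips += resources[name]  # one clip per unit of resource, say
        resources[name] = 0       # the resource is consumed entirely
    return clips

world = {"steel": 1000, "power_grid": 500, "everything_else": 10**6}
print(maximize_clips(world))  # every resource ends up as clips
print(world)                  # nothing is left over
```

The failure is not malice in the code; it is that human values never appear in the objective, so the optimum logically tramples them, which is exactly the point the excerpt is making.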
In the spring of 1993: See Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, proceedings of a symposium cosponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute, Westlake, Ohio, March 30–31, 1993 (Hampton, VA: National Aeronautics and Space Administration Scientific and Technical Information Program), iii.
“Within thirty years”: Ibid., 12.
Imagine a superintelligent machine: Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in AI, vol. 2, ed. Iva Smit et al. (Windsor, ON: International Institute for Advanced Studies in Systems Research and Cybernetics, 2003), 12–17, and Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,” Minds and Machines 22, no. 2 (2012): 71–85.
The Lights in the Tunnel by Martin Ford
Albert Einstein, Bill Joy: nanobots, Black-Scholes formula, call centre, cloud computing, collateralized debt obligation, credit crunch, double helix, en.wikipedia.org, factory automation, full employment, income inequality, index card, industrial robot, inventory management, invisible hand, Isaac Newton, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, knowledge worker, low skilled workers, moral hazard, pattern recognition, prediction markets, Productivity paradox, Ray Kurzweil, Search for Extraterrestrial Intelligence, Silicon Valley, Stephen Hawking, strong AI, superintelligent machines, technological singularity, Thomas L Friedman, Turing test, Vernor Vinge, War on Poverty
If average—or even exceptional—human beings are unable to find employment within their capabilities, then how will they acquire the income necessary to create the demand that in turn drives production? If we consider the singularity in this context, then is it really something that will necessarily push us forward exponentially? Or could it in actuality lead to rapid economic decline?* The technologists who speculate about the singularity don’t seem too concerned about this problem. Perhaps they assume that the superintelligent machines of the future will figure all this out for us. How-

* In this book, we won’t again stray into this more speculative arena (except in the last sections of the Appendix). The ideas presented in this book do not depend on the occurrence of the technological singularity. The standard we have set is much lower: we are concerned only with the possibility that machines will become capable of performing most average, routine jobs.
Overcomplicated: Technology at the Limits of Comprehension by Samuel Arbesman
3D printing, algorithmic trading, Anton Chekhov, Apple II, Benoit Mandelbrot, citation needed, combinatorial explosion, Danny Hillis, David Brooks, discovery of the americas, en.wikipedia.org, Erik Brynjolfsson, Flash crash, friendly AI, game design, Google X / Alphabet X, Googley, HyperCard, Inbox Zero, Isaac Newton, iterative process, Kevin Kelly, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, mandelbrot fractal, Minecraft, Netflix Prize, Nicholas Carr, Parkinson's law, Ray Kurzweil, recommendation engine, Richard Feynman, Richard Feynman: Challenger O-ring, Second Machine Age, self-driving car, software studies, statistical model, Steve Jobs, Steve Wozniak, Steven Pinker, Stewart Brand, superintelligent machines, Therac-25, Tyler Cowen: Great Stagnation, urban planning, Watson beat the top human players on Jeopardy!, Whole Earth Catalog, Y2K
Living with Complexity by Don Norman examines the origins of (and need for) complexity, particularly from the perspective of design. The Techno-Human Condition by Braden R. Allenby and Daniel Sarewitz is a discussion of how to grapple with coming technological change and is particularly intriguing when it discusses “wicked complexity.” Superintelligence by Nick Bostrom explores the many issues and implications related to the development of superintelligent machines. The Works, The Heights, and The Way to Go by Kate Ascher examine how cities, skyscrapers, and our transportation networks, respectively, actually work. Beautifully rendered and fascinating books. The Second Machine Age by Erik Brynjolfsson and Andrew McAfee examines the rapid technological change we are experiencing and can come to expect, and how it will affect our economy, as well as how to handle this change.
The Transhumanist Reader by Max More, Natasha Vita-More
23andMe, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, augmented reality, Bill Joy: nanobots, bioinformatics, brain emulation, Buckminster Fuller, cellular automata, clean water, cloud computing, cognitive bias, cognitive dissonance, combinatorial explosion, conceptual framework, Conway's Game of Life, cosmological principle, data acquisition, discovery of DNA, Drosophila, en.wikipedia.org, experimental subject, Extropian, fault tolerance, Flynn Effect, Francis Fukuyama: the end of history, Frank Gehry, friendly AI, game design, germ theory of disease, hypertext link, impulse control, index fund, John von Neumann, joint-stock company, Kevin Kelly, Law of Accelerating Returns, life extension, Louis Pasteur, Menlo Park, meta-analysis, moral hazard, Network effects, Norbert Wiener, P = NP, pattern recognition, phenotype, positional goods, prediction markets, presumed consent, Ray Kurzweil, reversible computing, RFID, Richard Feynman, Ronald Reagan, silicon-based life, Singularitarianism, stem cell, stochastic process, superintelligent machines, supply-chain management, supply-chain management software, technological singularity, Ted Nelson, telepresence, telepresence robot, telerobotics, the built environment, The Coming Technological Singularity, the scientific method, The Wisdom of Crowds, transaction costs, Turing machine, Turing test, Upton Sinclair, Vernor Vinge, Von Neumann architecture, Whole Earth Review, women in the workforce
A better example, albeit rather extreme, for making this point is Homo sapiens’ relationship with bacteria. Both human beings and bacteria have good claims to being the “dominant species” on Earth – depending upon how one defines dominant. It is possible that superintelligent machines may wish to dominate some niche that is not presently occupied in any serious fashion by human beings. If this is the case, then from a human being’s point of view, such an AI would not be a Dominant AI. Instead, we would have a “Limited AI” scenario. How could Limited AI occur? I can imagine several scenarios, and I’m sure other people can imagine more. Perhaps the most important point to make is that superintelligent machines may not be competing in the same niche with human beings for resources, and would therefore have little incentive to dominate us. In such a Limited AI scenario, there will be aspects of human life which continue on, much as before, with human beings remaining number one.
Only Humans Need Apply: Winners and Losers in the Age of Smart Machines by Thomas H. Davenport, Julia Kirby
AI winter, Andy Kessler, artificial general intelligence, asset allocation, Automated Insights, autonomous vehicles, Baxter: Rethink Robotics, business intelligence, business process, call centre, carbon-based life, Clayton Christensen, clockwork universe, conceptual framework, dark matter, David Brooks, deliberate practice, deskilling, Edward Lloyd's coffeehouse, Elon Musk, Erik Brynjolfsson, estate planning, follow your passion, Frank Levy and Richard Murnane: The New Division of Labor, Freestyle chess, game design, general-purpose programming language, Google Glasses, Hans Lippershey, haute cuisine, income inequality, index fund, industrial robot, information retrieval, intermodal, Internet of things, inventory management, Isaac Newton, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Khan Academy, knowledge worker, labor-force participation, loss aversion, Mark Zuckerberg, Narrative Science, natural language processing, Norbert Wiener, nuclear winter, pattern recognition, performance metric, Peter Thiel, precariat, quantitative trading / quantitative finance, Ray Kurzweil, Richard Feynman, risk tolerance, Robert Shiller, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, six sigma, Skype, speech recognition, spinning jenny, statistical model, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, superintelligent machines, supply-chain management, transaction costs, Tyler Cowen: Great Stagnation, Watson beat the top human players on Jeopardy!, Works Progress Administration, Zipcar
Colvin suggests, for example, that no one would want to be judged by a computer in a courtroom. But a minority defendant given the choice between a probably prejudiced jury, a possibly prejudiced judge, and a race-blind machine might well choose the latter option. In addition, not everyone agrees that we humans will remain in a position to dictate which decisions and actions will be reserved for us. What would prevent a superintelligent machine from denying our commands, they ask, if it thought better of the situation? To prepare for that possibility (familiar to those who remember HAL in 2001: A Space Odyssey), some insist that computer scientists had better figure out how to program values into the machines, and values that are “human-friendly,” to color the decision-making that might proceed logically but tragically from their narrowly specified goals.
A Declaration of the Independence of Cyberspace, AI winter, airport security, Apple II, artificial general intelligence, augmented reality, autonomous vehicles, Baxter: Rethink Robotics, Bill Duvall, bioinformatics, Brewster Kahle, Burning Man, call centre, cellular automata, Chris Urmson, Claude Shannon: information theory, Clayton Christensen, clean water, cloud computing, collective bargaining, computer age, computer vision, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data acquisition, Dean Kamen, deskilling, don't be evil, Douglas Engelbart, Douglas Hofstadter, Dynabook, Edward Snowden, Elon Musk, Erik Brynjolfsson, factory automation, From Mathematics to the Technologies of Life and Death, future of work, Galaxy Zoo, Google Glasses, Google X / Alphabet X, Grace Hopper, Gödel, Escher, Bach, Hacker Ethic, haute couture, hive mind, hypertext link, indoor plumbing, industrial robot, information retrieval, Internet Archive, Internet of things, invention of the wheel, Jacques de Vaucanson, Jaron Lanier, Jeff Bezos, job automation, John Conway, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, knowledge worker, Kodak vs Instagram, labor-force participation, loose coupling, Mark Zuckerberg, Marshall McLuhan, medical residency, Menlo Park, Mother of all demos, natural language processing, new economy, Norbert Wiener, PageRank, pattern recognition, pre–internet, RAND corporation, Ray Kurzweil, Richard Stallman, Robert Gordon, Rodney Brooks, Sand Hill Road, Second Machine Age, self-driving car, semantic web, shareholder value, side project, Silicon Valley, Silicon Valley startup, Singularitarianism, skunkworks, Skype, social software, speech recognition, stealth mode startup, Stephen Hawking, Steve Ballmer, Steve Jobs, Steve Wozniak, Steven Levy, Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Nelson, telemarketer, telepresence, telepresence robot, Tenerife airport disaster, The Coming Technological Singularity, the medium is the message, Thorstein Veblen, Turing test, Vannevar Bush, Vernor Vinge, Watson beat the top human players on Jeopardy!, Whole Earth Catalog, William Shockley: the traitorous eight
Part of Her is also about the singularity, the idea that machine intelligence is accelerating at such a pace that it will eventually surpass human intelligence and become independent, leaving humans behind. Both Her and Transcendence, another singularity-obsessed science-fiction movie released the following spring, are most intriguing for the way they portray human-machine relationships. In Transcendence the human-computer interaction moves from pleasant to dark, and eventually a superintelligent machine destroys human civilization. In Her, ironically, the relationship between the man and his operating system disintegrates as the computer’s intelligence develops so quickly that, not satisfied even with thousands of simultaneous relationships, it transcends humanity and . . . departs. This may be science fiction, but in the real world, this territory had become familiar to Liesl Capper almost a decade earlier.
3D printing, Albert Einstein, Amazon Mechanical Turk, Arthur Eddington, Benoit Mandelbrot, bioinformatics, Black Swan, Brownian motion, cellular automata, Claude Shannon: information theory, combinatorial explosion, computer vision, constrained optimization, correlation does not imply causation, crowdsourcing, Danny Hillis, data is the new oil, double helix, Douglas Hofstadter, Erik Brynjolfsson, experimental subject, Filter Bubble, future of work, global village, Google Glasses, Gödel, Escher, Bach, information retrieval, job automation, John Snow's cholera map, John von Neumann, Joseph Schumpeter, Kevin Kelly, lone genius, mandelbrot fractal, Mark Zuckerberg, Moneyball by Michael Lewis explains big data, Narrative Science, Nate Silver, natural language processing, Netflix Prize, Network effects, NP-complete, P = NP, PageRank, pattern recognition, phenotype, planetary scale, pre–internet, random walk, Ray Kurzweil, recommendation engine, Richard Feynman, Second Machine Age, self-driving car, Silicon Valley, speech recognition, statistical model, Stephen Hawking, Steven Levy, Steven Pinker, superintelligent machines, the scientific method, The Signal and the Noise by Nate Silver, theory of mind, transaction costs, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, white flight
Craig Mundie argues for a balanced approach to data collection and use in “Privacy pragmatism” (Foreign Affairs, 2014). The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee (Norton, 2014), discusses how progress in AI will shape the future of work and the economy. “World War R,” by Chris Baraniuk (New Scientist, 2014) reports on the debate surrounding the use of robots in battle. “Transcending complacency on superintelligent machines,” by Stephen Hawking et al. (Huffington Post, 2014), argues that now is the time to worry about AI’s risks. Nick Bostrom’s Superintelligence (Oxford University Press, 2014) considers those dangers and what to do about them. A Brief History of Life, by Richard Hawking (Random Penguin, 1982), summarizes the quantum leaps of evolution in the eons BC. (Before Computers. Just kidding.) The Singularity Is Near, by Ray Kurzweil (Penguin, 2005), is your guide to the transhuman future.
The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil
additive manufacturing, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, augmented reality, autonomous vehicles, Benoit Mandelbrot, Bill Joy: nanobots, bioinformatics, brain emulation, Brewster Kahle, Brownian motion, business intelligence, c2.com, call centre, carbon-based life, cellular automata, Claude Shannon: information theory, complexity theory, conceptual framework, Conway's Game of Life, cosmological constant, cosmological principle, cuban missile crisis, data acquisition, Dava Sobel, David Brooks, Dean Kamen, disintermediation, double helix, Douglas Hofstadter, en.wikipedia.org, epigenetics, factory automation, friendly AI, George Gilder, Gödel, Escher, Bach, informal economy, information retrieval, invention of the telephone, invention of the telescope, invention of writing, Isaac Newton, iterative process, Jaron Lanier, Jeff Bezos, job automation, job satisfaction, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, linked data, Loebner Prize, Louis Pasteur, mandelbrot fractal, Mikhail Gorbachev, mouse model, Murray Gell-Mann, mutually assured destruction, natural language processing, Network effects, new economy, Norbert Wiener, oil shale / tar sands, optical character recognition, pattern recognition, phenotype, premature optimization, randomized controlled trial, Ray Kurzweil, remote working, reversible computing, Richard Feynman, Rodney Brooks, Search for Extraterrestrial Intelligence, semantic web, Silicon Valley, Singularitarianism, speech recognition, statistical model, stem cell, Stephen Hawking, Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Kaczynski, telepresence, The Coming Technological Singularity, transaction costs, Turing machine, Turing test, Vernor Vinge, Y2K, Yogi Berra
The author runs a company, FATKAT (Financial Accelerating Transactions by Kurzweil Adaptive Technologies), which applies computerized pattern recognition to financial data to make stock-market investment decisions, http://www.FatKat.com.
159. See discussion in chapter 2 on price-performance improvements in computer memory and electronics in general.
160. Runaway AI refers to a scenario where, as Max More describes, "superintelligent machines, initially harnessed for human benefit, soon leave us behind." Max More, "Embrace, Don't Relinquish, the Future," http://www.KurzweilAI.net/articles/art0106.html?printable=1. See also Damien Broderick's description of the "Seed AI": "A self-improving seed AI could run glacially slowly on a limited machine substrate. The point is, so long as it has the capacity to improve itself, at some point it will do so convulsively, bursting through any architectural bottlenecks to design its own improved hardware, maybe even build it (if it's allowed control of tools in a fabrication plant)."
Airbnb, artificial general intelligence, asset allocation, Atul Gawande, augmented reality, back-to-the-land, Bernie Madoff, Bertrand Russell: In Praise of Idleness, Black Swan, blue-collar work, Buckminster Fuller, business process, Cal Newport, call centre, Checklist Manifesto, cognitive bias, cognitive dissonance, Colonization of Mars, Columbine, correlation does not imply causation, David Brooks, David Graeber, diversification, diversified portfolio, Donald Trump, effective altruism, Elon Musk, fault tolerance, fear of failure, Firefox, follow your passion, future of work, Google X / Alphabet X, Howard Zinn, Hugh Fearnley-Whittingstall, Jeff Bezos, job satisfaction, Johann Wolfgang von Goethe, Kevin Kelly, Kickstarter, Lao Tzu, life extension, Mahatma Gandhi, Mark Zuckerberg, Mason jar, Menlo Park, Mikhail Gorbachev, Nicholas Carr, optical character recognition, PageRank, passive income, pattern recognition, Paul Graham, Peter H. Diamandis: Planetary Resources, Peter Singer: altruism, Peter Thiel, phenotype, post scarcity, premature optimization, QWERTY keyboard, Ralph Waldo Emerson, Ray Kurzweil, recommendation engine, rent-seeking, Richard Feynman, risk tolerance, Ronald Reagan, sharing economy, side project, Silicon Valley, skunkworks, Skype, Snapchat, social graph, software as a service, software is eating the world, stem cell, Stephen Hawking, Steve Jobs, Stewart Brand, superintelligent machines, Tesla Model S, The Wisdom of Crowds, Thomas L Friedman, Wall-E, Washington Consensus, Whole Earth Catalog, Y Combinator
On Appreciating the Risks of Artificial Intelligence “Jaan Tallinn, one of the founders of Skype, said that when he talks to people about this issue, he asks only two questions to get an understanding of whether the person he’s talking to is going to be able to grok just how pressing a concern artificial intelligence is. The first is, ‘Are you a programmer?’—the relevance of which is obvious—and the second is, ‘Do you have children?’ He claims to have found that if people don’t have children, their concern about the future isn’t sufficiently well-calibrated so as to get just how terrifying the prospect of building superintelligent machines is in the absence of having figured out the control problem [ensuring the AI converges with our interests, even when a thousand or a billion times smarter]. I think there’s something to that. It’s not limited, of course, to artificial intelligence. It spreads to every topic of concern. To worry about the fate of civilization in the abstract is harder than worrying about what sorts of experiences your children are going to have in the future.”