Geoffrey Hinton

57 results


pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World by Cade Metz

AI winter, air gap, Airbnb, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, AlphaGo, Amazon Robotics, artificial general intelligence, Asilomar, autonomous vehicles, backpropagation, Big Tech, British Empire, Cambridge Analytica, carbon-based life, cloud computing, company town, computer age, computer vision, deep learning, deepfake, DeepMind, Demis Hassabis, digital map, Donald Trump, driverless car, drone strike, Elon Musk, fake news, Fellow of the Royal Society, Frank Gehry, game design, Geoffrey Hinton, Google Earth, Google X / Alphabet X, Googley, Internet Archive, Isaac Newton, Jeff Hawkins, Jeffrey Epstein, job automation, John Markoff, life extension, machine translation, Mark Zuckerberg, means of production, Menlo Park, move 37, move fast and break things, Mustafa Suleyman, new economy, Nick Bostrom, nuclear winter, OpenAI, PageRank, PalmPilot, pattern recognition, Paul Graham, paypal mafia, Peter Thiel, profit motive, Richard Feynman, ride hailing / ride sharing, Ronald Reagan, Rubik’s Cube, Sam Altman, Sand Hill Road, self-driving car, side project, Silicon Valley, Silicon Valley billionaire, Silicon Valley startup, Skype, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Ballmer, Steven Levy, Steven Pinker, tech worker, telemarketer, The Future of Employment, Turing test, warehouse automation, warehouse robotics, Y Combinator

TIMELINE
1960—Cornell professor Frank Rosenblatt builds the Mark I Perceptron, an early “neural network,” at a lab in Buffalo, New York.
1969—MIT professors Marvin Minsky and Seymour Papert publish Perceptrons, pinpointing the flaws in Rosenblatt’s technology.
1971—Geoff Hinton starts a PhD in artificial intelligence at the University of Edinburgh.
1973—The first AI winter sets in.
1978—Geoff Hinton starts a postdoc at the University of California–San Diego.
1982—Carnegie Mellon University hires Geoff Hinton.
1984—Geoff Hinton and Yann LeCun meet in France.
1986—David Rumelhart, Geoff Hinton, and Ronald Williams publish their paper on “backpropagation,” expanding the powers of neural networks. Yann LeCun joins Bell Labs in Holmdel, New Jersey, where he begins building LeNet, a neural network that can recognize handwritten digits.
1987—Geoff Hinton leaves Carnegie Mellon for the University of Toronto.
1989—Carnegie Mellon graduate student Dean Pomerleau builds ALVINN, a self-driving car based on a neural network.
1992—Yoshua Bengio meets Yann LeCun while doing postdoctoral research at Bell Labs.
1993—The University of Montreal hires Yoshua Bengio.
1998—Geoff Hinton founds the Gatsby Computational Neuroscience Unit at University College London.
1990s—2000s—Another AI winter.
2000—Geoff Hinton returns to the University of Toronto.
2003—Yann LeCun moves to New York University.
2004—Geoff Hinton starts “neural computation and adaptive perception” workshops with funding from the Canadian government.


Andrew Ng, Jeff Dean, and Greg Corrado found Google Brain. Google deploys a speech recognition service based on deep learning.
2012—Andrew Ng, Jeff Dean, and Greg Corrado publish the Cat Paper. Andrew Ng leaves Google. Geoff Hinton “interns” at Google Brain. Geoff Hinton, Ilya Sutskever, and Alex Krizhevsky publish the AlexNet paper. Geoff Hinton, Ilya Sutskever, and Alex Krizhevsky auction their company, DNNresearch.
2013—Geoff Hinton, Ilya Sutskever, and Alex Krizhevsky join Google. Mark Zuckerberg and Yann LeCun found the Facebook Artificial Intelligence Research lab.
2014—Google acquires DeepMind. Ian Goodfellow publishes the GAN paper, describing a way of generating photos.


pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything by Martin Ford

AI winter, Airbnb, algorithmic bias, algorithmic trading, Alignment Problem, AlphaGo, Amazon Mechanical Turk, Amazon Web Services, artificial general intelligence, Automated Insights, autonomous vehicles, backpropagation, basic income, Big Tech, big-box store, call centre, carbon footprint, Chris Urmson, Claude Shannon: information theory, clean water, cloud computing, commoditize, computer age, computer vision, Computing Machinery and Intelligence, coronavirus, correlation does not imply causation, COVID-19, crowdsourcing, data is the new oil, data science, deep learning, deepfake, DeepMind, Demis Hassabis, deskilling, disruptive innovation, Donald Trump, Elon Musk, factory automation, fake news, fulfillment center, full employment, future of work, general purpose technology, Geoffrey Hinton, George Floyd, gig economy, Gini coefficient, global pandemic, Googley, GPT-3, high-speed rail, hype cycle, ImageNet competition, income inequality, independent contractor, industrial robot, informal economy, information retrieval, Intergovernmental Panel on Climate Change (IPCC), Internet of things, Jeff Bezos, job automation, John Markoff, Kiva Systems, knowledge worker, labor-force participation, Law of Accelerating Returns, license plate recognition, low interest rates, low-wage service sector, Lyft, machine readable, machine translation, Mark Zuckerberg, Mitch Kapor, natural language processing, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, Ocado, OpenAI, opioid epidemic / opioid crisis, passive income, pattern recognition, Peter Thiel, Phillips curve, post scarcity, public intellectual, Ray Kurzweil, recommendation engine, remote working, RFID, ride hailing / ride sharing, Robert Gordon, Rodney Brooks, Rubik’s Cube, Sam Altman, self-driving car, Silicon Valley, Silicon Valley startup, social distancing, SoftBank, South of Market, San Francisco, special economic zone, speech recognition, stealth mode startup, Stephen Hawking, superintelligent machines, TED Talk, The Future of Employment, The Rise and Fall of American Growth, the scientific method, Turing machine, Turing test, Tyler Cowen, Tyler Cowen: Great Stagnation, Uber and Lyft, uber lyft, universal basic income, very high income, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, WikiLeaks, women in the workforce, Y Combinator

Rumelhart, along with Ronald Williams, a computer scientist at Northeastern University, and Geoffrey Hinton, then at Carnegie Mellon, described how the algorithm could be used in what is now considered to be one of the most important scientific papers in artificial intelligence, published in the journal Nature in 1986.10 Backpropagation represented the fundamental conceptual breakthrough that would someday lead deep learning to dominate the field of AI, but it would be decades before computers would become fast enough to truly leverage the approach. Geoffrey Hinton, who had been a young postdoctoral researcher working with Rumelhart at UC San Diego in 1981,11 would go on to become perhaps the most prominent figure in the deep learning revolution.

Williams, “Learning representations by back-propagating errors,” Nature, volume 323, issue 6088, pp. 533–536 (1986), October 9, 1986, www.nature.com/articles/323533a0. 11. Ford, Interview with Geoffrey Hinton, in Architects of Intelligence, p. 73. 12. Dave Gershgorn, “The data that transformed AI research—and possibly the world,” Quartz, July 26, 2017, qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/. 13. Ford, Interview with Geoffrey Hinton, in Architects of Intelligence, p. 77. 14. Email from Jürgen Schmidhuber to Martin Ford, January 28, 2019. 15. Jürgen Schmidhuber, “Critique of paper by ‘Deep Learning Conspiracy’ (Nature 521 p 436),” June 2015, people.idsia.ch/~juergen/deep-learning-conspiracy.html. 16.

The systems, he wrote, have no ability to integrate information from “clinical notes, laboratory values, prior images” and the like. As a result, the technology has so far excelled only with “entities that can be detected with high specificity and sensitivity using only one image (or a few contiguous images) without access to clinical information or prior studies.”48 I suspect that Geoff Hinton would argue that these limitations will inevitably be overcome, and he will very likely turn out to be right in the long run, but I think it will be a gradual process rather than a sudden disruption. An additional reality is that there are a variety of challenging hurdles beyond the capability of the technology itself that will probably make it very difficult to send radiologists—or any other medical specialists—to the unemployment line anytime soon.


pages: 586 words: 186,548

Architects of Intelligence by Martin Ford

3D printing, agricultural Revolution, AI winter, algorithmic bias, Alignment Problem, AlphaGo, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, barriers to entry, basic income, Baxter: Rethink Robotics, Bayesian statistics, Big Tech, bitcoin, Boeing 747, Boston Dynamics, business intelligence, business process, call centre, Cambridge Analytica, cloud computing, cognitive bias, Colonization of Mars, computer vision, Computing Machinery and Intelligence, correlation does not imply causation, CRISPR, crowdsourcing, DARPA: Urban Challenge, data science, deep learning, DeepMind, Demis Hassabis, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, driverless car, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, fake news, Fellow of the Royal Society, Flash crash, future of work, general purpose technology, Geoffrey Hinton, gig economy, Google X / Alphabet X, Gödel, Escher, Bach, Hans Moravec, Hans Rosling, hype cycle, ImageNet competition, income inequality, industrial research laboratory, industrial robot, information retrieval, job automation, John von Neumann, Large Hadron Collider, Law of Accelerating Returns, life extension, Loebner Prize, machine translation, Mark Zuckerberg, Mars Rover, means of production, Mitch Kapor, Mustafa Suleyman, natural language processing, new economy, Nick Bostrom, OpenAI, opioid epidemic / opioid crisis, optical character recognition, paperclip maximiser, pattern recognition, phenotype, Productivity paradox, radical life extension, Ray Kurzweil, recommendation engine, Robert Gordon, Rodney Brooks, Sam Altman, self-driving car, seminal paper, sensor fusion, sentiment analysis, Silicon Valley, smart cities, social intelligence, sparse data, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, synthetic biology, systems thinking, Ted Kaczynski, TED Talk, The Rise and Fall of American Growth, theory of mind, Thomas Bayes, Travis Kalanick, Turing test, universal basic income, Wall-E, Watson beat the top human players on Jeopardy!, women in the workforce, working-age population, workplace surveillance , zero-sum game, Zipcar

MARTIN FORD: So, it was a strategic investment on the part of the Canadian government to keep deep learning alive?

GEOFFREY HINTON: Yes. Basically, the Canadian government is significantly investing in advanced deep learning by spending half a million dollars a year, which is pretty efficient for something that’s going to turn into a multi-billion-dollar industry.

MARTIN FORD: Speaking of Canadians, do you have any interaction with your fellow faculty member, Jordan Peterson? It seems like there’s all kinds of disruption coming out of the University of Toronto...

GEOFFREY HINTON: Ha! Well, all I’ll say about that is that he’s someone who doesn’t know when to keep his mouth shut.

GEOFFREY HINTON received his undergraduate degree from King’s College, Cambridge, and his PhD in Artificial Intelligence from the University of Edinburgh in 1978.

His research has covered many topics related to AI, such as machine learning, knowledge representation, and computer vision, and he has received numerous awards and distinctions, including the IJCAI Computers and Thought Award and election as a fellow of the American Association for the Advancement of Science, the Association for the Advancement of Artificial Intelligence, and the Association for Computing Machinery.

Chapter 4. GEOFFREY HINTON

In the past when AI has been overhyped—including backpropagation in the 1980s—people were expecting it to do great things, and it didn’t actually do things as great as they hoped. Today, it’s already done great things, so it can’t possibly all be just hype.

EMERITUS DISTINGUISHED PROFESSOR OF COMPUTER SCIENCE, UNIVERSITY OF TORONTO
VICE PRESIDENT & ENGINEERING FELLOW, GOOGLE

Geoffrey Hinton is sometimes known as the Godfather of Deep Learning, and he has been the driving force behind some of its key technologies, such as backpropagation, Boltzmann machines, and capsule networks.

They got almost half the error rate of the best computer vision systems, and they were using mainly techniques developed in Yann LeCun’s lab but mixed in with a few of our own techniques as well.

MARTIN FORD: This was the ImageNet competition?

GEOFFREY HINTON: Yes, and what happened then was what should happen in science. One method that people used to think of as complete nonsense had now worked much better than the method they believed in, and within two years, they all switched. So, for things like object classification, nobody would dream of trying to do it without using a neural network now.

MARTIN FORD: This was back in 2012, I believe. Was that the inflection point for deep learning?

GEOFFREY HINTON: For computer vision, that was the inflection point. For speech, the inflection point was a few years earlier.


pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future by Luke Dormehl

"World Economic Forum" Davos, Ada Lovelace, agricultural Revolution, AI winter, Albert Einstein, Alexey Pajitnov wrote Tetris, algorithmic management, algorithmic trading, AlphaGo, Amazon Mechanical Turk, Apple II, artificial general intelligence, Automated Insights, autonomous vehicles, backpropagation, Bletchley Park, book scanning, borderless world, call centre, cellular automata, Charles Babbage, Claude Shannon: information theory, cloud computing, computer vision, Computing Machinery and Intelligence, correlation does not imply causation, crowdsourcing, deep learning, DeepMind, driverless car, drone strike, Elon Musk, Flash crash, Ford Model T, friendly AI, game design, Geoffrey Hinton, global village, Google X / Alphabet X, Hans Moravec, hive mind, industrial robot, information retrieval, Internet of things, iterative process, Jaron Lanier, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, life extension, Loebner Prize, machine translation, Marc Andreessen, Mark Zuckerberg, Menlo Park, Mustafa Suleyman, natural language processing, Nick Bostrom, Norbert Wiener, out of africa, PageRank, paperclip maximiser, pattern recognition, radical life extension, Ray Kurzweil, recommendation engine, remote working, RFID, scientific management, self-driving car, Silicon Valley, Skype, smart cities, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia, social intelligence, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, tech billionaire, technological singularity, The Coming Technological Singularity, The Future of Employment, Tim Cook: Apple, Tony Fadell, too big to fail, traumatic brain injury, Turing machine, Turing test, Vernor Vinge, warehouse robotics, Watson beat the top human players on Jeopardy!

These included the likes of David Rumelhart and James McClelland, two cognitive scientists at the University of California San Diego, who formed an artificial neural network group which became incredibly influential in its own right. There was also a man named Geoff Hinton.

The Patron Saint of Neural Networks

Born in 1947, Geoff Hinton is one of the most important figures in modern neural networks. An unassuming British computer scientist, Hinton has influenced the development of his chosen field on a level few others can approach. He comes from a long line of impressive mathematical thinkers: his great-great-grandfather was the famous logician George Boole, whose Boolean algebra laid the foundations for modern computer science.

The noise it produces sounds like vocal exercises a singer might perform to warm up his or her voice. After training on 1,000 words, NETtalk’s speech became far more recognisably human. ‘We were absolutely amazed,’ Sejnowski says. ‘Not least because computers at the time had less computing power than your watch does today.’

The Connectionists

Aided by the work of Geoff Hinton and others, the field of neural nets boomed. In the grand tradition of each successive generation renaming themselves, the new researchers described themselves as ‘connectionists’, since they were interested in replicating the neural connections in the brain. By 1991, there were 10,000 active connectionist researchers in the United States alone.

It would be another fifteen years, until October 2010, before Google announced its own self-driving car initiative. However, thanks to his groundbreaking work in neural nets, Dean Pomerleau had proved his point.

Welcome to Deep Learning

The next significant advance for neural networks took place in the mid-2000s. In 2005, Geoff Hinton was working at the University of Toronto, having recently returned from setting up the Gatsby Computational Neuroscience Unit at University College London. By this time it was clear that the Internet was helping to generate enormous data sets which would have been unimaginable even a decade before.


The Deep Learning Revolution (The MIT Press) by Terrence J. Sejnowski

AI winter, Albert Einstein, algorithmic bias, algorithmic trading, AlphaGo, Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, augmented reality, autonomous vehicles, backpropagation, Baxter: Rethink Robotics, behavioural economics, bioinformatics, cellular automata, Claude Shannon: information theory, cloud computing, complexity theory, computer vision, conceptual framework, constrained optimization, Conway's Game of Life, correlation does not imply causation, crowdsourcing, Danny Hillis, data science, deep learning, DeepMind, delayed gratification, Demis Hassabis, Dennis Ritchie, discovery of DNA, Donald Trump, Douglas Engelbart, driverless car, Drosophila, Elon Musk, en.wikipedia.org, epigenetics, Flynn Effect, Frank Gehry, future of work, Geoffrey Hinton, Google Glasses, Google X / Alphabet X, Guggenheim Bilbao, Gödel, Escher, Bach, haute couture, Henri Poincaré, I think there is a world market for maybe five computers, industrial robot, informal economy, Internet of things, Isaac Newton, Jim Simons, John Conway, John Markoff, John von Neumann, language acquisition, Large Hadron Collider, machine readable, Mark Zuckerberg, Minecraft, natural language processing, Neil Armstrong, Netflix Prize, Norbert Wiener, OpenAI, orbital mechanics / astrodynamics, PageRank, pattern recognition, pneumatic tube, prediction markets, randomized controlled trial, Recombinant DNA, recommendation engine, Renaissance Technologies, Rodney Brooks, self-driving car, Silicon Valley, Silicon Valley startup, Socratic dialogue, speech recognition, statistical model, Stephen Hawking, Stuart Kauffman, theory of mind, Thomas Bayes, Thomas Kuhn: the structure of scientific revolutions, traveling salesman, Turing machine, Von Neumann architecture, Watson beat the top human players on Jeopardy!, world market for maybe five computers, X Prize, Yogi Berra

At the opening session of NIPS 2017 in Long Beach, I marveled at the growth of NIPS: “Little did I know 30 years ago at the first NIPS conference that I would be standing here today addressing 8,000 attendees—I thought it would only take 10 years.” I visited Geoff Hinton at Mountain View in April 2016. Google Brain has an entire floor of a building. We reminisced about the old days and came to the conclusion that we had won, but it took a lot longer than we had expected. Along the way, Geoff was elected to the Royal Societies of both England and Canada, and I was elected to the National Academy of Sciences, the National Academy of Medicine, the National Academy of Engineering, the National Academy of Inventors, and the American Academy of Arts and Sciences, a rare honor. I owe Geoffrey Hinton a great debt of gratitude for sharing his insights into computing with networks over many years.

Recent experiments on neural network learning of language support the gradual acquisition of inflectional morphology, consistent with human learning.12 The success of deep learning with Google Translate and other natural language applications in capturing the nuances of language further supports the possibility that brains do not need to use explicit rules for language, even though behavior might suggest that they do. Geoffrey Hinton, David Touretzky, and I organized the first Connectionist Summer School at Carnegie Mellon in 1986 (figure 8.3), at a time when only a few universities had faculty who offered courses on neural networks.

[Figure 8.3: Students at the 1986 Connectionist Summer School at Carnegie Mellon University. Geoffrey Hinton is in the first row, third from right, flanked by Terry Sejnowski and James McClelland. This photo is a who’s who in neural computing today. Neural networks in the 1980s were a bit of twenty-first-century science in the twentieth century. Courtesy of Geoffrey Hinton.]

Even a perfect physical model of how a neuron worked wouldn’t tell us what its purpose was. Neurons are in the business of processing signals that carry information, and computation was the missing link in trying to understand nature. I have over the last forty years been pursuing this goal, pioneering a new field called “computational neuroscience.”

[Figure 4.6: Terry Sejnowski and Geoffrey Hinton discussing network models of vision in Boston in 1980. This was one year after Geoffrey and I met at the Parallel Models of Associative Memory workshop in La Jolla and one year before I started my lab at Johns Hopkins in Baltimore and Geoffrey started his research group at Carnegie Mellon in Pittsburgh. Courtesy of Geoffrey Hinton.]


pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again by Eric Topol

"World Economic Forum" Davos, 23andMe, Affordable Care Act / Obamacare, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic bias, AlphaGo, Apollo 11, artificial general intelligence, augmented reality, autism spectrum disorder, autonomous vehicles, backpropagation, Big Tech, bioinformatics, blockchain, Cambridge Analytica, cloud computing, cognitive bias, Colonization of Mars, computer age, computer vision, Computing Machinery and Intelligence, conceptual framework, creative destruction, CRISPR, crowdsourcing, Daniel Kahneman / Amos Tversky, dark matter, data science, David Brooks, deep learning, DeepMind, Demis Hassabis, digital twin, driverless car, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, fake news, fault tolerance, gamification, general purpose technology, Geoffrey Hinton, George Santayana, Google Glasses, ImageNet competition, Jeff Bezos, job automation, job satisfaction, Joi Ito, machine translation, Mark Zuckerberg, medical residency, meta-analysis, microbiome, move 37, natural language processing, new economy, Nicholas Carr, Nick Bostrom, nudge unit, OpenAI, opioid epidemic / opioid crisis, pattern recognition, performance metric, personalized medicine, phenotype, placebo effect, post-truth, randomized controlled trial, recommendation engine, Rubik’s Cube, Sam Altman, self-driving car, Silicon Valley, Skinner box, speech recognition, Stephen Hawking, techlash, TED Talk, text mining, the scientific method, Tim Cook: Apple, traumatic brain injury, trolley problem, War on Poverty, Watson beat the top human players on Jeopardy!, working-age population

That souring, along with a serious reduction of research output and grant support, led to the “AI winter,” as it became known, which lasted about twenty years. It started to come out of hibernation when the term “deep learning” was coined by Rina Dechter in 1986 and later popularized by Geoffrey Hinton, Yann LeCun, and Yoshua Bengio. By the late 1980s, multilayered or deep neural networks (DNNs) were gaining considerable interest, and the field came back to life. A seminal Nature paper in 1986 by David Rumelhart, Geoffrey Hinton, and Ronald Williams on backpropagation provided an algorithmic method for automatic error correction in neural networks and reignited interest in the field.15 It turned out this was the heart of deep learning: adjusting the weights of the neurons of prior layers to achieve maximal accuracy for the network output.
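In standard notation (not taken from Topol’s text), that weight adjustment is a gradient-descent step on the network’s error L:

    w_{ij} \leftarrow w_{ij} - \eta \, \frac{\partial L}{\partial w_{ij}}

where \eta is a small learning rate and the partial derivatives for weights in earlier layers are obtained by applying the chain rule backward from the output layer, which is the “automatic error correction” the paper described.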

But instead of the static BLT, we’ve got data moving through layers of computations, extracting high-level features from raw sensory data, a veritable sequence of computations. Importantly, the layers are not designed by humans; indeed, they are hidden from the human users, and they are adjusted by techniques like Geoff Hinton’s backpropagation as a DNN interacts with the data. We’ll use an example of a machine being trained to read chest X-rays. Thousands of chest X-rays, read and labeled with diagnoses by expert radiologists, provide the ground truths for the network to learn from (Figure 4.5). Once trained, the network is ready for an unlabeled chest X-ray to be input.
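As a rough illustration of that training loop, here is a minimal sketch of a two-layer network trained by backpropagation and then applied to an unseen input. All data, sizes, and labels are made up for the example; this is nothing like a production radiology system.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in data: 200 tiny "images" flattened to 64 features, with a
    # synthetic 0/1 label (say, normal vs. abnormal). Real systems learn
    # from thousands of radiologist-labeled X-rays; these are random.
    X = rng.normal(size=(200, 64))
    y = (X[:, :8].sum(axis=1) > 0).astype(float).reshape(-1, 1)

    # One hidden layer of 16 units; the weights are what training adjusts.
    W1 = rng.normal(scale=0.1, size=(64, 16)); b1 = np.zeros(16)
    W2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    eta = 0.5
    for step in range(2000):
        # Forward pass: data flows up through the layers.
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backward pass: the chain rule sends the error back down.
        d_out = (p - y) / len(X)            # cross-entropy gradient at output
        d_h = (d_out @ W2.T) * h * (1 - h)  # gradient at the hidden layer
        W2 -= eta * h.T @ d_out;  b2 -= eta * d_out.sum(axis=0)
        W1 -= eta * X.T @ d_h;    b1 -= eta * d_h.sum(axis=0)

    # Once trained, the network labels an input it has never seen.
    x_new = rng.normal(size=(1, 64))
    print(sigmoid(sigmoid(x_new @ W1 + b1) @ W2 + b2))

The hidden-layer weights are never set by hand, matching the passage’s point that the intermediate layers are shaped entirely by the training procedure.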

Versus M.D.”17 The adversarial relationship between humans and their technology, which had a long history dating back to the steam engine and the first Industrial Revolution, had been rekindled.
1936—Turing paper (Alan Turing)
1943—Artificial neural network (Warren McCulloch, Walter Pitts)
1955—Term “artificial intelligence” coined (John McCarthy)
1957—Predicted ten years for AI to beat human at chess (Herbert Simon)
1958—Perceptron (single-layer neural network) (Frank Rosenblatt)
1959—Machine learning described (Arthur Samuel)
1964—ELIZA, the first chatbot
1964—We know more than we can tell (Michael Polanyi’s paradox)
1969—Question AI viability (Marvin Minsky)
1986—Multilayer neural network (NN) (Geoffrey Hinton)
1989—Convolutional NN (Yann LeCun)
1991—Natural-language processing NN (Sepp Hochreiter, Jürgen Schmidhuber)
1997—Deep Blue wins in chess (Garry Kasparov)
2004—Self-driving vehicle, Mojave Desert (DARPA Challenge)
2007—ImageNet launches
2011—IBM vs. Jeopardy! champions
2011—Speech recognition NN (Microsoft)
2012—University of Toronto ImageNet classification and cat video recognition (Google Brain, Andrew Ng, Jeff Dean)
2014—DeepFace facial recognition (Facebook)
2015—DeepMind vs.


pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell

3D printing, Ada Lovelace, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alfred Russel Wallace, algorithmic bias, AlphaGo, Andrew Wiles, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, augmented reality, autonomous vehicles, basic income, behavioural economics, Bletchley Park, blockchain, Boston Dynamics, brain emulation, Cass Sunstein, Charles Babbage, Claude Shannon: information theory, complexity theory, computer vision, Computing Machinery and Intelligence, connected car, CRISPR, crowdsourcing, Daniel Kahneman / Amos Tversky, data science, deep learning, deepfake, DeepMind, delayed gratification, Demis Hassabis, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Ernest Rutherford, fake news, Flash crash, full employment, future of work, Garrett Hardin, Geoffrey Hinton, Gerolamo Cardano, Goodhart's law, Hans Moravec, ImageNet competition, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invention of the wheel, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Nash: game theory, John von Neumann, Kenneth Arrow, Kevin Kelly, Law of Accelerating Returns, luminiferous ether, machine readable, machine translation, Mark Zuckerberg, multi-armed bandit, Nash equilibrium, Nick Bostrom, Norbert Wiener, NP-complete, OpenAI, openstreetmap, P = NP, paperclip maximiser, Pareto efficiency, Paul Samuelson, Pierre-Simon Laplace, positional goods, probability theory / Blaise Pascal / Pierre de Fermat, profit maximization, RAND corporation, random walk, Ray Kurzweil, Recombinant DNA, recommendation engine, RFID, Richard Thaler, ride hailing / ride sharing, Robert Shiller, robotic process automation, Rodney Brooks, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, smart cities, smart contracts, social intelligence, speech recognition, Stephen Hawking, Steven Pinker, superintelligent machines, surveillance capitalism, Thales of Miletus, The Future of Employment, The Theory of the Leisure Class by Thorstein Veblen, Thomas Bayes, Thorstein Veblen, Tragedy of the Commons, transport as a service, trolley problem, Turing machine, Turing test, universal basic income, uranium enrichment, vertical integration, Von Neumann architecture, Wall-E, warehouse robotics, Watson beat the top human players on Jeopardy!, web application, zero-sum game

For the task of recognizing objects in photographs, deep learning algorithms have demonstrated remarkable performance. The first inkling of this came in the 2012 ImageNet competition, which provides training data consisting of 1.2 million labeled images in one thousand categories, and then requires the algorithm to label one hundred thousand new images.4 Geoff Hinton, a British computational psychologist who was at the forefront of the first neural network revolution in the 1980s, had been experimenting with a very large deep convolutional network: 650,000 nodes and 60 million parameters. He and his group at the University of Toronto achieved an ImageNet error rate of 15 percent, a dramatic improvement on the previous best of 26 percent.5 By 2015, dozens of teams were using deep learning methods and the error rate was down to 5 percent, comparable to that of a human who had spent weeks learning to recognize the thousand categories in the test.6 By 2017, the machine error rate was 2 percent.

Blog post on inceptionism research at Google: Alexander Mordvintsev, Christopher Olah, and Mike Tyka, “Inceptionism: Going deeper into neural networks,” Google AI Blog, June 17, 2015. The idea seems to have originated with J. P. Lewis, “Creation by refinement: A creativity paradigm for gradient descent learning networks,” in Proceedings of the IEEE International Conference on Neural Networks (IEEE, 1988). 8. News article on Geoff Hinton having second thoughts about deep networks: Steve LeVine, “Artificial intelligence pioneer says we need to start over,” Axios, September 15, 2017. 9. A catalog of shortcomings of deep learning: Gary Marcus, “Deep learning: A critical appraisal,” arXiv:1801.00631 (2018). 10. A popular textbook on deep learning, with a frank assessment of its weaknesses: François Chollet, Deep Learning with Python (Manning Publications, 2017). 11.

The Baldwin effect in evolution is usually attributed to the following paper: James Baldwin, “A new factor in evolution,” American Naturalist 30 (1896): 441–51. 8. The core idea of the Baldwin effect also appears in the following work: Conwy Lloyd Morgan, Habit and Instinct (Edward Arnold, 1896). 9. A modern analysis and computer implementation demonstrating the Baldwin effect: Geoffrey Hinton and Steven Nowlan, “How learning can guide evolution,” Complex Systems 1 (1987): 495–502. 10. Further elucidation of the Baldwin effect by a computer model that includes the evolution of the internal reward-signaling circuitry: David Ackley and Michael Littman, “Interactions between learning and evolution,” in Artificial Life II, ed.


pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values by Brian Christian

Albert Einstein, algorithmic bias, Alignment Problem, AlphaGo, Amazon Mechanical Turk, artificial general intelligence, augmented reality, autonomous vehicles, backpropagation, butterfly effect, Cambridge Analytica, Cass Sunstein, Claude Shannon: information theory, computer vision, Computing Machinery and Intelligence, data science, deep learning, DeepMind, Donald Knuth, Douglas Hofstadter, effective altruism, Elaine Herzberg, Elon Musk, Frances Oldham Kelsey, game design, gamification, Geoffrey Hinton, Goodhart's law, Google Chrome, Google Glasses, Google X / Alphabet X, Gödel, Escher, Bach, Hans Moravec, hedonic treadmill, ImageNet competition, industrial robot, Internet Archive, John von Neumann, Joi Ito, Kenneth Arrow, language acquisition, longitudinal study, machine translation, mandatory minimum, mass incarceration, multi-armed bandit, natural language processing, Nick Bostrom, Norbert Wiener, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, OpenAI, Panopticon Jeremy Bentham, pattern recognition, Peter Singer: altruism, Peter Thiel, precautionary principle, premature optimization, RAND corporation, recommendation engine, Richard Feynman, Rodney Brooks, Saturday Night Live, selection bias, self-driving car, seminal paper, side project, Silicon Valley, Skinner box, sparse data, speech recognition, Stanislav Petrov, statistical model, Steve Jobs, strong AI, the map is not the territory, theory of mind, Tim Cook: Apple, W. E. B. Du Bois, Wayback Machine, zero-sum game

By 1973, both the US and British governments have pulled their funding support for neural network research, and when a young English psychology student named Geoffrey Hinton declares that he wants to do his doctoral work on neural networks, again and again he is met with the same reply: “Minsky and Papert,” he is told, “have proved that these models were no good.”10

THE STORY OF ALEXNET

It is 2012 in Toronto, and Alex Krizhevsky’s bedroom is too hot to sleep. His computer, attached to twin Nvidia GTX 580 GPUs, has been running day and night at its maximum thermal load, its fans pushing out hot exhaust, for two weeks. “It was very hot,” he says. “And it was loud.”11 He is teaching the machine how to see. Geoffrey Hinton, Krizhevsky’s mentor, is now 64 years old and has not given up.

But I am certain that, at a minimum, conversations and exchanges with the following people have made the book what it is: Pieter Abbeel, Rebecca Ackerman, Dave Ackley, Ross Exo Adams, Blaise Agüera y Arcas, Jacky Alciné, Dario Amodei, McKane Andrus, Julia Angwin, Stuart Armstrong, Gustaf Arrhenius, Amanda Askell, Mayank Bansal, Daniel Barcay, Solon Barocas, Renata Barreto, Andrew Barto, Basia Bartz, Marc Bellemare, Tolga Bolukbasi, Nick Bostrom, Malo Bourgon, Tim Brennan, Miles Brundage, Joanna Bryson, Krister Bykvist, Maya Çakmak, Ryan Carey, Joseph Carlsmith, Rich Caruana, Ruth Chang, Alexandra Chouldechova, Randy Christian, Paul Christiano, Jonathan Cohen, Catherine Collins, Sam Corbett-Davies, Meehan Crist, Andrew Critch, Fiery Cushman, Allan Dafoe, Raph D’Amico, Peter Dayan, Michael Dennis, Shiri Dori-Hacohen, Anca Drăgan, Eric Drexler, Rachit Dubey, Cynthia Dwork, Peter Eckersley, Joe Edelman, Owain Evans, Tom Everitt, Ed Felten, Daniel Filan, Jaime Fisac, Luciano Floridi, Carrick Flynn, Jeremy Freeman, Yarin Gal, Surya Ganguli, Scott Garrabrant, Vael Gates, Tom Gilbert, Adam Gleave, Paul Glimcher, Sharad Goel, Adam Goldstein, Ian Goodfellow, Bryce Goodman, Alison Gopnik, Samir Goswami, Hilary Greaves, Joshua Greene, Tom Griffiths, David Gunning, Gillian Hadfield, Dylan Hadfield-Menell, Moritz Hardt, Tristan Harris, David Heeger, Dan Hendrycks, Geoff Hinton, Matt Huebert, Tim Hwang, Geoffrey Irving, Adam Kalai, Henry Kaplan, Been Kim, Perri Klass, Jon Kleinberg, Caroline Knapp, Victoria Krakovna, Frances Kreimer, David Kreuger, Kaitlyn Krieger, Mike Krieger, Alexander Krizhevsky, Jacob Lagerros, Lily Lamboy, Lydia Laurenson, James Lee, Jan Leike, Ayden LeRoux, Karen Levy, Falk Lieder, Michael Littman, Tania Lombrozo, Will MacAskill, Scott Mauvais, Margaret McCarthy, Andrew Meltzoff, Smitha Milli, Martha Minow, Karthika Mohan, Adrien Morisot, Julia Mosquera, Sendhil Mullainathan, Elon Musk, Yael Niv, Brandie Nonnecke, Peter Norvig, Alexandr Notchenko, Chris Olah, Catherine Olsson, Toby Ord, Tim O’Reilly, Laurent Orseau, Pedro Ortega, Michael Page, Deepak Pathak, Alex Peysakhovich, Gualtiero Piccinini, Dean Pomerleau, James Portnow, Aza Raskin, Stéphane Ross, Cynthia Rudin, Jack Rusher, Stuart Russell, Anna Salamon, Anders Sandberg, Wolfram Schultz, Laura Schulz, Julie Shah, Rohin Shah, Max Shron, Carl Shulman, Satinder Singh, Holly Smith, Nate Soares, Daisy Stanton, Jacob Steinhardt, Jonathan Stray, Rachel Sussman, Jaan Tallinn, Milind Tambe, Sofi Thanhauser, Tena Thau, Jasjeet Thind, Travis Timmerman, Brian Tse, Alexander Matt Turner, Phebe Vayanos, Kerstin Vignard, Chris Wiggins, Cutter Wood, and Elana Zeide.

“We shall then pull our wheel chairs together, look at the tasteless cottage cheese in front of us, & recount the famous story of the conversation at the house of old GLAUCUS, where PROTAGORAS & the sophist HIPPIAS were staying: & try once more to penetrate their subtle & profound paradoxes about the knower & the known.” And then, in a trembling script, all caps: “BE THOU WELL.” 10. Geoff Hinton, “Lecture 2.2—Perceptrons: First-generation Neural Networks” (lecture), Neural Networks for Machine Learning, Coursera, 2012. 11. Alex Krizhevsky, personal interview, June 12, 2019. 12. The method for determining the gradient update in a deep network is known as “backpropagation”; it is essentially the chain rule from calculus, although it requires the use of differentiable neurons, not the all-or-nothing neurons considered by McCulloch, Pitts, and Rosenblatt.
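Spelled out for a single sigmoid unit y = \sigma(\sum_j w_j x_j) (standard calculus, not from Christian’s notes), the chain rule in note 12 gives

    \frac{\partial L}{\partial w_i} = \frac{\partial L}{\partial y} \cdot \sigma'\!\Big(\sum_j w_j x_j\Big) \cdot x_i

which only works because \sigma has a derivative everywhere; the all-or-nothing step unit of McCulloch, Pitts, and Rosenblatt has zero slope except at the threshold, so no error signal could flow back through it.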


pages: 371 words: 108,317

The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future by Kevin Kelly

A Declaration of the Independence of Cyberspace, Aaron Swartz, AI winter, Airbnb, Albert Einstein, Alvin Toffler, Amazon Web Services, augmented reality, bank run, barriers to entry, Baxter: Rethink Robotics, bitcoin, blockchain, book scanning, Brewster Kahle, Burning Man, cloud computing, commoditize, computer age, Computer Lib, connected car, crowdsourcing, dark matter, data science, deep learning, DeepMind, dematerialisation, Downton Abbey, driverless car, Edward Snowden, Elon Musk, Filter Bubble, Freestyle chess, Gabriella Coleman, game design, Geoffrey Hinton, Google Glasses, hive mind, Howard Rheingold, index card, indoor plumbing, industrial robot, Internet Archive, Internet of things, invention of movable type, invisible hand, Jaron Lanier, Jeff Bezos, job automation, John Markoff, John Perry Barlow, Kevin Kelly, Kickstarter, lifelogging, linked data, Lyft, M-Pesa, machine readable, machine translation, Marc Andreessen, Marshall McLuhan, Mary Meeker, means of production, megacity, Minecraft, Mitch Kapor, multi-sided market, natural language processing, Netflix Prize, Network effects, new economy, Nicholas Carr, off-the-grid, old-boy network, peer-to-peer, peer-to-peer lending, personalized medicine, placebo effect, planetary scale, postindustrial economy, Project Xanadu, recommendation engine, RFID, ride hailing / ride sharing, robo advisor, Rodney Brooks, self-driving car, sharing economy, Silicon Valley, slashdot, Snapchat, social graph, social web, software is eating the world, speech recognition, Stephen Hawking, Steven Levy, Ted Nelson, TED Talk, The future is already here, the long tail, the scientific method, transport as a service, two-sided market, Uber for X, uber lyft, value engineering, Watson beat the top human players on Jeopardy!, WeWork, Whole Earth Review, Yochai Benkler, yottabyte, zero-sum game

thousand games of chess: Personal correspondence with Daylen Yang (author of the Stockfish chess app), Stefan Meyer-Kahlen (developed the multiple award-winning computer chess program Shredder), and Danny Kopec (American chess International Master and cocreator of one of the standard computer chess testing systems), September 2014. “akin to building a rocket ship”: Caleb Garling, “Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, Not Just Machines,” Wired, May 5, 2015. In 2006, Geoff Hinton: Kate Allen, “How a Toronto Professor’s Research Revolutionized Artificial Intelligence,” Toronto Star, April 17, 2015. he dubbed “deep learning”: Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep Learning,” Nature 521, no. 7553 (2015): 436–44. the network effect: Carl Shapiro and Hal R. Varian, Information Rules: A Strategic Guide to the Network Economy (Boston: Harvard Business Review Press, 1998).

The next level might group two eyes together and pass that meaningful chunk on to another level of hierarchical structure that associates it with the pattern of a nose. It can take many millions of these nodes (each one producing a calculation feeding others around it), stacked up to 15 levels high, to recognize a human face. In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed “deep learning.” He was able to mathematically optimize results from each layer so that the learning accumulated faster as it proceeded up the stack of layers. Deep-learning algorithms accelerated enormously a few years later when they were ported to GPUs.
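Hinton’s 2006 method actually trained a stack of restricted Boltzmann machines one layer at a time; the sketch below uses small autoencoder layers as a simpler stand-in for the same greedy layer-by-layer idea. Sizes, data, and the train_layer helper are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((500, 64))  # made-up stand-in data (e.g. image patches)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_layer(data, n_hidden, eta=0.5, steps=300):
        # Train one layer to reproduce its own input (a tiny autoencoder),
        # so its hidden units become feature detectors for that input.
        n_in = data.shape[1]
        W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
        W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
        for _ in range(steps):
            h = sigmoid(data @ W_enc)      # features
            r = h @ W_dec                  # linear reconstruction of input
            err = (r - data) / len(data)   # reconstruction-error gradient
            W_enc -= eta * data.T @ ((err @ W_dec.T) * h * (1 - h))
            W_dec -= eta * h.T @ err
        return W_enc

    # Greedy stacking: train layer 1 on the raw data, freeze it, then feed
    # its features upward as the training data for layer 2, and so on.
    weights, features = [], X
    for n_hidden in (32, 16, 8):
        W = train_layer(features, n_hidden)
        weights.append(W)
        features = sigmoid(features @ W)   # input for the next layer up

Because each layer is optimized before the next one is trained on its outputs, learning accumulates up the stack instead of every layer waiting on a single end-to-end signal, which is the acceleration the passage describes.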


pages: 447 words: 111,991

Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It by Azeem Azhar

"Friedman doctrine" OR "shareholder theory", "World Economic Forum" Davos, 23andMe, 3D printing, A Declaration of the Independence of Cyberspace, Ada Lovelace, additive manufacturing, air traffic controllers' union, Airbnb, algorithmic management, algorithmic trading, Amazon Mechanical Turk, autonomous vehicles, basic income, Berlin Wall, Bernie Sanders, Big Tech, Bletchley Park, Blitzscaling, Boeing 737 MAX, book value, Boris Johnson, Bretton Woods, carbon footprint, Chris Urmson, Citizen Lab, Clayton Christensen, cloud computing, collective bargaining, computer age, computer vision, contact tracing, contact tracing app, coronavirus, COVID-19, creative destruction, crowdsourcing, cryptocurrency, cuban missile crisis, Daniel Kahneman / Amos Tversky, data science, David Graeber, David Ricardo: comparative advantage, decarbonisation, deep learning, deglobalization, deindustrialization, dematerialisation, Demis Hassabis, Diane Coyle, digital map, digital rights, disinformation, Dissolution of the Soviet Union, Donald Trump, Double Irish / Dutch Sandwich, drone strike, Elon Musk, emotional labour, energy security, Fairchild Semiconductor, fake news, Fall of the Berlin Wall, Firefox, Frederick Winslow Taylor, fulfillment center, future of work, Garrett Hardin, gender pay gap, general purpose technology, Geoffrey Hinton, gig economy, global macro, global pandemic, global supply chain, global value chain, global village, GPT-3, Hans Moravec, happiness index / gross national happiness, hiring and firing, hockey-stick growth, ImageNet competition, income inequality, independent contractor, industrial robot, intangible asset, Jane Jacobs, Jeff Bezos, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, Just-in-time delivery, Kickstarter, Kiva Systems, knowledge worker, Kodak vs Instagram, Law of Accelerating Returns, lockdown, low skilled workers, lump of labour, Lyft, manufacturing employment, Marc Benioff, Mark Zuckerberg, megacity, Mitch Kapor, Mustafa Suleyman, Network effects, new economy, NSO Group, Ocado, offshore financial centre, OpenAI, PalmPilot, Panopticon Jeremy Bentham, Peter Thiel, Planet Labs, price anchoring, RAND corporation, ransomware, Ray Kurzweil, remote working, RFC: Request For Comment, Richard Florida, ride hailing / ride sharing, Robert Bork, Ronald Coase, Ronald Reagan, Salesforce, Sam Altman, scientific management, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, Social Responsibility of Business Is to Increase Its Profits, software as a service, Steve Ballmer, Steve Jobs, Stuxnet, subscription business, synthetic biology, tacit knowledge, TaskRabbit, tech worker, The Death and Life of Great American Cities, The Future of Employment, The Nature of the Firm, Thomas Malthus, TikTok, Tragedy of the Commons, Turing machine, Uber and Lyft, Uber for X, uber lyft, universal basic income, uranium enrichment, vertical integration, warehouse automation, winner-take-all economy, workplace surveillance , Yom Kippur War

, Time, 7 February 1972 <http://content.time.com/time/subscriber/article/0,33009,905747,00.html> [accessed 3 April 2021]. 4 John Maynard Keynes, ‘Economic Possibilities for Our Grandchildren’, in Essays in Persuasion (London: Palgrave Macmillan UK, 2010), pp. 321–332 <https://doi.org/10.1007/978-1-349-59072-8_25>. 5 Creative Destruction Lab, ‘Geoff Hinton: On Radiology’, 24 November 2016 <https://www.youtube.com/watch?v=2HMPRXstSvQ> [accessed 24 February 2021]. 6 Paul Daugherty, H. James Wilson and Paul Michelman, ‘Revisiting the Jobs That Artificial Intelligence Will Create’, MIT Sloan Management Review (Summer 2017). 7 Lana Bandoim, ‘Robots Are Cleaning Grocery Store Floors During the Coronavirus Outbreak’, Forbes, 8 April 2020 <https://www.forbes.com/sites/lanabandoim/2020/04/08/robots-are-cleaning-grocery-store-floors-during-the-coronavirus-outbreak/> [accessed 24 February 2021]. 8 Jame DiBiasio, ‘A.I.

By 2010, Moore’s Law had resulted in enough power to facilitate a new kind of machine learning, ‘deep learning’, which involved creating layers of artificial neurons modelled on the cells that underpin human brains. These ‘neural networks’ had long been heralded as the next big thing in AI. Yet they had been stymied by a lack of computational power. Not any more, however. In 2012, a group of leading AI researchers – Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton – developed a ‘deep convolutional neural network’ which applied deep learning to the kinds of image-sorting tasks that AIs had long struggled with. It was rooted in extraordinary computing clout. The neural network contained 650,000 neurons and 60 million ‘parameters’, settings you could use to tune the system.
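The parameter count of such a network is simple arithmetic over its layers. As a back-of-the-envelope check (the layer shape comes from the published AlexNet paper, not from Azhar’s text), the first convolutional layer has 96 filters, each spanning 11 x 11 pixels across 3 colour channels:

    # Parameters in one convolutional layer:
    # filters x (kernel height x kernel width x input channels + 1 bias)
    filters, kh, kw, channels = 96, 11, 11, 3   # AlexNet's first layer
    print(filters * (kh * kw * channels + 1))   # 34944 of the ~60 million

Summing counts like this over every layer, most of them in the fully connected layers near the output, is what yields the 60 million total.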

The rise of newly automated workplaces raises the prospect of mass redundancy. And it is framed as a more existential threat than Keynes’s fears of technological unemployment. Soon, we are told, we’ll reach a point where automated systems will render most of us unemployed and unemployable. In 2016, for example, Geoffrey Hinton – one of the AI pioneers we met earlier – publicly mused on the prospects of radiologists, the specialist doctors who deal with X-rays, computerised tomography and magnetic resonance imaging scans. Radiologists, Hinton told a small crowd of AI researchers and founders, were ‘like the coyote that’s already over the edge of the cliff, but hasn’t yet looked down, so doesn’t know there’s no ground underneath him.


pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, Avi Goldfarb

Abraham Wald, Ada Lovelace, AI winter, Air France Flight 447, Airbus A320, algorithmic bias, AlphaGo, Amazon Picking Challenge, artificial general intelligence, autonomous vehicles, backpropagation, basic income, Bayesian statistics, Black Swan, blockchain, call centre, Capital in the Twenty-First Century by Thomas Piketty, Captain Sullenberger Hudson, carbon tax, Charles Babbage, classic study, collateralized debt obligation, computer age, creative destruction, Daniel Kahneman / Amos Tversky, data acquisition, data is the new oil, data science, deep learning, DeepMind, deskilling, disruptive innovation, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, everywhere but in the productivity statistics, financial engineering, fulfillment center, general purpose technology, Geoffrey Hinton, Google Glasses, high net worth, ImageNet competition, income inequality, information retrieval, inventory management, invisible hand, Jeff Hawkins, job automation, John Markoff, Joseph Schumpeter, Kevin Kelly, Lyft, Minecraft, Mitch Kapor, Moneyball by Michael Lewis explains big data, Nate Silver, new economy, Nick Bostrom, On the Economy of Machinery and Manufactures, OpenAI, paperclip maximiser, pattern recognition, performance metric, profit maximization, QWERTY keyboard, race to the bottom, randomized controlled trial, Ray Kurzweil, ride hailing / ride sharing, Robert Solow, Salesforce, Second Machine Age, self-driving car, shareholder value, Silicon Valley, statistical model, Stephen Hawking, Steve Jobs, Steve Jurvetson, Steven Levy, strong AI, The Future of Employment, the long tail, The Signal and the Noise by Nate Silver, Tim Cook: Apple, trolley problem, Turing test, Uber and Lyft, uber lyft, US Airways Flight 1549, Vernor Vinge, vertical integration, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, William Langewiesche, Y Combinator, zero-sum game

Also, we thank our colleagues for discussions and feedback, including Nick Adams, Umair Akeel, Susan Athey, Naresh Bangia, Nick Beim, Dennis Bennie, James Bergstra, Dror Berman, Vincent Bérubé, Jim Bessen, Scott Bonham, Erik Brynjolfsson, Andy Burgess, Elizabeth Caley, Peter Carrescia, Iain Cockburn, Christian Catalini, James Cham, Nicolas Chapados, Tyson Clark, Paul Cubbon, Zavain Dar, Sally Daub, Dan Debow, Ron Dembo, Helene Desmarais, JP Dube, Candice Faktor, Haig Farris, Chen Fong, Ash Fontana, John Francis, April Franco, Suzanne Gildert, Anindya Ghose, Ron Glozman, Ben Goertzel, Shane Greenstein, Kanu Gulati, John Harris, Deepak Hegde, Rebecca Henderson, Geoff Hinton, Tim Hodgson, Michael Hyatt, Richard Hyatt, Ben Jones, Chad Jones, Steve Jurvetson, Satish Kanwar, Danny Kahneman, John Kelleher, Moe Kermani, Vinod Khosla, Karin Klein, Darrell Kopke, Johann Koss, Katya Kudashkina, Michael Kuhlmann, Tony Lacavera, Allen Lau, Eva Lau, Yann LeCun, Mara Lederman, Lisha Li, Ted Livingston, Jevon MacDonald, Rupam Mahmood, Chris Matys, Kristina McElheran, John McHale, Sanjog Misra, Matt Mitchell, Sanjay Mittal, Ash Munshi, Michael Murchison, Ken Nickerson, Olivia Norton, Alex Oettl, David Ossip, Barney Pell, Andrea Prat, Tomi Poutanen, Marzio Pozzuoli, Lally Rementilla, Geordie Rose, Maryanna Saenko, Russ Salakhutdinov, Reza Satchu, Michael Serbinis, Ashmeet Sidana, Micah Siegel, Dilip Soman, John Stackhouse, Scott Stern, Ted Sum, Rich Sutton, Steve Tadelis, Shahram Tafazoli, Graham Taylor, Florenta Teodoridis, Richard Titus, Dan Trefler, Catherine Tucker, William Tunstall-Pedoe, Stephan Uhrenbacher, Cliff van der Linden, Miguel Villas-Boas, Neil Wainwright, Boris Wertz, Dan Wilson, Peter Wittek, Alexander Wong, Shelley Zhuang, and Shivon Zilis.

Long term, however, Kindred is using a prediction machine trained on many observations of a human grasping via teleoperation to teach the robot to do that part itself.

Should We Stop Training Radiologists?

In October 2016, standing on stage in front of an audience of six hundred at our annual CDL conference on the business of machine intelligence, Geoffrey Hinton—a pioneer in deep learning neural networks—declared, “We should stop training radiologists now.” A key part of a radiologist’s job is to read images and detect the presence of irregularities that suggest medical problems. In Hinton’s view, AI would soon be better able to identify medically important objects in an image than any human.


pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots by John Markoff

A Declaration of the Independence of Cyberspace, AI winter, airport security, Andy Rubin, Apollo 11, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, basic income, Baxter: Rethink Robotics, Bill Atkinson, Bill Duvall, bioinformatics, Boston Dynamics, Brewster Kahle, Burning Man, call centre, cellular automata, Charles Babbage, Chris Urmson, Claude Shannon: information theory, Clayton Christensen, clean water, cloud computing, cognitive load, collective bargaining, computer age, Computer Lib, computer vision, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data acquisition, Dean Kamen, deep learning, DeepMind, deskilling, Do you want to sell sugared water for the rest of your life?, don't be evil, Douglas Engelbart, Douglas Hofstadter, Dr. Strangelove, driverless car, dual-use technology, Dynabook, Edward Snowden, Elon Musk, Erik Brynjolfsson, Evgeny Morozov, factory automation, Fairchild Semiconductor, Fillmore Auditorium, San Francisco, From Mathematics to the Technologies of Life and Death, future of work, Galaxy Zoo, General Magic, Geoffrey Hinton, Google Glasses, Google X / Alphabet X, Grace Hopper, Gunnar Myrdal, Gödel, Escher, Bach, Hacker Ethic, Hans Moravec, haute couture, Herbert Marcuse, hive mind, hype cycle, hypertext link, indoor plumbing, industrial robot, information retrieval, Internet Archive, Internet of things, invention of the wheel, Ivan Sutherland, Jacques de Vaucanson, Jaron Lanier, Jeff Bezos, Jeff Hawkins, job automation, John Conway, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, John von Neumann, Kaizen: continuous improvement, Kevin Kelly, Kiva Systems, knowledge worker, Kodak vs Instagram, labor-force participation, loose coupling, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, medical residency, Menlo Park, military-industrial complex, Mitch Kapor, Mother of all demos, natural language processing, Neil Armstrong, new economy, Norbert Wiener, PageRank, PalmPilot, pattern recognition, Philippa Foot, pre–internet, RAND corporation, Ray Kurzweil, reality distortion field, Recombinant DNA, Richard Stallman, Robert Gordon, Robert Solow, Rodney Brooks, Sand Hill Road, Second Machine Age, self-driving car, semantic web, Seymour Hersh, shareholder value, side project, Silicon Valley, Silicon Valley startup, Singularitarianism, skunkworks, Skype, social software, speech recognition, stealth mode startup, Stephen Hawking, Steve Ballmer, Steve Jobs, Steve Wozniak, Steven Levy, Stewart Brand, Strategic Defense Initiative, strong AI, superintelligent machines, tech worker, technological singularity, Ted Nelson, TED Talk, telemarketer, telepresence, telepresence robot, Tenerife airport disaster, The Coming Technological Singularity, the medium is the message, Thorstein Veblen, Tony Fadell, trolley problem, Turing test, Vannevar Bush, Vernor Vinge, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, We are as Gods, Whole Earth Catalog, William Shockley: the traitorous eight, zero-sum game

Finding only a small ministry of science laboratory and a professor who was working in a related field, LeCun obtained funding and laboratory space. His new professor told him, “I’ve no idea what you’re doing, but you seem like a smart guy so I’ll sign the papers.” But he didn’t stay long. First he went off to Geoff Hinton’s neural network group at the University of Toronto, and when the Bell Labs offer arrived he moved to New Jersey, continuing to refine his approach known as convolutional neural nets, initially focusing on the problem of recognizing handwritten characters for automated mail-sorting applications.

Interest in neural networks would not reemerge until 1978, with the work of Terry Sejnowski, a postdoctoral researcher in neurobiology at Harvard. Sejnowski had given up his early focus on physics and turned to neuroscience. After taking a summer course in Woods Hole, Massachusetts, he found himself captivated by the mystery of the brain. That year a British postdoctoral psychologist, Geoffrey Hinton, was studying at the University of California at San Diego under David Rumelhart. The older UC scientist had created the parallel-distributed processing group with Donald Norman, the founder of the cognitive psychology department at the school. Hinton, who was the great-great-grandson of logician George Boole, had come to the United States as a “refugee” as a direct consequence of the original AI Winter in England.

Known as the Neural Computation and Adaptive Perception project, it permitted him to handpick the most suitable researchers in the world across a range of fields stretching from neuroscience to electrical engineering. It helped crystallize a community of people interested in neural network research.

Terry Sejnowski, Yann LeCun, and Geoffrey Hinton (from left to right), three scientists who helped revive artificial intelligence by developing biologically inspired neural network algorithms. (Photo courtesy of Yann LeCun)

This time they had something else going for them—the growth of computing power had accelerated, making it possible to build neural networks of vast scale, processing data sets orders of magnitude larger than before.


pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos

Albert Einstein, Amazon Mechanical Turk, Arthur Eddington, backpropagation, basic income, Bayesian statistics, Benoit Mandelbrot, bioinformatics, Black Swan, Brownian motion, cellular automata, Charles Babbage, Claude Shannon: information theory, combinatorial explosion, computer vision, constrained optimization, correlation does not imply causation, creative destruction, crowdsourcing, Danny Hillis, data is not the new oil, data is the new oil, data science, deep learning, DeepMind, double helix, Douglas Hofstadter, driverless car, Erik Brynjolfsson, experimental subject, Filter Bubble, future of work, Geoffrey Hinton, global village, Google Glasses, Gödel, Escher, Bach, Hans Moravec, incognito mode, information retrieval, Jeff Hawkins, job automation, John Markoff, John Snow's cholera map, John von Neumann, Joseph Schumpeter, Kevin Kelly, large language model, lone genius, machine translation, mandelbrot fractal, Mark Zuckerberg, Moneyball by Michael Lewis explains big data, Narrative Science, Nate Silver, natural language processing, Netflix Prize, Network effects, Nick Bostrom, NP-complete, off grid, P = NP, PageRank, pattern recognition, phenotype, planetary scale, power law, pre–internet, random walk, Ray Kurzweil, recommendation engine, Richard Feynman, scientific worldview, Second Machine Age, self-driving car, Silicon Valley, social intelligence, speech recognition, Stanford marshmallow experiment, statistical model, Stephen Hawking, Steven Levy, Steven Pinker, superintelligent machines, the long tail, the scientific method, The Signal and the Noise by Nate Silver, theory of mind, Thomas Bayes, transaction costs, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, white flight, yottabyte, zero-sum game

Another big issue that Hopfield’s model ignored is that real neurons are statistical: they don’t deterministically turn on and off as a function of their inputs; rather, as the weighted sum of inputs increases, the neuron becomes more likely to fire, but it’s not certain that it will. In 1985, David Ackley, Geoff Hinton, and Terry Sejnowski replaced the deterministic neurons in Hopfield networks with probabilistic ones. A neural network now had a probability distribution over its states, with higher-energy states being exponentially less likely than lower-energy ones. In fact, the probability of finding the network in a particular state was given by the well-known Boltzmann distribution from thermodynamics, so they called their network a Boltzmann machine.
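
The mechanics described here are easy to make concrete. Below is a minimal numpy sketch, not taken from the book: a stochastic unit whose firing probability rises with its weighted input, and the Boltzmann distribution over whole network states computed from an energy function (the names and the temperature parameter are illustrative assumptions).

    import numpy as np

    rng = np.random.default_rng(0)

    def fire_probability(inputs, weights, bias, temperature=1.0):
        # The weighted sum no longer switches the unit on deterministically;
        # it only raises the probability that the unit fires.
        drive = inputs @ weights + bias
        return 1.0 / (1.0 + np.exp(-drive / temperature))

    def energy(state, W, b):
        # Hopfield-style energy: low for "harmonious" states.
        return -0.5 * state @ W @ state - b @ state

    def boltzmann_distribution(states, W, b, temperature=1.0):
        # Higher-energy states are exponentially less likely (Boltzmann distribution).
        energies = np.array([energy(s, W, b) for s in states])
        p = np.exp(-energies / temperature)
        return p / p.sum()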

If two neurons tend to fire together during the day but less so while asleep, the weight of their connection goes up; if it's the opposite, it goes down. By doing this day after day, the predicted correlations between sensory neurons evolve until they match the real ones. At this point, the Boltzmann machine has learned a good model of the data and effectively solved the credit-assignment problem. Geoff Hinton went on to try many variations on Boltzmann machines over the following decades. Hinton, a psychologist turned computer scientist and great-great-grandson of George Boole, the inventor of the logical calculus used in all digital computers, is the world's leading connectionist. He has tried longer and harder to understand how the brain works than anyone else.
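
The wake/sleep rule just described is, in essence, Δw = η(⟨fire together awake⟩ − ⟨fire together asleep⟩). A hedged sketch, with variable names of my own choosing:

    import numpy as np

    def boltzmann_weight_update(W, wake_states, sleep_states, lr=0.01):
        # wake_states: unit activities sampled with sensory units clamped to data.
        # sleep_states: activities sampled while the network runs freely ("dreams").
        corr_wake = wake_states.T @ wake_states / len(wake_states)
        corr_sleep = sleep_states.T @ sleep_states / len(sleep_states)
        # Units that fire together more when awake than asleep get a stronger
        # connection; the opposite pattern weakens it.
        return W + lr * (corr_wake - corr_sleep)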

A linear brain, no matter how large, is dumber than a roundworm. S curves are a nice halfway house between the dumbness of linear functions and the hardness of step functions.

The perceptron's revenge

Backprop was invented in 1986 by David Rumelhart, a psychologist at the University of California, San Diego, with the help of Geoff Hinton and Ronald Williams. Among other things, they showed that backprop can learn XOR, enabling connectionists to thumb their noses at Minsky and Papert. Recall the Nike example: young men and middle-aged women are the most likely buyers of Nike shoes. We can represent this with a network of three neurons: one that fires when it sees a young male, another that fires when it sees a middle-aged female, and another that fires when either of those does.
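
To make the XOR claim concrete, here is a hedged numpy sketch (not Rumelhart's code): a two-input network with one hidden layer of S-curve (sigmoid) units, trained by backpropagation. The learning rate, size, and iteration count are arbitrary choices, and an unlucky random seed can occasionally stall in a local minimum.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))      # the S curve
    W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
    W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)

    for _ in range(20000):
        h = sigmoid(X @ W1 + b1)                      # hidden layer
        out = sigmoid(h @ W2 + b2)                    # output layer
        d_out = (out - y) * out * (1 - out)           # error signal at the output
        d_h = (d_out @ W2.T) * h * (1 - h)            # propagated back to the hidden layer
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round().ravel())                        # expected: [0. 1. 1. 0.]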


pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence by Calum Chace

3D printing, Ada Lovelace, AI winter, Airbnb, Alvin Toffler, artificial general intelligence, augmented reality, barriers to entry, basic income, bitcoin, Bletchley Park, blockchain, brain emulation, Buckminster Fuller, Charles Babbage, cloud computing, computer age, computer vision, correlation does not imply causation, credit crunch, cryptocurrency, cuban missile crisis, deep learning, DeepMind, dematerialisation, Demis Hassabis, discovery of the americas, disintermediation, don't be evil, driverless car, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, everywhere but in the productivity statistics, Flash crash, friendly AI, Geoffrey Hinton, Google Glasses, hedonic treadmill, hype cycle, industrial robot, Internet of things, invention of agriculture, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, life extension, low skilled workers, machine translation, Mahatma Gandhi, means of production, mutually assured destruction, Neil Armstrong, Nicholas Carr, Nick Bostrom, paperclip maximiser, pattern recognition, peer-to-peer, peer-to-peer model, Peter Thiel, radical life extension, Ray Kurzweil, Robert Solow, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, Silicon Valley ideology, Skype, South Sea Bubble, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Jobs, strong AI, technological singularity, TED Talk, The future is already here, The Future of Employment, theory of mind, Turing machine, Turing test, universal basic income, Vernor Vinge, wage slave, Wall-E, zero-sum game

Most famously, it paid $500m in January 2014 for DeepMind, a two-year-old company employing just 75 people which builds AIs that can learn to play video games better than people. Later in the year it paid another eight-figure sum to hire the seven academics who had established Dark Blue Labs and Vision Factory, two more AI start-ups based in the UK. Before that, in March 2013, it had hired Geoff Hinton, one of the pioneers of machine learning, based in Toronto. All this activity is partly a matter of economic ambition, but it goes wider than that. Google's founders and leaders want the company to be financially successful, but they also want it to make a difference to people's lives. Founders Larry Page and Sergey Brin think the future will be a better place for humans than the present, and they are impatient for it to arrive.

The first experiments with ANNs were made in the 1950s, and Frank Rosenblatt used them to construct the Mark I Perceptron, the first computer which could learn new skills by trial and error. Early hopes for the quick development of thinking machines were dashed, however, and neural nets fell into disuse until the late 1980s, when they experienced a renaissance along with what came to be known as deep learning thanks to pioneers Yann LeCun (now at Facebook), Geoff Hinton (now at Google) and Yoshua Bengio, a professor at the University of Montreal. Yann LeCun describes deep learning as follows. “A pattern recognition system is like a black box with a camera at one end, a green light and a red light on top, and a whole bunch of knobs on the front. The learning algorithm tries to adjust the knobs so that when, say, a dog is in front of the camera, the red light turns on, and when a car is put in front of the camera, the green light turns on.
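
The knob analogy is not just a metaphor; it maps directly onto the oldest learning rule in the field. A toy sketch (not from the book), assuming a single linear unit whose weights are the "knobs" and whose sign decides which light turns on:

    import numpy as np

    def train_knobs(images, labels, epochs=20, lr=0.1):
        # labels: +1 means the red light should turn on, -1 the green one.
        knobs = np.zeros(images.shape[1])        # the knobs on the front of the box
        for _ in range(epochs):
            for x, target in zip(images, labels):
                light = 1 if knobs @ x > 0 else -1
                if light != target:              # the wrong light came on:
                    knobs += lr * target * x     # nudge the knobs toward the target
        return knobs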

We are able to learn about categories of items at a higher level of abstraction. AGI optimists think that we will work out how to do that with computers too. There are plenty of serious AI researchers who do believe that the probabilistic techniques of machine learning will lead to AGI within a few decades rather than centuries. The veteran AI researcher Geoff Hinton, now working at Google, forecast in May 2015 that the first machine with common sense could be developed in ten years. (34) Part of the reason for the difference of opinion may be that the latter group take very seriously the notion that exponential progress in computing capability will speed progress towards the creation of an AGI.


pages: 346 words: 97,890

The Road to Conscious Machines by Michael Wooldridge

Ada Lovelace, AI winter, algorithmic bias, AlphaGo, Andrew Wiles, Anthropocene, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, basic income, Bletchley Park, Boeing 747, British Empire, call centre, Charles Babbage, combinatorial explosion, computer vision, Computing Machinery and Intelligence, DARPA: Urban Challenge, deep learning, deepfake, DeepMind, Demis Hassabis, don't be evil, Donald Trump, driverless car, Elaine Herzberg, Elon Musk, Eratosthenes, factory automation, fake news, future of work, gamification, general purpose technology, Geoffrey Hinton, gig economy, Google Glasses, intangible asset, James Watt: steam engine, job automation, John von Neumann, Loebner Prize, Minecraft, Mustafa Suleyman, Nash equilibrium, Nick Bostrom, Norbert Wiener, NP-complete, P = NP, P vs NP, paperclip maximiser, pattern recognition, Philippa Foot, RAND corporation, Ray Kurzweil, Rodney Brooks, self-driving car, Silicon Valley, Stephen Hawking, Steven Pinker, strong AI, technological singularity, telemarketer, Tesla Model S, The Coming Technological Singularity, The Future of Employment, the scientific method, theory of mind, Thomas Bayes, Thomas Kuhn: the structure of scientific revolutions, traveling salesman, trolley problem, Turing machine, Turing test, universal basic income, Von Neumann architecture, warehouse robotics

A neuron in a state-of-the-art neural network at the time of writing would have about as many connections as a neuron in a cat brain; a human neuron has on average about 10,000. So, deep neural networks have more layers, and more, better-connected neurons. To train such networks, techniques beyond backprop were needed, and these were provided in 2006 by Geoff Hinton, a British-Canadian researcher who, more than anyone else, is identified with the deep learning movement. Hinton is, by any reckoning, a remarkable individual. He was one of the leaders of the PDP movement in the 1980s, and one of the inventors of backprop. What I find personally so remarkable is that Hinton didn't lose heart when PDP research began to lose favour.

If you look at the ‘frisbee’ category, then you’ll see that really the only thing the images have in common is, well, frisbees. In some images, of course, the frisbees are being thrown from one person to another, but in some, the frisbee is on a table, with nobody in view. They are all different – except that they all feature frisbees. The eureka moment for image classification came in 2012, when Geoff Hinton and two colleagues, Alex Krizhevsky and Ilya Sutskever, demonstrated a system called AlexNet, a neural net that dramatically improved performance in an international image recognition competition.10 The final ingredient required to make deep learning work was raw computer-processing power. Training a deep neural net requires a huge amount of computer-processing time.

In the remainder of this chapter, I want to look in more detail at two of the most prominent opportunities for AI: the first is the use of AI in healthcare; the second is the long-held dream of driverless cars.

AI-Powered Healthcare

People should stop training radiologists now. It is just completely obvious that within five years deep learning is going to do better than radiologists.
–– Geoff Hinton (2016)

Cardiogram is building your personal healthcare assistant. We want to turn your wearable device into a continuous health monitor that can be used to not only track sleep and fitness, but one day may also prevent a stroke and save your life.
–– Cardiogram company website4

Anybody with even the vaguest interest in politics and economics will recognize that the provision of healthcare is one of the most important global financial problems for private citizens and for governments.


pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems by Aurélien Géron

AlphaGo, Amazon Mechanical Turk, Anton Chekhov, backpropagation, combinatorial explosion, computer vision, constrained optimization, correlation coefficient, crowdsourcing, data science, deep learning, DeepMind, don't repeat yourself, duck typing, Elon Musk, en.wikipedia.org, friendly AI, Geoffrey Hinton, ImageNet competition, information retrieval, iterative process, John von Neumann, Kickstarter, machine translation, natural language processing, Netflix Prize, NP-complete, OpenAI, optical character recognition, P = NP, p-value, pattern recognition, pull request, recommendation engine, self-driving car, sentiment analysis, SpamAssassin, speech recognition, stochastic process

Polyak (1964).
12 “A Method for Unconstrained Convex Minimization Problem with the Rate of Convergence O(1/k²),” Yurii Nesterov (1983).
13 “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization,” J. Duchi et al. (2011).
14 This algorithm was created by Tijmen Tieleman and Geoffrey Hinton in 2012, and presented by Geoffrey Hinton in his Coursera class on neural networks (slides: http://goo.gl/RsQeis; video: https://goo.gl/XUbIyJ). Amusingly, since the authors have not written a paper to describe it, researchers often cite “slide 29 in lecture 6” in their papers.
15 “Adam: A Method for Stochastic Optimization,” D.
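
Note 14 refers to RMSProp. Since there is no paper to consult, here is a hedged sketch of the per-weight update as it is usually stated; the hyperparameter names and defaults are common conventions, not taken from the slides.

    import numpy as np

    def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
        # Keep a running average of each weight's squared gradient...
        cache = decay * cache + (1 - decay) * grad ** 2
        # ...and scale the step by its square root, so weights with consistently
        # large gradients take smaller, better-conditioned steps.
        w = w - lr * grad / (np.sqrt(cache) + eps)
        return w, cache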

Preface

The Machine Learning Tsunami

In 2006, Geoffrey Hinton et al. published a paper1 showing how to train a deep neural network capable of recognizing handwritten digits with state-of-the-art precision (>98%). They branded this technique “Deep Learning.” Training a deep neural net was widely considered impossible at the time,2 and most researchers had abandoned the idea since the 1990s.

Deep Learning is best suited for complex problems such as image recognition, speech recognition, or natural language processing, provided you have enough data, computing power, and patience. Other Resources Many resources are available to learn about Machine Learning. Andrew Ng’s ML course on Coursera and Geoffrey Hinton’s course on neural networks and Deep Learning are amazing, although they both require a significant time investment (think months). There are also many interesting websites about Machine Learning, including of course Scikit-Learn’s exceptional User Guide. You may also enjoy Dataquest, which provides very nice interactive tutorials, and ML blogs such as those listed on Quora.


pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee

"World Economic Forum" Davos, AI winter, Airbnb, Albert Einstein, algorithmic bias, algorithmic trading, Alignment Problem, AlphaGo, artificial general intelligence, autonomous vehicles, barriers to entry, basic income, bike sharing, business cycle, Cambridge Analytica, cloud computing, commoditize, computer vision, corporate social responsibility, cotton gin, creative destruction, crony capitalism, data science, deep learning, DeepMind, Demis Hassabis, Deng Xiaoping, deskilling, Didi Chuxing, Donald Trump, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, fake news, full employment, future of work, general purpose technology, Geoffrey Hinton, gig economy, Google Chrome, Hans Moravec, happiness index / gross national happiness, high-speed rail, if you build it, they will come, ImageNet competition, impact investing, income inequality, informal economy, Internet of things, invention of the telegraph, Jeff Bezos, job automation, John Markoff, Kickstarter, knowledge worker, Lean Startup, low skilled workers, Lyft, machine translation, mandatory minimum, Mark Zuckerberg, Menlo Park, minimum viable product, natural language processing, Neil Armstrong, new economy, Nick Bostrom, OpenAI, pattern recognition, pirate software, profit maximization, QR code, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, risk tolerance, Robert Mercer, Rodney Brooks, Rubik’s Cube, Sam Altman, Second Machine Age, self-driving car, sentiment analysis, sharing economy, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, Skype, SoftBank, Solyndra, special economic zone, speech recognition, Stephen Hawking, Steve Jobs, strong AI, TED Talk, The Future of Employment, Travis Kalanick, Uber and Lyft, uber lyft, universal basic income, urban planning, vertical integration, Vision Fund, warehouse robotics, Y Combinator

But the networks themselves were still severely limited in what they could do. Accurate results to complex problems required many layers of artificial neurons, but researchers hadn’t found a way to efficiently train those layers as they were added. Deep learning’s big technical break finally arrived in the mid-2000s, when leading researcher Geoffrey Hinton discovered a way to efficiently train those new layers in neural networks. The result was like giving steroids to the old neural networks, multiplying their power to perform tasks such as speech and object recognition. Soon, these juiced-up neural networks—now rebranded as “deep learning”—could outperform older models at a variety of tasks.
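
The passage doesn't name the method, but Hinton's 2006 trick was greedy layer-wise pretraining, originally using restricted Boltzmann machines. Below is a hedged sketch of the same idea using small autoencoders as a stand-in: each layer is trained alone to reconstruct the codes of the layer below, and only then is the whole stack fine-tuned.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    def pretrain_layer(data, n_hidden, epochs=200, lr=0.1):
        # Train one layer to reconstruct its own input, ignoring all layers above.
        n, d = data.shape
        W_enc = rng.normal(scale=0.1, size=(d, n_hidden))
        W_dec = rng.normal(scale=0.1, size=(n_hidden, d))
        for _ in range(epochs):
            h = sigmoid(data @ W_enc)             # encode
            recon = h @ W_dec                     # decode (linear, for simplicity)
            err = (recon - data) / n
            grad_dec = h.T @ err
            d_h = (err @ W_dec.T) * h * (1 - h)
            grad_enc = data.T @ d_h
            W_dec -= lr * grad_dec
            W_enc -= lr * grad_enc
        return W_enc

    def greedy_pretrain(data, layer_sizes):
        weights, codes = [], data
        for size in layer_sizes:
            W = pretrain_layer(codes, size)
            weights.append(W)
            codes = sigmoid(codes @ W)            # feed this layer's codes upward
        return weights                            # then fine-tune the stack with backprop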

People are so excited about deep learning precisely because its core power—its ability to recognize a pattern, optimize for a specific outcome, make a decision—can be applied to so many different kinds of everyday problems. That’s why companies like Google and Facebook have scrambled to snap up the small core of deep-learning experts, paying them millions of dollars to pursue ambitious research projects. In 2013, Google acquired the startup founded by Geoffrey Hinton, and the following year scooped up British AI startup DeepMind—the company that went on to build AlphaGo—for over $500 million. The results of these projects have continued to awe observers and grab headlines. They’ve shifted the cultural zeitgeist and given us a sense that we stand at the precipice of a new era, one in which machines will radically empower and/or violently displace human beings.

That’s a process that requires well-trained AI scientists, the tinkerers of this age. Today, those tinkerers are putting AI’s superhuman powers of pattern recognition to use making loans, driving cars, translating text, playing Go, and powering your Amazon Alexa. Deep-learning pioneers like Geoffrey Hinton, Yann LeCun, and Yoshua Bengio—the Enrico Fermis of AI—continue to push the boundaries of artificial intelligence. And they may yet produce another game-changing breakthrough, one that scrambles the global technological pecking order. But in the meantime, the real action today is with the tinkerers.


pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by Amy Webb

"Friedman doctrine" OR "shareholder theory", Ada Lovelace, AI winter, air gap, Airbnb, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic bias, AlphaGo, Andy Rubin, artificial general intelligence, Asilomar, autonomous vehicles, backpropagation, Bayesian statistics, behavioural economics, Bernie Sanders, Big Tech, bioinformatics, Black Lives Matter, blockchain, Bretton Woods, business intelligence, Cambridge Analytica, Cass Sunstein, Charles Babbage, Claude Shannon: information theory, cloud computing, cognitive bias, complexity theory, computer vision, Computing Machinery and Intelligence, CRISPR, cross-border payments, crowdsourcing, cryptocurrency, Daniel Kahneman / Amos Tversky, data science, deep learning, DeepMind, Demis Hassabis, Deng Xiaoping, disinformation, distributed ledger, don't be evil, Donald Trump, Elon Musk, fail fast, fake news, Filter Bubble, Flynn Effect, Geoffrey Hinton, gig economy, Google Glasses, Grace Hopper, Gödel, Escher, Bach, Herman Kahn, high-speed rail, Inbox Zero, Internet of things, Jacques de Vaucanson, Jeff Bezos, Joan Didion, job automation, John von Neumann, knowledge worker, Lyft, machine translation, Mark Zuckerberg, Menlo Park, move fast and break things, Mustafa Suleyman, natural language processing, New Urbanism, Nick Bostrom, one-China policy, optical character recognition, packet switching, paperclip maximiser, pattern recognition, personalized medicine, RAND corporation, Ray Kurzweil, Recombinant DNA, ride hailing / ride sharing, Rodney Brooks, Rubik’s Cube, Salesforce, Sand Hill Road, Second Machine Age, self-driving car, seminal paper, SETI@home, side project, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart cities, South China Sea, sovereign wealth fund, speech recognition, Stephen Hawking, strong AI, superintelligent machines, surveillance capitalism, technological singularity, The Coming Technological Singularity, the long tail, theory of mind, Tim Cook: Apple, trade route, Turing machine, Turing test, uber lyft, Von Neumann architecture, Watson beat the top human players on Jeopardy!, zero day

In all of these cases, the computers would make incomprehensible moves, or they’d play too aggressively, or they’d miscalculate their opponent’s posture. Sometime in the middle of all that work were a handful of researchers who, once again, were workshopping neural networks, an idea championed by Marvin Minsky and Frank Rosenblatt during the initial Dartmouth meeting. Cognitive scientist Geoff Hinton and computer scientists Yann LeCun and Yoshua Bengio each believed that neural net–based systems would not only have serious practical applications—like automatic fraud detection for credit cards and automatic optical character recognition for reading documents and checks—but that they would become the basis for what artificial intelligence would become.

This tribe of groundbreaking, brilliant comics laid the foundation for the future of American entertainment.14 Collectively, this group of men still wields influence today. In a way, AI went through a similar radical transformation because of a modern-day tribe that shared the same values, ideas, and goals. Those three deep-learning pioneers discussed earlier—Geoff Hinton, Yann LeCun, and Yoshua Bengio—were the Sam Kinisons and Richard Pryors of the AI world in the early days of deep neural nets. LeCun studied under Hinton at the University of Toronto, where the Canadian Institute for Advanced Research (CIFAR) incubated a small group of researchers, which included Yoshua Bengio.

But the white man is currently serving an eight-year prison term for yet another crime—breaking into a warehouse and stealing thousands of dollars’ worth of electronics.16 ProPublica looked at the risk scores assigned to more than 7,000 people arrested in Florida to see whether this was an anomaly—and again, they found significant bias encoded within the algorithms, which were twice as likely to incorrectly flag Black defendants as future criminals while mislabeling white defendants as low risk. The optimization effect sometimes causes brilliant AI tribes to make dumb decisions. Recall DeepMind, which built the AlphaGo and AlphaGo Zero systems and stunned the AI community as it dominated grandmaster Go matches. Before Google acquired the company, it sent Geoff Hinton (the University of Toronto professor who was on leave working on deep learning there) and Jeff Dean, who was in charge of Google Brain, to London on a private jet to meet its supernetwork of top PhDs in AI. Impressed with the technology and DeepMind’s remarkable team, they recommended that Google make an acquisition.


pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

Ada Lovelace, AI winter, Alignment Problem, AlphaGo, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, artificial general intelligence, autonomous vehicles, backpropagation, Bernie Sanders, Big Tech, Boston Dynamics, Cambridge Analytica, Charles Babbage, Claude Shannon: information theory, cognitive dissonance, computer age, computer vision, Computing Machinery and Intelligence, dark matter, deep learning, DeepMind, Demis Hassabis, Douglas Hofstadter, driverless car, Elon Musk, en.wikipedia.org, folksonomy, Geoffrey Hinton, Gödel, Escher, Bach, I think there is a world market for maybe five computers, ImageNet competition, Jaron Lanier, job automation, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, license plate recognition, machine translation, Mark Zuckerberg, natural language processing, Nick Bostrom, Norbert Wiener, ought to be enough for anybody, paperclip maximiser, pattern recognition, performance metric, RAND corporation, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Rodney Brooks, self-driving car, sentiment analysis, Silicon Valley, Singularitarianism, Skype, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, tacit knowledge, tail risk, TED Talk, the long tail, theory of mind, There's no reason for any individual to have a computer in his home - Ken Olsen, trolley problem, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, world market for maybe five computers

In the next chapter, I’ll recount the extraordinary ascent of ConvNets from relative obscurity to near-complete dominance in machine vision, a transformation made possible by a concurrent technological revolution: that of “big data.” 5 ConvNets and ImageNet Yann LeCun, the inventor of ConvNets, has worked on neural networks all of his professional life, starting in the 1980s and continuing through the winters and springs of the field. As a graduate student and postdoctoral fellow, he was fascinated by Rosenblatt’s perceptrons and Fukushima’s neocognitron, but noted that the latter lacked a good supervised-learning algorithm. Along with other researchers (most notably, his postdoctoral advisor Geoffrey Hinton), LeCun helped develop such a learning method—essentially the same form of back-propagation used on ConvNets today.1 In the 1980s and ’90s, while working at Bell Labs, LeCun turned to the problem of recognizing handwritten digits and letters. He combined ideas from the neocognitron with the back-propagation algorithm to create the semi-eponymous “LeNet”—one of the earliest ConvNets.

LeNet and its successor ConvNets did not do well in scaling up to more complex vision tasks. By the mid-1990s, neural networks started falling out of favor in the AI community, and other methods came to dominate the field. But LeCun, still a believer, kept working on ConvNets, gradually improving them. As Geoffrey Hinton later said of LeCun, “He kind of carried the torch through the dark ages.”2 LeCun, Hinton, and other neural network loyalists believed that improved, larger versions of ConvNets and other deep networks would conquer computer vision if only they could be trained with enough data. Stubbornly, they kept working on the sidelines throughout the 2000s.

What’s more, the winning entry did not use support vector machines or any of the other dominant computer-vision methods of the day. Instead, it was a convolutional neural network. This particular ConvNet has come to be known as AlexNet, named after its main creator, Alex Krizhevsky, then a graduate student at the University of Toronto, supervised by the eminent neural network researcher Geoffrey Hinton. Krizhevsky, working with Hinton and a fellow student, Ilya Sutskever, created a scaled-up version of Yann LeCun’s LeNet from the 1990s; training such a large network was now made possible by increases in computer power. AlexNet had eight layers, with about sixty million weights whose values were learned via back-propagation from the million-plus training images.7 The Toronto group came up with some clever methods for making the network training work better, and it took a cluster of powerful computers about a week to train AlexNet.


System Error by Rob Reich

"Friedman doctrine" OR "shareholder theory", "World Economic Forum" Davos, 2021 United States Capitol attack, A Declaration of the Independence of Cyberspace, Aaron Swartz, AI winter, Airbnb, airport security, Alan Greenspan, Albert Einstein, algorithmic bias, AlphaGo, AltaVista, artificial general intelligence, Automated Insights, autonomous vehicles, basic income, Ben Horowitz, Berlin Wall, Bernie Madoff, Big Tech, bitcoin, Blitzscaling, Cambridge Analytica, Cass Sunstein, clean water, cloud computing, computer vision, contact tracing, contact tracing app, coronavirus, corporate governance, COVID-19, creative destruction, CRISPR, crowdsourcing, data is the new oil, data science, decentralized internet, deep learning, deepfake, DeepMind, deplatforming, digital rights, disinformation, disruptive innovation, Donald Knuth, Donald Trump, driverless car, dual-use technology, Edward Snowden, Elon Musk, en.wikipedia.org, end-to-end encryption, Fairchild Semiconductor, fake news, Fall of the Berlin Wall, Filter Bubble, financial engineering, financial innovation, fulfillment center, future of work, gentrification, Geoffrey Hinton, George Floyd, gig economy, Goodhart's law, GPT-3, Hacker News, hockey-stick growth, income inequality, independent contractor, informal economy, information security, Jaron Lanier, Jeff Bezos, Jim Simons, jimmy wales, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, Lean Startup, linear programming, Lyft, Marc Andreessen, Mark Zuckerberg, meta-analysis, minimum wage unemployment, Monkeys Reject Unequal Pay, move fast and break things, Myron Scholes, Network effects, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, NP-complete, Oculus Rift, OpenAI, Panopticon Jeremy Bentham, Parler "social media", pattern recognition, personalized medicine, Peter Thiel, Philippa Foot, premature optimization, profit motive, quantitative hedge fund, race to the bottom, randomized controlled trial, recommendation engine, Renaissance Technologies, Richard Thaler, ride hailing / ride sharing, Ronald Reagan, Sam Altman, Sand Hill Road, scientific management, self-driving car, shareholder value, Sheryl Sandberg, Shoshana Zuboff, side project, Silicon Valley, Snapchat, social distancing, Social Responsibility of Business Is to Increase Its Profits, software is eating the world, spectrum auction, speech recognition, stem cell, Steve Jobs, Steven Levy, strong AI, superintelligent machines, surveillance capitalism, Susan Wojcicki, tech billionaire, tech worker, techlash, technoutopianism, Telecommunications Act of 1996, telemarketer, The Future of Employment, TikTok, Tim Cook: Apple, traveling salesman, Triangle Shirtwaist Factory, trolley problem, Turing test, two-sided market, Uber and Lyft, uber lyft, ultimatum game, union organizing, universal basic income, washing machines reduced drudgery, Watson beat the top human players on Jeopardy!, When a measure becomes a target, winner-take-all economy, Y Combinator, you are the product

., “International Evaluation of an AI System for Breast Cancer Screening,” Nature 577 (January 2020): 89–94, https://doi.org/10.1038/s41586-019-1799-6.
“an algorithm that can detect”: Pranav Rajpurkar et al., “Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning,” CheXNet, December 25, 2017, http://arxiv.org/abs/1711.05225.
“people should stop training radiologists”: Geoff Hinton, “Geoff Hinton: On Radiology,” Creative Destruction Lab, uploaded to YouTube November 24, 2016, https://www.youtube.com/watch?v=2HMPRXstSvQ.
the work radiologists and other medical professionals do: Hugh Harvey, “Why AI Will Not Replace Radiologists,” Medium, April 7, 2018, https://towardsdatascience.com/why-ai-will-not-replace-radiologists-c7736f2c7d80.

They noted, “In an independent study of six radiologists, the AI system outperformed all of the human [mammogram] readers.” Similarly, a team from Stanford developed “an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists.” Developments such as these led Geoff Hinton, a pioneer in neural networks and deep learning and a winner of the 2018 A. M. Turing Award, to state that “people should stop training radiologists now. It’s just completely obvious that within five years deep learning is going to do better than radiologists.” That was in 2016. Since that time, it has been noted that the work radiologists and other medical professionals do is much broader than just interpreting X-rays.


pages: 477 words: 75,408

The Economic Singularity: Artificial Intelligence and the Death of Capitalism by Calum Chace

"World Economic Forum" Davos, 3D printing, additive manufacturing, agricultural Revolution, AI winter, Airbnb, AlphaGo, Alvin Toffler, Amazon Robotics, Andy Rubin, artificial general intelligence, augmented reality, autonomous vehicles, banking crisis, basic income, Baxter: Rethink Robotics, Berlin Wall, Bernie Sanders, bitcoin, blockchain, Boston Dynamics, bread and circuses, call centre, Chris Urmson, congestion charging, credit crunch, David Ricardo: comparative advantage, deep learning, DeepMind, Demis Hassabis, digital divide, Douglas Engelbart, Dr. Strangelove, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Fairchild Semiconductor, Flynn Effect, full employment, future of work, Future Shock, gender pay gap, Geoffrey Hinton, gig economy, Google Glasses, Google X / Alphabet X, Hans Moravec, Herman Kahn, hype cycle, ImageNet competition, income inequality, industrial robot, Internet of things, invention of the telephone, invisible hand, James Watt: steam engine, Jaron Lanier, Jeff Bezos, job automation, John Markoff, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, Kiva Systems, knowledge worker, lifelogging, lump of labour, Lyft, machine translation, Marc Andreessen, Mark Zuckerberg, Martin Wolf, McJob, means of production, Milgram experiment, Narrative Science, natural language processing, Neil Armstrong, new economy, Nick Bostrom, Occupy movement, Oculus Rift, OpenAI, PageRank, pattern recognition, post scarcity, post-industrial society, post-work, precariat, prediction markets, QWERTY keyboard, railway mania, RAND corporation, Ray Kurzweil, RFID, Rodney Brooks, Sam Altman, Satoshi Nakamoto, Second Machine Age, self-driving car, sharing economy, Silicon Valley, Skype, SoftBank, software is eating the world, speech recognition, Stephen Hawking, Steve Jobs, TaskRabbit, technological singularity, TED Talk, The future is already here, The Future of Employment, Thomas Malthus, transaction costs, Two Sigma, Tyler Cowen, Tyler Cowen: Great Stagnation, Uber for X, uber lyft, universal basic income, Vernor Vinge, warehouse automation, warehouse robotics, working-age population, Y Combinator, young professional

This became known as symbolic AI, or Good Old-Fashioned AI (GOFAI). Machine learning, by contrast, is the process of creating and refining algorithms which can produce conclusions based on data without being explicitly programmed to do so. The turning point came in 2012 when researchers in Toronto led by Geoff Hinton won an AI image recognition competition called ImageNet.[lxiv] Hinton is a British researcher, now at the University of Toronto and Google, and perhaps the most important figure behind the rise of deep learning as the most powerful of today's AI techniques. (The word algorithm comes from the name of a 9th-century Persian mathematician, Al-Khwarizmi.

Even the worst case predictions envisage continued rapid improvement in computer processing power, albeit perhaps slower than previously. In December 2015, Microsoft's chief speech scientist Xuedong Huang noted that speech recognition has improved 20% a year consistently for the last 20 years. He predicted that computers would be as good as humans at understanding human speech within five years. Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. Common sense can be described as having a mental model of the world which allows you to predict what will happen if certain actions are taken.


pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI by John Brockman

AI winter, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alignment Problem, AlphaGo, artificial general intelligence, Asilomar, autonomous vehicles, basic income, Benoit Mandelbrot, Bill Joy: nanobots, Bletchley Park, Buckminster Fuller, cellular automata, Claude Shannon: information theory, Computing Machinery and Intelligence, CRISPR, Daniel Kahneman / Amos Tversky, Danny Hillis, data science, David Graeber, deep learning, DeepMind, Demis Hassabis, easy for humans, difficult for computers, Elon Musk, Eratosthenes, Ernest Rutherford, fake news, finite state, friendly AI, future of work, Geoffrey Hinton, Geoffrey West, Santa Fe Institute, gig economy, Hans Moravec, heat death of the universe, hype cycle, income inequality, industrial robot, information retrieval, invention of writing, it is difficult to get a man to understand something, when his salary depends on his not understanding it, James Watt: steam engine, Jeff Hawkins, Johannes Kepler, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, Kickstarter, Laplace demon, Large Hadron Collider, Loebner Prize, machine translation, market fundamentalism, Marshall McLuhan, Menlo Park, military-industrial complex, mirror neurons, Nick Bostrom, Norbert Wiener, OpenAI, optical character recognition, paperclip maximiser, pattern recognition, personalized medicine, Picturephone, profit maximization, profit motive, public intellectual, quantum cryptography, RAND corporation, random walk, Ray Kurzweil, Recombinant DNA, Richard Feynman, Rodney Brooks, self-driving car, sexual politics, Silicon Valley, Skype, social graph, speech recognition, statistical model, Stephen Hawking, Steven Pinker, Stewart Brand, strong AI, superintelligent machines, supervolcano, synthetic biology, systems thinking, technological determinism, technological singularity, technoutopianism, TED Talk, telemarketer, telerobotics, The future is already here, the long tail, the scientific method, theory of mind, trolley problem, Turing machine, Turing test, universal basic income, Upton Sinclair, Von Neumann architecture, Whole Earth Catalog, Y2K, you are the product, zero-sum game

Within a few years, Judea’s Bayesian networks had completely overshadowed the previous rule-based approaches to artificial intelligence. The advent of deep learning—in which computers, in effect, teach themselves to be smarter by observing tons of data—has given him pause, because this method lacks transparency. While recognizing the impressive achievements in deep learning by colleagues such as Michael I. Jordan and Geoffrey Hinton, he feels uncomfortable with this kind of opacity. He set out to understand the theoretical limitations of deep-learning systems and points out that basic barriers exist that will prevent them from achieving a human kind of intelligence, no matter what we do. Leveraging the computational benefits of Bayesian networks, Judea realized that the combination of simple graphical models and data could also be used to represent and infer cause-effect relationships.

Another strong incentive to turn a blind eye to the AI risk is the (very human) curiosity that knows no bounds. “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb,” said J. Robert Oppenheimer. His words were echoed recently by Geoffrey Hinton, arguably the inventor of deep learning, in the context of AI risk: “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.” Undeniably, we have both entrepreneurial attitude and scientific curiosity to thank for almost all the nice things we take for granted in the modern era.

We tend to underestimate the complexity and creativity of the human brain and how amazingly general it is. If AI is to become more humanlike in its abilities, the machine-learning and neuroscience communities need to interact closely, something that is happening already. Some of today’s greatest exponents of machine learning—such as Geoffrey Hinton, Zoubin Ghahramani, and Demis Hassabis—have backgrounds in cognitive neuroscience, and their success has been at least in part due to attempts to model brainlike behavior in their algorithms. At the same time, neurobiology has also flourished. All sorts of tools have been developed to watch which neurons are firing and genetically manipulate them and see what’s happening in real time with inputs.


pages: 332 words: 93,672

Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy by George Gilder

23andMe, Airbnb, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, AlphaGo, AltaVista, Amazon Web Services, AOL-Time Warner, Asilomar, augmented reality, Ben Horowitz, bitcoin, Bitcoin Ponzi scheme, Bletchley Park, blockchain, Bob Noyce, British Empire, Brownian motion, Burning Man, business process, butterfly effect, carbon footprint, cellular automata, Claude Shannon: information theory, Clayton Christensen, cloud computing, computer age, computer vision, crony capitalism, cross-subsidies, cryptocurrency, Danny Hillis, decentralized internet, deep learning, DeepMind, Demis Hassabis, disintermediation, distributed ledger, don't be evil, Donald Knuth, Donald Trump, double entry bookkeeping, driverless car, Elon Musk, Erik Brynjolfsson, Ethereum, ethereum blockchain, fake news, fault tolerance, fiat currency, Firefox, first square of the chessboard, first square of the chessboard / second half of the chessboard, floating exchange rates, Fractional reserve banking, game design, Geoffrey Hinton, George Gilder, Google Earth, Google Glasses, Google Hangouts, index fund, inflation targeting, informal economy, initial coin offering, Internet of things, Isaac Newton, iterative process, Jaron Lanier, Jeff Bezos, Jim Simons, Joan Didion, John Markoff, John von Neumann, Julian Assange, Kevin Kelly, Law of Accelerating Returns, machine translation, Marc Andreessen, Mark Zuckerberg, Mary Meeker, means of production, Menlo Park, Metcalfe’s law, Money creation, money: store of value / unit of account / medium of exchange, move fast and break things, Neal Stephenson, Network effects, new economy, Nick Bostrom, Norbert Wiener, Oculus Rift, OSI model, PageRank, pattern recognition, Paul Graham, peer-to-peer, Peter Thiel, Ponzi scheme, prediction markets, quantitative easing, random walk, ransomware, Ray Kurzweil, reality distortion field, Recombinant DNA, Renaissance Technologies, Robert Mercer, Robert Metcalfe, Ronald Coase, Ross Ulbricht, Ruby on Rails, Sand Hill Road, Satoshi Nakamoto, Search for Extraterrestrial Intelligence, self-driving car, sharing economy, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, Singularitarianism, Skype, smart contracts, Snapchat, Snow Crash, software is eating the world, sorting algorithm, South Sea Bubble, speech recognition, Stephen Hawking, Steve Jobs, Steven Levy, Stewart Brand, stochastic process, Susan Wojcicki, TED Talk, telepresence, Tesla Model S, The Soul of a New Machine, theory of mind, Tim Cook: Apple, transaction costs, tulip mania, Turing complete, Turing machine, Vernor Vinge, Vitalik Buterin, Von Neumann architecture, Watson beat the top human players on Jeopardy!, WikiLeaks, Y Combinator, zero-sum game

Google, meanwhile, under its new CEO, Sundar Pichai, pivoted away from its highly publicized “mobile first” mantra, which had led to its acquisitions of Android and Ad Mob, and toward “AI first.” Google was the recognized intellectual leader of the industry, and its AI ostentation was widely acclaimed. Indeed it signed up most of the world’s AI celebrities, including its spearheads of “deep learning” prowess, from Geoffrey Hinton and Andrew Ng to Jeff Dean, the beleaguered Anthony Levandowski, and Demis Hassabis of DeepMind. If Google had been a university, it would have utterly outshone all others in AI talent. It must have been discouraging, then, to find that Amazon had shrewdly captured much of the market for AI services with its 2014 Alexa and Echo projects.

The most prominent participants were the bright lights of Google: Larry Page, Eric Schmidt, Ray Kurzweil, Demis Hassabis, and Peter Norvig, along with former Googler Andrew Ng, later of Baidu and Stanford. Also there was Facebook’s Yann LeCun, an innovator in deep-learning math and a protégé of Google’s Geoffrey Hinton. A tenured contingent consisted of the technologist Stuart Russell, the philosopher David Chalmers, the catastrophe theorist Nick Bostrom, the nanotech prophet Eric Drexler, the cosmologist Lawrence Krauss, the economist Erik Brynjolfsson, and the “Singularitarian” Vernor Vinge, along with scores of other celebrity scientists.1 They gathered at Asilomar preparing to alert the world to the dire threat posed by . . . well, by themselves—Silicon Valley.

The blog post laconically presented “some simple techniques for peeking inside these [neural] networks” and then showed a series of increasingly trippy photos, as if the machine were hallucinating. A little gray kitten became the stuff of nightmares: a shaggy beast with forehead and haunches bubbling with dark dog eyes and noses. To Balaban, the code and its results were a visual confirmation of what Yoshua Bengio, a colleague of Geoffrey Hinton in the Montreal crucible of AI, calls the “manifold learning hypothesis.” Bengio sees the essential job of a neural network as learning a hierarchy of representations in which each new layer is built up out of representations resolved in a previous layer. The machine begins with raw pixels and combines them into lines and curves transitioning from dark to light and then into geometrical shapes, which finally can be encoded into elements of human faces or other targeted figures.
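
The "peeking inside" technique the post describes is, at heart, gradient ascent on the input: change the image in whatever direction most excites a chosen unit, and the pattern that unit has learned to detect surfaces in the image itself. A toy, hedged sketch with a single stand-in layer (the real code operates on a full trained network):

    import numpy as np

    rng = np.random.default_rng(0)
    filters = rng.normal(size=(64, 8))   # stand-in for one trained layer: 64 pixels -> 8 units

    def peek_inside(image, unit, steps=200, lr=0.05):
        img = image.copy()
        for _ in range(steps):
            activation = np.tanh(img @ filters)
            # Gradient of the chosen unit's activation with respect to the pixels:
            grad = (1 - activation[unit] ** 2) * filters[:, unit]
            img += lr * grad             # amplify whatever the unit "sees"
        return img

    dream = peek_inside(rng.normal(size=64) * 0.1, unit=3)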


pages: 368 words: 96,825

Bold: How to Go Big, Create Wealth and Impact the World by Peter H. Diamandis, Steven Kotler

3D printing, additive manufacturing, adjacent possible, Airbnb, Amazon Mechanical Turk, Amazon Web Services, Apollo 11, augmented reality, autonomous vehicles, Boston Dynamics, Charles Lindbergh, cloud computing, company town, creative destruction, crowdsourcing, Daniel Kahneman / Amos Tversky, data science, deal flow, deep learning, dematerialisation, deskilling, disruptive innovation, driverless car, Elon Musk, en.wikipedia.org, Exxon Valdez, fail fast, Fairchild Semiconductor, fear of failure, Firefox, Galaxy Zoo, Geoffrey Hinton, Google Glasses, Google Hangouts, gravity well, hype cycle, ImageNet competition, industrial robot, information security, Internet of things, Jeff Bezos, John Harrison: Longitude, John Markoff, Jono Bacon, Just-in-time delivery, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, Lean Startup, life extension, loss aversion, Louis Pasteur, low earth orbit, Mahatma Gandhi, Marc Andreessen, Mark Zuckerberg, Mars Rover, meta-analysis, microbiome, minimum viable product, move fast and break things, Narrative Science, Netflix Prize, Network effects, Oculus Rift, OpenAI, optical character recognition, packet switching, PageRank, pattern recognition, performance metric, Peter H. Diamandis: Planetary Resources, Peter Thiel, pre–internet, Ray Kurzweil, recommendation engine, Richard Feynman, ride hailing / ride sharing, risk tolerance, rolodex, Scaled Composites, self-driving car, sentiment analysis, shareholder value, Sheryl Sandberg, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart grid, SpaceShipOne, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Stewart Brand, Stuart Kauffman, superconnector, Susan Wojcicki, synthetic biology, technoutopianism, TED Talk, telepresence, telepresence robot, Turing test, urban renewal, Virgin Galactic, Wayback Machine, web application, X Prize, Y Combinator, zero-sum game

Now imagine that this same AI also has contextual understanding—meaning the system recognizes that your conversation with your friend is heading in the direction of family life—so the AI reminds you of the names of each of your friend’s family members, as well as any upcoming birthdays they might have. Behind many of the AI successes mentioned in this section is an algorithm called Deep Learning. Developed by University of Toronto’s Geoffrey Hinton for image recognition, Deep Learning has become the dominant approach in the field. And it should come as no surprise that in spring of 2013, Hinton was recruited, like Kurzweil, to join Google41—a development that will most likely lead to even faster progress. More recently, Google and NASA Ames Research Center—one of NASA’s field centers—jointly acquired a 512 qubit (quantum bit) computer manufactured by D-Wave Systems to study machine learning.

v=6adugDEmqBk.
30 John Ward, “The Services Sector: How Best To Measure It?,” International Trade Administration, October 2010, http://trade.gov/publications/ita-newsletter/1010/services-sector-how-best-to-measure-it.asp.
31 AI with Jeremy Howard, 2013.
32 For information on the German Traffic Sign Recognition Benchmark see http://benchmark.ini.rub.de.
33 Geoffrey Hinton et al., “ImageNet Classification with Deep Convolutional Neural Networks,” http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf.
34 John Markoff, “Armies of Expensive Lawyers, Replaced By Cheaper Software,” New York Times, March 4, 2011, http://www.nytimes.com/2011/03/05/science/05legal.html?pagewanted=all.
37 “IBM Watson’s Next Venture: Fueling New Era of Cognitive Apps Built in the Cloud by Developers,” IBM Press Release, November 14, 2013, http://www-03.ibm.com/press/us/en/pressrelease/42451.wss.
38 Nancy Dahlberg, “Modernizing Medicine, supercomputer Watson partner up,” Miami Herald, May 16, 2014.
39 AI with Daniel Cane, 2014.
40 Ray Kurzweil, “The Law of Accelerating Returns.”
41 Daniela Hernandez, “Meet the Man Google Hired to Make AI a Reality,” Wired, January 2014, http://www.wired.com/2014/01/geoffrey-hinton-deep-learning/.
42 AI with Geordie Rose, 2014.
43 See http://1qbit.com.
44 John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” AI Magazine, August 31, 1955, 12–14.
45 Jim Lewis, “Robots of Arabia,” Wired, Issue 13.11 (November 2005).
46 Garry Mathiason et al., “The Transformation of the Workplace Through Robotics, Artificial Intelligence, and Automation,” The Littler Report, February 2014, http://documents.jdsupra.com/d4936b1e-ca6c-4ce9-9e83-07906bfca22c.pdf.
47 See http://www.rethinkrobotics.com.
48 All Dan Barry quotes in this section come from an AI conducted 2013.
49 The Cambrian explosion was an evolutionary event beginning about 542 million years ago, during which most of the major animal phyla appeared.
50 See “Amazon Prime Air,” Amazon.com, http://www.amazon.com/b?


pages: 385 words: 111,113

Augmented: Life in the Smart Lane by Brett King

23andMe, 3D printing, additive manufacturing, Affordable Care Act / Obamacare, agricultural Revolution, Airbnb, Albert Einstein, Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, Apollo 11, Apollo Guidance Computer, Apple II, artificial general intelligence, asset allocation, augmented reality, autonomous vehicles, barriers to entry, bitcoin, Bletchley Park, blockchain, Boston Dynamics, business intelligence, business process, call centre, chief data officer, Chris Urmson, Clayton Christensen, clean water, Computing Machinery and Intelligence, congestion charging, CRISPR, crowdsourcing, cryptocurrency, data science, deep learning, DeepMind, deskilling, different worldview, disruptive innovation, distributed generation, distributed ledger, double helix, drone strike, electricity market, Elon Musk, Erik Brynjolfsson, Fellow of the Royal Society, fiat currency, financial exclusion, Flash crash, Flynn Effect, Ford Model T, future of work, gamification, Geoffrey Hinton, gig economy, gigafactory, Google Glasses, Google X / Alphabet X, Hans Lippershey, high-speed rail, Hyperloop, income inequality, industrial robot, information asymmetry, Internet of things, invention of movable type, invention of the printing press, invention of the telephone, invention of the wheel, James Dyson, Jeff Bezos, job automation, job-hopping, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, Kim Stanley Robinson, Kiva Systems, Kodak vs Instagram, Leonard Kleinrock, lifelogging, low earth orbit, low skilled workers, Lyft, M-Pesa, Mark Zuckerberg, Marshall McLuhan, megacity, Metcalfe’s law, Minecraft, mobile money, money market fund, more computing power than Apollo, Neal Stephenson, Neil Armstrong, Network effects, new economy, Nick Bostrom, obamacare, Occupy movement, Oculus Rift, off grid, off-the-grid, packet switching, pattern recognition, peer-to-peer, Ray Kurzweil, retail therapy, RFID, ride hailing / ride sharing, Robert Metcalfe, Salesforce, Satoshi Nakamoto, Second Machine Age, selective serotonin reuptake inhibitor (SSRI), self-driving car, sharing economy, Shoshana Zuboff, Silicon Valley, Silicon Valley startup, Skype, smart cities, smart grid, smart transportation, Snapchat, Snow Crash, social graph, software as a service, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, synthetic biology, systems thinking, TaskRabbit, technological singularity, TED Talk, telemarketer, telepresence, telepresence robot, Tesla Model S, The future is already here, The Future of Employment, Tim Cook: Apple, trade route, Travis Kalanick, TSMC, Turing complete, Turing test, Twitter Arab Spring, uber lyft, undersea cable, urban sprawl, V2 rocket, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, white picket fence, WikiLeaks, yottabyte

For example, Uber could advertise its AI, self-driving cars as “The Safest Drivers in the World”, knowing that statistically an autonomous vehicle will be 20 times safer than a human out of the gate. Key to this future is the need for AIs to learn language, to learn to converse. In an interview with the Guardian newspaper in May 2015, Professor Geoff Hinton, an expert in artificial neural networks, said Google is “on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.” Google is currently working to encode thoughts as vectors described by a sequence of numbers. These “thought vectors” could endow AI systems with a human-like “common sense” within a decade, according to Hinton.

Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits...”
–– Professor Geoff Hinton, from an interview with the Guardian newspaper, 21st May 2015

These types of algorithms, which allow for leaps in cognitive understanding for machines, have only been possible with the application of massive data processing and computing power. Is the Turing Test or a machine that can mimic a human the required benchmark for human interactions with a computer?


pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma by Mustafa Suleyman

"World Economic Forum" Davos, 23andMe, 3D printing, active measures, Ada Lovelace, additive manufacturing, agricultural Revolution, AI winter, air gap, Airbnb, Alan Greenspan, algorithmic bias, Alignment Problem, AlphaGo, Alvin Toffler, Amazon Web Services, Anthropocene, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, ASML, autonomous vehicles, backpropagation, barriers to entry, basic income, benefit corporation, Big Tech, biodiversity loss, bioinformatics, Bletchley Park, Blitzscaling, Boston Dynamics, business process, business process outsourcing, call centre, Capital in the Twenty-First Century by Thomas Piketty, ChatGPT, choice architecture, circular economy, classic study, clean tech, cloud computing, commoditize, computer vision, coronavirus, corporate governance, correlation does not imply causation, COVID-19, creative destruction, CRISPR, critical race theory, crowdsourcing, cryptocurrency, cuban missile crisis, data science, decarbonisation, deep learning, deepfake, DeepMind, deindustrialization, dematerialisation, Demis Hassabis, disinformation, drone strike, drop ship, dual-use technology, Easter island, Edward Snowden, effective altruism, energy transition, epigenetics, Erik Brynjolfsson, Ernest Rutherford, Extinction Rebellion, facts on the ground, failed state, Fairchild Semiconductor, fear of failure, flying shuttle, Ford Model T, future of work, general purpose technology, Geoffrey Hinton, global pandemic, GPT-3, GPT-4, hallucination problem, hive mind, hype cycle, Intergovernmental Panel on Climate Change (IPCC), Internet Archive, Internet of things, invention of the wheel, job automation, John Maynard Keynes: technological unemployment, John von Neumann, Joi Ito, Joseph Schumpeter, Kickstarter, lab leak, large language model, Law of Accelerating Returns, Lewis Mumford, license plate recognition, lockdown, machine readable, Marc Andreessen, meta-analysis, microcredit, move 37, Mustafa Suleyman, mutually assured destruction, new economy, Nick Bostrom, Nikolai Kondratiev, off grid, OpenAI, paperclip maximiser, personalized medicine, Peter Thiel, planetary scale, plutocrats, precautionary principle, profit motive, prompt engineering, QAnon, quantum entanglement, ransomware, Ray Kurzweil, Recombinant DNA, Richard Feynman, Robert Gordon, Ronald Reagan, Sam Altman, Sand Hill Road, satellite internet, Silicon Valley, smart cities, South China Sea, space junk, SpaceX Starlink, stealth mode startup, stem cell, Stephen Fry, Steven Levy, strong AI, synthetic biology, tacit knowledge, tail risk, techlash, techno-determinism, technoutopianism, Ted Kaczynski, the long tail, The Rise and Fall of American Growth, Thomas Malthus, TikTok, TSMC, Turing test, Tyler Cowen, Tyler Cowen: Great Stagnation, universal basic income, uranium enrichment, warehouse robotics, William MacAskill, working-age population, world market for maybe five computers, zero day

Keep doing this, modifying the weights again and again, and you gradually improve the performance of the neural network so that eventually it’s able to go all the way from taking in single pixels to learning the existence of lines, edges, shapes, and then ultimately entire objects in scenes. This, in a nutshell, is deep learning. And this remarkable technique, long derided in the field, cracked computer vision and took the AI world by storm. AlexNet was built by the legendary researcher Geoffrey Hinton and two of his students, Alex Krizhevsky and Ilya Sutskever, at the University of Toronto. They entered the ImageNet Large Scale Visual Recognition Challenge, an annual competition designed by the Stanford professor Fei-Fei Li to focus the field’s efforts around a simple goal: identifying the primary object in an image.
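To see the lowest rung of that pixels-to-objects ladder, the Python sketch below convolves a tiny synthetic image with a hand-written vertical-edge filter. The image, the filter, and the sizes are invented for the example; AlexNet learned thousands of such filters from data instead of being given them.

    import numpy as np

    # A tiny 6x6 "image": dark left half, bright right half
    img = np.zeros((6, 6))
    img[:, 3:] = 1.0

    # A vertical-edge detector, the kind of filter early layers discover
    kernel = np.array([[-1.0, 1.0]])

    def convolve2d(image, k):
        kh, kw = k.shape
        out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * k)
        return out

    print(convolve2d(img, kernel))  # nonzero only where dark meets bright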

It helps fly drones, flags inappropriate content on Facebook, and diagnoses a growing list of medical conditions: at DeepMind, one system my team developed read eye scans as accurately as world-leading expert doctors. Following the AlexNet breakthrough, AI suddenly became a major priority in academia, government, and corporate life. Geoffrey Hinton and his colleagues were hired by Google. Major tech companies in both the United States and China put machine learning at the heart of their R&D efforts. Shortly after DQN, we sold DeepMind to Google, and the tech giant soon switched to a strategy of “AI first” across all its products. Industry research output and patents soared.

These unlikely avenues are the foundation for arguably the biggest biotech story of the twenty-first century. Likewise, fields can stall for decades but then change dramatically in months. Neural networks spent decades in the wilderness, trashed by luminaries like Marvin Minsky. Only a few isolated researchers like Geoffrey Hinton and Yann LeCun kept them going through a period when the word “neural” was so controversial that researchers would deliberately remove it from their papers. It seemed impossible in the 1990s, but neural networks came to dominate AI. And yet it was also LeCun who said AlphaGo was impossible just days before it made its first big breakthrough.


pages: 472 words: 117,093

Machine, Platform, Crowd: Harnessing Our Digital Future by Andrew McAfee, Erik Brynjolfsson

"World Economic Forum" Davos, 3D printing, additive manufacturing, AI winter, Airbnb, airline deregulation, airport security, Albert Einstein, algorithmic bias, AlphaGo, Amazon Mechanical Turk, Amazon Web Services, Andy Rubin, AOL-Time Warner, artificial general intelligence, asset light, augmented reality, autism spectrum disorder, autonomous vehicles, backpropagation, backtesting, barriers to entry, behavioural economics, bitcoin, blockchain, blood diamond, British Empire, business cycle, business process, carbon footprint, Cass Sunstein, centralized clearinghouse, Chris Urmson, cloud computing, cognitive bias, commoditize, complexity theory, computer age, creative destruction, CRISPR, crony capitalism, crowdsourcing, cryptocurrency, Daniel Kahneman / Amos Tversky, data science, Dean Kamen, deep learning, DeepMind, Demis Hassabis, discovery of DNA, disintermediation, disruptive innovation, distributed ledger, double helix, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Ethereum, ethereum blockchain, everywhere but in the productivity statistics, Evgeny Morozov, fake news, family office, fiat currency, financial innovation, general purpose technology, Geoffrey Hinton, George Akerlof, global supply chain, Great Leap Forward, Gregor Mendel, Hernando de Soto, hive mind, independent contractor, information asymmetry, Internet of things, inventory management, iterative process, Jean Tirole, Jeff Bezos, Jim Simons, jimmy wales, John Markoff, joint-stock company, Joseph Schumpeter, Kickstarter, Kiva Systems, law of one price, longitudinal study, low interest rates, Lyft, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, Marc Andreessen, Marc Benioff, Mark Zuckerberg, meta-analysis, Mitch Kapor, moral hazard, multi-sided market, Mustafa Suleyman, Myron Scholes, natural language processing, Network effects, new economy, Norbert Wiener, Oculus Rift, PageRank, pattern recognition, peer-to-peer lending, performance metric, plutocrats, precision agriculture, prediction markets, pre–internet, price stability, principal–agent problem, Project Xanadu, radical decentralization, Ray Kurzweil, Renaissance Technologies, Richard Stallman, ride hailing / ride sharing, risk tolerance, Robert Solow, Ronald Coase, Salesforce, Satoshi Nakamoto, Second Machine Age, self-driving car, sharing economy, Silicon Valley, Skype, slashdot, smart contracts, Snapchat, speech recognition, statistical model, Steve Ballmer, Steve Jobs, Steven Pinker, supply-chain management, synthetic biology, tacit knowledge, TaskRabbit, Ted Nelson, TED Talk, the Cathedral and the Bazaar, The Market for Lemons, The Nature of the Firm, the strength of weak ties, Thomas Davenport, Thomas L Friedman, too big to fail, transaction costs, transportation-network company, traveling salesman, Travis Kalanick, Two Sigma, two-sided market, Tyler Cowen, Uber and Lyft, Uber for X, uber lyft, ubercab, Vitalik Buterin, warehouse robotics, Watson beat the top human players on Jeopardy!, winner-take-all economy, yield management, zero day

They did this with a combination of sophisticated math, ever-more-powerful computer hardware, and a pragmatic approach that allowed them to take inspiration from how the brain works but not to be constrained by it. Electric signals flow in only one direction through the brain’s neurons, for example, but the successful machine learning systems built in the eighties by Paul Werbos, Geoff Hinton, Yann LeCun, and others allowed information to travel both forward and backward through the network. This “back-propagation” led to much better performance, but progress remained frustratingly slow. By the 1990s, a machine learning system developed by LeCun to recognize numbers was reading as many as 20% of all handwritten checks in the United States, but there were few other real-world applications.
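A minimal sketch of that forward-and-backward flow in plain NumPy: a two-layer network learns XOR by propagating the output error back to the hidden layer and nudging every weight downhill. The network size, learning rate, and task are toy choices, not anyone’s published configuration.

    import numpy as np

    rng = np.random.RandomState(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR

    W1, b1 = rng.randn(2, 4), np.zeros(4)
    W2, b2 = rng.randn(4, 1), np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(10000):
        # forward pass: signals flow input -> hidden -> output
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: the error signal flows output -> hidden
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())  # approaches [0, 1, 1, 0]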

Byrne, “Introduction to Neurons and Neuronal Networks,” Neuroscience Online, accessed January 26, 2017, http://neuroscience.uth.tmc.edu/s1/introduction.html.
73 “the embryo of an electronic computer”: Mikel Olazaran, “A Sociological Study of the Official History of the Perceptrons Controversy,” Social Studies of Science 26 (1996): 611–59, http://journals.sagepub.com/doi/pdf/10.1177/030631296026003005.
74 Paul Werbos: Jürgen Schmidhuber, “Who Invented Backpropagation?” last modified 2015, http://people.idsia.ch/~juergen/who-invented-backpropagation.html.
74 Geoff Hinton: David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, “Learning Representations by Back-propagating Errors,” Nature 323 (1986): 533–36, http://www.nature.com/nature/journal/v323/n6088/abs/323533a0.html.
74 Yann LeCun: Jürgen Schmidhuber, Deep Learning in Neural Networks: An Overview, Technical Report IDSIA-03-14, October 8, 2014, https://arxiv.org/pdf/1404.7828v4.pdf.
74 as many as 20% of all handwritten checks: Yann LeCun, “Biographical Sketch,” accessed January 26, 2017, http://yann.lecun.com/ex/bio.html.
74 “a new approach to computer Go”: David Silver et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search,” Nature 529 (2016): 484–89, http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html.
75 approximately $13,000 by the fall of 2016: Elliott Turner, Twitter post, September 30, 2016 (9:18 a.m.), https://twitter.com/eturner303/status/781900528733261824.
75 “the teams at the leading edge”: Andrew Ng, interview by the authors, August 2015.
76 “Retrospectively, [success with machine learning]”: Paul Voosen, “The Believers,” Chronicle of Higher Education, February 23, 2015, http://www.chronicle.com/article/The-Believers/190147.
76 His 2006 paper: G.


pages: 193 words: 51,445

On the Future: Prospects for Humanity by Martin J. Rees

23andMe, 3D printing, air freight, Alfred Russel Wallace, AlphaGo, Anthropocene, Asilomar, autonomous vehicles, Benoit Mandelbrot, biodiversity loss, blockchain, Boston Dynamics, carbon tax, circular economy, CRISPR, cryptocurrency, cuban missile crisis, dark matter, decarbonisation, DeepMind, Demis Hassabis, demographic transition, Dennis Tito, distributed ledger, double helix, driverless car, effective altruism, Elon Musk, en.wikipedia.org, Geoffrey Hinton, global village, Great Leap Forward, Higgs boson, Hyperloop, Intergovernmental Panel on Climate Change (IPCC), Internet of things, James Webb Space Telescope, Jeff Bezos, job automation, Johannes Kepler, John Conway, Large Hadron Collider, life extension, mandelbrot fractal, mass immigration, megacity, Neil Armstrong, Nick Bostrom, nuclear winter, ocean acidification, off-the-grid, pattern recognition, precautionary principle, quantitative hedge fund, Ray Kurzweil, Recombinant DNA, Rodney Brooks, Search for Extraterrestrial Intelligence, sharing economy, Silicon Valley, smart grid, speech recognition, Stanford marshmallow experiment, Stanislav Petrov, stem cell, Stephen Hawking, Steven Pinker, Stuxnet, supervolcano, technological singularity, the scientific method, Tunguska event, uranium enrichment, Walter Mischel, William MacAskill, Yogi Berra

Successive layers of processing identify horizontal and vertical lines, sharp edges, and so forth; each layer processes information from a ‘lower’ layer and then passes its output to other layers. The basic machine-learning concepts date from the 1980s; an important pioneer was the Anglo-Canadian Geoff Hinton. But the applications only really ‘took off’ two decades later, when the steady operation of Moore’s law—a doubling of computer speeds every two years—led to machines with a thousand times faster processing speed. Computers use ‘brute force’ methods. They learn to translate by reading millions of pages of (for example) multilingual European Union documents (they never get bored!).


pages: 499 words: 144,278

Coders: The Making of a New Tribe and the Remaking of the World by Clive Thompson

"Margaret Hamilton" Apollo, "Susan Fowler" uber, 2013 Report for America's Infrastructure - American Society of Civil Engineers - 19 March 2013, 4chan, 8-hour work day, Aaron Swartz, Ada Lovelace, AI winter, air gap, Airbnb, algorithmic bias, AlphaGo, Amazon Web Services, Andy Rubin, Asperger Syndrome, augmented reality, Ayatollah Khomeini, backpropagation, barriers to entry, basic income, behavioural economics, Bernie Sanders, Big Tech, bitcoin, Bletchley Park, blockchain, blue-collar work, Brewster Kahle, Brian Krebs, Broken windows theory, call centre, Cambridge Analytica, cellular automata, Charles Babbage, Chelsea Manning, Citizen Lab, clean water, cloud computing, cognitive dissonance, computer vision, Conway's Game of Life, crisis actor, crowdsourcing, cryptocurrency, Danny Hillis, data science, David Heinemeier Hansson, deep learning, DeepMind, Demis Hassabis, disinformation, don't be evil, don't repeat yourself, Donald Trump, driverless car, dumpster diving, Edward Snowden, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, Ethereum, ethereum blockchain, fake news, false flag, Firefox, Frederick Winslow Taylor, Free Software Foundation, Gabriella Coleman, game design, Geoffrey Hinton, glass ceiling, Golden Gate Park, Google Hangouts, Google X / Alphabet X, Grace Hopper, growth hacking, Guido van Rossum, Hacker Ethic, hockey-stick growth, HyperCard, Ian Bogost, illegal immigration, ImageNet competition, information security, Internet Archive, Internet of things, Jane Jacobs, John Markoff, Jony Ive, Julian Assange, Ken Thompson, Kickstarter, Larry Wall, lone genius, Lyft, Marc Andreessen, Mark Shuttleworth, Mark Zuckerberg, Max Levchin, Menlo Park, meritocracy, microdosing, microservices, Minecraft, move 37, move fast and break things, Nate Silver, Network effects, neurotypical, Nicholas Carr, Nick Bostrom, no silver bullet, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, Oculus Rift, off-the-grid, OpenAI, operational security, opioid epidemic / opioid crisis, PageRank, PalmPilot, paperclip maximiser, pattern recognition, Paul Graham, paypal mafia, Peter Thiel, pink-collar, planetary scale, profit motive, ransomware, recommendation engine, Richard Stallman, ride hailing / ride sharing, Rubik’s Cube, Ruby on Rails, Sam Altman, Satoshi Nakamoto, Saturday Night Live, scientific management, self-driving car, side project, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, single-payer health, Skype, smart contracts, Snapchat, social software, software is eating the world, sorting algorithm, South of Market, San Francisco, speech recognition, Steve Wozniak, Steven Levy, systems thinking, TaskRabbit, tech worker, techlash, TED Talk, the High Line, Travis Kalanick, Uber and Lyft, Uber for X, uber lyft, universal basic income, urban planning, Wall-E, Watson beat the top human players on Jeopardy!, WeWork, WikiLeaks, women in the workforce, Y Combinator, Zimmermann PGP, éminence grise

Even better, computers in the ’00s were running faster and faster, at cheaper and cheaper prices. You could now create neural nets with many layers, or even dozens: “deep learning,” as it’s called, because of how many layers are stacked up. By 2012, the field had a seismic breakthrough. Up at the University of Toronto, the British computer scientist Geoff Hinton had been beavering away for two decades on improving neural networks. That year he and a team of students showed off the most impressive neural net yet—by soundly beating competitors at an annual AI shootout. The ImageNet challenge, as it’s known, is an annual competition among AI researchers to see whose system is best at recognizing images.

(One of the more talented AI coders I know spends his time training neural nets on TV and movie scripts, autogenerating new scripts, then shooting the best ones.) All coders adore the “Hello, World!” moment, but with AI, the romance is decidedly Promethean. Matt Zeiler was a young engineering science student at the University of Toronto when one of Geoff Hinton’s students showed him a video of a flickering candle flame and told him it had been automatically generated by a neural net. “I was like, ‘Holy crap!’” Zeiler told me. The flame was so freakily lifelike that he took Hinton’s course and did his undergraduate thesis with Hinton, intent on absorbing deep learning.


pages: 665 words: 159,350

Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else by Jordan Ellenberg

Albert Einstein, AlphaGo, Andrew Wiles, autonomous vehicles, British Empire, Brownian motion, Charles Babbage, Claude Shannon: information theory, computer age, coronavirus, COVID-19, deep learning, DeepMind, Donald Knuth, Donald Trump, double entry bookkeeping, East Village, Edmond Halley, Edward Jenner, Elliott wave, Erdős number, facts on the ground, Fellow of the Royal Society, Geoffrey Hinton, germ theory of disease, global pandemic, government statistician, GPT-3, greed is good, Henri Poincaré, index card, index fund, Isaac Newton, Johannes Kepler, John Conway, John Nash: game theory, John Snow's cholera map, Louis Bachelier, machine translation, Mercator projection, Mercator projection distort size, especially Greenland and Africa, Milgram experiment, multi-armed bandit, Nate Silver, OpenAI, Paul Erdős, pets.com, pez dispenser, probability theory / Blaise Pascal / Pierre de Fermat, Ralph Nelson Elliott, random walk, Rubik’s Cube, self-driving car, side hustle, Snapchat, social distancing, social graph, transcontinental railway, urban renewal

And if you change the weights on the lines—that is, if you turn the fourteen knobs—you change the strategy. The picture gives you a fourteen-dimensional landscape you can explore, looking for a strategy that fits best whatever data you already have. If you’re finding it hard to imagine what a fourteen-dimensional landscape looks like, I recommend following the advice of Geoffrey Hinton, one of the founders of the modern theory of neural nets: “Visualize a 3-space and say ‘fourteen’ to yourself very loudly. Everyone does it.” Hinton comes from a lineage of high-dimension enthusiasts: his great-grandfather Charles wrote an entire book in 1904 about how to visualize four-dimensional cubes, and invented the word “tesseract” to describe them.* If you’ve seen Dalí’s painting Crucifixion (Corpus Hypercubus), that’s one of Hinton’s visualizations.
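In code, “exploring a fourteen-dimensional landscape” is just gradient descent on fourteen numbers. The sketch below, with invented data and a squared-error objective, turns the fourteen knobs downhill until they fit:

    import numpy as np

    rng = np.random.RandomState(1)
    X = rng.randn(200, 14)               # 200 observations, 14 knobs' worth of inputs
    true_w = rng.randn(14)               # the "right" knob settings
    y = X @ true_w + 0.1 * rng.randn(200)

    w = np.zeros(14)                     # start somewhere in the 14-D landscape
    for step in range(500):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # slope of the squared error
        w -= 0.1 * grad                         # take a small step downhill
    print(np.round(w - true_w, 2))              # near zero: knobs recovered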

by Frank Rosenblatt: Frank Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain,” Psychological Review 65, no. 6 (1958): 386. Rosenblatt’s perceptron was a generalization of a less refined mathematical model of neural processing developed in the 1940s by Warren McCulloch and Walter Pitts.
“Visualize a 3-space”: Lecture 2c of Geoffrey Hinton’s notes for “Neural Networks for Machine Learning.” Available at www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec2.pdf.
his great-grandfather: For the familial relation between the two Hintons, see K. Onstad, “Mr. Robot,” Toronto Life, Jan. 28, 2018.
Chapter 8: You Are Your Own Negative-First Cousin, and Other Maps
The geometry of chords: Dmitri Tymoczko, A Geometry of Music (New York: Oxford University Press, 2010).

* The sequence that counts the number of paraffins with more and more carbon atoms is, of course, recorded in the On-Line Encyclopedia of Integer Sequences: it is sequence A000602.
* It’s linear algebra that provides us the theory of “vectors” that’s so central to machine learning, and which gave Geoffrey Hinton the wherewithal to describe fourteen-dimensional space as just like a three-dimensional space to which you loudly say “fourteen!” every so often.
* Daniel Brown, in his extremely interesting book The Poetry of Victorian Scientists, argues that this poem can be read as addressing Sylvester’s exclusion from the university system on account of his faith, casting Sylvester himself as the “missing member.”


pages: 392 words: 108,745

Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think by James Vlahos

Albert Einstein, AltaVista, Amazon Mechanical Turk, Amazon Web Services, augmented reality, Automated Insights, autonomous vehicles, backpropagation, Big Tech, Cambridge Analytica, Chuck Templeton: OpenTable:, cloud computing, Colossal Cave Adventure, computer age, deep learning, DeepMind, Donald Trump, Elon Musk, fake news, Geoffrey Hinton, information retrieval, Internet of things, Jacques de Vaucanson, Jeff Bezos, lateral thinking, Loebner Prize, machine readable, machine translation, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, Mark Zuckerberg, Menlo Park, natural language processing, Neal Stephenson, Neil Armstrong, OpenAI, PageRank, pattern recognition, Ponzi scheme, randomized controlled trial, Ray Kurzweil, Ronald Reagan, Rubik’s Cube, self-driving car, sentiment analysis, Silicon Valley, Skype, Snapchat, speech recognition, statistical model, Steve Jobs, Steve Wozniak, Steven Levy, TechCrunch disrupt, Turing test, Watson beat the top human players on Jeopardy!

., “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, November 1998, 1, https://goo.gl/NtNKJB.
92 Toward the end of the 1990s: email from Geoffrey Hinton to author, July 28, 2018.
92 “Smart scientists,” he said: Bergen and Wagner, “Welcome to the AI Conspiracy.”
92 What’s more, they needed more layers: Yoshua Bengio, email to author, August 3, 2018.
92 In 2006 a groundbreaking pair of papers: Geoffrey Hinton and R. R. Salakhutdinov, “Reducing the Dimensionality of Data with Neural Networks,” Science 313 (July 28, 2006): 504–07, https://goo.gl/Ki41L8; and Yoshua Bengio et al., “Greedy Layer-Wise Training of Deep Networks,” Proceedings of the 19th International Conference on Neural Information Processing Systems (2006): 153–60, https://goo.gl/P5ZcV7.
93 Then, in 2012, a team of computer scientists from Stanford and Google Brain: Quoc Le et al., “Building High-level Features Using Large Scale Unsupervised Learning,” Proceedings of the 29th International Conference on Machine Learning, 2012, https://goo.gl/Vc1GeS.
93 The next breakthrough came in 2012: Alex Krizhevsky et al., “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems 25 (2012): 1097–105, https://goo.gl/x9IIwr.
94 In 2018 Google announced that one of its researchers: Kaz Sato, “Noodle on this: Machine learning that can identify ramen by shop,” Google blog, April 2, 2018, https://goo.gl/YnCujn.
94 “They said, ‘Okay, now we buy it’”: Tom Simonite, “Teaching Machines to Understand Us,” MIT Technology Review, August 6, 2015, https://goo.gl/nPkpll.
94 But with the efficacy of the technique: Among the many sources consulted for the science of speech recognition and language understanding, some of the most helpful were: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Noida, India: Pearson Education, 2015); Lane Greene, “Finding a Voice,” The Economist, May 2017, https://goo.gl/hss3oL; and Hongshen Chen et al., “A Survey on Dialogue Systems: Recent Advances and New Frontiers,” ACM SIGKDD Explorations Newsletter 19, no. 2 (December 2017), https://goo.gl/GVQUKc.
95 To pinpoint those, an iPhone: “Hey Siri: An On-device DNN-powered Voice Trigger for Apple’s Personal Assistant,” Apple blog, October 2017, https://goo.gl/gWKjQN.
97 But in 2016 IBM and Microsoft independently announced: Allison Linn, “Historic Achievement: Microsoft researchers reach human parity in conversational speech recognition,” Microsoft blog, October 18, 2016, https://goo.gl/4Vz3YF.
98 Apple has patented a technique: “Digital Assistant Providing Whispered Speech,” United States Patent Application by Apple, December 14, 2017, https://goo.gl/3QRddB.
98 In 2016 researchers at Google and Oxford University: Yannis Assael et al., “LipNet: End-to-End Sentence-level Lipreading,” conference paper submitted for ICLR 2017 (December 2016), https://goo.gl/Bhoz7N.
101 Neural networks need much more compact word embeddings: Tomas Mikolov et al., “Efficient Estimation of Word Representations in Vector Space,” proceedings of workshop at ICLR, September 7, 2013, https://goo.gl/gHURjZ.
102 “Deep learning,” he says: Steve Young, interview with author, September 19, 2017.
104 The method, which is known as sequence-to-sequence: Ilya Sutskever et al., “Sequence to Sequence Learning with Neural Networks,” Advances in Neural Information Processing Systems 27 (December 14, 2014), https://goo.gl/U3KtxJ.
105 When Vinyals and Le published the results: Oriol Vinyals and Quoc Le, “A Neural Conversational Model,” Proceedings of the 31st International Conference on Machine Learning 37 (2015): https://goo.gl/sZjDy1.
106 “can home in on the part of the incoming email”: Greg Corrado, “Computer, respond to this email,” Google AI blog, November 3, 2015, https://goo.gl/YHMvnA.
108 “This organic writer, for one, could hardly tell one from the other”: Siddhartha Mukherjee, “The Future of Humans?


pages: 281 words: 71,242

World Without Mind: The Existential Threat of Big Tech by Franklin Foer

artificial general intelligence, back-to-the-land, Berlin Wall, big data - Walmart - Pop Tarts, Big Tech, big-box store, Buckminster Fuller, citizen journalism, Colonization of Mars, computer age, creative destruction, crowdsourcing, data is the new oil, data science, deep learning, DeepMind, don't be evil, Donald Trump, Double Irish / Dutch Sandwich, Douglas Engelbart, driverless car, Edward Snowden, Electric Kool-Aid Acid Test, Elon Musk, Evgeny Morozov, Fall of the Berlin Wall, Filter Bubble, Geoffrey Hinton, global village, Google Glasses, Haight Ashbury, hive mind, income inequality, intangible asset, Jeff Bezos, job automation, John Markoff, Kevin Kelly, knowledge economy, Law of Accelerating Returns, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, means of production, move fast and break things, new economy, New Journalism, Norbert Wiener, off-the-grid, offshore financial centre, PageRank, Peace of Westphalia, Peter Thiel, planetary scale, Ray Kurzweil, scientific management, self-driving car, Silicon Valley, Singularitarianism, software is eating the world, Steve Jobs, Steven Levy, Stewart Brand, strong AI, supply-chain management, TED Talk, the medium is the message, the scientific method, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, Thomas L Friedman, Thorstein Veblen, Upton Sinclair, Vernor Vinge, vertical integration, We are as Gods, Whole Earth Catalog, yellow journalism

Google has spearheaded the revival of a concept first explored in the sixties, one that has failed until recently: neural networks, which involve computing modeled on the workings of the human brain. Algorithms replicate the brain’s information processing and its methods for learning. Google has hired the British-born professor Geoff Hinton, who has made the greatest progress in this direction. It also acquired a London-based company called DeepMind, which created neural networks that taught themselves, without human instruction, to play video games. Because DeepMind feared the dangers of a single company possessing such powerful algorithms, it insisted that Google never permit its work to be militarized or sold to intelligence services.


pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders by Mariya Yao, Adelyn Zhou, Marlene Jia

Airbnb, algorithmic bias, AlphaGo, Amazon Web Services, artificial general intelligence, autonomous vehicles, backpropagation, business intelligence, business process, call centre, chief data officer, cognitive load, computer vision, conceptual framework, data science, deep learning, DeepMind, en.wikipedia.org, fake news, future of work, Geoffrey Hinton, industrial robot, information security, Internet of things, iterative process, Jeff Bezos, job automation, machine translation, Marc Andreessen, natural language processing, new economy, OpenAI, pattern recognition, performance metric, price discrimination, randomized controlled trial, recommendation engine, robotic process automation, Salesforce, self-driving car, sentiment analysis, Silicon Valley, single source of truth, skunkworks, software is eating the world, source of truth, sparse data, speech recognition, statistical model, strong AI, subscription business, technological singularity, The future is already here

This is a good way to get international talent to work on your problem and will also build your reputation as a company that supports AI. As with any industry, like attracts like. Dominant tech companies build strong AI departments by hiring superstar leaders. Google and Facebook attracted university professors and AI research pioneers such as Geoffrey Hinton, Fei-Fei Li, and Yann LeCun with plum appointments and endless resources. These professors either take a sabbatical from their universities or split their time between academia and industry. Effective Alternatives to Hiring Despite your best efforts, hiring new AI talent may prove to be slow or impossible.


pages: 263 words: 81,527

The Mind Is Flat: The Illusion of Mental Depth and the Improvised Mind by Nick Chater

Albert Einstein, battle of ideas, behavioural economics, classic study, computer vision, Daniel Kahneman / Amos Tversky, deep learning, double helix, Geoffrey Hinton, Henri Poincaré, Jacquard loom, lateral thinking, loose coupling, machine translation, speech recognition, tacit knowledge

This requires a systematic rethink of large parts of psychology, neuroscience and the social sciences, but it also requires a radical shake-up of how each of us thinks about ourselves and those around us. I have had a lot of help writing this book. My thinking has been shaped by decades of conversations with Mike Oaksford and Morten Christiansen, and discussions over the years with John Anderson, Gordon Brown, Ulrike Hahn, Geoff Hinton, Richard Holton, George Loewenstein, Jay McClelland, Adam Sanborn, Jerry Seligman, Neil Stewart, Josh Tenenbaum and James Tresilian, and so many other wonderful friends and colleagues. Writing this book has been supported by generous financial support through grants from the ERC (grant 295917-RATIONALITY), the ESRC Network for Integrated Behavioural Science (grant number ES/K002201/1) and the Leverhulme Trust (grant number RP2012-V-022).


pages: 180 words: 55,805

The Price of Tomorrow: Why Deflation Is the Key to an Abundant Future by Jeff Booth

3D printing, Abraham Maslow, activist fund / activist shareholder / activist investor, additive manufacturing, AI winter, Airbnb, Albert Einstein, AlphaGo, Amazon Web Services, artificial general intelligence, augmented reality, autonomous vehicles, basic income, bitcoin, blockchain, Bretton Woods, business intelligence, butterfly effect, Charles Babbage, Claude Shannon: information theory, clean water, cloud computing, cognitive bias, collapse of Lehman Brothers, Computing Machinery and Intelligence, corporate raider, creative destruction, crony capitalism, crowdsourcing, cryptocurrency, currency manipulation / currency intervention, dark matter, deep learning, DeepMind, deliberate practice, digital twin, distributed ledger, Donald Trump, Elon Musk, fiat currency, Filter Bubble, financial engineering, full employment, future of work, game design, gamification, general purpose technology, Geoffrey Hinton, Gordon Gekko, Great Leap Forward, Hyman Minsky, hype cycle, income inequality, inflation targeting, information asymmetry, invention of movable type, Isaac Newton, Jeff Bezos, John Maynard Keynes: Economic Possibilities for our Grandchildren, John von Neumann, Joseph Schumpeter, late fees, low interest rates, Lyft, Maslow's hierarchy, Milgram experiment, Minsky moment, Modern Monetary Theory, moral hazard, Nelson Mandela, Network effects, Nick Bostrom, oil shock, OpenAI, pattern recognition, Ponzi scheme, quantitative easing, race to the bottom, ride hailing / ride sharing, self-driving car, software as a service, technoutopianism, TED Talk, the long tail, the scientific method, Thomas Bayes, Turing test, Uber and Lyft, uber lyft, universal basic income, winner-take-all economy, X Prize, zero-sum game

No longer constrained by human knowledge, the system needed only three days of playing against itself to best the previous AlphaGo versions developed by top researchers, and it continued to improve from there. It mastered the masters, then mastered itself, and kept on going. How does this relate to our own intelligence? Geoffrey Hinton has long been trying to understand how our brains work. Hinton, the “godfather of deep learning,” is a cognitive psychologist and computer scientist who moved to Canada because of its continued research funding through the second AI winter in the early 1990s. He currently divides his time between his work at Google and his professorship at the University of Toronto.


pages: 208 words: 57,602

Futureproof: 9 Rules for Humans in the Age of Automation by Kevin Roose

"World Economic Forum" Davos, adjacent possible, Airbnb, Albert Einstein, algorithmic bias, algorithmic management, Alvin Toffler, Amazon Web Services, Atul Gawande, augmented reality, automated trading system, basic income, Bayesian statistics, Big Tech, big-box store, Black Lives Matter, business process, call centre, choice architecture, coronavirus, COVID-19, data science, deep learning, deepfake, DeepMind, disinformation, Elon Musk, Erik Brynjolfsson, factory automation, fake news, fault tolerance, Frederick Winslow Taylor, Freestyle chess, future of work, Future Shock, Geoffrey Hinton, George Floyd, gig economy, Google Hangouts, GPT-3, hiring and firing, hustle culture, hype cycle, income inequality, industrial robot, Jeff Bezos, job automation, John Markoff, Kevin Roose, knowledge worker, Kodak vs Instagram, labor-force participation, lockdown, Lyft, mandatory minimum, Marc Andreessen, Mark Zuckerberg, meta-analysis, Narrative Science, new economy, Norbert Wiener, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, off-the-grid, OpenAI, pattern recognition, planetary scale, plutocrats, Productivity paradox, QAnon, recommendation engine, remote working, risk tolerance, robotic process automation, scientific management, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, Silicon Valley startup, social distancing, Steve Jobs, Stuart Kauffman, surveillance capitalism, tech worker, The Future of Employment, The Wealth of Nations by Adam Smith, TikTok, Travis Kalanick, Uber and Lyft, uber lyft, universal basic income, warehouse robotics, Watson beat the top human players on Jeopardy!, work culture

They are either unaware of or unconcerned with the ground-level consequences of their work, and although they might pledge to care about the responsible use of AI, they’re not doing anything to slow down or consider how the tools they build could enable harm. Trust me, I would love to be an AI optimist again. But right now, humans are getting in the way.

Two: The Myth of the Robot-Proof Job

“We humans are neural nets. What we can do, machines can do.” —Geoffrey Hinton, computer scientist and AI pioneer

A few years ago, I got invited to dinner with a big group of executives. It was an unusually fancy spread—expensive Champagne, foie gras, beef tenderloin—and as our entrées arrived, the conversation turned, as it often does in these circles, to AI and automation.


pages: 336 words: 93,672

The Future of the Brain: Essays by the World's Leading Neuroscientists by Gary Marcus, Jeremy Freeman

23andMe, Albert Einstein, backpropagation, bioinformatics, bitcoin, brain emulation, cloud computing, complexity theory, computer age, computer vision, conceptual framework, correlation does not imply causation, crowdsourcing, dark matter, data acquisition, data science, deep learning, Drosophila, epigenetics, Geoffrey Hinton, global pandemic, Google Glasses, ITER tokamak, iterative process, language acquisition, linked data, mouse model, optical character recognition, pattern recognition, personalized medicine, phenotype, race to the bottom, Richard Feynman, Ronald Reagan, semantic web, speech recognition, stem cell, Steven Pinker, supply-chain management, synthetic biology, tacit knowledge, traumatic brain injury, Turing machine, twin studies, web application

But as Steven Pinker and I showed, the details were rarely correct empirically; more than that, nobody was ever able to turn a neural network into a functioning system for understanding language. Today neural networks have finally found a valuable home—in machine learning, especially in speech recognition and image classification, due in part to innovative work by researchers such as Geoff Hinton and Yann LeCun. But the utility of neural networks as models of mind and brain remains marginal, useful, perhaps, in aspects of low-level perception but of limited utility in explaining more complex, higher-level cognition. Why is the scope of neural networks so limited if the brain itself is so obviously a neural network?


pages: 764 words: 261,694

The Elements of Statistical Learning (Springer Series in Statistics) by Trevor Hastie, Robert Tibshirani, Jerome Friedman

algorithmic bias, backpropagation, Bayesian statistics, bioinformatics, computer age, conceptual framework, correlation coefficient, data science, G4S, Geoffrey Hinton, greed is good, higher-order functions, linear programming, p-value, pattern recognition, random walk, selection bias, sparse data, speech recognition, statistical model, stochastic process, The Wisdom of Crowds

Just as we have learned a great deal from researchers outside of the field of statistics, our statistical viewpoint may help others to better understand different aspects of learning: There is no true interpretation of anything; interpretation is a vehicle in the service of human comprehension. The value of interpretation is in enabling others to fruitfully think about an idea. –Andreas Buja We would like to acknowledge the contribution of many people to the conception and completion of this book. David Andrews, Leo Breiman, Andreas Buja, John Chambers, Bradley Efron, Geoffrey Hinton, Werner Stuetzle, and John Tukey have greatly influenced our careers. Balasubramanian Narasimhan gave us advice and help on many computational problems, and maintained an excellent computing environment. Shin-Ho Bang helped in the production of a number of the figures. Lee Wilkinson gave valuable tips on color production.

Compute the leading principal component and factor analysis directions. Hence show that the leading principal component aligns itself in the maximal variance direction X3, while the leading factor essentially ignores the uncorrelated component X3, and picks up the correlated component X2 + X1 (Geoffrey Hinton, personal communication).

Ex. 14.16 Consider the kernel principal component procedure outlined in Section 14.5.4. Argue that the number M of principal components is equal to the rank of K, which is the number of non-zero elements in D. Show that the mth component z_m (the mth column of Z) can be written (up to centering) as $z_{im} = \sum_{j=1}^{N} \alpha_{jm} K(x_i, x_j)$, where $\alpha_{jm} = u_{jm}/d_m$.
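For readers who want to check that identity numerically, here is a small NumPy sketch (the RBF kernel, the data, and the sizes are arbitrary choices made for the example). It centers K, eigendecomposes it as U D² Uᵀ, and verifies that K_c α_m reproduces the mth component:

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(30, 3)

    # radial-basis kernel matrix, then double-centering
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq)
    N = len(X)
    J = np.eye(N) - np.ones((N, N)) / N
    Kc = J @ K @ J

    # eigendecomposition Kc = U D^2 U^T; components Z = U D
    d2, U = np.linalg.eigh(Kc)
    d2, U = d2[::-1], U[:, ::-1]            # descending order
    d = np.sqrt(np.clip(d2, 0, None))
    Z = U * d                                # z_{im} = u_{im} d_m

    m = 0
    alpha = U[:, m] / d[m]                   # alpha_{jm} = u_{jm} / d_m
    print(np.allclose(Kc @ alpha, Z[:, m]))  # True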

The restricted form of this model simplifies the Gibbs sampling for estimating the expectations in (17.37), since the variables in each layer are independent of one another, given the variables in the other layers. Hence they can be sampled together, using the conditional probabilities given by expression (17.30). The resulting model is less general than a Boltzmann machine, but is still useful; for example, it can learn to extract interesting features from images. (We thank Geoffrey Hinton for assistance in the preparation of the material on RBMs.) By alternately sampling the variables in each layer of the RBM shown in Figure 17.6, it is possible to generate samples from the joint density model. If the V1 part of the visible layer is clamped at a particular feature vector during the alternating sampling, it is possible to sample from the distribution over labels given V1.
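A minimal NumPy sketch of that alternating (block) Gibbs sampling, with invented sizes and random weights standing in for a trained model: because each layer is conditionally independent given the other, all the units in a layer can be sampled in one shot.

    import numpy as np

    rng = np.random.RandomState(0)
    nv, nh = 6, 4                        # visible and hidden units
    W = 0.1 * rng.randn(nv, nh)          # weights (biases omitted for brevity)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    v = rng.randint(0, 2, nv).astype(float)   # random starting visible vector
    for sweep in range(1000):
        # hidden units are independent given v: sample the whole layer at once
        h = (sigmoid(v @ W) > rng.rand(nh)).astype(float)
        # visible units are independent given h: likewise
        v = (sigmoid(W @ h) > rng.rand(nv)).astype(float)
    print(v)   # an (approximate) sample from the joint density model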


pages: 913 words: 265,787

How the Mind Works by Steven Pinker

affirmative action, agricultural Revolution, Alfred Russel Wallace, Apple Newton, backpropagation, Buckminster Fuller, cognitive dissonance, Columbine, combinatorial explosion, complexity theory, computer age, computer vision, Computing Machinery and Intelligence, Daniel Kahneman / Amos Tversky, delayed gratification, disinformation, double helix, Dr. Strangelove, experimental subject, feminist movement, four colour theorem, Geoffrey Hinton, Gordon Gekko, Great Leap Forward, greed is good, Gregor Mendel, hedonic treadmill, Henri Poincaré, Herman Kahn, income per capita, information retrieval, invention of agriculture, invention of the wheel, Johannes Kepler, John von Neumann, lake wobegon effect, language acquisition, lateral thinking, Linda problem, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, Mikhail Gorbachev, Murray Gell-Mann, mutually assured destruction, Necker cube, out of africa, Parents Music Resource Center, pattern recognition, phenotype, Plato's cave, plutocrats, random walk, Richard Feynman, Ronald Reagan, Rubik’s Cube, Saturday Night Live, scientific worldview, Search for Extraterrestrial Intelligence, sexual politics, social intelligence, Steven Pinker, Stuart Kauffman, tacit knowledge, theory of mind, Thorstein Veblen, Tipper Gore, Turing machine, urban decay, Yogi Berra

But that poses a problem the perceptron did not have to worry about: how to adjust the connections from the input units to the hidden units. It is problematic because the teacher, unless it is a mind reader, has no way of knowing the “correct” states for the hidden units, which are sealed inside the network. The psychologists David Rumelhart, Geoffrey Hinton, and Ronald Williams hit on a clever solution. The output units propagate back to each hidden unit a signal that represents the sum of the hidden unit’s errors across all the output units it connects to (“you’re sending too much activation,” or “you’re sending too little activation,” and by what amount).
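In the standard textbook notation (not Pinker's own), the error signal that arrives at hidden unit j is

    \delta_j = f'(\mathrm{net}_j) \sum_k w_{jk} \, \delta_k

where the sum runs over the output units k that j feeds, w_{jk} is the strength of each connection, and f'(net_j) measures how sensitive the hidden unit's activation is to its input; the weights into j are then adjusted in proportion to δ_j.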

The mind needs a representation for the proposition itself. In this example, the model needs an extra layer of units—most straightforwardly, a layer dedicated to representing the entire proposition, separately from the concepts and their roles. The bottom of page 121 shows, in simplified form, a model devised by Geoffrey Hinton that does handle the sentences. The bank of “proposition” units light up in arbitrary patterns, a bit like serial numbers, that label complete thoughts. It acts as a superstructure keeping the concepts in each proposition in their proper slots. Note how closely the architecture of the network implements standard, language-like mentalese!

Simulated evolution gives the networks a big head start in their learning careers. So evolution can guide learning in neural networks. Surprisingly, learning can guide evolution as well. Remember Darwin’s discussion of “the incipient stages of useful structures”—the what-good-is-half-an-eye problem. The neural-network theorists Geoffrey Hinton and Steven Nowlan invented a fiendish example. Imagine an animal controlled by a neural network with twenty connections, each either excitatory (on) or neutral (off). But the network is utterly useless unless all twenty connections are correctly set. Not only is it no good to have half a network; it is no good to have ninety-five percent of one.
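A compressed re-creation of the Hinton and Nowlan setup is sketched below in Python. The population size, trial budget, and selection scheme are simplifications, not a faithful reproduction of the 1987 simulation; what it preserves is the key mechanism, namely that learnable “?” alleles turn an all-or-nothing fitness spike into a slope that selection can climb.

    import random

    GENES, TRIALS, POP = 20, 1000, 100

    def fitness(genome):
        """Fixed genes must be correct (1); '?' genes (None) are guessed
        during the creature's lifetime, and guessing right early pays more."""
        if 0 in genome:
            return 1.0                   # a hard-wired wrong connection: useless
        unknowns = genome.count(None)
        for t in range(TRIALS):          # lifetime learning = random guessing
            if all(random.random() < 0.5 for _ in range(unknowns)):
                return 1.0 + 19.0 * (TRIALS - t) / TRIALS
        return 1.0

    def random_genome():
        # alleles: 1 (correct, fixed), 0 (wrong, fixed), None (learnable '?')
        return [random.choice([1, 0, None, None]) for _ in range(GENES)]

    pop = [random_genome() for _ in range(POP)]
    for gen in range(20):
        scores = [fitness(g) for g in pop]
        parents = random.choices(pop, weights=scores, k=2 * POP)
        pop = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(GENES)        # one-point crossover
            pop.append(a[:cut] + b[cut:])
    print(sum(g.count(1) for g in pop) / POP)    # correct fixed alleles accumulate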


pages: 523 words: 61,179

Human + Machine: Reimagining Work in the Age of AI by Paul R. Daugherty, H. James Wilson

3D printing, AI winter, algorithmic management, algorithmic trading, AlphaGo, Amazon Mechanical Turk, Amazon Robotics, augmented reality, autonomous vehicles, blockchain, business process, call centre, carbon footprint, circular economy, cloud computing, computer vision, correlation does not imply causation, crowdsourcing, data science, deep learning, DeepMind, digital twin, disintermediation, Douglas Hofstadter, driverless car, en.wikipedia.org, Erik Brynjolfsson, fail fast, friendly AI, fulfillment center, future of work, Geoffrey Hinton, Hans Moravec, industrial robot, Internet of things, inventory management, iterative process, Jeff Bezos, job automation, job satisfaction, knowledge worker, Lyft, machine translation, Marc Benioff, natural language processing, Neal Stephenson, personalized medicine, precision agriculture, Ray Kurzweil, recommendation engine, RFID, ride hailing / ride sharing, risk tolerance, robotic process automation, Rodney Brooks, Salesforce, Second Machine Age, self-driving car, sensor fusion, sentiment analysis, Shoshana Zuboff, Silicon Valley, Snow Crash, software as a service, speech recognition, tacit knowledge, telepresence, telepresence robot, text mining, the scientific method, uber lyft, warehouse automation, warehouse robotics

Many other researchers provided relevant findings and insights that enriched our thinking, including Mark Purdy, Ladan Davarzani, Athena Peppes, Philippe Roussiere, Svenja Falk, Raghav Narsalay, Madhu Vazirani, Sybille Berjoan, Mamta Kapur, Renee Byrnes, Tomas Castagnino, Caroline Liu, Lauren Finkelstein, Andrew Cavanaugh, and Nick Yennaco. We owe a special debt to the many visionaries and pioneers who have blazed AI trails and whose work has inspired and informed us, including Herbert Simon, John McCarthy, Marvin Minsky, Arthur Samuel, Edward Feigenbaum, Joseph Weizenbaum, Geoffrey Hinton, Hans Moravec, Peter Norvig, Douglas Hofstadter, Ray Kurzweil, Rodney Brooks, Yann LeCun, and Andrew Ng, among many others. And huge gratitude to our colleagues who provided insights and inspiration, including Nicola Morini Bianzino, Mike Sutcliff, Ellyn Shook, Marc Carrel-Billiard, Narendra Mulani, Dan Elron, Frank Meerkamp, Adam Burden, Mark McDonald, Cyrille Bataller, Sanjeev Vohra, Rumman Chowdhury, Lisa Neuberger-Fernandez, Dadong Wan, Sanjay Podder, and Michael Biltz.


The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do by Erik J. Larson

AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Alignment Problem, AlphaGo, Amazon Mechanical Turk, artificial general intelligence, autonomous vehicles, Big Tech, Black Swan, Bletchley Park, Boeing 737 MAX, business intelligence, Charles Babbage, Claude Shannon: information theory, Computing Machinery and Intelligence, conceptual framework, correlation does not imply causation, data science, deep learning, DeepMind, driverless car, Elon Musk, Ernest Rutherford, Filter Bubble, Geoffrey Hinton, Georg Cantor, Higgs boson, hive mind, ImageNet competition, information retrieval, invention of the printing press, invention of the wheel, Isaac Newton, Jaron Lanier, Jeff Hawkins, John von Neumann, Kevin Kelly, Large Hadron Collider, Law of Accelerating Returns, Lewis Mumford, Loebner Prize, machine readable, machine translation, Nate Silver, natural language processing, Nick Bostrom, Norbert Wiener, PageRank, PalmPilot, paperclip maximiser, pattern recognition, Peter Thiel, public intellectual, Ray Kurzweil, retrograde motion, self-driving car, semantic web, Silicon Valley, social intelligence, speech recognition, statistical model, Stephen Hawking, superintelligent machines, tacit knowledge, technological singularity, TED Talk, The Coming Technological Singularity, the long tail, the scientific method, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, theory of mind, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, Yochai Benkler

Classical AI scientists dismissed these as “shallow” or “empirical,” because statistical approaches using data didn’t use knowledge and couldn’t handle reasoning or planning very well (if at all). But with the web providing the much-needed data, the approaches started showing promise. The deep learning “revolution” began around 2006, with early work by Geoff Hinton, Yann LeCun, and Yoshua Bengio. By 2010, Google, Microsoft, and other Big Tech companies were using neural networks for major consumer applications such as voice recognition, and by 2012, Android smartphones featured neural network technology. From about this time up through 2020 (as I write this), deep learning has been the hammer causing all the problems of AI to look like a nail—problems that can be approached “from the ground up,” like playing games and recognizing voice and image data, now account for most of the research and commercial dollars in AI.


The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns, Aaron Roth

23andMe, affirmative action, algorithmic bias, algorithmic trading, Alignment Problem, Alvin Roth, backpropagation, Bayesian statistics, bitcoin, cloud computing, computer vision, crowdsourcing, data science, deep learning, DeepMind, Dr. Strangelove, Edward Snowden, Elon Musk, fake news, Filter Bubble, general-purpose programming language, Geoffrey Hinton, Google Chrome, ImageNet competition, Lyft, medical residency, Nash equilibrium, Netflix Prize, p-value, Pareto efficiency, performance metric, personalized medicine, pre–internet, profit motive, quantitative trading / quantitative finance, RAND corporation, recommendation engine, replication crisis, ride hailing / ride sharing, Robert Bork, Ronald Coase, self-driving car, short selling, sorting algorithm, sparse data, speech recognition, statistical model, Stephen Hawking, superintelligent machines, TED Talk, telemarketer, Turing machine, two-sided market, Vilfredo Pareto

The technical name for the algorithmic framework we have been describing is a generative adversarial network (GAN), and the approach we’ve outlined above indeed seems to be highly effective: GANs are an important component of the collection of techniques known as deep learning, which has resulted in qualitative improvements in machine learning for image classification, speech recognition, automatic natural language translation, and many other fundamental problems. (The Turing Award, widely considered the Nobel Prize of computer science, was recently awarded to Yoshua Bengio, Geoffrey Hinton, and Yann LeCun for their pioneering contributions to deep learning.)

Fig. 21. Synthetic cat images created by a generative adversarial network (GAN), from https://ajolicoeur.wordpress.com/cats.

But with all of this discussion of simulated self-play and fake cats, it might seem like we have strayed far from the core topic of this book, which is the interaction between societal norms and values and algorithmic decision-making.
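Before leaving the cats behind, here is a minimal sketch of the adversarial loop described above, written in Python with PyTorch (an assumed dependency; the one-dimensional “real” data, network sizes, and learning rates are toy choices). The discriminator D learns to tell real samples from forgeries while the generator G learns to fool it:

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: Normal(4, 1.5)
        fake = G(torch.randn(64, 8))            # the generator's forgeries
        # discriminator step: label real 1, fake 0
        d_loss = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # generator step: try to make the discriminator say 1
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(G(torch.randn(1000, 8)).mean().item())  # drifts toward 4.0

The generator never sees the real distribution's formula; it only sees the discriminator's verdicts, which is exactly the dynamic that produces the synthetic cats.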


pages: 296 words: 66,815

The AI-First Company by Ash Fontana

23andMe, Amazon Mechanical Turk, Amazon Web Services, autonomous vehicles, barriers to entry, blockchain, business intelligence, business process, business process outsourcing, call centre, Charles Babbage, chief data officer, Clayton Christensen, cloud computing, combinatorial explosion, computer vision, crowdsourcing, data acquisition, data science, deep learning, DevOps, en.wikipedia.org, Geoffrey Hinton, independent contractor, industrial robot, inventory management, John Conway, knowledge economy, Kubernetes, Lean Startup, machine readable, minimum viable product, natural language processing, Network effects, optical character recognition, Pareto efficiency, performance metric, price discrimination, recommendation engine, Ronald Coase, Salesforce, single source of truth, software as a service, source of truth, speech recognition, the scientific method, transaction costs, vertical integration, yield management

AI researchers made breakthroughs in stringing neurons together in a network at the start of the millennium. The Canadian computer scientist Yoshua Bengio devised a language model based on a neural network that figured out the next best word to use among all the available words in a language based on where that word usually appeared with respect to other words. Geoffrey Hinton, a British-born computer scientist and psychologist, developed a neural network that linked many layers of neurons together, the precursor to deep learning. Importantly, researchers worked to get these neural networks running efficiently on the available computer chips, settling on the chips used for computer graphics because they are particularly good at running many numerical computations in parallel.
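The next-best-word idea can be illustrated with a crude count-based stand-in (the corpus is invented; Bengio's actual model replaced the counting table with a neural network precisely so it could generalize to word sequences it had never seen):

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # count which word tends to follow which
    follows = defaultdict(Counter)
    for w, nxt in zip(corpus, corpus[1:]):
        follows[w][nxt] += 1

    def next_best_word(w):
        return follows[w].most_common(1)[0][0]

    print(next_best_word("the"))   # 'cat'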


pages: 296 words: 78,631

Hello World: Being Human in the Age of Algorithms by Hannah Fry

23andMe, 3D printing, Air France Flight 447, Airbnb, airport security, algorithmic bias, algorithmic management, augmented reality, autonomous vehicles, backpropagation, Brixton riot, Cambridge Analytica, chief data officer, computer vision, crowdsourcing, DARPA: Urban Challenge, data science, deep learning, DeepMind, Douglas Hofstadter, driverless car, Elon Musk, fake news, Firefox, Geoffrey Hinton, Google Chrome, Gödel, Escher, Bach, Ignaz Semmelweis: hand washing, John Markoff, Mark Zuckerberg, meta-analysis, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, pattern recognition, Peter Thiel, RAND corporation, ransomware, recommendation engine, ride hailing / ride sharing, selection bias, self-driving car, Shai Danziger, Silicon Valley, Silicon Valley startup, Snapchat, sparse data, speech recognition, Stanislav Petrov, statistical model, Stephen Hawking, Steven Levy, systematic bias, TED Talk, Tesla Model S, The Wisdom of Crowds, Thomas Bayes, trolley problem, Watson beat the top human players on Jeopardy!, web of trust, William Langewiesche, you are the product

Neural networks have been around since the middle of the twentieth century, but until quite recently we’ve lacked the widespread access to really powerful computers necessary to get the best out of them. The world was finally forced to sit up and take them seriously in 2012 when computer scientist Geoffrey Hinton and two of his students entered a new kind of neural network into an image recognition competition. The challenge was to recognize – among other things – dogs. Their artificially intelligent algorithm blew the best of its competitors out of the water and kicked off a massive renaissance in deep learning.


pages: 280 words: 74,559

Fully Automated Luxury Communism by Aaron Bastani

"Peter Beck" AND "Rocket Lab", Alan Greenspan, Anthropocene, autonomous vehicles, banking crisis, basic income, Berlin Wall, Bernie Sanders, Boston Dynamics, Bretton Woods, Brexit referendum, capital controls, capitalist realism, cashless society, central bank independence, collapse of Lehman Brothers, computer age, computer vision, CRISPR, David Ricardo: comparative advantage, decarbonisation, deep learning, dematerialisation, DIY culture, Donald Trump, double helix, driverless car, electricity market, Elon Musk, energy transition, Erik Brynjolfsson, fake news, financial independence, Francis Fukuyama: the end of history, future of work, Future Shock, G4S, general purpose technology, Geoffrey Hinton, Gregor Mendel, housing crisis, income inequality, industrial robot, Intergovernmental Panel on Climate Change (IPCC), Internet of things, Isaac Newton, James Watt: steam engine, Jeff Bezos, Jeremy Corbyn, Jevons paradox, job automation, John Markoff, John Maynard Keynes: technological unemployment, Joseph Schumpeter, Kevin Kelly, Kuiper Belt, land reform, Leo Hollis, liberal capitalism, low earth orbit, low interest rates, low skilled workers, M-Pesa, market fundamentalism, means of production, mobile money, more computing power than Apollo, new economy, off grid, pattern recognition, Peter H. Diamandis: Planetary Resources, post scarcity, post-work, price mechanism, price stability, private spaceflight, Productivity paradox, profit motive, race to the bottom, rewilding, RFID, rising living standards, Robert Solow, scientific management, Second Machine Age, self-driving car, sensor fusion, shareholder value, Silicon Valley, Simon Kuznets, Slavoj Žižek, SoftBank, stem cell, Stewart Brand, synthetic biology, technological determinism, technoutopianism, the built environment, the scientific method, The Wealth of Nations by Adam Smith, Thomas Malthus, transatlantic slave trade, Travis Kalanick, universal basic income, V2 rocket, Watson beat the top human players on Jeopardy!, We are as Gods, Whole Earth Catalog, working-age population

Incredibly, it has a self-teaching neural network which constantly adds to its knowledge of how the heart works with each new case it examines. It is in areas such as this where automation will make initial incursions into medicine, boosting productivity by accompanying, rather than replacing, existing workers. Yet such systems will improve with each passing year and some, like ‘godfather of deep learning’ Geoffrey Hinton, believe that medical schools will soon stop training radiologists altogether. Perhaps that is presumptuous – after all, we’d want a level of quality control and maybe even the final diagnosis to involve a human – but even then, this massively upgraded, faster process might need one trained professional where at present there are dozens, resulting in a quicker, superior service that costs less in both time and money.


Hands-On Machine Learning With Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems by Aurelien Geron

AlphaGo, Amazon Mechanical Turk, Bayesian statistics, centre right, combinatorial explosion, constrained optimization, correlation coefficient, crowdsourcing, data science, deep learning, DeepMind, duck typing, en.wikipedia.org, Geoffrey Hinton, iterative process, Netflix Prize, NP-complete, optical character recognition, P = NP, p-value, pattern recognition, performance metric, recommendation engine, self-driving car, SpamAssassin, speech recognition, statistical model

However, after a while the validation error stops decreasing and actually starts to go back up. This indicates that the model has started to overfit the training data. With early stopping you just stop training as soon as the validation error reaches the minimum. It is such a simple and efficient regularization technique that Geoffrey Hinton called it a “beautiful free lunch.”

Figure 4-20. Early stopping regularization

Tip: With Stochastic and Mini-batch Gradient Descent, the curves are not so smooth, and it may be hard to know whether you have reached the minimum or not. One solution is to stop only after the validation error has been above the minimum for some time (when you are confident that the model will not do any better), then roll back the model parameters to the point where the validation error was at a minimum.
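The rollback variant described in the Tip is easy to sketch. The following is a minimal illustration, not the book's own listing: it assumes scikit-learn's SGDRegressor with warm_start=True (so each fit() call continues from the previous weights), a toy quadratic dataset, and an arbitrary patience threshold for deciding that the minimum has passed.

import numpy as np
from copy import deepcopy
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Toy data: a noisy quadratic, standing in for any regression task.
rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * X[:, 0] ** 2 + X[:, 0] + 2 + rng.normal(0, 1, size=200)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, random_state=42)

# max_iter=1 plus warm_start=True means each call to fit() runs one more epoch.
sgd = SGDRegressor(max_iter=1, tol=None, warm_start=True, penalty=None,
                   learning_rate="constant", eta0=0.0005, random_state=42)

best_val_error, best_model = float("inf"), None
patience, epochs_since_best = 50, 0  # arbitrary patience threshold

for epoch in range(1000):
    sgd.fit(X_train, y_train)  # continues where the last epoch left off
    val_error = mean_squared_error(y_val, sgd.predict(X_val))
    if val_error < best_val_error:
        best_val_error = val_error
        best_model = deepcopy(sgd)  # snapshot the weights at the minimum
        epochs_since_best = 0
    else:
        epochs_since_best += 1
        if epochs_since_best >= patience:  # confident we have passed the minimum
            break

# best_model now holds the parameters from the epoch with the lowest validation error.

With batch gradient descent the validation curve is smooth enough to stop at the first uptick; the patience-plus-rollback version above is the variant you would want with stochastic or mini-batch updates.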


Four Battlegrounds by Paul Scharre

2021 United States Capitol attack, 3D printing, active measures, activist lawyer, AI winter, AlphaGo, amateurs talk tactics, professionals talk logistics, artificial general intelligence, ASML, augmented reality, Automated Insights, autonomous vehicles, barriers to entry, Berlin Wall, Big Tech, bitcoin, Black Lives Matter, Boeing 737 MAX, Boris Johnson, Brexit referendum, business continuity plan, business process, carbon footprint, chief data officer, Citizen Lab, clean water, cloud computing, commoditize, computer vision, coronavirus, COVID-19, crisis actor, crowdsourcing, DALL-E, data is not the new oil, data is the new oil, data science, deep learning, deepfake, DeepMind, Demis Hassabis, Deng Xiaoping, digital map, digital rights, disinformation, Donald Trump, drone strike, dual-use technology, Elon Musk, en.wikipedia.org, endowment effect, fake news, Francis Fukuyama: the end of history, future of journalism, future of work, game design, general purpose technology, Geoffrey Hinton, geopolitical risk, George Floyd, global supply chain, GPT-3, Great Leap Forward, hive mind, hustle culture, ImageNet competition, immigration reform, income per capita, interchangeable parts, Internet Archive, Internet of things, iterative process, Jeff Bezos, job automation, Kevin Kelly, Kevin Roose, large language model, lockdown, Mark Zuckerberg, military-industrial complex, move fast and break things, Nate Silver, natural language processing, new economy, Nick Bostrom, one-China policy, Open Library, OpenAI, PalmPilot, Parler "social media", pattern recognition, phenotype, post-truth, purchasing power parity, QAnon, QR code, race to the bottom, RAND corporation, recommendation engine, reshoring, ride hailing / ride sharing, robotic process automation, Rodney Brooks, Rubik’s Cube, self-driving car, Shoshana Zuboff, side project, Silicon Valley, slashdot, smart cities, smart meter, Snapchat, social software, sorting algorithm, South China Sea, sparse data, speech recognition, Steve Bannon, Steven Levy, Stuxnet, supply-chain attack, surveillance capitalism, systems thinking, tech worker, techlash, telemarketer, The Brussels Effect, The Signal and the Noise by Nate Silver, TikTok, trade route, TSMC

His career covered computer-assisted detection applications in medicine and national security, from improving 3D mammography to remotely scanning cargo containers coming into U.S. ports for contraband. He was doing deep learning with CPUs to map mouse brains before what he referred to as “the Big Bang” in 2012, when Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton published a paper showing groundbreaking performance on ImageNet. Before then, John explained, “It took a month to train” the models he was using and “error rates were poor.” Yet he said, “The moment ImageNet happens, everybody in the computer vision community changed from whatever they were doing to deep learning, which was appropriate.”

., “Specification Gaming: the Flip Side of AI Ingenuity,” DeepMind Blog, April 21, 2020, https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity.
250 Micro Air Vehicle Lab (MAVLab): Micro Air Vehicle Lab—TUDelft (website), 2021, https://mavlab.tudelft.nl/.
250 “mix of neural networks and control theory”: Federico Paredes, interview by author, January 15, 2019.
250 “for a lot of what we want to do”: Chuck Howell, interview by author, May 25, 2021.
251 “model distillation”: Geoffrey Hinton, Oriol Vinyals, and Jeff Dean, Distilling the Knowledge in a Neural Network (arXiv.org, March 9, 2015), https://arxiv.org/pdf/1503.02531.pdf.
251 some pharmaceuticals that are approved: Paul Gerrard and Robert Malcolm, “Mechanisms of Modafinil: A Review of Current Research,” Neuropsychiatric Disease and Treatment 3, no. 3 (June 2007): 349–64, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2654794/; PROVIGIL(R) (Modafinil) Tablets [C-IV], package insert, October 2010, https://www.accessdata.fda.gov/drugsatfda_docs/label/2010/020717s030s034s036lbl.pdf; Jonathan Zittrain, “Intellectual Debt: With Great Power Comes Great Ignorance,” Berkman Klein Center, July 24, 2019, https://medium.com/berkman-klein-center/from-technical-debt-to-intellectual-debt-in-ai-e05ac56a502c; Jonathan Zittrain, “The Hidden Costs of Automated Thinking,” New Yorker, July 23, 2019, https://www.newyorker.com/tech/annals-of-technology/the-hidden-costs-of-automated-thinking.
251 “We rely on a complex socio-technical system”: Howell, interview.
251 necessary processes for AI assurance to establish justified confidence: Pedro A.


pages: 276 words: 81,153

Outnumbered: From Facebook and Google to Fake News and Filter-Bubbles – the Algorithms That Control Our Lives by David Sumpter

affirmative action, algorithmic bias, AlphaGo, Bernie Sanders, Brexit referendum, Cambridge Analytica, classic study, cognitive load, Computing Machinery and Intelligence, correlation does not imply causation, crowdsourcing, data science, DeepMind, Demis Hassabis, disinformation, don't be evil, Donald Trump, Elon Musk, fake news, Filter Bubble, Geoffrey Hinton, Google Glasses, illegal immigration, James Webb Space Telescope, Jeff Bezos, job automation, Kenneth Arrow, Loebner Prize, Mark Zuckerberg, meta-analysis, Minecraft, Nate Silver, natural language processing, Nelson Mandela, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, p-value, post-truth, power law, prediction markets, random walk, Ray Kurzweil, Robert Mercer, selection bias, self-driving car, Silicon Valley, Skype, Snapchat, social contagion, speech recognition, statistical model, Stephen Hawking, Steve Bannon, Steven Pinker, TED Talk, The Signal and the Noise by Nate Silver, traveling salesman, Turing test

They realised that convolutional neural networks solved problems at the heart of their businesses. An algorithm that can automatically recognise our friends’ faces, our favourite cute animals and the exotic places that we have visited can allow these companies to better target our interests. Alex and his PhD supervisor, Geoffrey Hinton, were recruited by Google. The following year, one of the competition winners, Rob Fergus, was offered a position at Facebook. In 2014, Google put together its own winning team, and promptly recruited Oxford PhD student Karen Simonyan, who came in second place. In 2015, it was Microsoft researcher Kaiming He and his colleagues who took the prize.


pages: 283 words: 81,376

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe by William Poundstone

Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, Arthur Eddington, Bayesian statistics, behavioural economics, Benoit Mandelbrot, Berlin Wall, bitcoin, Black Swan, conceptual framework, cosmic microwave background, cosmological constant, cosmological principle, CRISPR, cuban missile crisis, dark matter, DeepMind, digital map, discounted cash flows, Donald Trump, Doomsday Clock, double helix, Dr. Strangelove, Eddington experiment, Elon Musk, Geoffrey Hinton, Gerolamo Cardano, Hans Moravec, heat death of the universe, Higgs boson, if you see hoof prints, think horses—not zebras, index fund, Isaac Newton, Jaron Lanier, Jeff Bezos, John Markoff, John von Neumann, Large Hadron Collider, mandelbrot fractal, Mark Zuckerberg, Mars Rover, Neil Armstrong, Nick Bostrom, OpenAI, paperclip maximiser, Peter Thiel, Pierre-Simon Laplace, Plato's cave, probability theory / Blaise Pascal / Pierre de Fermat, RAND corporation, random walk, Richard Feynman, ride hailing / ride sharing, Rodney Brooks, Ronald Reagan, Ronald Reagan: Tear down this wall, Sam Altman, Schrödinger's Cat, Search for Extraterrestrial Intelligence, self-driving car, Silicon Valley, Skype, Stanislav Petrov, Stephen Hawking, strong AI, tech billionaire, Thomas Bayes, Thomas Malthus, time value of money, Turing test

In 2014 Google paid more than $500 million for the British AI start-up DeepMind. Corporate parent Alphabet is establishing well-funded AI centers across the globe. “I don’t buy into the killer robot [theory],” Google director of research Peter Norvig told CNBC. Another Google researcher, the psychologist and computer scientist Geoffrey Hinton, said, “I am in the camp that it is hopeless.” Mark Zuckerberg and several Facebook executives went so far as to stage an intervention for Musk, inviting him to dinner at Zuckerberg’s house so they could ply him with arguments that AI is okay. It didn’t work. Ever since, Musk and Zuckerberg have waged a social media feud on the topic.


pages: 321

Finding Alphas: A Quantitative Approach to Building Trading Strategies by Igor Tulchinsky

algorithmic trading, asset allocation, automated trading system, backpropagation, backtesting, barriers to entry, behavioural economics, book value, business cycle, buy and hold, capital asset pricing model, constrained optimization, corporate governance, correlation coefficient, credit crunch, Credit Default Swap, currency risk, data science, deep learning, discounted cash flows, discrete time, diversification, diversified portfolio, Eugene Fama: efficient market hypothesis, financial engineering, financial intermediation, Flash crash, Geoffrey Hinton, implied volatility, index arbitrage, index fund, intangible asset, iterative process, Long Term Capital Management, loss aversion, low interest rates, machine readable, market design, market microstructure, merger arbitrage, natural language processing, passive investing, pattern recognition, performance metric, Performance of Mutual Funds in the Period, popular capitalism, prediction markets, price discovery process, profit motive, proprietary trading, quantitative trading / quantitative finance, random walk, Reminiscences of a Stock Operator, Renaissance Technologies, risk free rate, risk tolerance, risk-adjusted returns, risk/return, selection bias, sentiment analysis, shareholder value, Sharpe ratio, short selling, Silicon Valley, speech recognition, statistical arbitrage, statistical model, stochastic process, survivorship bias, systematic bias, systematic trading, text mining, transaction costs, Vanguard fund, yield curve

Another alternative is FloatBoost, which incorporates the backtracking mechanism of floating search: after AdaBoost adds a new weak classifier, it repeatedly backtracks to remove unfavorable weak classifiers, ensuring a lower error rate and a reduced feature set at the cost of about five times longer training.

Deep Learning

Deep learning (DL) is a popular topic today – and a term that is used to discuss a number of rather distinct things. Some data scientists think DL is just a buzzword or a rebranding of neural networks. The name is associated with British-Canadian scientist Geoffrey Hinton, who created an unsupervised method known as the restricted Boltzmann machine (RBM) for pretraining neural networks with a large number of neuron layers. That was meant to improve on the backpropagation training method, but there is no strong evidence that it really was an improvement. Another direction in deep learning is recurrent neural networks (RNNs), widely used in natural language processing.
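To make the idea concrete, here is a minimal sketch, not from the book, of one contrastive-divergence (CD-1) update for a binary RBM in NumPy; the layer sizes, learning rate, and training pattern are arbitrary illustrations.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_v, b_h, lr=0.1):
    """One contrastive-divergence (CD-1) step for a binary RBM (in-place)."""
    # Positive phase: hidden activations driven by the data vector v0.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to a "reconstruction" of v0.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Move weights toward the data statistics and away from the model's.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

# Toy setup: 6 visible units, 3 hidden units, one binary input pattern.
W = rng.normal(0.0, 0.01, size=(6, 3))
b_v, b_h = np.zeros(6), np.zeros(3)
cd1_update(np.array([1, 0, 1, 1, 0, 0], dtype=float), W, b_v, b_h)

Stacking RBMs trained this way and then fine-tuning the whole network with backpropagation was the layer-wise pretraining recipe Hinton popularized.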


pages: 253 words: 84,238

A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins

AI winter, Albert Einstein, artificial general intelligence, carbon-based life, clean water, cloud computing, deep learning, different worldview, discovery of DNA, Doomsday Clock, double helix, en.wikipedia.org, estate planning, Geoffrey Hinton, Jeff Hawkins, PalmPilot, Search for Extraterrestrial Intelligence, self-driving car, sensor fusion, Silicon Valley, superintelligent machines, the scientific method, Thomas Kuhn: the structure of scientific revolutions, Turing machine, Turing test

Today, AI and robotics are largely separate fields of research, although the line is starting to blur. Once AI researchers understand the essential role of movement and reference frames for creating AGI, the separation between artificial intelligence and robotics will disappear completely. One AI scientist who understands the importance of reference frames is Geoffrey Hinton. Today’s neural networks rely on ideas that Hinton developed in the 1980s. Recently, he has become critical of the field because deep learning networks lack any sense of location and, therefore, he argues, they can’t learn the structure of the world. In essence, this is the same criticism I am making, that AI needs reference frames.


Driverless: Intelligent Cars and the Road Ahead by Hod Lipson, Melba Kurman

AI winter, Air France Flight 447, AlphaGo, Amazon Mechanical Turk, autonomous vehicles, backpropagation, barriers to entry, butterfly effect, carbon footprint, Chris Urmson, cloud computing, computer vision, connected car, creative destruction, crowdsourcing, DARPA: Urban Challenge, deep learning, digital map, Donald Shoup, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, General Motors Futurama, Geoffrey Hinton, Google Earth, Google X / Alphabet X, Hans Moravec, high net worth, hive mind, ImageNet competition, income inequality, industrial robot, intermodal, Internet of things, Jeff Hawkins, job automation, Joseph Schumpeter, lone genius, Lyft, megacity, Network effects, New Urbanism, Oculus Rift, pattern recognition, performance metric, Philippa Foot, precision agriculture, RFID, ride hailing / ride sharing, Second Machine Age, self-driving car, Silicon Valley, smart cities, speech recognition, statistical model, Steve Jobs, technoutopianism, TED Talk, Tesla Model S, Travis Kalanick, trolley problem, Uber and Lyft, uber lyft, Unsafe at Any Speed, warehouse robotics

A neural network named SuperVision, created by a team of researchers from the University of Toronto, correctly identified objects 85 percent of the time, a phenomenal performance in the world of image-recognition software.9 A drop from a 25 percent to 15 percent error rate might not sound like a lot, but for the computer-vision community, which was used to seeing annual improvements of a fraction of a percent each year, it was like seeing a man run the first four-minute mile. SuperVision’s creators were students Alex Krizhevsky and Ilya Sutskever, and their professor, Geoffrey Hinton. SuperVision was a type of neural network called a convolutional network. Many of the convolutional network’s features were based on techniques laid out more than thirty years earlier by Dr. Fukushima for the Neocognitron. Additional refinements stemmed from work conducted by the research groups of Yann LeCun at NYU, Andrew Ng at Stanford, and Yoshua Bengio at the University of Montreal.


pages: 340 words: 90,674

The Perfect Police State: An Undercover Odyssey Into China's Terrifying Surveillance Dystopia of the Future by Geoffrey Cain

airport security, Alan Greenspan, AlphaGo, anti-communist, Bellingcat, Berlin Wall, Black Lives Matter, Citizen Lab, cloud computing, commoditize, computer vision, coronavirus, COVID-19, deep learning, DeepMind, Deng Xiaoping, Edward Snowden, European colonialism, fake news, Geoffrey Hinton, George Floyd, ghettoisation, global supply chain, Kickstarter, land reform, lockdown, mass immigration, military-industrial complex, Nelson Mandela, Panopticon Jeremy Bentham, pattern recognition, phenotype, pirate software, post-truth, purchasing power parity, QR code, RAND corporation, Ray Kurzweil, ride hailing / ride sharing, Right to Buy, self-driving car, sharing economy, Silicon Valley, Skype, smart cities, South China Sea, speech recognition, TikTok, Tim Cook: Apple, trade liberalization, trade route, undersea cable, WikiLeaks

The Chinese, he said, had less than 10 percent of their population linked up to the internet in 2005, but had rapidly become the world’s most enthusiastic users of social media, mobile apps, and mobile payments.7 In 2011, almost 40 percent of the population, or about 513 million people, had their own internet connections.8 All those internet users were producing the data, through their purchases and clicks, that could train neural networks to solve myriad tasks, including surveilling the users themselves. That same year, a pair of research assistants working for the famed AI researcher Geoffrey Hinton, a computer science professor at the University of Toronto who later joined Google, made a hardware breakthrough that helped make these advances possible. The researchers realized they could repurpose graphics processing units (GPUs), the chips that had driven advances in computer-game graphics, to speed up the training of deep neural nets.9 Because the same parallel arithmetic that draws shapes and images on a screen also underlies neural-network computation, GPUs let developers train networks to find patterns far faster.


pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict by Kenneth Payne

Abraham Maslow, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, AlphaGo, anti-communist, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, Asperger Syndrome, augmented reality, Automated Insights, autonomous vehicles, backpropagation, Black Lives Matter, Bletchley Park, Boston Dynamics, classic study, combinatorial explosion, computer age, computer vision, Computing Machinery and Intelligence, coronavirus, COVID-19, CRISPR, cuban missile crisis, data science, deep learning, deepfake, DeepMind, delayed gratification, Demis Hassabis, disinformation, driverless car, drone strike, dual-use technology, Elon Musk, functional programming, Geoffrey Hinton, Google X / Alphabet X, Internet of things, job automation, John Nash: game theory, John von Neumann, Kickstarter, language acquisition, loss aversion, machine translation, military-industrial complex, move 37, mutually assured destruction, Nash equilibrium, natural language processing, Nick Bostrom, Norbert Wiener, nuclear taboo, nuclear winter, OpenAI, paperclip maximiser, pattern recognition, RAND corporation, ransomware, risk tolerance, Ronald Reagan, self-driving car, semantic web, side project, Silicon Valley, South China Sea, speech recognition, Stanislav Petrov, stem cell, Stephen Hawking, Steve Jobs, strong AI, Stuxnet, technological determinism, TED Talk, theory of mind, TikTok, Turing machine, Turing test, uranium enrichment, urban sprawl, V2 rocket, Von Neumann architecture, Wall-E, zero-sum game

Connectionism redux

Even with symbolic logic dominant, research on connectionist AI continued in the background. Some of today’s superstar researchers began academic life toiling away in what was often seen as a relatively unglamorous backwater. Facebook’s Yann LeCun spent the late 1980s working on ConvNets, a type of neural network specialised in visual tasks. Geoffrey Hinton, another titan of the field today, was also plugging away on neural networks in the 1980s, making important contributions to a vital breakthrough in the maths underpinning some of today’s connectionism. In the last decade, though, these relative outsiders have emphatically moved to the mainstream.


pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence by Richard Yonck

3D printing, AI winter, AlphaGo, Apollo 11, artificial general intelligence, Asperger Syndrome, augmented reality, autism spectrum disorder, backpropagation, Berlin Wall, Bletchley Park, brain emulation, Buckminster Fuller, call centre, cognitive bias, cognitive dissonance, computer age, computer vision, Computing Machinery and Intelligence, crowdsourcing, deep learning, DeepMind, Dunning–Kruger effect, Elon Musk, en.wikipedia.org, epigenetics, Fairchild Semiconductor, friendly AI, Geoffrey Hinton, ghettoisation, industrial robot, Internet of things, invention of writing, Jacques de Vaucanson, job automation, John von Neumann, Kevin Kelly, Law of Accelerating Returns, Loebner Prize, Menlo Park, meta-analysis, Metcalfe’s law, mirror neurons, Neil Armstrong, neurotypical, Nick Bostrom, Oculus Rift, old age dependency ratio, pattern recognition, planned obsolescence, pneumatic tube, RAND corporation, Ray Kurzweil, Rodney Brooks, self-driving car, Skype, social intelligence, SoftBank, software as a service, SQL injection, Stephen Hawking, Steven Pinker, superintelligent machines, technological singularity, TED Talk, telepresence, telepresence robot, The future is already here, The Future of Employment, the scientific method, theory of mind, Turing test, twin studies, Two Sigma, undersea cable, Vernor Vinge, Watson beat the top human players on Jeopardy!, Whole Earth Review, working-age population, zero day

One of the reasons for this was that pattern recognition technology and other branches of artificial intelligence had already changed so much in the short time since the software had first been built. For instance, though artificial neural networks (ANNs) had fallen out of favor since the 1990s, two important papers on machine learning by Geoffrey Hinton and Ruslan Salakhutdinov in 2006 presented major improvements that returned ANNs to the forefront of AI research.3 Their work and that of others introduced important new methods for setting up and training many-layered neural networks that would go on to transform entire fields. From voice recognition and language translation to image search and fraud detection, these new methods began to be used seemingly everywhere.


pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford

3D printing, additive manufacturing, Affordable Care Act / Obamacare, AI winter, algorithmic management, algorithmic trading, Amazon Mechanical Turk, artificial general intelligence, assortative mating, autonomous vehicles, banking crisis, basic income, Baxter: Rethink Robotics, Bernie Madoff, Bill Joy: nanobots, bond market vigilante , business cycle, call centre, Capital in the Twenty-First Century by Thomas Piketty, carbon tax, Charles Babbage, Chris Urmson, Clayton Christensen, clean water, cloud computing, collateralized debt obligation, commoditize, computer age, creative destruction, data science, debt deflation, deep learning, deskilling, digital divide, disruptive innovation, diversified portfolio, driverless car, Erik Brynjolfsson, factory automation, financial innovation, Flash crash, Ford Model T, Fractional reserve banking, Freestyle chess, full employment, general purpose technology, Geoffrey Hinton, Goldman Sachs: Vampire Squid, Gunnar Myrdal, High speed trading, income inequality, indoor plumbing, industrial robot, informal economy, iterative process, Jaron Lanier, job automation, John Markoff, John Maynard Keynes: technological unemployment, John von Neumann, Kenneth Arrow, Khan Academy, Kiva Systems, knowledge worker, labor-force participation, large language model, liquidity trap, low interest rates, low skilled workers, low-wage service sector, Lyft, machine readable, machine translation, manufacturing employment, Marc Andreessen, McJob, moral hazard, Narrative Science, Network effects, new economy, Nicholas Carr, Norbert Wiener, obamacare, optical character recognition, passive income, Paul Samuelson, performance metric, Peter Thiel, plutocrats, post scarcity, precision agriculture, price mechanism, public intellectual, Ray Kurzweil, rent control, rent-seeking, reshoring, RFID, Richard Feynman, Robert Solow, Rodney Brooks, Salesforce, Sam Peltzman, secular stagnation, self-driving car, Silicon Valley, Silicon Valley billionaire, Silicon Valley startup, single-payer health, software is eating the world, sovereign wealth fund, speech recognition, Spread Networks laid a new fibre optics cable between New York and Chicago, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Steven Pinker, strong AI, Stuxnet, technological singularity, telepresence, telepresence robot, The Bell Curve by Richard Herrnstein and Charles Murray, The Coming Technological Singularity, The Future of Employment, the long tail, Thomas L Friedman, too big to fail, Tragedy of the Commons, Tyler Cowen, Tyler Cowen: Great Stagnation, uber lyft, union organizing, Vernor Vinge, very high income, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, women in the workforce

Researchers at Facebook have likewise developed an experimental system—consisting of nine levels of artificial neurons—that can correctly determine whether two photographs are of the same person 97.25 percent of the time, even if lighting conditions and orientation of the faces vary. That compares with 97.53 percent accuracy for human observers.9 Geoffrey Hinton of the University of Toronto, one of the leading researchers in the field, notes that deep learning technology “scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better.”10 In other words, even without accounting for likely future improvements in their design, machine learning systems powered by deep learning networks are virtually certain to see continued dramatic progress simply as a result of Moore’s Law.


pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All by Robert Elliott Smith

"World Economic Forum" Davos, Ada Lovelace, adjacent possible, affirmative action, AI winter, Alfred Russel Wallace, algorithmic bias, algorithmic management, AlphaGo, Amazon Mechanical Turk, animal electricity, autonomous vehicles, behavioural economics, Black Swan, Brexit referendum, British Empire, Cambridge Analytica, cellular automata, Charles Babbage, citizen journalism, Claude Shannon: information theory, combinatorial explosion, Computing Machinery and Intelligence, corporate personhood, correlation coefficient, crowdsourcing, Daniel Kahneman / Amos Tversky, data science, deep learning, DeepMind, desegregation, discovery of DNA, disinformation, Douglas Hofstadter, Elon Musk, fake news, Fellow of the Royal Society, feminist movement, Filter Bubble, Flash crash, Geoffrey Hinton, Gerolamo Cardano, gig economy, Gödel, Escher, Bach, invention of the wheel, invisible hand, Jacquard loom, Jacques de Vaucanson, John Harrison: Longitude, John von Neumann, Kenneth Arrow, Linda problem, low skilled workers, Mark Zuckerberg, mass immigration, meta-analysis, mutually assured destruction, natural language processing, new economy, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, On the Economy of Machinery and Manufactures, p-value, pattern recognition, Paul Samuelson, performance metric, Pierre-Simon Laplace, post-truth, precariat, profit maximization, profit motive, Silicon Valley, social intelligence, statistical model, Stephen Hawking, stochastic process, Stuart Kauffman, telemarketer, The Bell Curve by Richard Herrnstein and Charles Murray, The Future of Employment, the scientific method, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, theory of mind, Thomas Bayes, Thomas Malthus, traveling salesman, Turing machine, Turing test, twin studies, Vilfredo Pareto, Von Neumann architecture, warehouse robotics, women in the workforce, Yochai Benkler

The winner of the game is the player who captures the largest territory of the board, based on various scoring rules that evaluate the territories occupied by the stones.14 Although it has simple elements and rules, Go is considered one of the most intellectually challenging games ever devised, with a complexity that dwarfs Chess. Thus, it was a great surprise when, in 2016, AlphaGo beat South Korean Go grandmaster Lee Sedol four games to one in their five-game match.15 It was a victory that no one thought possible for an algorithm, prompting Geoffrey Hinton, professor and senior Google AI researcher, to rather ambitiously explain the victory’s significance to a questioning reporter thus:16

It relies on a lot of intuition. The really skilled players just sort of see where a good place to put a stone would be. They do a lot of reasoning as well, which they call reading, but they also have very good intuition about where a good place to go would be, and that’s the kind of thing that people just thought computers couldn’t do.


pages: 523 words: 143,139

Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian, Tom Griffiths

4chan, Ada Lovelace, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, algorithmic bias, algorithmic trading, anthropic principle, asset allocation, autonomous vehicles, Bayesian statistics, behavioural economics, Berlin Wall, Big Tech, Bill Duvall, bitcoin, Boeing 747, Charles Babbage, cognitive load, Community Supported Agriculture, complexity theory, constrained optimization, cosmological principle, cryptocurrency, Danny Hillis, data science, David Heinemeier Hansson, David Sedaris, delayed gratification, dematerialisation, diversification, Donald Knuth, Donald Shoup, double helix, Dutch auction, Elon Musk, exponential backoff, fault tolerance, Fellow of the Royal Society, Firefox, first-price auction, Flash crash, Frederick Winslow Taylor, fulfillment center, Garrett Hardin, Geoffrey Hinton, George Akerlof, global supply chain, Google Chrome, heat death of the universe, Henri Poincaré, information retrieval, Internet Archive, Jeff Bezos, Johannes Kepler, John Nash: game theory, John von Neumann, Kickstarter, knapsack problem, Lao Tzu, Leonard Kleinrock, level 1 cache, linear programming, martingale, multi-armed bandit, Nash equilibrium, natural language processing, NP-complete, P = NP, packet switching, Pierre-Simon Laplace, power law, prediction markets, race to the bottom, RAND corporation, RFC: Request For Comment, Robert X Cringely, Sam Altman, scientific management, sealed-bid auction, second-price auction, self-driving car, Silicon Valley, Skype, sorting algorithm, spectrum auction, Stanford marshmallow experiment, Steve Jobs, stochastic process, Thomas Bayes, Thomas Malthus, Tragedy of the Commons, traveling salesman, Turing machine, urban planning, Vickrey auction, Vilfredo Pareto, Walter Mischel, Y Combinator, zero-sum game

Cringely, Peter Denning, Raymond Dong, Elizabeth Dupuis, Joseph Dwyer, David Estlund, Christina Fang, Thomas Ferguson, Jessica Flack, James Fogarty, Jean E. Fox Tree, Robert Frank, Stuart Geman, Jim Gettys, John Gittins, Alison Gopnik, Deborah Gordon, Michael Gottlieb, Steve Hanov, Andrew Harbison, Isaac Haxton, John Hennessy, Geoff Hinton, David Hirshliefer, Jordan Ho, Tony Hoare, Kamal Jain, Chris Jones, William Jones, Leslie Kaelbling, David Karger, Richard Karp, Scott Kirkpatrick, Byron Knoll, Con Kolivas, Michael Lee, Jan Karel Lenstra, Paul Lynch, Preston McAfee, Jay McClelland, Laura Albert McLay, Paul Milgrom, Anthony Miranda, Michael Mitzenmacher, Rosemarie Nagel, Christof Neumann, Noam Nisan, Yukio Noguchi, Peter Norvig, Christos Papadimitriou, Meghan Peterson, Scott Plagenhoef, Anita Pomerantz, Balaji Prabhakar, Kirk Pruhs, Amnon Rapoport, Ronald Rivest, Ruth Rosenholtz, Tim Roughgarden, Stuart Russell, Roma Shah, Donald Shoup, Steven Skiena, Dan Smith, Paul Smolensky, Mark Steyvers, Chris Stucchio, Milind Tambe, Robert Tarjan, Geoff Thorpe, Jackson Tolins, Michael Trick, Hal Varian, James Ware, Longhair Warrior, Steve Whittaker, Avi Wigderson, Jacob Wobbrock, Jason Wolfe, and Peter Zijlstra.


pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, Anthropocene, anti-communist, artificial general intelligence, autism spectrum disorder, autonomous vehicles, backpropagation, barriers to entry, Bayesian statistics, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, Computing Machinery and Intelligence, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, Demis Hassabis, demographic transition, different worldview, Donald Knuth, Douglas Hofstadter, driverless car, Drosophila, Elon Musk, en.wikipedia.org, endogenous growth, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, general purpose technology, Geoffrey Hinton, Gödel, Escher, Bach, hallucination problem, Hans Moravec, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John Markoff, John von Neumann, knowledge worker, Large Hadron Collider, longitudinal study, machine translation, megaproject, Menlo Park, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Nick Bostrom, Norbert Wiener, NP-complete, nuclear winter, operational security, optical character recognition, paperclip maximiser, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, search costs, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, Strategic Defense Initiative, strong AI, superintelligent machines, supervolcano, synthetic biology, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, time dilation, Tragedy of the Commons, transaction costs, trolley problem, Turing machine, Vernor Vinge, WarGames: Global Thermonuclear War, Watson beat the top human players on Jeopardy!, World Values Survey, zero-sum game

For many applications, however, the learning that takes place in a neural network is little different from the learning that takes place in linear regression, a statistical technique developed by Adrien-Marie Legendre and Carl Friedrich Gauss in the early 1800s.
24. The basic algorithm was described by Arthur Bryson and Yu-Chi Ho as a multi-stage dynamic optimization method in 1969 (Bryson and Ho 1969). The application to neural networks was suggested by Paul Werbos in 1974 (Werbos 1994), but it was only after the work by David Rumelhart, Geoffrey Hinton, and Ronald Williams in 1986 (Rumelhart et al. 1986) that the method gradually began to seep into the awareness of a wider community.
25. Nets lacking hidden layers had previously been shown to have severely limited functionality (Minsky and Papert 1969).
26. E.g., MacKay (2003).
27.


pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity by Daron Acemoglu, Simon Johnson

"Friedman doctrine" OR "shareholder theory", "World Economic Forum" Davos, 4chan, agricultural Revolution, AI winter, Airbnb, airline deregulation, algorithmic bias, algorithmic management, Alignment Problem, AlphaGo, An Inconvenient Truth, artificial general intelligence, augmented reality, basic income, Bellingcat, Bernie Sanders, Big Tech, Bletchley Park, blue-collar work, British Empire, carbon footprint, carbon tax, carried interest, centre right, Charles Babbage, ChatGPT, Clayton Christensen, clean water, cloud computing, collapse of Lehman Brothers, collective bargaining, computer age, Computer Lib, Computing Machinery and Intelligence, conceptual framework, contact tracing, Corn Laws, Cornelius Vanderbilt, coronavirus, corporate social responsibility, correlation does not imply causation, cotton gin, COVID-19, creative destruction, declining real wages, deep learning, DeepMind, deindustrialization, Demis Hassabis, Deng Xiaoping, deskilling, discovery of the americas, disinformation, Donald Trump, Douglas Engelbart, Douglas Engelbart, Edward Snowden, Elon Musk, en.wikipedia.org, energy transition, Erik Brynjolfsson, European colonialism, everywhere but in the productivity statistics, factory automation, facts on the ground, fake news, Filter Bubble, financial innovation, Ford Model T, Ford paid five dollars a day, fulfillment center, full employment, future of work, gender pay gap, general purpose technology, Geoffrey Hinton, global supply chain, Gordon Gekko, GPT-3, Grace Hopper, Hacker Ethic, Ida Tarbell, illegal immigration, income inequality, indoor plumbing, industrial robot, interchangeable parts, invisible hand, Isaac Newton, Jacques de Vaucanson, James Watt: steam engine, Jaron Lanier, Jeff Bezos, job automation, Johannes Kepler, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Joseph-Marie Jacquard, Kenneth Arrow, Kevin Roose, Kickstarter, knowledge economy, labor-force participation, land reform, land tenure, Les Trente Glorieuses, low skilled workers, low-wage service sector, M-Pesa, manufacturing employment, Marc Andreessen, Mark Zuckerberg, megacity, mobile money, Mother of all demos, move fast and break things, natural language processing, Neolithic agricultural revolution, Norbert Wiener, NSO Group, offshore financial centre, OpenAI, PageRank, Panopticon Jeremy Bentham, paperclip maximiser, pattern recognition, Paul Graham, Peter Thiel, Productivity paradox, profit maximization, profit motive, QAnon, Ralph Nader, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Robert Bork, Robert Gordon, Robert Solow, robotic process automation, Ronald Reagan, scientific management, Second Machine Age, self-driving car, seminal paper, shareholder value, Sheryl Sandberg, Shoshana Zuboff, Silicon Valley, social intelligence, Social Responsibility of Business Is to Increase Its Profits, social web, South Sea Bubble, speech recognition, spice trade, statistical model, stem cell, Steve Jobs, Steve Wozniak, strikebreaker, subscription business, Suez canal 1869, Suez crisis 1956, supply-chain management, surveillance capitalism, tacit knowledge, tech billionaire, technoutopianism, Ted Nelson, TED Talk, The Future of Employment, The Rise and Fall of American Growth, The Structural Transformation of the Public Sphere, The Wealth of Nations by Adam Smith, theory of mind, Thomas Malthus, too big to fail, total factor productivity, trade route, transatlantic slave trade, trickle-down economics, 
Turing machine, Turing test, Twitter Arab Spring, Two Sigma, Tyler Cowen, Tyler Cowen: Great Stagnation, union organizing, universal basic income, Unsafe at Any Speed, Upton Sinclair, upwardly mobile, W. E. B. Du Bois, War on Poverty, WikiLeaks, wikimedia commons, working poor, working-age population

This makes the customer-service representative less effective and may encourage managers and technologists to cut the tasks allocated to them even further. These lessons about human intelligence and adaptability are often ignored in the AI community, which rushes to automate a range of tasks regardless of the role of human skill. The triumph of AI in radiology is much trumpeted. In 2016 Geoffrey Hinton, cocreator of modern deep-learning methods, Turing Award winner, and Google scientist, suggested that “people should stop training radiologists now. It’s just completely obvious that within five years deep learning is going to do better than radiologists.” Nothing of the sort has yet happened, and demand for radiologists has increased since 2016, for a very simple reason.


pages: 706 words: 202,591

Facebook: The Inside Story by Steven Levy

active measures, Airbnb, Airbus A320, Amazon Mechanical Turk, AOL-Time Warner, Apple's 1984 Super Bowl advert, augmented reality, Ben Horowitz, Benchmark Capital, Big Tech, Black Lives Matter, Blitzscaling, blockchain, Burning Man, business intelligence, Cambridge Analytica, cloud computing, company town, computer vision, crowdsourcing, cryptocurrency, data science, deep learning, disinformation, don't be evil, Donald Trump, Dunbar number, East Village, Edward Snowden, El Camino Real, Elon Musk, end-to-end encryption, fake news, Firefox, Frank Gehry, Geoffrey Hinton, glass ceiling, GPS: selective availability, growth hacking, imposter syndrome, indoor plumbing, information security, Jeff Bezos, John Markoff, Jony Ive, Kevin Kelly, Kickstarter, lock screen, Lyft, machine translation, Mahatma Gandhi, Marc Andreessen, Marc Benioff, Mark Zuckerberg, Max Levchin, Menlo Park, Metcalfe’s law, MITM: man-in-the-middle, move fast and break things, natural language processing, Network effects, Oculus Rift, operational security, PageRank, Paul Buchheit, paypal mafia, Peter Thiel, pets.com, post-work, Ray Kurzweil, recommendation engine, Robert Mercer, Robert Metcalfe, rolodex, Russian election interference, Salesforce, Sam Altman, Sand Hill Road, self-driving car, sexual politics, Sheryl Sandberg, Shoshana Zuboff, side project, Silicon Valley, Silicon Valley startup, skeuomorphism, slashdot, Snapchat, social contagion, social graph, social software, South of Market, San Francisco, Startup school, Steve Ballmer, Steve Bannon, Steve Jobs, Steven Levy, Steven Pinker, surveillance capitalism, tech billionaire, techlash, Tim Cook: Apple, Tragedy of the Commons, web application, WeWork, WikiLeaks, women in the workforce, Y Combinator, Y2K, you are the product

He wasn’t thinking about content moderation then, but rather about improvements in things like News Feed ranking, better targeting in ad auctions, and facial recognition to better identify your friends in photographs, so you’d engage more with those posts. But the competition to hire AI wizards was fierce. The godfather of deep learning was a British computer scientist working in Toronto named Geoffrey Hinton. He was like the Batman of this new and irreverent form of AI, and his acolytes were a trio of brilliant Robins who individually were making their own huge contributions. One of the Robins, a Parisian named Yann LeCun, jokingly dubbed Hinton’s movement “the Conspiracy.” But the potential of deep learning was no joke to the big tech companies that saw it as a way to perform amazing tasks at scale, everything from facial recognition to instant translation from one language to another.