14 results
Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
Ada Lovelace, AI winter, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, artificial general intelligence, autonomous vehicles, Bernie Sanders, Claude Shannon: information theory, cognitive dissonance, computer age, computer vision, dark matter, Douglas Hofstadter, Elon Musk, en.wikipedia.org, Gödel, Escher, Bach, I think there is a world market for maybe five computers, ImageNet competition, Jaron Lanier, job automation, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, license plate recognition, Mark Zuckerberg, natural language processing, Norbert Wiener, ought to be enough for anybody, pattern recognition, performance metric, RAND corporation, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Rodney Brooks, self-driving car, sentiment analysis, Silicon Valley, Singularitarianism, Skype, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, theory of mind, There's no reason for any individual to have a computer in his home - Ken Olsen, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!
For the ImageNet project, Mechanical Turk was “a godsend.”6 The service continues to be widely used by AI researchers for creating data sets; nowadays, academic grant proposals in AI commonly include a line item for “Mechanical Turk workers.” The ImageNet Competitions In 2010, the ImageNet project launched the first ImageNet Large Scale Visual Recognition Challenge, in order to spur progress toward more general object-recognition algorithms. Thirty-five programs competed, representing computer-vision researchers from academia and industry around the world. The competitors were given labeled training images—1.2 million of them—and a list of possible categories. The task for the trained programs was to output the correct category of each input image. The ImageNet competition had a thousand possible categories, compared with PASCAL’s twenty. The thousand possible categories were a subset of WordNet terms chosen by the organizers.
While this story is merely an interesting footnote to the larger history of deep learning in computer vision, I tell it to illustrate the extent to which the ImageNet competition came to be seen as the key symbol of progress in computer vision, and AI in general. Cheating aside, progress on ImageNet continued. The final competition was held in 2017, with a winning top-5 accuracy of 98 percent. As one journalist commented, “Today, many consider ImageNet solved,”11 at least for the classification task. The community is moving on to new benchmark data sets and new problems, especially ones that integrate vision and language. What was it that enabled ConvNets, which seemed to be at a dead end in the 1990s, to suddenly dominate the ImageNet competition, and subsequently most of computer vision over the last half decade? It turns out that the recent success of deep learning is due less to new breakthroughs in AI than to the availability of huge amounts of data (thank you, internet!)
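The “top-5 accuracy” figure quoted above means an answer counts as correct if the true category appears anywhere in the model’s five highest-scoring guesses. A minimal sketch of how that metric is computed (the toy scores and labels here are invented for illustration, not ImageNet data):

```python
def top_k_accuracy(scores, true_labels, k=5):
    """Fraction of examples whose true label is among the k highest-scoring categories."""
    correct = 0
    for row, label in zip(scores, true_labels):
        # indices of the k largest scores for this image
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        if label in top_k:
            correct += 1
    return correct / len(true_labels)

# toy example: 3 "images", 4 possible categories
scores = [
    [0.1, 0.7, 0.1, 0.1],     # true label 1 is the top guess
    [0.4, 0.3, 0.2, 0.1],     # true label 2 is only the 3rd guess
    [0.25, 0.25, 0.25, 0.25], # no information at all
]
labels = [1, 2, 3]

top1 = top_k_accuracy(scores, labels, k=1)  # strict: only the single best guess counts
top3 = top_k_accuracy(scores, labels, k=3)  # lenient: credit if the label is in the top 3
```

With a thousand fine-grained categories (many of them near-identical dog breeds), top-5 is deliberately more forgiving than top-1, which is why ImageNet results are usually reported both ways.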
The following year, the highest-scoring program—also using support vector machines—showed a respectable but modest improvement, getting 74 percent of the test images correct. Most people in the field expected this trend to continue; computer-vision research would chip away at the problem, with gradual improvement at each annual competition. However, these expectations were upended in the 2012 ImageNet competition: the winning entry achieved an amazing 85 percent correct. Such a jump in accuracy was a shocking development. What’s more, the winning entry did not use support vector machines or any of the other dominant computer-vision methods of the day. Instead, it was a convolutional neural network. This particular ConvNet has come to be known as AlexNet, named after its main creator, Alex Krizhevsky, then a graduate student at the University of Toronto, supervised by the eminent neural network researcher Geoffrey Hinton.
Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, Avi Goldfarb
"Robert Solow", Ada Lovelace, AI winter, Air France Flight 447, Airbus A320, artificial general intelligence, autonomous vehicles, basic income, Bayesian statistics, Black Swan, blockchain, call centre, Capital in the Twenty-First Century by Thomas Piketty, Captain Sullenberger Hudson, collateralized debt obligation, computer age, creative destruction, Daniel Kahneman / Amos Tversky, data acquisition, data is the new oil, deskilling, disruptive innovation, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, everywhere but in the productivity statistics, Google Glasses, high net worth, ImageNet competition, income inequality, information retrieval, inventory management, invisible hand, job automation, John Markoff, Joseph Schumpeter, Kevin Kelly, Lyft, Minecraft, Mitch Kapor, Moneyball by Michael Lewis explains big data, Nate Silver, new economy, On the Economy of Machinery and Manufactures, pattern recognition, performance metric, profit maximization, QWERTY keyboard, race to the bottom, randomized controlled trial, Ray Kurzweil, ride hailing / ride sharing, Second Machine Age, self-driving car, shareholder value, Silicon Valley, statistical model, Stephen Hawking, Steve Jobs, Steven Levy, strong AI, The Future of Employment, The Signal and the Noise by Nate Silver, Tim Cook: Apple, Turing test, Uber and Lyft, uber lyft, US Airways Flight 1549, Vernor Vinge, Watson beat the top human players on Jeopardy!, William Langewiesche, Y Combinator, zero-sum game
Andrej Karpathy, “What I Learned from Competing against a ConvNet on ImageNet,” Andrej Karpathy (blog), September 2, 2014, http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/; ImageNet, Large Scale Visual Recognition Challenge 2016, http://image-net.org/challenges/LSVRC/2016/results; Andrej Karpathy, ILSVRC 2014, http://cs.stanford.edu/people/karpathy/ilsvrc/. 8. Aaron Tilley, “China’s Rise in the Global AI Race Emerges as It Takes Over the Final ImageNet Competition,” Forbes, July 31, 2017, https://www.forbes.com/sites/aarontilley/2017/07/31/china-ai-imagenet/#dafa182170a8. 9. Dave Gershgorn, “The Data That Transformed AI Research—and Possibly the World,” Quartz, July 26, 2017, https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/. 10. Definitions from the Oxford English Dictionary. Chapter 4 1. J.
Health insurance involved predicting how much an individual would spend on medical care. Inventory management involved predicting how many items would be in a warehouse on a given day. More recently, entirely new classes of prediction problems emerged. Many were nearly impossible before the recent advances in machine intelligence technology, including object identification, language translation, and drug discovery. For example, the ImageNet Challenge is a high-profile annual contest to predict the name of an object in an image. Predicting the object in an image can be a difficult task, even for humans. The ImageNet data contains a thousand categories of objects, including many breeds of dog and other similar images. It can be difficult to tell the difference between a Tibetan mastiff and a Bernese mountain dog, or between a safe and a combination lock.
Ghost Work: How to Stop Silicon Valley From Building a New Global Underclass by Mary L. Gray, Siddharth Suri
Affordable Care Act / Obamacare, Amazon Mechanical Turk, augmented reality, autonomous vehicles, barriers to entry, basic income, big-box store, bitcoin, blue-collar work, business process, business process outsourcing, call centre, Capital in the Twenty-First Century by Thomas Piketty, cloud computing, collaborative consumption, collective bargaining, computer vision, corporate social responsibility, crowdsourcing, data is the new oil, deindustrialization, deskilling, don't be evil, Donald Trump, Elon Musk, employer provided health coverage, en.wikipedia.org, equal pay for equal work, Erik Brynjolfsson, financial independence, Frank Levy and Richard Murnane: The New Division of Labor, future of work, gig economy, glass ceiling, global supply chain, hiring and firing, ImageNet competition, industrial robot, informal economy, information asymmetry, Jeff Bezos, job automation, knowledge economy, low skilled workers, low-wage service sector, market friction, Mars Rover, natural language processing, new economy, passive income, pattern recognition, post-materialism, post-work, race to the bottom, Rana Plaza, recommendation engine, ride hailing / ride sharing, Ronald Coase, Second Machine Age, sentiment analysis, sharing economy, Shoshana Zuboff, side project, Silicon Valley, Silicon Valley startup, Skype, software as a service, speech recognition, spinning jenny, Stephen Hawking, The Future of Employment, The Nature of the Firm, transaction costs, two-sided market, union organizing, universal basic income, Vilfredo Pareto, women in the workforce, Works Progress Administration, Y Combinator
Shortly after, in 2007, Li and her colleagues found MTurk, and they realized that the MTurk API gave them a way to automatically distribute image-labeling tasks to people and pay them. They tried a few different workflows but were ultimately able to use about 49,000 workers from 167 countries to accurately label 3.2 million images.9 After two and a half years, their collective labor created a massive, gold-standard data set of high-resolution images, each with highly accurate labels of the objects in the image. Li called it ImageNet. Thanks to ImageNet competitions held annually since its creation, research teams use the data set to develop more sophisticated image recognition algorithms and to advance the state of the art. Having a gold-standard data set allowed researchers to measure the accuracy of their new algorithms and to compare their algorithms with the current state of the art. This allowed researchers to make so much progress that some AIs can now do a better job than humans in recognizing images!
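A standard way to get “highly accurate labels” out of crowd work is to show each image to several workers and accept a label only when they agree. The sketch below is a generic majority-vote aggregator, not a reconstruction of the actual ImageNet pipeline; the function name, threshold, and example data are all illustrative assumptions:

```python
from collections import Counter

def aggregate_labels(worker_labels, min_agreement=0.5):
    """Majority-vote aggregation of redundant crowd labels.

    worker_labels: dict mapping image id -> list of labels from different workers.
    Returns (accepted, needs_more): images whose winning label exceeds the
    agreement threshold, and images to send back out for more labeling.
    """
    accepted, needs_more = {}, []
    for image, labels in worker_labels.items():
        label, votes = Counter(labels).most_common(1)[0]
        if votes / len(labels) > min_agreement:
            accepted[image] = label
        else:
            needs_more.append(image)
    return accepted, needs_more

# hypothetical responses from three workers per image
votes = {
    "img_001.jpg": ["dog", "dog", "dog"],
    "img_002.jpg": ["safe", "combination lock", "safe"],
    "img_003.jpg": ["mastiff", "bernese", "lock"],  # no consensus: re-queue
}
accepted, requeue = aggregate_labels(votes)
```

Disagreement itself is a useful signal: images that fail to reach consensus are exactly the ambiguous ones that need more (or more expert) human attention.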
Edge, https://www.edge.org/response-detail/26587, accessed October 21, 2018. [back] 12. To incentivize researchers to use the data set, Li and her colleagues organized an annual contest pitting the best algorithms for the image recognition problem, from various research teams around the world, against one another. The progress scientists made toward this goal was staggering. The annual ImageNet competition saw a roughly 10x reduction in error and a roughly 3x increase in precision in recognizing images over the course of eight years. Eventually the vision algorithms achieved a lower error rate than the human workers. The algorithmic and engineering advances that scientists achieved over the eight years of competition fueled much of the recent success of neural networks, the so-called deep learning revolution, which would impact a variety of fields and problem domains.
MTurk workers are the AI revolution’s unsung heroes. Without them generating and improving the size and quality of the training data, ImageNet would not exist.11 ImageNet’s success is a noteworthy example of the paradox of automation’s last mile in action. Humans trained an AI, only to have the AI ultimately take over the task entirely. Researchers could then open up even harder problems. For example, after the ImageNet challenge finished, researchers turned their attention to finding where an object is in an image or video. These problems needed yet more training data, generating another wave of ghost work. But ImageNet is merely one of many examples of how computer programmers and business entrepreneurs use ghost work to create training data to develop better artificial intelligence.12 The Range of Ghost Work: From Micro-Tasks to Macro-Tasks The platforms generating on-demand ghost work offer themselves up as gatekeepers helping employers-turned-requesters tackle problems that need a bit of human intelligence.
The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns, Aaron Roth
23andMe, affirmative action, algorithmic trading, Alvin Roth, Bayesian statistics, bitcoin, cloud computing, computer vision, crowdsourcing, Edward Snowden, Elon Musk, Filter Bubble, general-purpose programming language, Google Chrome, ImageNet competition, Lyft, medical residency, Nash equilibrium, Netflix Prize, p-value, Pareto efficiency, performance metric, personalized medicine, pre–internet, profit motive, quantitative trading / quantitative ﬁnance, RAND corporation, recommendation engine, replication crisis, ride hailing / ride sharing, Robert Bork, Ronald Coase, self-driving car, short selling, sorting algorithm, speech recognition, statistical model, Stephen Hawking, superintelligent machines, telemarketer, Turing machine, two-sided market, Vilfredo Pareto
Such competitions have proliferated in recent years; the Netflix competition, which we have mentioned a couple of times already, was an early example. Commercial platforms such as Kaggle (which now, in fact, hosts the ImageNet competition) offer datasets and competitions—some offering awards of $100,000 for winning teams—for thousands of diverse, complex prediction problems. Machine learning has truly become a competitive sport. It wouldn’t make sense to score ImageNet competitors based on how well they classified the training images—after all, an algorithm could have simply memorized the labels for the training set, without learning any generalizable rule for classifying images. Instead, the right way to evaluate the competitors is to see how well their models classify new images that they have never seen before. The ImageNet competition reserved 100,000 “validation” images for this purpose. But the competition organizers also wanted to give participants a way to see how well they were doing.
But money alone wasn’t enough to recruit talent—top researchers want to work where other top researchers are—so it was important for AI labs that wanted to recruit premium talent to be viewed as places that were already on the cutting edge. In the United States, this included research labs at companies such as Google and Facebook. One way to do this was to beat the big players in a high-profile competition. The ImageNet competition was perfect—focused on exactly the kind of vision task for which deep learning was making headlines. The contest required each team’s computer program to classify the objects in images into a thousand different and highly specific categories, including “frilled lizard,” “banded gecko,” “oscilloscope,” and “reflex camera.” Each team could train their algorithm on a set of 1.5 million images that the competition organizers made available to all participants.
By doing so, they were able to test a sequence of slightly different models that gradually fit the validation set better and better—seeming to steadily improve their accuracy. But because of how they had cheated, it was impossible to tell whether they were making real scientific progress or if they were just exploiting a loophole to exfiltrate information about the supposedly “held out” validation set. Once the cheating was discovered, the competition organizers banned Baidu from the ImageNet competition for a year, and the company withdrew the scientific paper in which it had reported on its results. The team leader was fired, and instead of staging a coup in the competition for AI talent, Baidu suffered an embarrassing hit to its reputation in the machine learning community. But why, exactly, was testing too many models cheating? How did creating fake accounts help Baidu appear to have a better learning algorithm than it actually did?
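A small simulation shows why repeatedly querying a held-out set is cheating. Below, a thousand models with no skill whatsoever (coin-flip predictors) are “submitted” against one validation set; picking whichever scored best yields a validation accuracy well above chance, even though the chosen model is no better than any other on fresh data. The numbers and setup are illustrative, not Baidu's actual experiment:

```python
import random

random.seed(1)

N_VAL = 200  # a small validation set; the information leak is worse when it is small
true_labels = [random.randint(0, 1) for _ in range(N_VAL)]

def random_model_predictions():
    # a model with no real skill: coin-flip predictions on every image
    return [random.randint(0, 1) for _ in range(N_VAL)]

def score(preds):
    return sum(p == t for p, t in zip(preds, true_labels)) / N_VAL

# "submit" many meaningless models and keep whichever looks best on validation
best_val_score = max(score(random_model_predictions()) for _ in range(1000))

# best_val_score lands noticeably above the true accuracy of 0.5,
# yet the selected model would still score ~50% on genuinely new data
```

Every extra query leaks a little information about the held-out labels; enough queries, and the validation set is effectively part of the training data. That is why the organizers limited submissions per team, and why Baidu's fake accounts, which multiplied their query budget, amounted to cheating.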
The Economic Singularity: Artificial Intelligence and the Death of Capitalism by Calum Chace
3D printing, additive manufacturing, agricultural Revolution, AI winter, Airbnb, artificial general intelligence, augmented reality, autonomous vehicles, banking crisis, basic income, Baxter: Rethink Robotics, Berlin Wall, Bernie Sanders, bitcoin, blockchain, call centre, Chris Urmson, congestion charging, credit crunch, David Ricardo: comparative advantage, Douglas Engelbart, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Flynn Effect, full employment, future of work, gender pay gap, gig economy, Google Glasses, Google X / Alphabet X, ImageNet competition, income inequality, industrial robot, Internet of things, invention of the telephone, invisible hand, James Watt: steam engine, Jaron Lanier, Jeff Bezos, job automation, John Markoff, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, knowledge worker, lifelogging, lump of labour, Lyft, Marc Andreessen, Mark Zuckerberg, Martin Wolf, McJob, means of production, Milgram experiment, Narrative Science, natural language processing, new economy, Occupy movement, Oculus Rift, PageRank, pattern recognition, post scarcity, post-industrial society, post-work, precariat, prediction markets, QWERTY keyboard, railway mania, RAND corporation, Ray Kurzweil, RFID, Rodney Brooks, Sam Altman, Satoshi Nakamoto, Second Machine Age, self-driving car, sharing economy, Silicon Valley, Skype, software is eating the world, speech recognition, Stephen Hawking, Steve Jobs, TaskRabbit, technological singularity, The Future of Employment, Thomas Malthus, transaction costs, Tyler Cowen: Great Stagnation, Uber for X, uber lyft, universal basic income, Vernor Vinge, working-age population, Y Combinator, young professional
The way that some game-playing AIs become superhuman in their field is by playing millions of games against versions of themselves and learning from the outcomes.) In deep learning, the algorithms operate in several layers, each layer processing data from the previous one and passing its output up to the next. The output is not necessarily binary (simply on or off): it can be weighted. The number of layers can vary too, with anything above ten layers seen as very deep learning – although in December 2015 a Microsoft team won the ImageNet competition with a system that employed a massive 152 layers.[lxvi] Deep learning, and especially artificial neural nets (ANNs), are in many ways a return to an older approach to AI which was explored in the 1960s but abandoned because it proved ineffective. While Good Old-Fashioned AI held sway in most labs, a small group of pioneers known as the Toronto mafia kept faith with the neural network approach.
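The layered, weighted processing described above can be sketched in a few lines. This is a generic forward pass through a small stack of layers (layer sizes and weights are arbitrary illustrative choices, not any competition-winning architecture), using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # nonlinearity: outputs are graded real values, not just on/off
    return np.maximum(0, x)

# a small stack of layers: input -> two hidden layers -> output scores
layer_sizes = [8, 16, 16, 4]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # each layer computes a weighted sum of the previous layer's output
    # and passes the result up to the next layer
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return x @ weights[-1] + biases[-1]   # final layer: raw score per category

scores = forward(rng.standard_normal(8))  # one score for each of 4 categories
```

Making the stack "very deep" is just a matter of lengthening `layer_sizes`; the engineering challenge, which the 152-layer Microsoft system addressed, is training such stacks without the learning signal degrading on its way down through the layers.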
Even the worst case predictions envisage continued rapid improvement in computer processing power, albeit perhaps slower than previously. In December 2015, Microsoft's chief speech scientist Xuedong Huang noted that speech recognition has improved 20% a year consistently for the last 20 years. He predicted that computers would be as good as humans at understanding human speech within five years. Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. Common sense can be described as having a mental model of the world which allows you to predict what will happen if certain actions are taken. Professor Murray Shanahan of Imperial College uses the example of throwing a chair from a stage into an audience: humans would understand that members of the audience would throw up their hands to protect themselves, but some damage would probably be caused, and certainly some upset.
AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee
AI winter, Airbnb, Albert Einstein, algorithmic trading, artificial general intelligence, autonomous vehicles, barriers to entry, basic income, business cycle, cloud computing, commoditize, computer vision, corporate social responsibility, creative destruction, crony capitalism, Deng Xiaoping, deskilling, Donald Trump, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, full employment, future of work, gig economy, Google Chrome, happiness index / gross national happiness, if you build it, they will come, ImageNet competition, income inequality, informal economy, Internet of things, invention of the telegraph, Jeff Bezos, job automation, John Markoff, Kickstarter, knowledge worker, Lean Startup, low skilled workers, Lyft, mandatory minimum, Mark Zuckerberg, Menlo Park, minimum viable product, natural language processing, new economy, pattern recognition, pirate software, profit maximization, QR code, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, risk tolerance, Robert Mercer, Rodney Brooks, Rubik’s Cube, Sam Altman, Second Machine Age, self-driving car, sentiment analysis, sharing economy, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, Skype, special economic zone, speech recognition, Stephen Hawking, Steve Jobs, strong AI, The Future of Employment, Travis Kalanick, Uber and Lyft, uber lyft, universal basic income, urban planning, Y Combinator
Those unexpected improvements are expanding the realm of the possible when it comes to real-world uses and thus job disruptions. One of the clearest examples of these accelerating improvements is the ImageNet competition. In the competition, algorithms submitted by different teams are tasked with identifying thousands of different objects within millions of different images, such as birds, baseballs, screwdrivers, and mosques. It has quickly emerged as one of the most respected image-recognition contests and a clear benchmark for AI’s progress in computer vision. When the Oxford machine-learning experts made their estimates of technical capabilities in early 2013, the most recent ImageNet competition of 2012 had been the coming-out party for deep learning. Geoffrey Hinton’s team used those techniques to achieve a record-setting error rate of around 16 percent, a large leap forward in a competition where no team had ever gotten below 25 percent.
Bold: How to Go Big, Create Wealth and Impact the World by Peter H. Diamandis, Steven Kotler
3D printing, additive manufacturing, Airbnb, Amazon Mechanical Turk, Amazon Web Services, augmented reality, autonomous vehicles, Charles Lindbergh, cloud computing, creative destruction, crowdsourcing, Daniel Kahneman / Amos Tversky, dematerialisation, deskilling, disruptive innovation, Elon Musk, en.wikipedia.org, Exxon Valdez, fear of failure, Firefox, Galaxy Zoo, Google Glasses, Google Hangouts, gravity well, ImageNet competition, industrial robot, Internet of things, Jeff Bezos, John Harrison: Longitude, John Markoff, Jono Bacon, Just-in-time delivery, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, Lean Startup, life extension, loss aversion, Louis Pasteur, low earth orbit, Mahatma Gandhi, Marc Andreessen, Mark Zuckerberg, Mars Rover, meta analysis, meta-analysis, microbiome, minimum viable product, move fast and break things, Narrative Science, Netflix Prize, Network effects, Oculus Rift, optical character recognition, packet switching, PageRank, pattern recognition, performance metric, Peter H. Diamandis: Planetary Resources, Peter Thiel, pre–internet, Ray Kurzweil, recommendation engine, Richard Feynman, ride hailing / ride sharing, risk tolerance, rolodex, self-driving car, sentiment analysis, shareholder value, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart grid, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Stewart Brand, superconnector, technoutopianism, telepresence, telepresence robot, Turing test, urban renewal, web application, X Prize, Y Combinator, zero-sum game
In Germany, an annual competition pits humans against machine learning algorithms in an attempt to see, identify, and categorize traffic signs. Fifty thousand different traffic signs are used—signs obscured by long distances, by trees, by the glare of sunlight. In 2011, for the first time, a machine-learning algorithm bested its makers, achieving a 0.5 percent error rate, compared to 1.2 percent for humans.32 Even more impressive were the results of the 2012 ImageNet Competition, which challenged algorithms to look at one million different images—ranging from birds to kitchenware to people on motor scooters—and correctly slot them into a thousand unique categories. Seriously, it’s one thing for a computer to recognize known objects (zip codes, traffic signs), but categorizing thousands of random objects is an ability that is downright human. Only better. For again the algorithms outperformed people.33 Similar progress is showing up in reading.
., 15, 17, 18, 19, 20, 21 structure of, 21 see also entrepreneurs, exponential; specific exponential entrepreneurs and organizations Exponential Organizations (ExO) (Ismail), xiv, 15 extrinsic rewards, 78, 79 Exxon Valdez, 250 FAA (Federal Aviation Administration), 110, 111, 261 Facebook, 14, 16, 88, 128, 173, 182, 185, 190, 195, 196, 202, 212, 213, 217, 218, 224, 233, 234, 236, 241 facial recognition software, 58 Fairchild Semiconductor, 4 Falcon launchers, 97, 119, 122, 123 false wins, 268, 269, 271 Fast Company, 5, 248 Favreau, Jon, 117 feedback, feedback loops, 28, 77, 83, 84, 120, 176, 180 in crowdfunding campaigns, 176, 180, 182, 185, 190, 199, 200, 202, 209–10 triggering flow with, 86, 87, 90–91, 92 Festo, 61 FeverBee (blog), 233 Feynman, Richard, 268, 271 Firefox Web browser, 11 first principles, 116, 120–21, 122, 126 Fiverr, 157 fixed-funding campaigns, 185–86, 206 “flash prizes,” 250 Flickr, 14 flow, 85–94, 109, 278 creative triggers of, 87, 93 definition of, 86 environmental triggers of, 87, 88–89 psychological triggers of, 87, 89–91, 92 social triggers of, 87, 91–93 Flow Genome Project, xiii, 87, 278 Foldit, 145 Forbes, 125 Ford, Henry, 33, 112–13 Fortune, 123 Fossil Wrist Net, 176 Foster, Richard, 14–15 Foundations (Rose), 120 Fowler, Emily, 299n Foxconn, 62 Free (Anderson), 10–11 Freelancer.com, 149–51, 156, 158, 163, 165, 195, 207 Friedman, Thomas, 150–51 Galaxy Zoo, 220–21, 228 Gartner Hype Cycle, 25–26, 25, 26, 29 Gates, Bill, 23, 53 GEICO, 227 General Electric (GE), 43, 225 General Mills, 145 Gengo.com, 145 Genius, 161 genomics, x, 63, 64–65, 66, 227 Georgia Tech, 197 geostationary satellite, 100 Germany, 55 Get a Freelancer (website), 149 Gigwalk, 159 Giovannitti, Fred, 253 Gmail, 77, 138, 163 goals, goal setting, 74–75, 78, 79, 80, 82–83, 84, 85, 87, 137 in crowdfunding campaigns, 185–87, 191 moonshots in, 81–83, 93, 98, 103, 104, 110, 245, 248 subgoals in, 103–4, 112 triggering flow with, 89–90, 92, 93 Godin, Seth, 239–40 Google, 11, 14, 47, 
50, 61, 77, 80, 99, 128, 134, 135–39, 167, 195, 208, 251, 286n artificial intelligence development at, 24, 53, 58, 81, 138–39 autonomous cars of, 43–44, 44, 136, 137 eight innovation principles of, 84–85 robotics at, 139 skunk methodology used at, 81–84 thinking-at-scale strategies at, 136–38 Google Docs, 11 Google Glass, 58 Google Hangouts, 193, 202 Google Lunar XPRIZE, 139, 249 Googleplex, 134 Google+, 185, 190, 202 GoogleX, 81, 82, 83, 139 Google Zeitgeist, 136 Gossamer Condor, 263 Gou, Terry, 62 graphic designers, in crowdfunding campaigns, 193 Green, Hank, 180, 200 Grepper, Ryan, 210, 211–13 Grishin, Dmitry, 62 Grishin Robotics, 62 group flow, 91–93 Gulf Coast oil spill (2010), 250, 251, 253 Gulf of Mexico, 250, 251 hackathons, 159 hacker spaces, 62, 64 Hagel, John, III, 86, 106–7 HAL (fictional AI system), 52, 53 Hallowell, Ned, 88 Hariri, Robert, 65, 66 Harrison, John, 245, 247, 267 Hawking, Stephen, 110–12 Hawley, Todd, 100, 103, 104, 107, 114n Hayabusa mission, 97 health care, x, 245 AI’s impact on, 57, 276 behavior tracking in, 47 crowdsourcing projects in, 227, 253 medical manufacturing in, 34–35 robotics in, 62 3–D printing’s impact on, 34–35 Heath, Dan and Chip, 248 Heinlein, Robert, 114n Hendy, Barry, 12 Hendy’s law, 12 HeroX, 257–58, 262, 263, 265, 267, 269, 299n Hessel, Andrew, 63, 64 Hinton, Geoffrey, 58 Hoffman, Reid, 77, 231 Hollywood, 151–52 hosting platforms, 20–21 Howard, Jeremy, 54 Howe, Jeff, 144 Hseih, Tony, 80 Hughes, Jack, 152, 225–27, 254 Hull, Charles, 29–30, 32 Human Longevity, Inc. 
(HLI), 65–66 Hyatt Hotels Corporation, 20 IBM, 56, 57, 59, 76 ImageNet Competition (2012), 55 image recognition, 55, 58 Immelt, Jeff, 225 incentive competitions, xiii, 22, 139, 148, 152–54, 159, 160, 237, 240, 242, 243–73 addressing market failures with, 264–65, 269, 272 back-end business models in, 249, 265, 268 benefits of, 258–61 case studies of, 250–58 collaborative spirit in, 255, 260–61 crowdsourcing in designing of, 257–58 factors influencing success of, 245–47 false wins in, 268, 269, 271 “flash prizes” in, 250 global participation in, 267 innovation driven by, 245, 247, 248, 249, 252, 258–59, 260, 261 intellectual property (IP) in, 262, 267–68, 271 intrinsic rewards in, 254, 255 judging in, 273 key parameters for designing of, 263–68 launching of new industries with, 260, 268, 272 Master Team Agreements in, 273 media exposure in, 265, 266, 272, 273 MTP and passion as important in, 248, 249, 255, 263, 265, 270 operating costs of, 271, 272–73 principal motivators in, 254, 262–63 purses in, 265, 266, 270, 273 reasons for effectiveness of, 247–49 risk taking in, 247, 248–49, 261, 270 setting rules in, 263, 268, 269, 271, 273 small teams as ideal in, 262 step-by-step guide to, 269–73 telegenic finishes in, 266, 272, 273 time limits in, 249, 267, 271–72 XPRIZE, see XPRIZE competitions Indian Motorcycle company, 222 Indian Space Research Organization, 102 Indiegogo, 145, 173, 175, 178, 179, 184, 185–86, 187, 190, 199, 205, 206, 257 infinite computing, 21, 24, 41, 48–52, 61, 66 entrepreneurial opportunities and, 50–52 information: crowdsourcing platforms in gathering of, 145–46, 154–56, 157, 159–60, 220–21, 228 in data-driven crowdfunding campaigns, 207–10, 213 networks and sensors in garnering of, 42–43, 44, 47, 48, 256 science, 64 see also data mining Inman, Matthew, 178, 192, 193, 200 innovation, 8, 30, 56, 137, 256 companies resistant to, xi, 9–10, 12, 15, 23, 76 crowdsourcing and, see crowdsourcing as disruptive technology, 9–10 feedback loops in fostering 
of, 28, 77, 83, 84, 86, 87, 90–91, 92, 120, 176 Google’s eight principles of, 84–85 incentive competitions in driving of, 245, 247, 248, 249, 252, 258–59, 260, 261 infinite computing as new approach to, 51 power of constraints and, 248–49, 259 rate of, in online communities, 216, 219, 224, 225, 228, 233, 237 setting big goals for, 74–75, 78, 79, 80, 82–83, 84, 85, 87, 89–90, 92, 93, 103 skunk methodology in fostering of, 71–87, 88; see also skunk methodology inPulse, 176, 200 Instagram, 15–16, 16 insurance companies, 47 Intel, 7 intellectual property (IP), 262, 267–68, 271 INTELSAT, 102 Intel Science and Engineering Fair, 65 International Manufacturing Technology Show, 33 International Space Station (ISS), 35–36, 37, 97, 119 International Space University (ISU), 96, 100–104, 107–8 Founding Conference of, 102, 103 Internet, 8, 14, 39, 41, 45, 49, 50, 117, 118, 119, 132, 136, 143, 144, 153, 154, 163, 177, 207, 208, 209, 212, 216, 217, 228 building communities on, see communities, online crowd tools on, see crowdfunding, crowdfunding campaigns; crowdsourcing development of, 27 explosion of connectivity to, 42, 46, 46, 146, 147, 245 mainstreaming of, 27, 32, 33 reputation economics and, 217–19, 230, 232, 236–37 Internet-of-Things (IoT), 46, 47, 53 intrinsic rewards, 79, 254, 255 Invisalign, 34–35 iPads, 42, 57, 167 iPhones, 12, 42, 62, 176 iPod, 17, 18, 178 iRobot, 60 Iron Man, 52–53, 117 Ismail, Salim, xiv, 15, 77, 92 isolation, innovation and, 72, 76, 78, 79, 81–82, 257 Japan Aerospace Exploration Agency, 97 JARVIS (fictional AI system), 52–53, 58, 59, 146 Jeopardy, 56, 57 Jet Propulsion Laboratory (JPL), 99 Jobs, Steve, xiv, 23, 66–67, 72, 89, 111, 123 Johnson, Carolyn, 227 Johnson, Clarence “Kelly,” 71, 74, 75 skunk work rules of, 74, 75–76, 77, 81, 84, 247 Joy, Bill, 216, 256 Jumpstart Our Business Startups (JOBS) Act (2012), 171, 173 Kaggle, 160, 161 Kahneman, Daniel, 78, 121 Kaku, Michio, 49 Kauffman, Stuart, 276 Kaufman, Ben, 17–20 Kay, Alan, 114n Kemmer, 
Aaron, 35, 36, 37 Kickstarter, 145, 171, 173, 175, 176, 179–80, 182, 184, 190, 191, 193, 195, 197, 200, 205, 206 Kindle, 132 Kiva.org, 144–45, 172 Klein, Candace, 19–20, 171 Klein, Joshua, 217–18, 221 Klout, 218 Kodak Corporation, 4–8, 9–10, 11, 12, 20 Apparatus Division of, 4 bankruptcy of, 10, 16 digital camera developed by, 4–5, 5, 9 as innovation resistant, 9–10, 12, 15, 76 market dominance of, 5–6, 13–14 Kotler, Steven, xi, xiii, xv, 87, 279 Krieger, Mike, 15 Kubrick, Stanley, 52 Kurzweil, Ray, 53, 54, 58, 59 language translators, 137–38 crowdsourcing projects of, 145, 155–56 Latham, Gary, 74–75, 103 Law of Niches, 221, 223, 228, 231 leadership: importance of vision in, 23–24 moral, 274–76 Lean In (Sandberg), 217 Lean In circles, 217, 237 LEAP airplane, 34 LendingClub, 172 LeNet 5, 54, 55 Let’s Build a Goddamn Tesla Museum, see Tesla Museum campaign Levy, Steven, 138 Lewicki, Chris, 99, 179, 202, 203–4 Lichtenberg, Byron K., 102, 114n Licklider, J.
Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell
3D printing, Ada Lovelace, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alfred Russel Wallace, Andrew Wiles, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, augmented reality, autonomous vehicles, basic income, blockchain, brain emulation, Cass Sunstein, Claude Shannon: information theory, complexity theory, computer vision, connected car, crowdsourcing, Daniel Kahneman / Amos Tversky, delayed gratification, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Ernest Rutherford, Flash crash, full employment, future of work, Gerolamo Cardano, ImageNet competition, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invention of the wheel, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Nash: game theory, John von Neumann, Kenneth Arrow, Kevin Kelly, Law of Accelerating Returns, Mark Zuckerberg, Nash equilibrium, Norbert Wiener, NP-complete, openstreetmap, P = NP, Pareto efficiency, Paul Samuelson, Pierre-Simon Laplace, positional goods, probability theory / Blaise Pascal / Pierre de Fermat, profit maximization, RAND corporation, random walk, Ray Kurzweil, recommendation engine, RFID, Richard Thaler, ride hailing / ride sharing, Robert Shiller, Rodney Brooks, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, smart cities, smart contracts, social intelligence, speech recognition, Stephen Hawking, Steven Pinker, superintelligent machines, Thales of Miletus, The Future of Employment, Thomas Bayes, Thorstein Veblen, transport as a service, Turing machine, Turing test, universal basic income, uranium enrichment, Von Neumann architecture, Wall-E, Watson beat the top human players on Jeopardy!, web application, zero-sum game
Working out which way to turn the knobs to decrease the error is a straightforward application of calculus to compute how changing each weight would change the error at the output layer. This leads to a simple formula for propagating the error backwards from the output layer to the input layer, tweaking knobs along the way. Miraculously, the process works. For the task of recognizing objects in photographs, deep learning algorithms have demonstrated remarkable performance. The first inkling of this came in the 2012 ImageNet competition, which provides training data consisting of 1.2 million labeled images in one thousand categories, and then requires the algorithm to label one hundred thousand new images.4 Geoff Hinton, a British computational psychologist who was at the forefront of the first neural network revolution in the 1980s, had been experimenting with a very large deep convolutional network: 650,000 nodes and 60 million parameters.
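The chain-rule bookkeeping Russell describes can be made concrete in a few lines. Below is a minimal sketch in my own toy notation, not anything from the book: a two-layer sigmoid network on random data, where backpropagation computes the gradient of the output error with respect to every weight ("which way to turn each knob"), and a brute-force finite-difference probe confirms that the calculus is right.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # 5 inputs, 3 features
y = rng.normal(size=(5, 1))          # targets
W1 = rng.normal(size=(3, 4))         # input -> hidden weights
W2 = rng.normal(size=(4, 1))         # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(W1, W2):
    h = sigmoid(X @ W1)              # forward pass: hidden layer
    out = sigmoid(h @ W2)            # forward pass: output layer
    return 0.5 * np.sum((out - y) ** 2)

# Backward pass: the error term flows from the output layer back
# toward the input layer, one application of the chain rule per layer.
h = sigmoid(X @ W1)
out = sigmoid(h @ W2)
d_out = (out - y) * out * (1 - out)  # error at the output layer
d_h = (d_out @ W2.T) * h * (1 - h)   # propagated back to the hidden layer
grad_W2 = h.T @ d_out                # dLoss/dW2
grad_W1 = X.T @ d_h                  # dLoss/dW1

# Sanity check on one knob: nudge a weight and measure the loss change.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
numeric = (loss(W1p, W2) - loss(W1, W2)) / eps
print(abs(numeric - grad_W1[0, 0]))  # tiny: analytic and numeric gradients agree
```

Subtracting a small multiple of each gradient from the corresponding weight matrix is then one step of the "tweaking knobs" loop the text describes.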
Valiant’s approach concentrated on computational complexity, Vapnik’s on statistical analysis of the learning capacity of various classes of hypotheses, but both shared a common theoretical core connecting data and predictive accuracy. 3. For example, to learn the difference between the “situational superko” and “natural situational superko” rules, the learning algorithm would have to try repeating a board position that it had created previously by a pass rather than by playing a stone. The results would be different in different countries. 4. For a description of the ImageNet competition, see Olga Russakovsky et al., “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision 115 (2015): 211–52. 5. The first demonstration of deep networks for vision: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, ed. Fernando Pereira et al. (2012). 6.
Coders: The Making of a New Tribe and the Remaking of the World by Clive Thompson
2013 Report for America's Infrastructure - American Society of Civil Engineers - 19 March 2013, 4chan, 8-hour work day, Ada Lovelace, AI winter, Airbnb, Amazon Web Services, Asperger Syndrome, augmented reality, Ayatollah Khomeini, barriers to entry, basic income, Bernie Sanders, bitcoin, blockchain, blue-collar work, Brewster Kahle, Brian Krebs, Broken windows theory, call centre, cellular automata, Chelsea Manning, clean water, cloud computing, cognitive dissonance, computer vision, Conway's Game of Life, crowdsourcing, cryptocurrency, Danny Hillis, David Heinemeier Hansson, don't be evil, don't repeat yourself, Donald Trump, dumpster diving, Edward Snowden, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, Ethereum, ethereum blockchain, Firefox, Frederick Winslow Taylor, game design, glass ceiling, Golden Gate Park, Google Hangouts, Google X / Alphabet X, Grace Hopper, Guido van Rossum, Hacker Ethic, HyperCard, illegal immigration, ImageNet competition, Internet Archive, Internet of things, Jane Jacobs, John Markoff, Jony Ive, Julian Assange, Kickstarter, Larry Wall, lone genius, Lyft, Marc Andreessen, Mark Shuttleworth, Mark Zuckerberg, Menlo Park, microservices, Minecraft, move fast and break things, Nate Silver, Network effects, neurotypical, Nicholas Carr, Oculus Rift, PageRank, pattern recognition, Paul Graham, paypal mafia, Peter Thiel, pink-collar, planetary scale, profit motive, ransomware, recommendation engine, Richard Stallman, ride hailing / ride sharing, Rubik’s Cube, Ruby on Rails, Sam Altman, Satoshi Nakamoto, Saturday Night Live, self-driving car, side project, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, single-payer health, Skype, smart contracts, Snapchat, social software, software is eating the world, sorting algorithm, South of Market, San Francisco, speech recognition, Steve Wozniak, Steven Levy, TaskRabbit, the High Line, Travis Kalanick, Uber and Lyft, Uber for X, uber lyft, universal basic 
income, urban planning, Wall-E, Watson beat the top human players on Jeopardy!, WikiLeaks, women in the workforce, Y Combinator, Zimmermann PGP, éminence grise
You could now create neural nets with many layers, or even dozens: “deep learning,” as it’s called, because of how many layers are stacked up. In 2012, the field had a seismic breakthrough. Up at the University of Toronto, the British computer scientist Geoff Hinton had been beavering away for two decades on improving neural networks. That year he and a team of students showed off the most impressive neural net yet—by soundly beating competitors at an annual AI shootout. The ImageNet challenge, as it’s known, is an annual competition among AI researchers to see whose system is best at recognizing images. That year, Hinton’s deep-learning neural net got only 15.3 percent of the images wrong. The next-best competitor’s error rate was almost twice as high: 26.2 percent. It was an AI moon shot. Another of Dean’s colleagues was equally impressed: a Stanford professor named Andrew Ng, then a part-time consultant for Google X.
Army of None: Autonomous Weapons and the Future of War by Paul Scharre
active measures, Air France Flight 447, algorithmic trading, artificial general intelligence, augmented reality, automated trading system, autonomous vehicles, basic income, brain emulation, Brian Krebs, cognitive bias, computer vision, cuban missile crisis, dark matter, DARPA: Urban Challenge, DevOps, drone strike, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, facts on the ground, fault tolerance, Flash crash, Freestyle chess, friendly fire, IFF: identification friend or foe, ImageNet competition, Internet of things, Johann Wolfgang von Goethe, John Markoff, Kevin Kelly, Loebner Prize, loose coupling, Mark Zuckerberg, moral hazard, mutually assured destruction, Nate Silver, pattern recognition, Rodney Brooks, Rubik’s Cube, self-driving car, sensor fusion, South China Sea, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Ballmer, Steve Wozniak, Stuxnet, superintelligent machines, Tesla Model S, The Signal and the Noise by Nate Silver, theory of mind, Turing test, universal basic income, Valery Gerasimov, Wall-E, William Langewiesche, Y2K, zero day
Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf. 87 over a hundred layers: Christian Szegedy et al., “Going Deeper With Convolutions,” https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf. 87 error rate of only 4.94 percent: Richard Eckel, “Microsoft Researchers’ Algorithm Sets ImageNet Challenge Milestone,” Microsoft Research Blog, February 10, 2015, https://www.microsoft.com/en-us/research/microsoft-researchers-algorithm-sets-imagenet-challenge-milestone/. Kaiming He et al., “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” https://arxiv.org/pdf/1502.01852.pdf. 87 estimated 5.1 percent error rate: Olga Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” January 20, 2015, https://arxiv.org/pdf/1409.0575.pdf. 87 3.57 percent rate: Kaiming He et al., “Deep Residual Learning for Image Recognition,” December 10, 2015, https://arxiv.org/pdf/1512.03385v1.pdf. 6 Crossing the Threshold: Approving Autonomous Weapons 89 delineation of three classes of systems: Department of Defense, “Department of Defense Directive Number 3000.09.” 90 “minimize the probability and consequences”: Ibid, 1. 91 “We haven’t had anything that was even remotely close”: Frank Kendall, interview, November 7, 2016. 91 “We had an automatic mode”: Ibid. 91 “relatively soon”: Ibid. 91 “sort through all that”: Ibid. 91 “Are you just driving down”: Ibid. 92 “other side of the equation”: Ibid. 92 “a reasonable question to ask”: Ibid. 92 “where technology supports it”: Ibid. 92 “principles and obey them”: Ibid. 93 “Automation and artificial intelligence are”: Ibid. 93 Work explained in a 2014 monograph: Robert O.
Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems by Aurélien Géron
Amazon Mechanical Turk, Anton Chekhov, combinatorial explosion, computer vision, constrained optimization, correlation coefficient, crowdsourcing, don't repeat yourself, Elon Musk, en.wikipedia.org, friendly AI, ImageNet competition, information retrieval, iterative process, John von Neumann, Kickstarter, natural language processing, Netflix Prize, NP-complete, optical character recognition, P = NP, p-value, pattern recognition, pull request, recommendation engine, self-driving car, sentiment analysis, SpamAssassin, speech recognition, stochastic process
Typical CNN architecture Tip A common mistake is to use convolution kernels that are too large. You can often get the same effect as a 5 × 5 kernel by stacking two 3 × 3 kernels on top of each other, for a lot less compute. Over the years, variants of this fundamental architecture have been developed, leading to amazing advances in the field. A good measure of this progress is the error rate in competitions such as the ILSVRC ImageNet challenge. In this competition the top-5 error rate for image classification fell from over 26% to barely over 3% in just five years. The top-5 error rate is the fraction of test images for which the system’s top 5 predictions did not include the correct answer. The images are large (256 pixels high) and there are 1,000 classes, some of which are really subtle (try distinguishing 120 dog breeds). Looking at the evolution of the winning entries is a good way to understand how CNNs work.
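The top-5 metric described above is easy to compute from a model's class scores. A minimal sketch with toy scores and an invented helper name (not code from the book): an image counts as an error only when its true class is missing from the five highest-scoring predictions.

```python
import numpy as np

def top5_error_rate(scores, labels):
    """scores: (n_images, n_classes) array; labels: (n_images,) true class indices."""
    # Indices of the five highest-scoring classes for each image
    # (order among the five does not matter for the metric).
    top5 = np.argpartition(scores, -5, axis=1)[:, -5:]
    # An image is a "hit" if its true label appears anywhere in those five.
    hits = np.any(top5 == labels[:, None], axis=1)
    return 1.0 - hits.mean()

# Toy check: image 0's true class is among its top five, image 1's is not.
scores = np.zeros((2, 10))
scores[0, :5] = 1.0     # classes 0-4 score highest for image 0
scores[1, 5:] = 1.0     # classes 5-9 score highest for image 1
labels = np.array([0, 0])
print(top5_error_rate(scores, labels))  # 0.5
```

With 1,000 classes and many near-duplicate categories, top-5 is a more forgiving yardstick than top-1, which is why the competition results are usually quoted this way.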
Pac-Man Using Deep Q-Learning GRU (Gated Recurrent Unit) cell, GRU Cell-GRU Cell H hailstone sequence, Efficient Data Representations hard margin classification, Soft Margin Classification-Soft Margin Classification hard voting classifiers, Voting Classifiers-Voting Classifiers harmonic mean, Precision and Recall He initialization, Vanishing/Exploding Gradients Problems-Xavier and He Initialization Heaviside step function, The Perceptron Hebb's rule, The Perceptron, Hopfield Networks Hebbian learning, The Perceptron hidden layers, Multi-Layer Perceptron and Backpropagation hierarchical clustering, Unsupervised learning hinge loss function, Online SVMs histograms, Take a Quick Look at the Data Structure-Take a Quick Look at the Data Structure hold-out sets, Stacking (see also blenders) Hopfield Networks, Hopfield Networks-Hopfield Networks hyperbolic tangent (htan activation function), Multi-Layer Perceptron and Backpropagation, Activation Functions, Vanishing/Exploding Gradients Problems, Xavier and He Initialization, Recurrent Neurons hyperparameters, Overfitting the Training Data, Custom Transformers, Grid Search-Grid Search, Evaluate Your System on the Test Set, Gradient Descent, Polynomial Kernel, Computational Complexity, Fine-Tuning Neural Network Hyperparameters (see also neural network hyperparameters) hyperplane, Decision Function and Predictions, Manifold Learning-PCA, Projecting Down to d Dimensions, Other Dimensionality Reduction Techniques hypothesis, Select a Performance Measure manifold, Manifold Learning hypothesis boosting (see boosting) hypothesis function, Linear Regression hypothesis, null, Regularization Hyperparameters I identity matrix, Ridge Regression, Quadratic Programming ILSVRC ImageNet challenge, CNN Architectures image classification, CNN Architectures impurity measures, Making Predictions, Gini Impurity or Entropy? 
in-graph replication, In-Graph Versus Between-Graph Replication inception modules, GoogLeNet Inception-v4, ResNet incremental learning, Online learning, Incremental PCA inequality constraints, SVM Dual Problem inference, Model-based learning, Exercises, Memory Requirements, An Encoder–Decoder Network for Machine Translation info(), Take a Quick Look at the Data Structure information gain, Gini Impurity or Entropy?
Architects of Intelligence by Martin Ford
3D printing, agricultural Revolution, AI winter, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, barriers to entry, basic income, Baxter: Rethink Robotics, Bayesian statistics, bitcoin, business intelligence, business process, call centre, cloud computing, cognitive bias, Colonization of Mars, computer vision, correlation does not imply causation, crowdsourcing, DARPA: Urban Challenge, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, Fellow of the Royal Society, Flash crash, future of work, gig economy, Google X / Alphabet X, Gödel, Escher, Bach, Hans Rosling, ImageNet competition, income inequality, industrial robot, information retrieval, job automation, John von Neumann, Law of Accelerating Returns, life extension, Loebner Prize, Mark Zuckerberg, Mars Rover, means of production, Mitch Kapor, natural language processing, new economy, optical character recognition, pattern recognition, phenotype, Productivity paradox, Ray Kurzweil, recommendation engine, Robert Gordon, Rodney Brooks, Sam Altman, self-driving car, sensor fusion, sentiment analysis, Silicon Valley, smart cities, social intelligence, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, Ted Kaczynski, The Rise and Fall of American Growth, theory of mind, Thomas Bayes, Travis Kalanick, Turing test, universal basic income, Wall-E, Watson beat the top human players on Jeopardy!, women in the workforce, working-age population, zero-sum game, Zipcar
That’s a lovely example of scientists saying, “We’ve already decided what the answer has to look like, and anything that doesn’t look like the answer we believe in is of no interest.” In the end, science won out, and two of my students won a big public competition, and they won it dramatically. They got almost half the error rate of the best computer vision systems, and they were using mainly techniques developed in Yann LeCun’s lab but mixed in with a few of our own techniques as well. MARTIN FORD: This was the ImageNet competition? GEOFFREY HINTON: Yes, and what happened then was what should happen in science. One method that people used to think of as complete nonsense had now worked much better than the method they believed in, and within two years, they all switched. So, for things like object classification, nobody would dream of trying to do it without using a neural network now. MARTIN FORD: This was back in 2012, I believe.
We immediately open-sourced ImageNet for the world, because to this day I believe in the democratization of technology. We released the entire 15 million images to the world and started to run international competitions for researchers to work on the ImageNet problems: not on the tiny small-scale problems but on the problems that mattered to humans and applications. Fast-forward to 2012, and I think we see the turning point in object recognition for a lot of people. The winner of the 2012 ImageNet competition created a convergence of ImageNet, GPU computing power, and convolutional neural networks as an algorithm. Geoffrey Hinton wrote a seminal paper that, for me, was Phase One in achieving the holy grail of object recognition. MARTIN FORD: Did you continue this project? FEI-FEI LI: For the next two years, I worked on taking object recognition a step further. If we again look at human development, babies start by babbling, a few words, and then they start making sentences.
Driverless: Intelligent Cars and the Road Ahead by Hod Lipson, Melba Kurman
AI winter, Air France Flight 447, Amazon Mechanical Turk, autonomous vehicles, barriers to entry, butterfly effect, carbon footprint, Chris Urmson, cloud computing, computer vision, connected car, creative destruction, crowdsourcing, DARPA: Urban Challenge, digital map, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Google Earth, Google X / Alphabet X, high net worth, hive mind, ImageNet competition, income inequality, industrial robot, intermodal, Internet of things, job automation, Joseph Schumpeter, lone genius, Lyft, megacity, Network effects, New Urbanism, Oculus Rift, pattern recognition, performance metric, precision agriculture, RFID, ride hailing / ride sharing, Second Machine Age, self-driving car, Silicon Valley, smart cities, speech recognition, statistical model, Steve Jobs, technoutopianism, Tesla Model S, Travis Kalanick, Uber and Lyft, uber lyft, Unsafe at Any Speed
See also Mid-level controls Consumer acceptance, 11–13 Controls engineering Overview of, 47, 75–77 See also Low-level controls; Mid-level controls; High-level controls Convolutional neural networks (CNNs), 214–218 Corner cases, 4, 5, 89, 154 Creative destruction, 261–263 Crime, 273, 274 DARPA Challenges, 149, 150 DARPA Grand Challenge 2004 DARPA Grand Challenge 2005, 151, 152 DARPA Urban Challenge 2007, 156–158 Data CAN bus protocol, 193, 194 Data collection, 239, 240 Training data for deep learning, 218–220 See also Machine learning; Route-planning software; Traffic prediction software Deep learning History of, 197, 199–202, 219, 223–226 How deep learning works, 7, 8, 226–231 See also ImageNet competition; Neocognitron; Perceptron; SuperVision Demo '97, 134, 135 Digital cameras, 173–175 Disney Hall, Los Angeles, 36 Disney's Magic Highway U.S.A. Dog of War, 79 Downtowns, 32–37 Drive by wire, 191–194 Driver assist, 55–58. See also Human in the loop Driverless-car reliability, 98–104, 195–196 Drive-PX, 225 E-commerce, 271, 272 Edge detectors, 229 Electronic Highway History of, 116–120 Reasons for demise, 123, 124 See also General Motors Corporation (GM) Environment.
Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again by Eric Topol
23andMe, Affordable Care Act / Obamacare, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, artificial general intelligence, augmented reality, autonomous vehicles, bioinformatics, blockchain, cloud computing, cognitive bias, Colonization of Mars, computer age, computer vision, conceptual framework, creative destruction, crowdsourcing, Daniel Kahneman / Amos Tversky, dark matter, David Brooks, digital twin, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, fault tolerance, George Santayana, Google Glasses, ImageNet competition, Jeff Bezos, job automation, job satisfaction, Joi Ito, Mark Zuckerberg, medical residency, meta analysis, meta-analysis, microbiome, natural language processing, new economy, Nicholas Carr, nudge unit, pattern recognition, performance metric, personalized medicine, phenotype, placebo effect, randomized controlled trial, recommendation engine, Rubik’s Cube, Sam Altman, self-driving car, Silicon Valley, speech recognition, Stephen Hawking, text mining, the scientific method, Tim Cook: Apple, War on Poverty, Watson beat the top human players on Jeopardy!, working-age population
IMAGES ImageNet exemplified an adage about AI: datasets—not algorithms—might be the key limiting factor of human-level artificial intelligence.39 When Fei-Fei Li, a computer scientist now at Stanford and half time at Google, started ImageNet in 2007, she bucked the idea that algorithms ideally needed nurturing from Big Data and instead pursued the in-depth annotation of images. She recognized it wasn’t just about Big Data; it was about carefully, extensively labeled Big Data. A few years ago, she said, “I consider the pixel data in images and video to be the dark matter of the Internet.”40 Many different convolutional DNNs were used to classify the images, with annual ImageNet Challenge contests to recognize the best (such as AlexNet, GoogleNet, VGG Net, and ResNet). Figure 4.6 shows the progress in reducing the error rate over several years; by the time ImageNet wrapped up in 2017, image-recognition performance was significantly better than human. The error rate fell from 30 percent in 2010 to 4 percent in 2016. Li’s 2015 TED Talk “How We’re Teaching Computers to Understand Pictures” has been viewed more than 2 million times, and it’s one of my favorites.41 FIGURE 4.6: Over time, deep learning AI has exceeded human performance for image recognition.