ImageNet competition

14 results


pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

Ada Lovelace, AI winter, Alignment Problem, AlphaGo, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, artificial general intelligence, autonomous vehicles, backpropagation, Bernie Sanders, Big Tech, Boston Dynamics, Cambridge Analytica, Charles Babbage, Claude Shannon: information theory, cognitive dissonance, computer age, computer vision, Computing Machinery and Intelligence, dark matter, deep learning, DeepMind, Demis Hassabis, Douglas Hofstadter, driverless car, Elon Musk, en.wikipedia.org, folksonomy, Geoffrey Hinton, Gödel, Escher, Bach, I think there is a world market for maybe five computers, ImageNet competition, Jaron Lanier, job automation, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, license plate recognition, machine translation, Mark Zuckerberg, natural language processing, Nick Bostrom, Norbert Wiener, ought to be enough for anybody, paperclip maximiser, pattern recognition, performance metric, RAND corporation, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Rodney Brooks, self-driving car, sentiment analysis, Silicon Valley, Singularitarianism, Skype, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, tacit knowledge, tail risk, TED Talk, the long tail, theory of mind, There's no reason for any individual to have a computer in his home - Ken Olsen, trolley problem, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, world market for maybe five computers

For the ImageNet project, Mechanical Turk was “a godsend.”6 The service continues to be widely used by AI researchers for creating data sets; nowadays, academic grant proposals in AI commonly include a line item for “Mechanical Turk workers.”

The ImageNet Competitions

In 2010, the ImageNet project launched the first ImageNet Large Scale Visual Recognition Challenge, in order to spur progress toward more general object-recognition algorithms. Thirty-five programs competed, representing computer-vision researchers from academia and industry around the world. The competitors were given labeled training images—1.2 million of them—and a list of possible categories. The task for the trained programs was to output the correct category of each input image. The ImageNet competition had a thousand possible categories, compared with PASCAL’s twenty.

The following year, the highest-scoring program—also using support vector machines—showed a respectable but modest improvement, getting 74 percent of the test images correct. Most people in the field expected this trend to continue; computer-vision research would chip away at the problem, with gradual improvement at each annual competition. However, these expectations were upended in the 2012 ImageNet competition: the winning entry achieved an amazing 85 percent correct. Such a jump in accuracy was a shocking development. What’s more, the winning entry did not use support vector machines or any of the other dominant computer-vision methods of the day. Instead, it was a convolutional neural network.
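For a concrete sense of what such an entry looks like, here is a minimal convolutional network sketched in PyTorch. It is far smaller than the actual 2012 winner, and all layer sizes are illustrative assumptions; only the overall shape, stacked convolution and pooling layers feeding a classifier over a thousand categories, follows the description above.

```python
# A minimal convolutional network sketch in PyTorch -- illustrative only,
# far smaller than the 2012 winning entry (AlexNet), but the same shape of
# idea: stacked convolution + pooling layers feeding a classifier over
# 1,000 ImageNet categories.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample, keep strongest responses
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)                   # (batch, 64, 56, 56) for a 224x224 input
        return self.classifier(x.flatten(1))   # one score per category

model = TinyConvNet()
scores = model(torch.randn(1, 3, 224, 224))    # a fake 224x224 RGB image
print(scores.argmax(dim=1))                    # index of the predicted category
```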

It didn’t take long before all the big tech companies (as well as many smaller ones) were snapping up deep-learning experts and their graduate students as fast as possible. Seemingly overnight, deep learning became the hottest part of AI, and expertise in deep learning guaranteed computer scientists a large salary in Silicon Valley or, better yet, venture capital funding for their proliferating deep-learning start-up companies. The annual ImageNet competition began to see wider coverage in the media, and it quickly morphed from a friendly academic contest into a high-profile sparring match for tech companies commercializing computer vision. Winning at ImageNet would guarantee coveted respect from the vision community, along with free publicity, which might translate into product sales and higher stock prices.


pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, Avi Goldfarb

Abraham Wald, Ada Lovelace, AI winter, Air France Flight 447, Airbus A320, algorithmic bias, AlphaGo, Amazon Picking Challenge, artificial general intelligence, autonomous vehicles, backpropagation, basic income, Bayesian statistics, Black Swan, blockchain, call centre, Capital in the Twenty-First Century by Thomas Piketty, Captain Sullenberger Hudson, carbon tax, Charles Babbage, classic study, collateralized debt obligation, computer age, creative destruction, Daniel Kahneman / Amos Tversky, data acquisition, data is the new oil, data science, deep learning, DeepMind, deskilling, disruptive innovation, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, everywhere but in the productivity statistics, financial engineering, fulfillment center, general purpose technology, Geoffrey Hinton, Google Glasses, high net worth, ImageNet competition, income inequality, information retrieval, inventory management, invisible hand, Jeff Hawkins, job automation, John Markoff, Joseph Schumpeter, Kevin Kelly, Lyft, Minecraft, Mitch Kapor, Moneyball by Michael Lewis explains big data, Nate Silver, new economy, Nick Bostrom, On the Economy of Machinery and Manufactures, OpenAI, paperclip maximiser, pattern recognition, performance metric, profit maximization, QWERTY keyboard, race to the bottom, randomized controlled trial, Ray Kurzweil, ride hailing / ride sharing, Robert Solow, Salesforce, Second Machine Age, self-driving car, shareholder value, Silicon Valley, statistical model, Stephen Hawking, Steve Jobs, Steve Jurvetson, Steven Levy, strong AI, The Future of Employment, the long tail, The Signal and the Noise by Nate Silver, Tim Cook: Apple, trolley problem, Turing test, Uber and Lyft, uber lyft, US Airways Flight 1549, Vernor Vinge, vertical integration, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, William Langewiesche, Y Combinator, zero-sum game

Andrej Karpathy, “What I Learned from Competing against a ConvNet on ImageNet,” Andrej Karpathy (blog), September 2, 2014, http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/; ImageNet, Large Scale Visual Recognition Challenge 2016, http://image-net.org/challenges/LSVRC/2016/results; Andrej Karpathy, ILSVRC 2014, http://cs.stanford.edu/people/karpathy/ilsvrc/. 8. Aaron Tilley, “China’s Rise in the Global AI Race Emerges as It Takes Over the Final ImageNet Competition,” Forbes, July 31, 2017, https://www.forbes.com/sites/aarontilley/2017/07/31/china-ai-imagenet/#dafa182170a8. 9. Dave Gershgorn, “The Data That Transformed AI Research—and Possibly the World,” Quartz, July 26, 2017, https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/. 10.

Inventory management involved predicting how many items would be in a warehouse on a given day. More recently, entirely new classes of prediction problems emerged. Many were nearly impossible before the recent advances in machine intelligence technology, including object identification, language translation, and drug discovery. For example, the ImageNet Challenge is a high-profile annual contest to predict the name of an object in an image. Predicting the object in an image can be a difficult task, even for humans. The ImageNet data contains a thousand categories of objects, including many breeds of dog and other similar images. It can be difficult to tell the difference between a Tibetan mastiff and a Bernese mountain dog, or between a safe and a combination lock.



pages: 346 words: 97,330

Ghost Work: How to Stop Silicon Valley From Building a New Global Underclass by Mary L. Gray, Siddharth Suri

"World Economic Forum" Davos, Affordable Care Act / Obamacare, AlphaGo, Amazon Mechanical Turk, Apollo 13, augmented reality, autonomous vehicles, barriers to entry, basic income, benefit corporation, Big Tech, big-box store, bitcoin, blue-collar work, business process, business process outsourcing, call centre, Capital in the Twenty-First Century by Thomas Piketty, cloud computing, cognitive load, collaborative consumption, collective bargaining, computer vision, corporate social responsibility, cotton gin, crowdsourcing, data is the new oil, data science, deep learning, DeepMind, deindustrialization, deskilling, digital divide, do well by doing good, do what you love, don't be evil, Donald Trump, Elon Musk, employer provided health coverage, en.wikipedia.org, equal pay for equal work, Erik Brynjolfsson, fake news, financial independence, Frank Levy and Richard Murnane: The New Division of Labor, fulfillment center, future of work, gig economy, glass ceiling, global supply chain, hiring and firing, ImageNet competition, independent contractor, industrial robot, informal economy, information asymmetry, Jeff Bezos, job automation, knowledge economy, low skilled workers, low-wage service sector, machine translation, market friction, Mars Rover, natural language processing, new economy, operational security, passive income, pattern recognition, post-materialism, post-work, power law, race to the bottom, Rana Plaza, recommendation engine, ride hailing / ride sharing, Ronald Coase, scientific management, search costs, Second Machine Age, sentiment analysis, sharing economy, Shoshana Zuboff, side project, Silicon Valley, Silicon Valley startup, Skype, software as a service, speech recognition, spinning jenny, Stephen Hawking, TED Talk, The Future of Employment, The Nature of the Firm, Tragedy of the Commons, transaction costs, two-sided market, union organizing, universal basic income, Vilfredo Pareto, Wayback Machine, women in the workforce, work culture , Works Progress Administration, Y Combinator, Yochai Benkler

They tried a few different workflows but were ultimately able to use about 49,000 workers from 167 countries to accurately label 3.2 million images.9 After two and a half years, their collective labor created a massive, gold-standard data set of high-resolution images, each with highly accurate labels of the objects in the image. Li called it ImageNet. Thanks to ImageNet competitions held annually since its creation, research teams use the data set to develop more sophisticated image recognition algorithms and to advance the state of the art. Having a gold-standard data set allowed researchers to measure the accuracy of their new algorithms and to compare their algorithms with the current state of the art.

To incentivize researchers to use the data set, Li and her colleagues organized an annual contest pitting the best algorithms for the image recognition problem, from various research teams around the world, against one another. The progress scientists made toward this goal was staggering. The annual ImageNet competition saw a roughly 10x reduction in error and a roughly 3x increase in precision in recognizing images over the course of eight years. Eventually the vision algorithms achieved a lower error rate than the human workers. The algorithmic and engineering advances that scientists achieved over the eight years of competition fueled much of the recent success of neural networks, the so-called deep learning revolution, which would impact a variety of fields and problem domains.

Without them generating and improving the size and quality of the training data, ImageNet would not exist.11 ImageNet’s success is a noteworthy example of the paradox of automation’s last mile in action. Humans trained an AI, only to have the AI ultimately take over the task entirely. Researchers could then open up even harder problems. For example, after the ImageNet challenge finished, researchers turned their attention to finding where an object is in an image or video. These problems needed yet more training data, generating another wave of ghost work. But ImageNet is merely one of many examples of how computer programmers and business entrepreneurs use ghost work to create training data to develop better artificial intelligence.12

The Range of Ghost Work: From Micro-Tasks to Macro-Tasks

The platforms generating on-demand ghost work offer themselves up as gatekeepers helping employers-turned-requesters tackle problems that need a bit of human intelligence.


The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns, Aaron Roth

23andMe, affirmative action, algorithmic bias, algorithmic trading, Alignment Problem, Alvin Roth, backpropagation, Bayesian statistics, bitcoin, cloud computing, computer vision, crowdsourcing, data science, deep learning, DeepMind, Dr. Strangelove, Edward Snowden, Elon Musk, fake news, Filter Bubble, general-purpose programming language, Geoffrey Hinton, Google Chrome, ImageNet competition, Lyft, medical residency, Nash equilibrium, Netflix Prize, p-value, Pareto efficiency, performance metric, personalized medicine, pre–internet, profit motive, quantitative trading / quantitative finance, RAND corporation, recommendation engine, replication crisis, ride hailing / ride sharing, Robert Bork, Ronald Coase, self-driving car, short selling, sorting algorithm, sparse data, speech recognition, statistical model, Stephen Hawking, superintelligent machines, TED Talk, telemarketer, Turing machine, two-sided market, Vilfredo Pareto

But money alone wasn’t enough to recruit talent—top researchers want to work where other top researchers are—so it was important for AI labs that wanted to recruit premium talent to be viewed as places that were already on the cutting edge. In the United States, this included research labs at companies such as Google and Facebook. One way to do this was to beat the big players in a high-profile competition. The ImageNet competition was perfect—focused on exactly the kind of vision task for which deep learning was making headlines. The contest required each team’s computer program to classify the objects in images into a thousand different and highly specific categories, including “frilled lizard,” “banded gecko,” “oscilloscope,” and “reflex camera.”

The training images came with labels, so that the learning algorithms could be told what kind of object was in each image. Such competitions have proliferated in recent years; the Netflix competition, which we have mentioned a couple of times already, was an early example. Commercial platforms such as Kaggle (which now, in fact, hosts the ImageNet competition) offer datasets and competitions—some offering awards of $100,000 for winning teams—for thousands of diverse, complex prediction problems. Machine learning has truly become a competitive sport.

It wouldn’t make sense to score ImageNet competitors based on how well they classified the training images—after all, an algorithm could have simply memorized the labels for the training set, without learning any generalizable rule for classifying images. Instead, the right way to evaluate the competitors is to see how well their models classify new images that they have never seen before. The ImageNet competition reserved 100,000 “validation” images for this purpose. But the competition organizers also wanted to give participants a way to see how well they were doing. So they allowed each team to test their progress by submitting their current model and being told how frequently it correctly classified the validation images.
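A toy illustration of the point, with entirely made-up data: a "model" that simply memorizes its training labels scores perfectly on images it has seen and collapses to chance on held-out validation images, which is exactly why competitors are scored on data they have never seen.

```python
# A toy sketch (all data made up) of why ImageNet competitors are scored on
# held-out "validation" images rather than the training set: rote
# memorization looks perfect on seen data and collapses to chance on unseen data.
import random

random.seed(0)
NUM_CLASSES = 10
train = [(i, random.randrange(NUM_CLASSES)) for i in range(1000)]        # (image_id, label)
valid = [(i, random.randrange(NUM_CLASSES)) for i in range(1000, 1200)]

memorized = dict(train)  # "learning" by rote: store every training label

def predict(image_id):
    # Return the memorized label if we've seen this exact image, else guess.
    return memorized.get(image_id, random.randrange(NUM_CLASSES))

def accuracy(dataset):
    return sum(predict(x) == y for x, y in dataset) / len(dataset)

print(f"training accuracy:   {accuracy(train):.2f}")   # 1.00 -- pure memorization
print(f"validation accuracy: {accuracy(valid):.2f}")   # ~0.10 -- no generalization
```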


pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything by Martin Ford

AI winter, Airbnb, algorithmic bias, algorithmic trading, Alignment Problem, AlphaGo, Amazon Mechanical Turk, Amazon Web Services, artificial general intelligence, Automated Insights, autonomous vehicles, backpropagation, basic income, Big Tech, big-box store, call centre, carbon footprint, Chris Urmson, Claude Shannon: information theory, clean water, cloud computing, commoditize, computer age, computer vision, Computing Machinery and Intelligence, coronavirus, correlation does not imply causation, COVID-19, crowdsourcing, data is the new oil, data science, deep learning, deepfake, DeepMind, Demis Hassabis, deskilling, disruptive innovation, Donald Trump, Elon Musk, factory automation, fake news, fulfillment center, full employment, future of work, general purpose technology, Geoffrey Hinton, George Floyd, gig economy, Gini coefficient, global pandemic, Googley, GPT-3, high-speed rail, hype cycle, ImageNet competition, income inequality, independent contractor, industrial robot, informal economy, information retrieval, Intergovernmental Panel on Climate Change (IPCC), Internet of things, Jeff Bezos, job automation, John Markoff, Kiva Systems, knowledge worker, labor-force participation, Law of Accelerating Returns, license plate recognition, low interest rates, low-wage service sector, Lyft, machine readable, machine translation, Mark Zuckerberg, Mitch Kapor, natural language processing, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, Ocado, OpenAI, opioid epidemic / opioid crisis, passive income, pattern recognition, Peter Thiel, Phillips curve, post scarcity, public intellectual, Ray Kurzweil, recommendation engine, remote working, RFID, ride hailing / ride sharing, Robert Gordon, Rodney Brooks, Rubik’s Cube, Sam Altman, self-driving car, Silicon Valley, Silicon Valley startup, social distancing, SoftBank, South of Market, San Francisco, special economic zone, speech recognition, stealth mode startup, Stephen Hawking, superintelligent machines, TED Talk, The Future of Employment, The Rise and Fall of American Growth, the scientific method, Turing machine, Turing test, Tyler Cowen, Tyler Cowen: Great Stagnation, Uber and Lyft, uber lyft, universal basic income, very high income, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, WikiLeaks, women in the workforce, Y Combinator

Without relying on GPU chips to accelerate their deep neural network, it’s doubtful that the winning team’s entry would have performed well enough to win the contest. We’ll delve further into the history of deep learning in Chapter 4. The University of Toronto’s team used GPUs manufactured by NVIDIA, a company founded in 1993 whose business focused exclusively on designing and manufacturing state-of-the-art graphics chips. In the wake of the 2012 ImageNet competition and the ensuing widespread recognition of the powerful synergy between deep learning and GPUs, the company’s trajectory shifted dramatically, transforming it into one of the most prominent technology companies associated with the rise of artificial intelligence. Evidence of the deep learning revolution manifested directly in the company’s market value: between January 2012 and January 2020 NVIDIA’s shares soared by more than 1,500 percent.

Many of the startup companies and university researchers working in this area believe, like Covariant, that a strategy founded on deep neural networks and reinforcement learning is the best way to fuel progress toward more dexterous robots. One notable exception is Vicarious, a small AI company based in the San Francisco Bay Area. Vicarious was founded in 2010—two years before the 2012 ImageNet competition brought deep learning to the forefront—and its long-term objective is to achieve human-level or artificial general intelligence. In other words, the company is, in a sense, competing directly with higher-profile and far better funded initiatives like those at DeepMind and OpenAI. We’ll delve into the paths being forged by those two companies and the general quest for human-level AI in Chapter 5.

Schmidhuber is clearly frustrated over the lack of recognition given to his own research, and is known for abrasively interrupting presentations at AI conferences and leveling accusations of a “conspiracy” to rewrite deep learning’s history, especially on the part of Hinton, LeCun and Bengio.15 For their part, these better-known researchers push back aggressively. LeCun told a New York Times reporter that “Jürgen is manically obsessed with recognition and keeps claiming credit he doesn’t deserve.”16 Though disagreements about the true origins of deep learning are likely to persist, there is no doubt that in the wake of the 2012 ImageNet competition, the technique rapidly took the field of artificial intelligence—and most of the technology industry’s largest companies—by storm. American tech behemoths like Google, Amazon, Facebook and Apple, as well as the Chinese companies Baidu, Tencent and Alibaba, immediately recognized the disruptive potential of deep neural networks and began to build research teams and incorporate the technology into their products and operations.


The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do by Erik J. Larson

AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Alignment Problem, AlphaGo, Amazon Mechanical Turk, artificial general intelligence, autonomous vehicles, Big Tech, Black Swan, Bletchley Park, Boeing 737 MAX, business intelligence, Charles Babbage, Claude Shannon: information theory, Computing Machinery and Intelligence, conceptual framework, correlation does not imply causation, data science, deep learning, DeepMind, driverless car, Elon Musk, Ernest Rutherford, Filter Bubble, Geoffrey Hinton, Georg Cantor, Higgs boson, hive mind, ImageNet competition, information retrieval, invention of the printing press, invention of the wheel, Isaac Newton, Jaron Lanier, Jeff Hawkins, John von Neumann, Kevin Kelly, Large Hadron Collider, Law of Accelerating Returns, Lewis Mumford, Loebner Prize, machine readable, machine translation, Nate Silver, natural language processing, Nick Bostrom, Norbert Wiener, PageRank, PalmPilot, paperclip maximiser, pattern recognition, Peter Thiel, public intellectual, Ray Kurzweil, retrograde motion, self-driving car, semantic web, Silicon Valley, social intelligence, speech recognition, statistical model, Stephen Hawking, superintelligent machines, tacit knowledge, technological singularity, TED Talk, The Coming Technological Singularity, the long tail, the scientific method, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, theory of mind, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, Yochai Benkler

The systems aren’t perfect, largely because of the constant cat-and-mouse game between service providers and spammers endlessly trying new and different approaches to fool trained filters.3 Spam detection is not a particularly sexy example of supervised learning. Modern deep learning systems also perform classification for tasks like image recognition and visual object recognition. The well-known ImageNet competitions present contestants with a large-scale task in supervised learning, drawing on the millions of images that ImageNet has downloaded from websites like Flickr for use in training and testing the accuracy of deep learning systems. All these images have been labeled by humans (providing their services to the project through Amazon’s Mechanical Turk interface) and the terms they apply make up a structured database of English words known as WordNet.

A selected subset of words in WordNet represents a category to be learned, using common nouns (like dog, pumpkin, piano, house) and a selection of more obscure items (like Scottish terrier, hussar monkey, flamingo). The contest is to see which of the competing deep learning classifiers is able to label the most images correctly, as they were labeled by the humans. With over a thousand categories being used in ImageNet competitions, the task far exceeds the yes-or-no problem presented to spam detectors (or any other binary classification task, such as simply labeling whether an image is of a human face or not). Competing in this competition means performing a massive classification task using pixel data as input.4 Sequence classification is often used in natural language processing applications.

In truth it was because there was, initially, a hodgepodge of older statistical techniques in use for data science and machine learning in AI that the sought-after insights emerging from big data were mistakenly pinned to the data volume itself. This was a ridiculous proposition from the start; data points are facts and, again, can’t become insightful themselves. Although this has become apparent only in the rearview mirror, the early deep learning successes on visual object recognition, in the ImageNet competitions, signaled the beginning of a transfer of zeal from big data to the machine learning methods that benefit from it—in other words, to the newly explosive field of AI. Thus big data has peaked, and now seems to be receding from popular discussion almost as quickly as it appeared. The focus on deep learning makes sense, because after all, the algorithms rather than just the data are responsible for trouncing human champions at Go, mastering Atari games, driving cars, and the rest.


pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values by Brian Christian

Albert Einstein, algorithmic bias, Alignment Problem, AlphaGo, Amazon Mechanical Turk, artificial general intelligence, augmented reality, autonomous vehicles, backpropagation, butterfly effect, Cambridge Analytica, Cass Sunstein, Claude Shannon: information theory, computer vision, Computing Machinery and Intelligence, data science, deep learning, DeepMind, Donald Knuth, Douglas Hofstadter, effective altruism, Elaine Herzberg, Elon Musk, Frances Oldham Kelsey, game design, gamification, Geoffrey Hinton, Goodhart's law, Google Chrome, Google Glasses, Google X / Alphabet X, Gödel, Escher, Bach, Hans Moravec, hedonic treadmill, ImageNet competition, industrial robot, Internet Archive, John von Neumann, Joi Ito, Kenneth Arrow, language acquisition, longitudinal study, machine translation, mandatory minimum, mass incarceration, multi-armed bandit, natural language processing, Nick Bostrom, Norbert Wiener, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, OpenAI, Panopticon Jeremy Bentham, pattern recognition, Peter Singer: altruism, Peter Thiel, precautionary principle, premature optimization, RAND corporation, recommendation engine, Richard Feynman, Rodney Brooks, Saturday Night Live, selection bias, self-driving car, seminal paper, side project, Silicon Valley, Skinner box, sparse data, speech recognition, Stanislav Petrov, statistical model, Steve Jobs, strong AI, the map is not the territory, theory of mind, Tim Cook: Apple, W. E. B. Du Bois, Wayback Machine, zero-sum game

Hinton has come up with an idea called “dropout,” where during training certain portions of the network get randomly turned off. Krizhevsky tries this, and it seems, for various reasons, to help. He tries using neurons with a so-called “rectified linear” output function. This, too, seems to help. He submits his best model on the ImageNet competition deadline, September 30, and then the final wait begins. Two days later, Krizhevsky gets an email from Stanford’s Jia Deng, who is organizing that year’s competition, cc’d to all of the entrants. In plain, unemotional language, Deng says to click the link provided to see the results. Krizhevsky clicks the link provided and sees the results.
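The two tricks mentioned above can be seen concretely in a short sketch in modern PyTorch terms; the layer width here is an illustrative assumption, not Krizhevsky's actual architecture. "Dropout" randomly silences units during training, and "rectified linear" units pass positive inputs through unchanged while clamping negatives to zero.

```python
# A sketch of dropout and rectified linear units in PyTorch. Layer sizes
# are illustrative assumptions, not the 2012 ImageNet entry itself.
import torch
import torch.nn as nn

layer = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),            # rectified linear output: max(0, x)
    nn.Dropout(p=0.5),    # during training, zero each unit with probability 0.5
)

layer.train()                            # dropout active
x = torch.randn(1, 4096)
print((layer(x) == 0).float().mean())    # ~0.75 zero: ReLU zeros plus dropped units

layer.eval()                             # at test time dropout is disabled
print((layer(x) == 0).float().mean())    # ~0.50 zero: only the ReLU zeros remain
```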

—ERNEST BURGESS71

Your scientists were so preoccupied with whether or not they could . . . that they didn’t stop to think if they should.

—JEFF GOLDBLUM AS IAN MALCOLM, JURASSIC PARK

One of the most important things in any prediction is to make sure that you’re actually predicting what you think you’re predicting. This is harder than it sounds. In the ImageNet competition, for instance—in which AlexNet did so well in 2012—the goal is to train machines to identify what images depict. But this isn’t what the training data captures. The training data captures what human volunteers on Mechanical Turk said the image depicted. If a baby lion, let’s say, were repeatedly misidentified by human volunteers as a cat, it would become part of a system’s training data as a cat—and any system labeling it as a lion would be docked points and would have to adjust its parameters to correct this “error.”

By the fourth layer, the network was responding to configurations of eyes and nose, to tile floors, to the radial geometry of a starfish or a spider, to the petals of a flower or keys on a typewriter. By the fifth layer, the ultimate categories into which objects were being assigned seemed to exert a strong influence. The effect was dramatic, insightful. But was it useful? Zeiler popped the hood of the AlexNet model that had won the ImageNet competition in 2012 and started digging around, inspecting it using deconvolution. He noticed a bunch of flaws. Some low-level parts of the network had normalized incorrectly, like an overexposed photograph. Other filters had gone “dead” and weren’t detecting anything. Zeiler hypothesized that they weren’t correctly sized for the types of patterns they were trying to match.
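A much simpler diagnostic in the same spirit, though not Zeiler's deconvolution method itself, can be sketched with a forward hook that records a layer's activations and flags filters that never respond. The small untrained network below is a stand-in assumption, not AlexNet.

```python
# A minimal sketch of inspecting a trained network for "dead" filters:
# record a convolutional layer's post-ReLU activations with a forward hook
# and flag filters that produce no response at all. (Simpler than Zeiler's
# deconvolution; the tiny untrained network here is only a stand-in.)
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

activations = {}
def save_activation(module, inputs, output):
    activations["conv_relu"] = output.detach()

net[1].register_forward_hook(save_activation)   # hook placed after the ReLU

net(torch.randn(8, 3, 64, 64))                  # a batch of random "images"

act = activations["conv_relu"]                  # shape: (batch, 16 filters, H, W)
dead = (act.amax(dim=(0, 2, 3)) == 0)           # filters with zero response everywhere
print(f"dead filters: {dead.sum().item()} of {act.shape[1]}")
```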


pages: 477 words: 75,408

The Economic Singularity: Artificial Intelligence and the Death of Capitalism by Calum Chace

"World Economic Forum" Davos, 3D printing, additive manufacturing, agricultural Revolution, AI winter, Airbnb, AlphaGo, Alvin Toffler, Amazon Robotics, Andy Rubin, artificial general intelligence, augmented reality, autonomous vehicles, banking crisis, basic income, Baxter: Rethink Robotics, Berlin Wall, Bernie Sanders, bitcoin, blockchain, Boston Dynamics, bread and circuses, call centre, Chris Urmson, congestion charging, credit crunch, David Ricardo: comparative advantage, deep learning, DeepMind, Demis Hassabis, digital divide, Douglas Engelbart, Dr. Strangelove, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Fairchild Semiconductor, Flynn Effect, full employment, future of work, Future Shock, gender pay gap, Geoffrey Hinton, gig economy, Google Glasses, Google X / Alphabet X, Hans Moravec, Herman Kahn, hype cycle, ImageNet competition, income inequality, industrial robot, Internet of things, invention of the telephone, invisible hand, James Watt: steam engine, Jaron Lanier, Jeff Bezos, job automation, John Markoff, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, Kiva Systems, knowledge worker, lifelogging, lump of labour, Lyft, machine translation, Marc Andreessen, Mark Zuckerberg, Martin Wolf, McJob, means of production, Milgram experiment, Narrative Science, natural language processing, Neil Armstrong, new economy, Nick Bostrom, Occupy movement, Oculus Rift, OpenAI, PageRank, pattern recognition, post scarcity, post-industrial society, post-work, precariat, prediction markets, QWERTY keyboard, railway mania, RAND corporation, Ray Kurzweil, RFID, Rodney Brooks, Sam Altman, Satoshi Nakamoto, Second Machine Age, self-driving car, sharing economy, Silicon Valley, Skype, SoftBank, software is eating the world, speech recognition, Stephen Hawking, Steve Jobs, TaskRabbit, technological singularity, TED Talk, The future is already here, The Future of Employment, Thomas Malthus, transaction costs, Two Sigma, Tyler Cowen, Tyler Cowen: Great Stagnation, Uber for X, uber lyft, universal basic income, Vernor Vinge, warehouse automation, warehouse robotics, working-age population, Y Combinator, young professional

In deep learning, the algorithms operate in several layers, each layer processing data from previous ones and passing the output up to the next layer. The output is not necessarily binary, just on or off: it can be weighted. The number of layers can vary too, with anything above ten layers seen as very deep learning – although in December 2015 a Microsoft team won the ImageNet competition with a system which employed a massive 152 layers.[lxvi] Deep learning, and especially artificial neural nets (ANNs), are in many ways a return to an older approach to AI which was explored in the 1960s but abandoned because it proved ineffective. While Good Old-Fashioned AI held sway in most labs, a small group of pioneers known as the Toronto mafia kept faith with the neural network approach.
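The 152-layer Microsoft system was a residual network, from the "ResNet" family; depth on that scale became trainable largely because each block adds its input back to its output, giving gradients a short path through the stack. Below is a minimal sketch of one residual block, with illustrative sizes and batch normalization omitted for brevity.

```python
# A minimal residual block sketch in PyTorch (sizes illustrative; real
# ResNets also use batch normalization). The skip connection is what lets
# networks with 100+ layers, like the 152-layer ImageNet winner, train.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)   # the skip connection: add the input back

# Stacking many such blocks yields a very deep but still trainable network.
deep = nn.Sequential(*[ResidualBlock(32) for _ in range(50)])
y = deep(torch.randn(1, 32, 16, 16))
print(y.shape)   # torch.Size([1, 32, 16, 16])
```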

In December 2015, Microsoft's chief speech scientist Xuedong Huang noted that speech recognition has improved 20% a year consistently for the last 20 years. He predicted that computers would be as good as humans at understanding human speech within five years. Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. Common sense can be described as having a mental model of the world which allows you to predict what will happen if certain actions are taken. Professor Murray Shanahan of Imperial College uses the example of throwing a chair from a stage into an audience: humans would understand that members of the audience would throw up their hands to protect themselves, but some damage would probably be caused, and certainly some upset.


pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee

"World Economic Forum" Davos, AI winter, Airbnb, Albert Einstein, algorithmic bias, algorithmic trading, Alignment Problem, AlphaGo, artificial general intelligence, autonomous vehicles, barriers to entry, basic income, bike sharing, business cycle, Cambridge Analytica, cloud computing, commoditize, computer vision, corporate social responsibility, cotton gin, creative destruction, crony capitalism, data science, deep learning, DeepMind, Demis Hassabis, Deng Xiaoping, deskilling, Didi Chuxing, Donald Trump, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, fake news, full employment, future of work, general purpose technology, Geoffrey Hinton, gig economy, Google Chrome, Hans Moravec, happiness index / gross national happiness, high-speed rail, if you build it, they will come, ImageNet competition, impact investing, income inequality, informal economy, Internet of things, invention of the telegraph, Jeff Bezos, job automation, John Markoff, Kickstarter, knowledge worker, Lean Startup, low skilled workers, Lyft, machine translation, mandatory minimum, Mark Zuckerberg, Menlo Park, minimum viable product, natural language processing, Neil Armstrong, new economy, Nick Bostrom, OpenAI, pattern recognition, pirate software, profit maximization, QR code, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, risk tolerance, Robert Mercer, Rodney Brooks, Rubik’s Cube, Sam Altman, Second Machine Age, self-driving car, sentiment analysis, sharing economy, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, Skype, SoftBank, Solyndra, special economic zone, speech recognition, Stephen Hawking, Steve Jobs, strong AI, TED Talk, The Future of Employment, Travis Kalanick, Uber and Lyft, uber lyft, universal basic income, urban planning, vertical integration, Vision Fund, warehouse robotics, Y Combinator

One of the clearest examples of these accelerating improvements is the ImageNet competition. In the competition, algorithms submitted by different teams are tasked with identifying thousands of different objects within millions of different images, such as birds, baseballs, screwdrivers, and mosques. It has quickly emerged as one of the most respected image-recognition contests and a clear benchmark for AI’s progress in computer vision. When the Oxford machine-learning experts made their estimates of technical capabilities in early 2013, the most recent ImageNet competition of 2012 had been the coming-out party for deep learning.


pages: 368 words: 96,825

Bold: How to Go Big, Create Wealth and Impact the World by Peter H. Diamandis, Steven Kotler

3D printing, additive manufacturing, adjacent possible, Airbnb, Amazon Mechanical Turk, Amazon Web Services, Apollo 11, augmented reality, autonomous vehicles, Boston Dynamics, Charles Lindbergh, cloud computing, company town, creative destruction, crowdsourcing, Daniel Kahneman / Amos Tversky, data science, deal flow, deep learning, dematerialisation, deskilling, disruptive innovation, driverless car, Elon Musk, en.wikipedia.org, Exxon Valdez, fail fast, Fairchild Semiconductor, fear of failure, Firefox, Galaxy Zoo, Geoffrey Hinton, Google Glasses, Google Hangouts, gravity well, hype cycle, ImageNet competition, industrial robot, information security, Internet of things, Jeff Bezos, John Harrison: Longitude, John Markoff, Jono Bacon, Just-in-time delivery, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, Lean Startup, life extension, loss aversion, Louis Pasteur, low earth orbit, Mahatma Gandhi, Marc Andreessen, Mark Zuckerberg, Mars Rover, meta-analysis, microbiome, minimum viable product, move fast and break things, Narrative Science, Netflix Prize, Network effects, Oculus Rift, OpenAI, optical character recognition, packet switching, PageRank, pattern recognition, performance metric, Peter H. Diamandis: Planetary Resources, Peter Thiel, pre–internet, Ray Kurzweil, recommendation engine, Richard Feynman, ride hailing / ride sharing, risk tolerance, rolodex, Scaled Composites, self-driving car, sentiment analysis, shareholder value, Sheryl Sandberg, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart grid, SpaceShipOne, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Stewart Brand, Stuart Kauffman, superconnector, Susan Wojcicki, synthetic biology, technoutopianism, TED Talk, telepresence, telepresence robot, Turing test, urban renewal, Virgin Galactic, Wayback Machine, web application, X Prize, Y Combinator, zero-sum game

Fifty thousand different traffic signs are used—signs obscured by long distances, by trees, by the glare of sunlight. In 2011, for the first time, a machine-learning algorithm bested its makers, achieving a 0.5 percent error rate, compared to 1.2 percent for humans.32 Even more impressive were the results of the 2012 ImageNet Competition, which challenged algorithms to look at one million different images—ranging from birds to kitchenware to people on motor scooters—and correctly slot them into a thousand unique categories. Seriously, it’s one thing for a computer to recognize known objects (zip codes, traffic signs), but categorizing thousands of random objects is an ability that is downright human.



pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell

3D printing, Ada Lovelace, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alfred Russel Wallace, algorithmic bias, AlphaGo, Andrew Wiles, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, augmented reality, autonomous vehicles, basic income, behavioural economics, Bletchley Park, blockchain, Boston Dynamics, brain emulation, Cass Sunstein, Charles Babbage, Claude Shannon: information theory, complexity theory, computer vision, Computing Machinery and Intelligence, connected car, CRISPR, crowdsourcing, Daniel Kahneman / Amos Tversky, data science, deep learning, deepfake, DeepMind, delayed gratification, Demis Hassabis, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Ernest Rutherford, fake news, Flash crash, full employment, future of work, Garrett Hardin, Geoffrey Hinton, Gerolamo Cardano, Goodhart's law, Hans Moravec, ImageNet competition, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invention of the wheel, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Nash: game theory, John von Neumann, Kenneth Arrow, Kevin Kelly, Law of Accelerating Returns, luminiferous ether, machine readable, machine translation, Mark Zuckerberg, multi-armed bandit, Nash equilibrium, Nick Bostrom, Norbert Wiener, NP-complete, OpenAI, openstreetmap, P = NP, paperclip maximiser, Pareto efficiency, Paul Samuelson, Pierre-Simon Laplace, positional goods, probability theory / Blaise Pascal / Pierre de Fermat, profit maximization, RAND corporation, random walk, Ray Kurzweil, Recombinant DNA, recommendation engine, RFID, Richard Thaler, ride hailing / ride sharing, Robert Shiller, robotic process automation, Rodney Brooks, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, smart cities, smart contracts, social intelligence, speech recognition, Stephen Hawking, Steven Pinker, superintelligent machines, surveillance capitalism, Thales of Miletus, The Future of Employment, The Theory of the Leisure Class by Thorstein Veblen, Thomas Bayes, Thorstein Veblen, Tragedy of the Commons, transport as a service, trolley problem, Turing machine, Turing test, universal basic income, uranium enrichment, vertical integration, Von Neumann architecture, Wall-E, warehouse robotics, Watson beat the top human players on Jeopardy!, web application, zero-sum game

This leads to a simple formula for propagating the error backwards from the output layer to the input layer, tweaking knobs along the way. Miraculously, the process works. For the task of recognizing objects in photographs, deep learning algorithms have demonstrated remarkable performance. The first inkling of this came in the 2012 ImageNet competition, which provides training data consisting of 1.2 million labeled images in one thousand categories, and then requires the algorithm to label one hundred thousand new images.4 Geoff Hinton, a British computational psychologist who was at the forefront of the first neural network revolution in the 1980s, had been experimenting with a very large deep convolutional network: 650,000 nodes and 60 million parameters.
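The "tweaking knobs" loop can be made concrete with a toy numpy example: a two-layer network trained by hand-written backpropagation. The task (learning XOR) and all sizes are illustrative assumptions, not anything from the ImageNet system described above.

```python
# A toy numpy sketch of backpropagation: run inputs forward through a
# two-layer network, measure the error, propagate it backwards with the
# chain rule, and nudge every weight ("knob") downhill. The XOR task and
# all sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)       # the "knobs"
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: inputs -> hidden layer -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: error at the output, pushed back layer by layer.
    d_out = (out - y) * out * (1 - out)              # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)               # hidden-layer error signal

    # Tweak every knob a little in the direction that reduces the error.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically converges toward [0, 1, 1, 0]
```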

For example, to learn the difference between the “situational superko” and “natural situational superko” rules, the learning algorithm would have to try repeating a board position that it had created previously by a pass rather than by playing a stone. The results would be different in different countries. 4. For a description of the ImageNet competition, see Olga Russakovsky et al., “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision 115 (2015): 211–52. 5. The first demonstration of deep networks for vision: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, ed.


pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War by Paul Scharre

"World Economic Forum" Davos, active measures, Air France Flight 447, air gap, algorithmic trading, AlphaGo, Apollo 13, artificial general intelligence, augmented reality, automated trading system, autonomous vehicles, basic income, Black Monday: stock market crash in 1987, brain emulation, Brian Krebs, cognitive bias, computer vision, cuban missile crisis, dark matter, DARPA: Urban Challenge, data science, deep learning, DeepMind, DevOps, Dr. Strangelove, drone strike, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, facts on the ground, fail fast, fault tolerance, Flash crash, Freestyle chess, friendly fire, Herman Kahn, IFF: identification friend or foe, ImageNet competition, information security, Internet of things, Jeff Hawkins, Johann Wolfgang von Goethe, John Markoff, Kevin Kelly, Korean Air Lines Flight 007, Loebner Prize, loose coupling, Mark Zuckerberg, military-industrial complex, moral hazard, move 37, mutually assured destruction, Nate Silver, Nick Bostrom, PalmPilot, paperclip maximiser, pattern recognition, Rodney Brooks, Rubik’s Cube, self-driving car, sensor fusion, South China Sea, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Ballmer, Steve Wozniak, Strategic Defense Initiative, Stuxnet, superintelligent machines, Tesla Model S, The Signal and the Noise by Nate Silver, theory of mind, Turing test, Tyler Cowen, universal basic income, Valery Gerasimov, Wall-E, warehouse robotics, William Langewiesche, Y2K, zero day

Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.
87 over a hundred layers: Christian Szegedy et al., “Going Deeper With Convolutions,” https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf.
87 error rate of only 4.94 percent: Richard Eckel, “Microsoft Researchers’ Algorithm Sets ImageNet Challenge Milestone,” Microsoft Research Blog, February 10, 2015, https://www.microsoft.com/en-us/research/microsoft-researchers-algorithm-sets-imagenet-challenge-milestone/. Kaiming He et al., “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” https://arxiv.org/pdf/1502.01852.pdf.
87 estimated 5.1 percent error rate: Olga Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” January 20, 2015, https://arxiv.org/pdf/1409.0575.pdf.
87 3.57 percent rate: Kaiming He et al., “Deep Residual Learning for Image Recognition,” December 10, 2015, https://arxiv.org/pdf/1512.03385v1.pdf.
6 Crossing the Threshold: Approving Autonomous Weapons
89 delineation of three classes of systems: Department of Defense, “Department of Defense Directive Number 3000.09.”
90 “minimize the probability and consequences”: Ibid., 1.
91 “We haven’t had anything that was even remotely close”: Frank Kendall, interview, November 7, 2016.
91 “We had an automatic mode”: Ibid.
91 “relatively soon”: Ibid.
91 “sort through all that”: Ibid.
91 “Are you just driving down”: Ibid.
92 “other side of the equation”: Ibid.
92 “a reasonable question to ask”: Ibid.
92 “where technology supports it”: Ibid.
92 “principles and obey them”: Ibid.
93 “Automation and artificial intelligence are”: Ibid.
93 Work explained in a 2014 monograph: Robert O.


pages: 499 words: 144,278

Coders: The Making of a New Tribe and the Remaking of the World by Clive Thompson

"Margaret Hamilton" Apollo, "Susan Fowler" uber, 2013 Report for America's Infrastructure - American Society of Civil Engineers - 19 March 2013, 4chan, 8-hour work day, Aaron Swartz, Ada Lovelace, AI winter, air gap, Airbnb, algorithmic bias, AlphaGo, Amazon Web Services, Andy Rubin, Asperger Syndrome, augmented reality, Ayatollah Khomeini, backpropagation, barriers to entry, basic income, behavioural economics, Bernie Sanders, Big Tech, bitcoin, Bletchley Park, blockchain, blue-collar work, Brewster Kahle, Brian Krebs, Broken windows theory, call centre, Cambridge Analytica, cellular automata, Charles Babbage, Chelsea Manning, Citizen Lab, clean water, cloud computing, cognitive dissonance, computer vision, Conway's Game of Life, crisis actor, crowdsourcing, cryptocurrency, Danny Hillis, data science, David Heinemeier Hansson, deep learning, DeepMind, Demis Hassabis, disinformation, don't be evil, don't repeat yourself, Donald Trump, driverless car, dumpster diving, Edward Snowden, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, Ethereum, ethereum blockchain, fake news, false flag, Firefox, Frederick Winslow Taylor, Free Software Foundation, Gabriella Coleman, game design, Geoffrey Hinton, glass ceiling, Golden Gate Park, Google Hangouts, Google X / Alphabet X, Grace Hopper, growth hacking, Guido van Rossum, Hacker Ethic, hockey-stick growth, HyperCard, Ian Bogost, illegal immigration, ImageNet competition, information security, Internet Archive, Internet of things, Jane Jacobs, John Markoff, Jony Ive, Julian Assange, Ken Thompson, Kickstarter, Larry Wall, lone genius, Lyft, Marc Andreessen, Mark Shuttleworth, Mark Zuckerberg, Max Levchin, Menlo Park, meritocracy, microdosing, microservices, Minecraft, move 37, move fast and break things, Nate Silver, Network effects, neurotypical, Nicholas Carr, Nick Bostrom, no silver bullet, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, Oculus Rift, off-the-grid, OpenAI, operational security, opioid epidemic / opioid crisis, PageRank, PalmPilot, paperclip maximiser, pattern recognition, Paul Graham, paypal mafia, Peter Thiel, pink-collar, planetary scale, profit motive, ransomware, recommendation engine, Richard Stallman, ride hailing / ride sharing, Rubik’s Cube, Ruby on Rails, Sam Altman, Satoshi Nakamoto, Saturday Night Live, scientific management, self-driving car, side project, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, single-payer health, Skype, smart contracts, Snapchat, social software, software is eating the world, sorting algorithm, South of Market, San Francisco, speech recognition, Steve Wozniak, Steven Levy, systems thinking, TaskRabbit, tech worker, techlash, TED Talk, the High Line, Travis Kalanick, Uber and Lyft, Uber for X, uber lyft, universal basic income, urban planning, Wall-E, Watson beat the top human players on Jeopardy!, WeWork, WikiLeaks, women in the workforce, Y Combinator, Zimmermann PGP, éminence grise

By 2012, the field saw a seismic breakthrough. Up at the University of Toronto, the British computer scientist Geoff Hinton had been beavering away for two decades on improving neural networks. That year he and a team of students showed off the most impressive neural net yet—by soundly beating competitors at an annual AI shootout. The ImageNet challenge, as it’s known, is an annual competition among AI researchers to see whose system is best at recognizing images. That year, Hinton’s deep-learning neural net got only 15.3 percent of the images wrong. The next-best competitor had an error rate almost twice as high: 26.2 percent. It was an AI moon shot.

[Flattened back-of-book index omitted; among its entries: “Hinton, Geoff, ref1, ref2” and “ImageNet challenge, ref1.”]


pages: 282 words: 63,385

Attention Factory: The Story of TikTok and China's ByteDance by Matthew Brennan

Airbnb, AltaVista, augmented reality, Benchmark Capital, Big Tech, business logic, Cambridge Analytica, computer vision, coronavirus, COVID-19, deep learning, Didi Chuxing, Donald Trump, en.wikipedia.org, fail fast, Google X / Alphabet X, growth hacking, ImageNet competition, income inequality, invisible hand, Kickstarter, Mark Zuckerberg, Menlo Park, natural language processing, Netflix Prize, Network effects, paypal mafia, Pearl River Delta, pre–internet, recommendation engine, ride hailing / ride sharing, Sheryl Sandberg, Silicon Valley, Snapchat, social graph, Steve Jobs, TikTok, Travis Kalanick, WeWork, Y Combinator

66 https://techcrunch.com/2012/12/05/prismatic/
67 http://yingdudasha.cn/
68 Image source: https://m.weibo.cn/2745813247/3656157740605616
* “Real stuff” is my imperfect translation of 干货 gānhuò, which could also be translated as “the real McCoy” or “something of substance”

Chapter 3 Recommendation, From YouTube to TikTok

Chapter Timeline
2009 – Netflix awards a $1 million prize for an algorithm that increased the accuracy of their video recommendations by 10%
2011 – YouTube introduces machine-learning algorithmic recommendation engine, Sibyl, with immediate impact
2012 Aug – ByteDance launches news aggregation app Toutiao
2012 Sept – AlexNet breakthrough at the ImageNet challenge triggers a global explosion of interest in AI
2013 Mar – Facebook changes its newsfeed to a “personalized newspaper”
2014 April – Instagram begins using an “explore” tab of personalized content
2015 – Google Brain’s deep learning algorithms begin supercharging a wide variety of Google products, including YouTube recommendations

It was 2010, and YouTube had a big problem.


pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems by Aurélien Géron

AlphaGo, Amazon Mechanical Turk, Anton Chekhov, backpropagation, combinatorial explosion, computer vision, constrained optimization, correlation coefficient, crowdsourcing, data science, deep learning, DeepMind, don't repeat yourself, duck typing, Elon Musk, en.wikipedia.org, friendly AI, Geoffrey Hinton, ImageNet competition, information retrieval, iterative process, John von Neumann, Kickstarter, machine translation, natural language processing, Netflix Prize, NP-complete, OpenAI, optical character recognition, P = NP, p-value, pattern recognition, pull request, recommendation engine, self-driving car, sentiment analysis, SpamAssassin, speech recognition, stochastic process

You can often get the same effect as a 5 × 5 kernel by stacking two 3 × 3 kernels on top of each other, for a lot less compute. Over the years, variants of this fundamental architecture have been developed, leading to amazing advances in the field. A good measure of this progress is the error rate in competitions such as the ILSVRC ImageNet challenge. In this competition, the top-five error rate for image classification fell from over 26% to barely over 3% in just five years. The top-five error rate is the fraction of test images for which the system’s top five predictions did not include the correct answer. The images are large (256 pixels high) and there are 1,000 classes, some of which are really subtle (try distinguishing 120 dog breeds).
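A worked example may help make the metric concrete. The following is a minimal sketch (mine, not code from Géron's book) that computes the top-five error rate from raw class scores; the array names and synthetic data are invented for illustration.

```python
import numpy as np

def top5_error_rate(scores, labels):
    """scores: (n_images, n_classes) array of class scores;
    labels: (n_images,) array of correct class indices.
    Returns the fraction of images whose correct class is NOT
    among the five highest-scoring predictions."""
    top5 = np.argsort(scores, axis=1)[:, -5:]       # indices of the 5 largest scores per row
    hits = (top5 == labels[:, None]).any(axis=1)    # did the true class make the top 5?
    return 1.0 - hits.mean()

# Synthetic check: random scores over 1,000 classes should give an
# error rate near 1 - 5/1000 = 0.995.
rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 1000))
labels = rng.integers(0, 1000, size=200)
print(top5_error_rate(scores, labels))
```

The kernel-stacking claim is just as checkable by arithmetic: a single 5 × 5 kernel has 25 weights per input/output channel pair, while two stacked 3 × 3 kernels have 2 × 9 = 18 and cover the same 5 × 5 receptive field, with an extra nonlinearity in between.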

[Flattened back-of-book index omitted; among its entries: “ILSVRC ImageNet challenge, CNN Architectures.”]


pages: 586 words: 186,548

Architects of Intelligence by Martin Ford

3D printing, agricultural Revolution, AI winter, algorithmic bias, Alignment Problem, AlphaGo, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, barriers to entry, basic income, Baxter: Rethink Robotics, Bayesian statistics, Big Tech, bitcoin, Boeing 747, Boston Dynamics, business intelligence, business process, call centre, Cambridge Analytica, cloud computing, cognitive bias, Colonization of Mars, computer vision, Computing Machinery and Intelligence, correlation does not imply causation, CRISPR, crowdsourcing, DARPA: Urban Challenge, data science, deep learning, DeepMind, Demis Hassabis, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, driverless car, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, fake news, Fellow of the Royal Society, Flash crash, future of work, general purpose technology, Geoffrey Hinton, gig economy, Google X / Alphabet X, Gödel, Escher, Bach, Hans Moravec, Hans Rosling, hype cycle, ImageNet competition, income inequality, industrial research laboratory, industrial robot, information retrieval, job automation, John von Neumann, Large Hadron Collider, Law of Accelerating Returns, life extension, Loebner Prize, machine translation, Mark Zuckerberg, Mars Rover, means of production, Mitch Kapor, Mustafa Suleyman, natural language processing, new economy, Nick Bostrom, OpenAI, opioid epidemic / opioid crisis, optical character recognition, paperclip maximiser, pattern recognition, phenotype, Productivity paradox, radical life extension, Ray Kurzweil, recommendation engine, Robert Gordon, Rodney Brooks, Sam Altman, self-driving car, seminal paper, sensor fusion, sentiment analysis, Silicon Valley, smart cities, social intelligence, sparse data, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, synthetic biology, systems thinking, Ted Kaczynski, TED Talk, The Rise and Fall of American Growth, theory of mind, Thomas Bayes, Travis Kalanick, Turing test, universal basic income, Wall-E, Watson beat the top human players on Jeopardy!, women in the workforce, working-age population, workplace surveillance , zero-sum game, Zipcar

In the end, science won out, and two of my students won a big public competition, and they won it dramatically. They got almost half the error rate of the best computer vision systems, and they were using mainly techniques developed in Yann LeCun’s lab but mixed in with a few of our own techniques as well. MARTIN FORD: This was the ImageNet competition? GEOFFREY HINTON: Yes, and what happened then was what should happen in science. One method that people used to think of as complete nonsense had now worked much better than the method they believed in, and within two years, they all switched. So, for things like object classification, nobody would dream of trying to do it without using a neural network now.

We released the entire 15 million images to the world and started to run international competitions for researchers to work on the ImageNet problems: not on the tiny small-scale problems but on the problems that mattered to humans and applications. Fast-forward to 2012, and I think we see the turning point in object recognition for a lot of people. The winner of the 2012 ImageNet competition created a convergence of ImageNet, GPU computing power, and convolutional neural networks as an algorithm. Geoffrey Hinton wrote a seminal paper that, for me, was Phase One in achieving the holy grail of object recognition. MARTIN FORD: Did you continue this project? FEI-FEI LI: For the next two years, I worked on taking object recognition a step further.


Four Battlegrounds by Paul Scharre

2021 United States Capitol attack, 3D printing, active measures, activist lawyer, AI winter, AlphaGo, amateurs talk tactics, professionals talk logistics, artificial general intelligence, ASML, augmented reality, Automated Insights, autonomous vehicles, barriers to entry, Berlin Wall, Big Tech, bitcoin, Black Lives Matter, Boeing 737 MAX, Boris Johnson, Brexit referendum, business continuity plan, business process, carbon footprint, chief data officer, Citizen Lab, clean water, cloud computing, commoditize, computer vision, coronavirus, COVID-19, crisis actor, crowdsourcing, DALL-E, data is not the new oil, data is the new oil, data science, deep learning, deepfake, DeepMind, Demis Hassabis, Deng Xiaoping, digital map, digital rights, disinformation, Donald Trump, drone strike, dual-use technology, Elon Musk, en.wikipedia.org, endowment effect, fake news, Francis Fukuyama: the end of history, future of journalism, future of work, game design, general purpose technology, Geoffrey Hinton, geopolitical risk, George Floyd, global supply chain, GPT-3, Great Leap Forward, hive mind, hustle culture, ImageNet competition, immigration reform, income per capita, interchangeable parts, Internet Archive, Internet of things, iterative process, Jeff Bezos, job automation, Kevin Kelly, Kevin Roose, large language model, lockdown, Mark Zuckerberg, military-industrial complex, move fast and break things, Nate Silver, natural language processing, new economy, Nick Bostrom, one-China policy, Open Library, OpenAI, PalmPilot, Parler "social media", pattern recognition, phenotype, post-truth, purchasing power parity, QAnon, QR code, race to the bottom, RAND corporation, recommendation engine, reshoring, ride hailing / ride sharing, robotic process automation, Rodney Brooks, Rubik’s Cube, self-driving car, Shoshana Zuboff, side project, Silicon Valley, slashdot, smart cities, smart meter, Snapchat, social software, sorting algorithm, South China Sea, sparse data, speech recognition, Steve Bannon, Steven Levy, Stuxnet, supply-chain attack, surveillance capitalism, systems thinking, tech worker, techlash, telemarketer, The Brussels Effect, The Signal and the Noise by Nate Silver, TikTok, trade route, TSMC

., Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (Microsoft Research, February 6, 2015), https://arxiv.org/pdf/1502.01852.pdf; Richard Eckel, “Microsoft Researchers’ Algorithm Sets ImageNet Challenge Milestone,” Microsoft Research Blog, February 10, 2015, https://www.microsoft.com/en-us/research/blog/microsoft-researchers-algorithm-sets-imagenet-challenge-milestone/.
160 team’s 2015 paper on “deep residual learning”: Bec Crew, “Google Scholar Reveals Its Most Influential Papers for 2019,” Nature Index, August 2, 2019, https://www.natureindex.com/news-blog/google-scholar-reveals-most-influential-papers-research-citations-twenty-nineteen; Kaiming He et al., Deep Residual Learning for Image Recognition (thecvf.com, n.d.), https://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf.
160 5,000 papers in top-tier journals: Tim Pan, interview by author, June 21, 2019.
160 “That’s, on average, every working day”: Pan, interview.
161 “very small number” of interns: Kevin Luo, interview by author, June 21, 2019.
161 approximately eleven such interns: Information in this section comes from multiple interviews with Microsoft representatives during July and August 2021.
161 also a PhD student in computer science at the PLA’s NUDT: 微软亚洲研究院 [Microsoft Research Asia], “实习派 | 胡明昊:在MSRA研究机器阅读理解是一种怎样的体验?” [“Intern Series | Hu Minghao: What is it like to research machine reading comprehension at MSRA?”]


Driverless: Intelligent Cars and the Road Ahead by Hod Lipson, Melba Kurman

AI winter, Air France Flight 447, AlphaGo, Amazon Mechanical Turk, autonomous vehicles, backpropagation, barriers to entry, butterfly effect, carbon footprint, Chris Urmson, cloud computing, computer vision, connected car, creative destruction, crowdsourcing, DARPA: Urban Challenge, deep learning, digital map, Donald Shoup, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, General Motors Futurama, Geoffrey Hinton, Google Earth, Google X / Alphabet X, Hans Moravec, high net worth, hive mind, ImageNet competition, income inequality, industrial robot, intermodal, Internet of things, Jeff Hawkins, job automation, Joseph Schumpeter, lone genius, Lyft, megacity, Network effects, New Urbanism, Oculus Rift, pattern recognition, performance metric, Philippa Foot, precision agriculture, RFID, ride hailing / ride sharing, Second Machine Age, self-driving car, Silicon Valley, smart cities, speech recognition, statistical model, Steve Jobs, technoutopianism, TED Talk, Tesla Model S, Travis Kalanick, trolley problem, Uber and Lyft, uber lyft, Unsafe at Any Speed, warehouse robotics

[Flattened back-of-book index omitted; among its entries: “Deep learning, See also ImageNet competition; Neocognitron; Perceptron; SuperVision.”]


pages: 447 words: 111,991

Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It by Azeem Azhar

"Friedman doctrine" OR "shareholder theory", "World Economic Forum" Davos, 23andMe, 3D printing, A Declaration of the Independence of Cyberspace, Ada Lovelace, additive manufacturing, air traffic controllers' union, Airbnb, algorithmic management, algorithmic trading, Amazon Mechanical Turk, autonomous vehicles, basic income, Berlin Wall, Bernie Sanders, Big Tech, Bletchley Park, Blitzscaling, Boeing 737 MAX, book value, Boris Johnson, Bretton Woods, carbon footprint, Chris Urmson, Citizen Lab, Clayton Christensen, cloud computing, collective bargaining, computer age, computer vision, contact tracing, contact tracing app, coronavirus, COVID-19, creative destruction, crowdsourcing, cryptocurrency, cuban missile crisis, Daniel Kahneman / Amos Tversky, data science, David Graeber, David Ricardo: comparative advantage, decarbonisation, deep learning, deglobalization, deindustrialization, dematerialisation, Demis Hassabis, Diane Coyle, digital map, digital rights, disinformation, Dissolution of the Soviet Union, Donald Trump, Double Irish / Dutch Sandwich, drone strike, Elon Musk, emotional labour, energy security, Fairchild Semiconductor, fake news, Fall of the Berlin Wall, Firefox, Frederick Winslow Taylor, fulfillment center, future of work, Garrett Hardin, gender pay gap, general purpose technology, Geoffrey Hinton, gig economy, global macro, global pandemic, global supply chain, global value chain, global village, GPT-3, Hans Moravec, happiness index / gross national happiness, hiring and firing, hockey-stick growth, ImageNet competition, income inequality, independent contractor, industrial robot, intangible asset, Jane Jacobs, Jeff Bezos, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, Just-in-time delivery, Kickstarter, Kiva Systems, knowledge worker, Kodak vs Instagram, Law of Accelerating Returns, lockdown, low skilled workers, lump of labour, Lyft, manufacturing employment, Marc Benioff, Mark Zuckerberg, megacity, Mitch Kapor, Mustafa Suleyman, Network effects, new economy, NSO Group, Ocado, offshore financial centre, OpenAI, PalmPilot, Panopticon Jeremy Bentham, Peter Thiel, Planet Labs, price anchoring, RAND corporation, ransomware, Ray Kurzweil, remote working, RFC: Request For Comment, Richard Florida, ride hailing / ride sharing, Robert Bork, Ronald Coase, Ronald Reagan, Salesforce, Sam Altman, scientific management, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, Social Responsibility of Business Is to Increase Its Profits, software as a service, Steve Ballmer, Steve Jobs, Stuxnet, subscription business, synthetic biology, tacit knowledge, TaskRabbit, tech worker, The Death and Life of Great American Cities, The Future of Employment, The Nature of the Firm, Thomas Malthus, TikTok, Tragedy of the Commons, Turing machine, Uber and Lyft, Uber for X, uber lyft, universal basic income, uranium enrichment, vertical integration, warehouse automation, winner-take-all economy, workplace surveillance , Yom Kippur War

In 2012, a group of leading AI researchers – Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton – developed a ‘deep convolutional neural network’ which applied deep learning to the kinds of image-sorting tasks that AIs had long struggled with. It was rooted in extraordinary computing clout. The neural network contained 650,000 neurons and 60 million ‘parameters’, the internal settings that training adjusts to tune the system. It was a game-changer. Before AlexNet, as Krizhevsky’s team’s invention was called, most AIs that took on the ImageNet competition had stumbled, for years never scoring higher than 74 per cent. AlexNet had a success rate as high as 87 per cent. Deep learning worked. The triumph of deep learning sparked an AI feeding frenzy. Scientists rushed to build artificial intelligence systems, applying deep neural networks and their derivatives to a vast array of problems: from spotting manufacturing defects to translating between languages; from voice recognition to detecting credit card fraud; from discovering new medicines to recommending the next video we should watch.
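To give a sense of what such a network looks like in code, here is a minimal sketch in PyTorch, loosely in the spirit of AlexNet but far smaller; it is my illustration, not the actual AlexNet architecture or anything from Azhar's book, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# A deliberately tiny convolutional image classifier (illustrative only).
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),   # 32 learned filters over the RGB input
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 224x224 -> 112x112
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 64 higher-level filters
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 112x112 -> 56x56
    nn.AdaptiveAvgPool2d(4),                      # collapse each feature map to 4x4
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 1000),                  # scores for 1,000 ImageNet classes
)

# Every weight here is a "parameter" in the sense quoted above: a number
# adjusted by training, not a knob set by hand. This toy has about a
# million; AlexNet had roughly 60 million.
print(sum(p.numel() for p in model.parameters()), "trainable parameters")

# One forward pass on a fake batch of one 224x224 RGB image:
print(model(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 1000])
```

Widening the channel counts and stacking more layers is, in essence, how a toy like this grows into the 60-million-parameter networks Azhar describes.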


pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again by Eric Topol

"World Economic Forum" Davos, 23andMe, Affordable Care Act / Obamacare, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic bias, AlphaGo, Apollo 11, artificial general intelligence, augmented reality, autism spectrum disorder, autonomous vehicles, backpropagation, Big Tech, bioinformatics, blockchain, Cambridge Analytica, cloud computing, cognitive bias, Colonization of Mars, computer age, computer vision, Computing Machinery and Intelligence, conceptual framework, creative destruction, CRISPR, crowdsourcing, Daniel Kahneman / Amos Tversky, dark matter, data science, David Brooks, deep learning, DeepMind, Demis Hassabis, digital twin, driverless car, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, fake news, fault tolerance, gamification, general purpose technology, Geoffrey Hinton, George Santayana, Google Glasses, ImageNet competition, Jeff Bezos, job automation, job satisfaction, Joi Ito, machine translation, Mark Zuckerberg, medical residency, meta-analysis, microbiome, move 37, natural language processing, new economy, Nicholas Carr, Nick Bostrom, nudge unit, OpenAI, opioid epidemic / opioid crisis, pattern recognition, performance metric, personalized medicine, phenotype, placebo effect, post-truth, randomized controlled trial, recommendation engine, Rubik’s Cube, Sam Altman, self-driving car, Silicon Valley, Skinner box, speech recognition, Stephen Hawking, techlash, TED Talk, text mining, the scientific method, Tim Cook: Apple, traumatic brain injury, trolley problem, War on Poverty, Watson beat the top human players on Jeopardy!, working-age population

IMAGES

ImageNet exemplified an adage about AI: datasets—not algorithms—might be the key limiting factor of human-level artificial intelligence.39 When Fei-Fei Li, a computer scientist now at Stanford and half-time at Google, started ImageNet in 2007, she bucked the idea that algorithms ideally needed nurturing from Big Data and instead pursued the in-depth annotation of images. She recognized it wasn’t about Big Data; it was about carefully, extensively labeled Big Data. A few years ago, she said, “I consider the pixel data in images and video to be the dark matter of the Internet.”40 Many different convolutional DNNs (such as AlexNet, GoogLeNet, VGG Net, and ResNet) were used to classify the images, with annual ImageNet Challenge contests to recognize the best. Figure 4.6 shows the progress in reducing the error rate over several years, with the competition wrapping up in 2017 having achieved significantly better-than-human performance in image recognition. The error rate fell from 30 percent in 2010 to 4 percent in 2016.