14 results
Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
Ada Lovelace, AI winter, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, artificial general intelligence, autonomous vehicles, backpropagation, Bernie Sanders, Claude Shannon: information theory, cognitive dissonance, computer age, computer vision, dark matter, Douglas Hofstadter, Elon Musk, en.wikipedia.org, Gödel, Escher, Bach, I think there is a world market for maybe five computers, ImageNet competition, Jaron Lanier, job automation, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, license plate recognition, Mark Zuckerberg, natural language processing, Norbert Wiener, ought to be enough for anybody, pattern recognition, performance metric, RAND corporation, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Rodney Brooks, self-driving car, sentiment analysis, Silicon Valley, Singularitarianism, Skype, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, tail risk, theory of mind, There's no reason for any individual to have a computer in his home - Ken Olsen, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!
For the ImageNet project, Mechanical Turk was “a godsend.”6 The service continues to be widely used by AI researchers for creating data sets; nowadays, academic grant proposals in AI commonly include a line item for “Mechanical Turk workers.”
The ImageNet Competitions
In 2010, the ImageNet project launched the first ImageNet Large Scale Visual Recognition Challenge, in order to spur progress toward more general object-recognition algorithms. Thirty-five programs competed, representing computer-vision researchers from academia and industry around the world. The competitors were given labeled training images—1.2 million of them—and a list of possible categories. The task for the trained programs was to output the correct category of each input image. The ImageNet competition had a thousand possible categories, compared with PASCAL’s twenty.
The following year, the highest-scoring program—also using support vector machines—showed a respectable but modest improvement, getting 74 percent of the test images correct. Most people in the field expected this trend to continue; computer-vision research would chip away at the problem, with gradual improvement at each annual competition. However, these expectations were upended in the 2012 ImageNet competition: the winning entry achieved an amazing 85 percent correct. Such a jump in accuracy was a shocking development. What’s more, the winning entry did not use support vector machines or any of the other dominant computer-vision methods of the day. Instead, it was a convolutional neural network.
It didn’t take long before all the big tech companies (as well as many smaller ones) were snapping up deep-learning experts and their graduate students as fast as possible. Seemingly overnight, deep learning became the hottest part of AI, and expertise in deep learning guaranteed computer scientists a large salary in Silicon Valley or, better yet, venture capital funding for their proliferating deep-learning start-up companies. The annual ImageNet competition began to see wider coverage in the media, and it quickly morphed from a friendly academic contest into a high-profile sparring match for tech companies commercializing computer vision. Winning at ImageNet would guarantee coveted respect from the vision community, along with free publicity, which might translate into product sales and higher stock prices.
Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, Avi Goldfarb
"Robert Solow", Ada Lovelace, AI winter, Air France Flight 447, Airbus A320, algorithmic bias, Amazon Picking Challenge, artificial general intelligence, autonomous vehicles, backpropagation, basic income, Bayesian statistics, Black Swan, blockchain, call centre, Capital in the Twenty-First Century by Thomas Piketty, Captain Sullenberger Hudson, collateralized debt obligation, computer age, creative destruction, Daniel Kahneman / Amos Tversky, data acquisition, data is the new oil, deskilling, disruptive innovation, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, everywhere but in the productivity statistics, Google Glasses, high net worth, ImageNet competition, income inequality, information retrieval, inventory management, invisible hand, job automation, John Markoff, Joseph Schumpeter, Kevin Kelly, Lyft, Minecraft, Mitch Kapor, Moneyball by Michael Lewis explains big data, Nate Silver, new economy, On the Economy of Machinery and Manufactures, pattern recognition, performance metric, profit maximization, QWERTY keyboard, race to the bottom, randomized controlled trial, Ray Kurzweil, ride hailing / ride sharing, Second Machine Age, self-driving car, shareholder value, Silicon Valley, statistical model, Stephen Hawking, Steve Jobs, Steven Levy, strong AI, The Future of Employment, The Signal and the Noise by Nate Silver, Tim Cook: Apple, Turing test, Uber and Lyft, uber lyft, US Airways Flight 1549, Vernor Vinge, Watson beat the top human players on Jeopardy!, William Langewiesche, Y Combinator, zero-sum game
Andrej Karpathy, “What I Learned from Competing against a ConvNet on ImageNet,” Andrej Karpathy (blog), September 2, 2014, http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/; ImageNet, Large Scale Visual Recognition Challenge 2016, http://image-net.org/challenges/LSVRC/2016/results; Andrej Karpathy, ILSVRC 2014, http://cs.stanford.edu/people/karpathy/ilsvrc/. 8. Aaron Tilley, “China’s Rise in the Global AI Race Emerges as It Takes Over the Final ImageNet Competition,” Forbes, July 31, 2017, https://www.forbes.com/sites/aarontilley/2017/07/31/china-ai-imagenet/#dafa182170a8. 9. Dave Gershgorn, “The Data That Transformed AI Research—and Possibly the World,” Quartz, July 26, 2017, https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/. 10.
Inventory management involved predicting how many items would be in a warehouse on a given day. More recently, entirely new classes of prediction problems emerged. Many were nearly impossible before the recent advances in machine intelligence technology, including object identification, language translation, and drug discovery. For example, the ImageNet Challenge is a high-profile annual contest to predict the name of an object in an image. Predicting the object in an image can be a difficult task, even for humans. The ImageNet data contains a thousand categories of objects, including many breeds of dog and other similar images. It can be difficult to tell the difference between a Tibetan mastiff and a Bernese mountain dog, or between a safe and a combination lock.
Ghost Work: How to Stop Silicon Valley From Building a New Global Underclass by Mary L. Gray, Siddharth Suri
Affordable Care Act / Obamacare, Amazon Mechanical Turk, augmented reality, autonomous vehicles, barriers to entry, basic income, big-box store, bitcoin, blue-collar work, business process, business process outsourcing, call centre, Capital in the Twenty-First Century by Thomas Piketty, cloud computing, collaborative consumption, collective bargaining, computer vision, corporate social responsibility, crowdsourcing, data is the new oil, deindustrialization, deskilling, don't be evil, Donald Trump, Elon Musk, employer provided health coverage, en.wikipedia.org, equal pay for equal work, Erik Brynjolfsson, financial independence, Frank Levy and Richard Murnane: The New Division of Labor, future of work, gig economy, glass ceiling, global supply chain, hiring and firing, ImageNet competition, independent contractor, industrial robot, informal economy, information asymmetry, Jeff Bezos, job automation, knowledge economy, low skilled workers, low-wage service sector, market friction, Mars Rover, natural language processing, new economy, passive income, pattern recognition, post-materialism, post-work, race to the bottom, Rana Plaza, recommendation engine, ride hailing / ride sharing, Ronald Coase, Second Machine Age, sentiment analysis, sharing economy, Shoshana Zuboff, side project, Silicon Valley, Silicon Valley startup, Skype, software as a service, speech recognition, spinning jenny, Stephen Hawking, The Future of Employment, The Nature of the Firm, Tragedy of the Commons, transaction costs, two-sided market, union organizing, universal basic income, Vilfredo Pareto, women in the workforce, Works Progress Administration, Y Combinator, Yochai Benkler
They tried a few different workflows but were ultimately able to use about 49,000 workers from 167 countries to accurately label 3.2 million images.9 After two and a half years, their collective labor created a massive, gold-standard data set of high-resolution images, each with highly accurate labels of the objects in the image. Li called it ImageNet. Thanks to ImageNet competitions held annually since its creation, research teams use the data set to develop more sophisticated image recognition algorithms and to advance the state of the art. Having a gold-standard data set allowed researchers to measure the accuracy of their new algorithms and to compare their algorithms with the current state of the art.
To incentivize researchers to use the data set, Li and her colleagues organized an annual contest pitting the best algorithms for the image recognition problem, from various research teams around the world, against one another. The progress scientists made toward this goal was staggering. The annual ImageNet competition saw a roughly 10x reduction in error and a roughly 3x increase in precision in recognizing images over the course of eight years. Eventually the vision algorithms achieved a lower error rate than the human workers. The algorithmic and engineering advances that scientists achieved over the eight years of competition fueled much of the recent success of neural networks, the so-called deep learning revolution, which would impact a variety of fields and problem domains.
Without them generating and improving the size and quality of the training data, ImageNet would not exist.11 ImageNet’s success is a noteworthy example of the paradox of automation’s last mile in action. Humans trained an AI, only to have the AI ultimately take over the task entirely. Researchers could then open up even harder problems. For example, after the ImageNet challenge finished, researchers turned their attention to finding where an object is in an image or video. These problems needed yet more training data, generating another wave of ghost work. But ImageNet is merely one of many examples of how computer programmers and business entrepreneurs use ghost work to create training data to develop better artificial intelligence.12
The Range of Ghost Work: From Micro-Tasks to Macro-Tasks
The platforms generating on-demand ghost work offer themselves up as gatekeepers helping employers-turned-requesters tackle problems that need a bit of human intelligence.
The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns, Aaron Roth
23andMe, affirmative action, algorithmic bias, algorithmic trading, Alvin Roth, backpropagation, Bayesian statistics, bitcoin, cloud computing, computer vision, crowdsourcing, Edward Snowden, Elon Musk, Filter Bubble, general-purpose programming language, Google Chrome, ImageNet competition, Lyft, medical residency, Nash equilibrium, Netflix Prize, p-value, Pareto efficiency, performance metric, personalized medicine, pre–internet, profit motive, quantitative trading / quantitative ﬁnance, RAND corporation, recommendation engine, replication crisis, ride hailing / ride sharing, Robert Bork, Ronald Coase, self-driving car, short selling, sorting algorithm, speech recognition, statistical model, Stephen Hawking, superintelligent machines, telemarketer, Turing machine, two-sided market, Vilfredo Pareto
But money alone wasn’t enough to recruit talent—top researchers want to work where other top researchers are—so it was important for AI labs that wanted to recruit premium talent to be viewed as places that were already on the cutting edge. In the United States, this included research labs at companies such as Google and Facebook. One way to do this was to beat the big players in a high-profile competition. The ImageNet competition was perfect—focused on exactly the kind of vision task for which deep learning was making headlines. The contest required each team’s computer program to classify the objects in images into a thousand different and highly specific categories, including “frilled lizard,” “banded gecko,” “oscilloscope,” and “reflex camera.”
The training images came with labels, so that the learning algorithms could be told what kind of object was in each image. Such competitions have proliferated in recent years; the Netflix competition, which we have mentioned a couple of times already, was an early example. Commercial platforms such as Kaggle (which now, in fact, hosts the ImageNet competition) offer datasets and competitions—some offering awards of $100,000 for winning teams—for thousands of diverse, complex prediction problems. Machine learning has truly become a competitive sport. It wouldn’t make sense to score ImageNet competitors based on how well they classified the training images—after all, an algorithm could have simply memorized the labels for the training set, without learning any generalizable rule for classifying images.
It wouldn’t make sense to score ImageNet competitors based on how well they classified the training images—after all, an algorithm could have simply memorized the labels for the training set, without learning any generalizable rule for classifying images. Instead, the right way to evaluate the competitors is to see how well their models classify new images that they have never seen before. The ImageNet competition reserved 100,000 “validation” images for this purpose. But the competition organizers also wanted to give participants a way to see how well they were doing. So they allowed each team to test their progress by submitting their current model and being told how frequently it correctly classified the validation images.
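The scoring logic described here — never grade a model on the images it trained on, because a memorizer would ace that test — can be sketched in a few lines of Python. Everything in this toy (the tuple-valued "images", the 80/20 split, and the `memorizer` model) is invented for illustration; it only shows why a held-out validation set is the right scorecard:

```python
import random

def evaluate(model, images, labels):
    """Fraction of images the model labels correctly."""
    correct = sum(1 for x, y in zip(images, labels) if model(x) == y)
    return correct / len(labels)

# Toy data: each "image" is a pair of random features; labels cycle over 3 classes.
random.seed(0)
data = [((random.random(), random.random()), i % 3) for i in range(1000)]
train, validation = data[:800], data[800:]

# A "memorizing" model: it looks up images it has already seen and is
# perfect on them, but can only emit a default guess on unseen images.
lookup = {x: y for x, y in train}

def memorizer(x):
    return lookup.get(x, 0)

train_acc = evaluate(memorizer, *zip(*train))          # 1.0: pure memorization
val_acc = evaluate(memorizer, *zip(*validation))       # only the base rate of class 0
```

Scoring on the training set rewards the lookup table with a perfect score; the validation set exposes that no generalizable rule was learned.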
The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do by Erik J. Larson
AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Amazon Mechanical Turk, artificial general intelligence, autonomous vehicles, Black Swan, Boeing 737 MAX, business intelligence, Claude Shannon: information theory, conceptual framework, correlation does not imply causation, Elon Musk, Ernest Rutherford, Filter Bubble, Georg Cantor, hive mind, ImageNet competition, information retrieval, invention of the printing press, invention of the wheel, Isaac Newton, Jaron Lanier, John von Neumann, Kevin Kelly, Law of Accelerating Returns, Loebner Prize, Nate Silver, natural language processing, Norbert Wiener, PageRank, pattern recognition, Peter Thiel, Ray Kurzweil, retrograde motion, self-driving car, semantic web, Silicon Valley, social intelligence, speech recognition, statistical model, Stephen Hawking, superintelligent machines, technological singularity, The Coming Technological Singularity, the scientific method, The Signal and the Noise by Nate Silver, The Wisdom of Crowds, theory of mind, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, Yochai Benkler
The systems aren’t perfect, largely because of the constant cat-and-mouse game between service providers and spammers endlessly trying new and different approaches to fool trained filters.3 Spam detection is not a particularly sexy example of supervised learning. Modern deep learning systems also perform classification for tasks like image recognition and visual object recognition. The well-known ImageNet competitions present contestants with a large-scale task in supervised learning, drawing on the millions of images that ImageNet has downloaded from websites like Flickr for use in training and testing the accuracy of deep learning systems. All these images have been labeled by humans (providing their services to the project through Amazon’s Mechanical Turk interface) and the terms they apply make up a structured database of English words known as WordNet.
A selected subset of words in WordNet represents a category to be learned, using common nouns (like dog, pumpkin, piano, house) and a selection of more obscure items (like Scottish terrier, hussar monkey, flamingo). The contest is to see which of the competing deep learning classifiers is able to label the most images correctly, as they were labeled by the humans. With over a thousand categories being used in ImageNet competitions, the task far exceeds the yes-or-no problem presented to spam detectors (or any other binary classification task, such as simply labeling whether an image is of a human face or not). Competing in this competition means performing a massive classification task using pixel data as input.4 Sequence classification is often used in natural language processing applications.
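The contrast drawn here — a thousand-way decision over raw pixels versus a spam filter's yes-or-no — can be made concrete with a minimal sketch. The random untrained weights, the small 32×32 stand-in for real image resolution, and the linear scorer are all assumptions for illustration (real entrants used deep networks); the point is only the shape of the problem — one score per category, prediction by the highest score:

```python
import numpy as np

rng = np.random.default_rng(2)

n_classes = 1000                     # ImageNet-scale label set
pixels = rng.random(32 * 32 * 3)     # flattened RGB thumbnail as raw input

# A (random, untrained) linear classifier: one score per category.
# A spam detector would need only a single score thresholded at yes/no;
# here the prediction is the argmax over a thousand categories.
w = rng.standard_normal((n_classes, pixels.size)) * 0.01
scores = w @ pixels
predicted_category = int(np.argmax(scores))
```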
In truth it was because there was, initially, a hodgepodge of older statistical techniques in use for data science and machine learning in AI that the sought-after insights emerging from big data were mistakenly pinned to the data volume itself. This was a ridiculous proposition from the start; data points are facts and, again, can’t become insightful themselves. Although this has become apparent only in the rearview mirror, the early deep learning successes on visual object recognition, in the ImageNet competitions, signaled the beginning of a transfer of zeal from big data to the machine learning methods that benefit from it—in other words, to the newly explosive field of AI. Thus big data has peaked, and now seems to be receding from popular discussion almost as quickly as it appeared. The focus on deep learning makes sense, because after all, the algorithms rather than just the data are responsible for trouncing human champions at Go, mastering Atari games, driving cars, and the rest.
The Alignment Problem: Machine Learning and Human Values by Brian Christian
Albert Einstein, algorithmic bias, Amazon Mechanical Turk, artificial general intelligence, augmented reality, autonomous vehicles, backpropagation, butterfly effect, Cass Sunstein, Claude Shannon: information theory, computer vision, Donald Knuth, Douglas Hofstadter, effective altruism, Elon Musk, game design, Google Chrome, Google Glasses, Google X / Alphabet X, Gödel, Escher, Bach, hedonic treadmill, ImageNet competition, industrial robot, Internet Archive, John von Neumann, Joi Ito, Kenneth Arrow, longitudinal study, mandatory minimum, mass incarceration, natural language processing, Norbert Wiener, Panopticon Jeremy Bentham, pattern recognition, Peter Singer: altruism, Peter Thiel, premature optimization, RAND corporation, recommendation engine, Richard Feynman, Rodney Brooks, Saturday Night Live, selection bias, self-driving car, side project, Silicon Valley, speech recognition, Stanislav Petrov, statistical model, Steve Jobs, strong AI, the map is not the territory, theory of mind, Tim Cook: Apple, zero-sum game
Hinton has come up with an idea called “dropout,” where during training certain portions of the network get randomly turned off. Krizhevsky tries this, and it seems, for various reasons, to help. He tries using neurons with a so-called “rectified linear” output function. This, too, seems to help. He submits his best model on the ImageNet competition deadline, September 30, and then the final wait begins. Two days later, Krizhevsky gets an email from Stanford’s Jia Deng, who is organizing that year’s competition, cc’d to all of the entrants. In plain, unemotional language, Deng says to click the link provided to see the results. Krizhevsky clicks the link provided and sees the results.
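The two tricks named in this passage are easy to sketch. A minimal NumPy illustration (the array values are arbitrary, and this uses the now-common "inverted" dropout scaling, a variant of the scheme Hinton's group described, which instead rescaled weights at test time):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified linear output: pass positives through, zero out negatives.
    return np.maximum(0.0, x)

def dropout(x, p=0.5, training=True):
    # During training, randomly zero each unit with probability p, and
    # rescale survivors so the expected activation is unchanged.
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

h = relu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0]))
# h == [0., 0., 0., 0.5, 2.]
d = dropout(h, p=0.5)   # each surviving unit doubled, the rest zeroed
```

At test time `dropout(x, training=False)` is the identity, so the full network is used for the final prediction.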
—ERNEST BURGESS71 Your scientists were so preoccupied with whether or not they could . . . that they didn’t stop to think if they should. — JEFF GOLDBLUM AS IAN MALCOLM, JURASSIC PARK One of the most important things in any prediction is to make sure that you’re actually predicting what you think you’re predicting. This is harder than it sounds. In the ImageNet competition, for instance—in which AlexNet did so well in 2012—the goal is to train machines to identify what images depict. But this isn’t what the training data captures. The training data captures what human volunteers on Mechanical Turk said the image depicted. If a baby lion, let’s say, were repeatedly misidentified by human volunteers as a cat, it would become part of a system’s training data as a cat—and any system labeling it as a lion would be docked points and would have to adjust its parameters to correct this “error.”
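The point about predicting the label rather than the truth can be made concrete in a few lines. In this toy sketch (the animal labels and the `oracle`/`conformist` "models" are hypothetical, invented for the example), a model that sees the world correctly scores worse than one that learned the annotators' mistakes:

```python
def accuracy_against_labels(predictions, labels):
    """Score predictions against whatever the annotators wrote down."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Hypothetical ground truth vs. what crowd annotators actually labeled:
truth = ["lion", "lion", "cat", "dog"]
labels = ["cat", "cat", "cat", "dog"]   # annotators mislabel lions as cats

oracle = truth        # a model that identifies every animal correctly
conformist = labels   # a model that learned the annotators' habits

oracle_score = accuracy_against_labels(oracle, labels)          # 0.5: docked for being right
conformist_score = accuracy_against_labels(conformist, labels)  # 1.0: rewarded for the "error"
```

Since the training objective only ever sees `labels`, gradient updates push any model toward the conformist, not the oracle.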
By the fourth layer, the network was responding to configurations of eyes and nose, to tile floors, to the radial geometry of a starfish or a spider, to the petals of a flower or keys on a typewriter. By the fifth layer, the ultimate categories into which objects were being assigned seemed to exert a strong influence. The effect was dramatic, insightful. But was it useful? Zeiler popped the hood of the AlexNet model that had won the ImageNet competition in 2012 and started digging around, inspecting it using deconvolution. He noticed a bunch of flaws. Some low-level parts of the network had normalized incorrectly, like an overexposed photograph. Other filters had gone “dead” and weren’t detecting anything. Zeiler hypothesized that they weren’t correctly sized for the types of patterns they were trying to match.
The Economic Singularity: Artificial Intelligence and the Death of Capitalism by Calum Chace
3D printing, additive manufacturing, agricultural Revolution, AI winter, Airbnb, artificial general intelligence, augmented reality, autonomous vehicles, banking crisis, basic income, Baxter: Rethink Robotics, Berlin Wall, Bernie Sanders, bitcoin, blockchain, call centre, Chris Urmson, congestion charging, credit crunch, David Ricardo: comparative advantage, Douglas Engelbart, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Flynn Effect, full employment, future of work, gender pay gap, gig economy, Google Glasses, Google X / Alphabet X, ImageNet competition, income inequality, industrial robot, Internet of things, invention of the telephone, invisible hand, James Watt: steam engine, Jaron Lanier, Jeff Bezos, job automation, John Markoff, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, knowledge worker, lifelogging, lump of labour, Lyft, Marc Andreessen, Mark Zuckerberg, Martin Wolf, McJob, means of production, Milgram experiment, Narrative Science, natural language processing, new economy, Occupy movement, Oculus Rift, PageRank, pattern recognition, post scarcity, post-industrial society, post-work, precariat, prediction markets, QWERTY keyboard, railway mania, RAND corporation, Ray Kurzweil, RFID, Rodney Brooks, Sam Altman, Satoshi Nakamoto, Second Machine Age, self-driving car, sharing economy, Silicon Valley, Skype, software is eating the world, speech recognition, Stephen Hawking, Steve Jobs, TaskRabbit, technological singularity, The future is already here, The Future of Employment, Thomas Malthus, transaction costs, Tyler Cowen: Great Stagnation, Uber for X, uber lyft, universal basic income, Vernor Vinge, working-age population, Y Combinator, young professional
In deep learning, the algorithms operate in several layers, each layer processing data from previous ones and passing the output up to the next layer. The output is not necessarily binary, just on or off: it can be weighted. The number of layers can vary too, with anything above ten layers seen as very deep learning – although in December 2015 a Microsoft team won the ImageNet competition with a system which employed a massive 152 layers.[lxvi] Deep learning, and especially artificial neural nets (ANNs), are in many ways a return to an older approach to AI which was explored in the 1960s but abandoned because it proved ineffective. While Good Old-Fashioned AI held sway in most labs, a small group of pioneers known as the Toronto mafia kept faith with the neural network approach.
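The layer-by-layer flow described here — each layer processing the previous layer's output and passing a weighted, not merely on/off, result upward — can be sketched as a toy forward pass. The depth, width, tanh nonlinearity, and random weights are illustrative choices, not any particular competition system:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w, b):
    # One layer: weighted sum of the previous layer's output plus a bias,
    # squashed by a nonlinearity, then handed up to the next layer.
    return np.tanh(w @ x + b)

depth = 12   # "very deep" by the yardstick in the text; ResNet-152 stacked far more
width = 8
x = rng.standard_normal(width)
for _ in range(depth):
    w = rng.standard_normal((width, width)) * 0.5
    b = np.zeros(width)
    x = layer(x, w, b)
# x now holds continuous ("weighted") activations, each in (-1, 1)
```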
In December 2015, Microsoft's chief speech scientist Xuedong Huang noted that speech recognition has improved 20% a year consistently for the last 20 years. He predicted that computers would be as good as humans at understanding human speech within five years. Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. Common sense can be described as having a mental model of the world which allows you to predict what will happen if certain actions are taken. Professor Murray Shanahan of Imperial College uses the example of throwing a chair from a stage into an audience: humans would understand that members of the audience would throw up their hands to protect themselves, but some damage would probably be caused, and certainly some upset.
AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee
AI winter, Airbnb, Albert Einstein, algorithmic bias, algorithmic trading, artificial general intelligence, autonomous vehicles, barriers to entry, basic income, business cycle, cloud computing, commoditize, computer vision, corporate social responsibility, creative destruction, crony capitalism, Deng Xiaoping, deskilling, Donald Trump, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, full employment, future of work, gig economy, Google Chrome, happiness index / gross national happiness, if you build it, they will come, ImageNet competition, impact investing, income inequality, informal economy, Internet of things, invention of the telegraph, Jeff Bezos, job automation, John Markoff, Kickstarter, knowledge worker, Lean Startup, low skilled workers, Lyft, mandatory minimum, Mark Zuckerberg, Menlo Park, minimum viable product, natural language processing, new economy, pattern recognition, pirate software, profit maximization, QR code, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, risk tolerance, Robert Mercer, Rodney Brooks, Rubik’s Cube, Sam Altman, Second Machine Age, self-driving car, sentiment analysis, sharing economy, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, Skype, special economic zone, speech recognition, Stephen Hawking, Steve Jobs, strong AI, The Future of Employment, Travis Kalanick, Uber and Lyft, uber lyft, universal basic income, urban planning, Y Combinator
One of the clearest examples of these accelerating improvements is the ImageNet competition. In the competition, algorithms submitted by different teams are tasked with identifying thousands of different objects within millions of different images, such as birds, baseballs, screwdrivers, and mosques. It has quickly emerged as one of the most respected image-recognition contests and a clear benchmark for AI’s progress in computer vision. When the Oxford machine-learning experts made their estimates of technical capabilities in early 2013, the most recent ImageNet competition of 2012 had been the coming-out party for deep learning.
Bold: How to Go Big, Create Wealth and Impact the World by Peter H. Diamandis, Steven Kotler
3D printing, additive manufacturing, Airbnb, Amazon Mechanical Turk, Amazon Web Services, augmented reality, autonomous vehicles, Charles Lindbergh, cloud computing, creative destruction, crowdsourcing, Daniel Kahneman / Amos Tversky, dematerialisation, deskilling, disruptive innovation, Elon Musk, en.wikipedia.org, Exxon Valdez, fear of failure, Firefox, Galaxy Zoo, Google Glasses, Google Hangouts, gravity well, ImageNet competition, industrial robot, Internet of things, Jeff Bezos, John Harrison: Longitude, John Markoff, Jono Bacon, Just-in-time delivery, Kickstarter, Kodak vs Instagram, Law of Accelerating Returns, Lean Startup, life extension, loss aversion, Louis Pasteur, low earth orbit, Mahatma Gandhi, Marc Andreessen, Mark Zuckerberg, Mars Rover, meta-analysis, microbiome, minimum viable product, move fast and break things, Narrative Science, Netflix Prize, Network effects, Oculus Rift, optical character recognition, packet switching, PageRank, pattern recognition, performance metric, Peter H. Diamandis: Planetary Resources, Peter Thiel, pre–internet, Ray Kurzweil, recommendation engine, Richard Feynman, ride hailing / ride sharing, risk tolerance, rolodex, self-driving car, sentiment analysis, shareholder value, Silicon Valley, Silicon Valley startup, skunkworks, Skype, smart grid, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Stewart Brand, superconnector, technoutopianism, telepresence, telepresence robot, Turing test, urban renewal, web application, X Prize, Y Combinator, zero-sum game
Fifty thousand different traffic signs are used—signs obscured by long distances, by trees, by the glare of sunlight. In 2011, for the first time, a machine-learning algorithm bested its makers, achieving a 0.5 percent error rate, compared to 1.2 percent for humans.32 Even more impressive were the results of the 2012 ImageNet Competition, which challenged algorithms to look at one million different images—ranging from birds to kitchenware to people on motor scooters—and correctly slot them into a thousand unique categories. Seriously, it’s one thing for a computer to recognize known objects (zip codes, traffic signs), but categorizing thousands of random objects is an ability that is downright human.
., 15, 17, 18, 19, 20, 21 structure of, 21 see also entrepreneurs, exponential; specific exponential entrepreneurs and organizations Exponential Organizations (ExO) (Ismail), xiv, 15 extrinsic rewards, 78, 79 Exxon Valdez, 250 FAA (Federal Aviation Administration), 110, 111, 261 Facebook, 14, 16, 88, 128, 173, 182, 185, 190, 195, 196, 202, 212, 213, 217, 218, 224, 233, 234, 236, 241 facial recognition software, 58 Fairchild Semiconductor, 4 Falcon launchers, 97, 119, 122, 123 false wins, 268, 269, 271 Fast Company, 5, 248 Favreau, Jon, 117 feedback, feedback loops, 28, 77, 83, 84, 120, 176, 180 in crowdfunding campaigns, 176, 180, 182, 185, 190, 199, 200, 202, 209–10 triggering flow with, 86, 87, 90–91, 92 Festo, 61 FeverBee (blog), 233 Feynman, Richard, 268, 271 Firefox Web browser, 11 first principles, 116, 120–21, 122, 126 Fiverr, 157 fixed-funding campaigns, 185–86, 206 “flash prizes,” 250 Flickr, 14 flow, 85–94, 109, 278 creative triggers of, 87, 93 definition of, 86 environmental triggers of, 87, 88–89 psychological triggers of, 87, 89–91, 92 social triggers of, 87, 91–93 Flow Genome Project, xiii, 87, 278 Foldit, 145 Forbes, 125 Ford, Henry, 33, 112–13 Fortune, 123 Fossil Wrist Net, 176 Foster, Richard, 14–15 Foundations (Rose), 120 Fowler, Emily, 299n Foxconn, 62 Free (Anderson), 10–11 Freelancer.com, 149–51, 156, 158, 163, 165, 195, 207 Friedman, Thomas, 150–51 Galaxy Zoo, 220–21, 228 Gartner Hype Cycle, 25–26, 25, 26, 29 Gates, Bill, 23, 53 GEICO, 227 General Electric (GE), 43, 225 General Mills, 145 Gengo.com, 145 Genius, 161 genomics, x, 63, 64–65, 66, 227 Georgia Tech, 197 geostationary satellite, 100 Germany, 55 Get a Freelancer (website), 149 Gigwalk, 159 Giovannitti, Fred, 253 Gmail, 77, 138, 163 goals, goal setting, 74–75, 78, 79, 80, 82–83, 84, 85, 87, 137 in crowdfunding campaigns, 185–87, 191 moonshots in, 81–83, 93, 98, 103, 104, 110, 245, 248 subgoals in, 103–4, 112 triggering flow with, 89–90, 92, 93 Godin, Seth, 239–40 Google, 11, 14, 47, 
50, 61, 77, 80, 99, 128, 134, 135–39, 167, 195, 208, 251, 286n artificial intelligence development at, 24, 53, 58, 81, 138–39 autonomous cars of, 43–44, 44, 136, 137 eight innovation principles of, 84–85 robotics at, 139 skunk methodology used at, 81–84 thinking-at-scale strategies at, 136–38 Google Docs, 11 Google Glass, 58 Google Hangouts, 193, 202 Google Lunar XPRIZE, 139, 249 Googleplex, 134 Google+, 185, 190, 202 GoogleX, 81, 82, 83, 139 Google Zeitgeist, 136 Gossamer Condor, 263 Gou, Terry, 62 graphic designers, in crowdfunding campaigns, 193 Green, Hank, 180, 200 Grepper, Ryan, 210, 211–13 Grishin, Dmitry, 62 Grishin Robotics, 62 group flow, 91–93 Gulf Coast oil spill (2010), 250, 251, 253 Gulf of Mexico, 250, 251 hackathons, 159 hacker spaces, 62, 64 Hagel, John, III, 86, 106–7 HAL (fictional AI system), 52, 53 Hallowell, Ned, 88 Hariri, Robert, 65, 66 Harrison, John, 245, 247, 267 Hawking, Stephen, 110–12 Hawley, Todd, 100, 103, 104, 107, 114n Hayabusa mission, 97 health care, x, 245 AI’s impact on, 57, 276 behavior tracking in, 47 crowdsourcing projects in, 227, 253 medical manufacturing in, 34–35 robotics in, 62 3–D printing’s impact on, 34–35 Heath, Dan and Chip, 248 Heinlein, Robert, 114n Hendy, Barry, 12 Hendy’s law, 12 HeroX, 257–58, 262, 263, 265, 267, 269, 299n Hessel, Andrew, 63, 64 Hinton, Geoffrey, 58 Hoffman, Reid, 77, 231 Hollywood, 151–52 hosting platforms, 20–21 Howard, Jeremy, 54 Howe, Jeff, 144 Hseih, Tony, 80 Hughes, Jack, 152, 225–27, 254 Hull, Charles, 29–30, 32 Human Longevity, Inc. 
(HLI), 65–66 Hyatt Hotels Corporation, 20 IBM, 56, 57, 59, 76 ImageNet Competition (2012), 55 image recognition, 55, 58 Immelt, Jeff, 225 incentive competitions, xiii, 22, 139, 148, 152–54, 159, 160, 237, 240, 242, 243–73 addressing market failures with, 264–65, 269, 272 back-end business models in, 249, 265, 268 benefits of, 258–61 case studies of, 250–58 collaborative spirit in, 255, 260–61 crowdsourcing in designing of, 257–58 factors influencing success of, 245–47 false wins in, 268, 269, 271 “flash prizes” in, 250 global participation in, 267 innovation driven by, 245, 247, 248, 249, 252, 258–59, 260, 261 intellectual property (IP) in, 262, 267–68, 271 intrinsic rewards in, 254, 255 judging in, 273 key parameters for designing of, 263–68 launching of new industries with, 260, 268, 272 Master Team Agreements in, 273 media exposure in, 265, 266, 272, 273 MTP and passion as important in, 248, 249, 255, 263, 265, 270 operating costs of, 271, 272–73 principal motivators in, 254, 262–63 purses in, 265, 266, 270, 273 reasons for effectiveness of, 247–49 risk taking in, 247, 248–49, 261, 270 setting rules in, 263, 268, 269, 271, 273 small teams as ideal in, 262 step-by-step guide to, 269–73 telegenic finishes in, 266, 272, 273 time limits in, 249, 267, 271–72 XPRIZE, see XPRIZE competitions Indian Motorcycle company, 222 Indian Space Research Organization, 102 Indiegogo, 145, 173, 175, 178, 179, 184, 185–86, 187, 190, 199, 205, 206, 257 infinite computing, 21, 24, 41, 48–52, 61, 66 entrepreneurial opportunities and, 50–52 information: crowdsourcing platforms in gathering of, 145–46, 154–56, 157, 159–60, 220–21, 228 in data-driven crowdfunding campaigns, 207–10, 213 networks and sensors in garnering of, 42–43, 44, 47, 48, 256 science, 64 see also data mining Inman, Matthew, 178, 192, 193, 200 innovation, 8, 30, 56, 137, 256 companies resistant to, xi, 9–10, 12, 15, 23, 76 crowdsourcing and, see crowdsourcing as disruptive technology, 9–10 feedback loops in fostering 
of, 28, 77, 83, 84, 86, 87, 90–91, 92, 120, 176 Google’s eight principles of, 84–85 incentive competitions in driving of, 245, 247, 248, 249, 252, 258–59, 260, 261 infinite computing as new approach to, 51 power of constraints and, 248–49, 259 rate of, in online communities, 216, 219, 224, 225, 228, 233, 237 setting big goals for, 74–75, 78, 79, 80, 82–83, 84, 85, 87, 89–90, 92, 93, 103 skunk methodology in fostering of, 71–87, 88; see also skunk methodology inPulse, 176, 200 Instagram, 15–16, 16 insurance companies, 47 Intel, 7 intellectual property (IP), 262, 267–68, 271 INTELSAT, 102 Intel Science and Engineering Fair, 65 International Manufacturing Technology Show, 33 International Space Station (ISS), 35–36, 37, 97, 119 International Space University (ISU), 96, 100–104, 107–8 Founding Conference of, 102, 103 Internet, 8, 14, 39, 41, 45, 49, 50, 117, 118, 119, 132, 136, 143, 144, 153, 154, 163, 177, 207, 208, 209, 212, 216, 217, 228 building communities on, see communities, online crowd tools on, see crowdfunding, crowdfunding campaigns; crowdsourcing development of, 27 explosion of connectivity to, 42, 46, 46, 146, 147, 245 mainstreaming of, 27, 32, 33 reputation economics and, 217–19, 230, 232, 236–37 Internet-of-Things (IoT), 46, 47, 53 intrinsic rewards, 79, 254, 255 Invisalign, 34–35 iPads, 42, 57, 167 iPhones, 12, 42, 62, 176 iPod, 17, 18, 178 iRobot, 60 Iron Man, 52–53, 117 Ismail, Salim, xiv, 15, 77, 92 isolation, innovation and, 72, 76, 78, 79, 81–82, 257 Japan Aerospace Exploration Agency, 97 JARVIS (fictional AI system), 52–53, 58, 59, 146 Jeopardy, 56, 57 Jet Propulsion Laboratory (JPL), 99 Jobs, Steve, xiv, 23, 66–67, 72, 89, 111, 123 Johnson, Carolyn, 227 Johnson, Clarence “Kelly,” 71, 74, 75 skunk work rules of, 74, 75–76, 77, 81, 84, 247 Joy, Bill, 216, 256 Jumpstart Our Business Startups (JOBS) Act (2012), 171, 173 Kaggle, 160, 161 Kahneman, Daniel, 78, 121 Kaku, Michio, 49 Kauffman, Stuart, 276 Kaufman, Ben, 17–20 Kay, Alan, 114n Kemmer, 
Aaron, 35, 36, 37 Kickstarter, 145, 171, 173, 175, 176, 179–80, 182, 184, 190, 191, 193, 195, 197, 200, 205, 206 Kindle, 132 Kiva.org, 144–45, 172 Klein, Candace, 19–20, 171 Klein, Joshua, 217–18, 221 Klout, 218 Kodak Corporation, 4–8, 9–10, 11, 12, 20 Apparatus Division of, 4 bankruptcy of, 10, 16 digital camera developed by, 4–5, 5, 9 as innovation resistant, 9–10, 12, 15, 76 market dominance of, 5–6, 13–14 Kotler, Steven, xi, xiii, xv, 87, 279 Krieger, Mike, 15 Kubrick, Stanley, 52 Kurzweil, Ray, 53, 54, 58, 59 language translators, 137–38 crowdsourcing projects of, 145, 155–56 Latham, Gary, 74–75, 103 Law of Niches, 221, 223, 228, 231 leadership: importance of vision in, 23–24 moral, 274–76 Lean In (Sandberg), 217 Lean In circles, 217, 237 LEAP airplane, 34 LendingClub, 172 LeNet 5, 54, 55 Let’s Build a Goddamn Tesla Museum, see Tesla Museum campaign Levy, Steven, 138 Lewicki, Chris, 99, 179, 202, 203–4 Lichtenberg, Byron K., 102, 114n Licklider, J.
Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell
3D printing, Ada Lovelace, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alfred Russel Wallace, algorithmic bias, Andrew Wiles, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, augmented reality, autonomous vehicles, basic income, blockchain, brain emulation, Cass Sunstein, Claude Shannon: information theory, complexity theory, computer vision, connected car, crowdsourcing, Daniel Kahneman / Amos Tversky, delayed gratification, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Ernest Rutherford, Flash crash, full employment, future of work, Garrett Hardin, Gerolamo Cardano, ImageNet competition, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invention of the wheel, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Nash: game theory, John von Neumann, Kenneth Arrow, Kevin Kelly, Law of Accelerating Returns, Mark Zuckerberg, Nash equilibrium, Norbert Wiener, NP-complete, openstreetmap, P = NP, Pareto efficiency, Paul Samuelson, Pierre-Simon Laplace, positional goods, probability theory / Blaise Pascal / Pierre de Fermat, profit maximization, RAND corporation, random walk, Ray Kurzweil, recommendation engine, RFID, Richard Thaler, ride hailing / ride sharing, Robert Shiller, robotic process automation, Rodney Brooks, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, smart cities, smart contracts, social intelligence, speech recognition, Stephen Hawking, Steven Pinker, superintelligent machines, surveillance capitalism, Thales of Miletus, The Future of Employment, The Theory of the Leisure Class by Thorstein Veblen, Thomas Bayes, Thorstein Veblen, Tragedy of the Commons, transport as a service, Turing machine, Turing test, universal basic income, uranium enrichment, Von Neumann architecture, Wall-E, Watson beat the top human players on 
Jeopardy!, web application, zero-sum game
This leads to a simple formula for propagating the error backwards from the output layer to the input layer, tweaking knobs along the way. Miraculously, the process works. For the task of recognizing objects in photographs, deep learning algorithms have demonstrated remarkable performance. The first inkling of this came in the 2012 ImageNet competition, which provides training data consisting of 1.2 million labeled images in one thousand categories, and then requires the algorithm to label one hundred thousand new images.4 Geoff Hinton, a British computational psychologist who was at the forefront of the first neural network revolution in the 1980s, had been experimenting with a very large deep convolutional network: 650,000 nodes and 60 million parameters.
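The knob-tweaking the passage describes can be made concrete with a toy sketch (an illustration, not the network from the competition: the OR task, layer sizes, and learning rate are all invented). The error at the output is propagated backwards through one hidden layer, and each weight is nudged down its gradient:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: learn y = OR(x1, x2), a stand-in for the labeled images.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# One hidden layer with 2 units; the weights are the "knobs".
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j]) for j in range(2)]
    y = sigmoid(sum(w * hj for w, hj in zip(w_o, h)) + b_o)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train_step(lr=0.5):
    global b_o
    for x, t in data:
        h, y = forward(x)
        # Error at the output layer...
        d_y = (y - t) * y * (1 - y)
        for j in range(2):
            # ...propagated backwards to the hidden layer,
            d_h = d_y * w_o[j] * h[j] * (1 - h[j])
            # ...tweaking each knob a little along the way.
            w_o[j] -= lr * d_y * h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_h * x[i]
            b_h[j] -= lr * d_h
        b_o -= lr * d_y

before = loss()
for _ in range(2000):
    train_step()
after = loss()
print(f"loss before: {before:.3f}, after: {after:.3f}")
```

After training, the loss has dropped and the network separates the OR cases, which is the whole trick scaled up a million-fold in the deep networks the excerpt goes on to describe.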
For example, to learn the difference between the “situational superko” and “natural situational superko” rules, the learning algorithm would have to try repeating a board position that it had created previously by a pass rather than by playing a stone. The results would be different in different countries. 4. For a description of the ImageNet competition, see Olga Russakovsky et al., “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision 115 (2015): 211–52. 5. The first demonstration of deep networks for vision: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, ed.
Army of None: Autonomous Weapons and the Future of War by Paul Scharre
active measures, Air France Flight 447, algorithmic trading, artificial general intelligence, augmented reality, automated trading system, autonomous vehicles, basic income, brain emulation, Brian Krebs, cognitive bias, computer vision, cuban missile crisis, dark matter, DARPA: Urban Challenge, DevOps, drone strike, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, facts on the ground, fault tolerance, Flash crash, Freestyle chess, friendly fire, IFF: identification friend or foe, ImageNet competition, Internet of things, Johann Wolfgang von Goethe, John Markoff, Kevin Kelly, Loebner Prize, loose coupling, Mark Zuckerberg, moral hazard, mutually assured destruction, Nate Silver, pattern recognition, Rodney Brooks, Rubik’s Cube, self-driving car, sensor fusion, South China Sea, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Ballmer, Steve Wozniak, Stuxnet, superintelligent machines, Tesla Model S, The Signal and the Noise by Nate Silver, theory of mind, Turing test, universal basic income, Valery Gerasimov, Wall-E, William Langewiesche, Y2K, zero day
Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf. 87 over a hundred layers: Christian Szegedy et al., “Going Deeper With Convolutions,” https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf. 87 error rate of only 4.94 percent: Richard Eckel, “Microsoft Researchers’ Algorithm Sets ImageNet Challenge Milestone,” Microsoft Research Blog, February 10, 2015, https://www.microsoft.com/en-us/research/microsoft-researchers-algorithm-sets-imagenet-challenge-milestone/. Kaiming He et al., “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” https://arxiv.org/pdf/1502.01852.pdf. 87 estimated 5.1 percent error rate: Olga Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” January 20, 2015, https://arxiv.org/pdf/1409.0575.pdf. 87 3.57 percent rate: Kaiming He et al., “Deep Residual Learning for Image Recognition,” December 10, 2015, https://arxiv.org/pdf/1512.03385v1.pdf. 6 Crossing the Threshold: Approving Autonomous Weapons 89 delineation of three classes of systems: Department of Defense, “Department of Defense Directive Number 3000.09.” 90 “minimize the probability and consequences”: Ibid, 1. 91 “We haven’t had anything that was even remotely close”: Frank Kendall, interview, November 7, 2016. 91 “We had an automatic mode”: Ibid. 91 “relatively soon”: Ibid. 91 “sort through all that”: Ibid. 91 “Are you just driving down”: Ibid. 92 “other side of the equation”: Ibid. 92 “a reasonable question to ask”: Ibid. 92 “where technology supports it”: Ibid. 92 “principles and obey them”: Ibid. 93 “Automation and artificial intelligence are”: Ibid. 93 Work explained in a 2014 monograph: Robert O.
Coders: The Making of a New Tribe and the Remaking of the World by Clive Thompson
2013 Report for America's Infrastructure - American Society of Civil Engineers - 19 March 2013, 4chan, 8-hour work day, Ada Lovelace, AI winter, Airbnb, algorithmic bias, Amazon Web Services, Asperger Syndrome, augmented reality, Ayatollah Khomeini, backpropagation, barriers to entry, basic income, Bernie Sanders, bitcoin, blockchain, blue-collar work, Brewster Kahle, Brian Krebs, Broken windows theory, call centre, cellular automata, Chelsea Manning, clean water, cloud computing, cognitive dissonance, computer vision, Conway's Game of Life, crowdsourcing, cryptocurrency, Danny Hillis, David Heinemeier Hansson, disinformation, don't be evil, don't repeat yourself, Donald Trump, dumpster diving, Edward Snowden, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, Ethereum, ethereum blockchain, Firefox, Frederick Winslow Taylor, game design, glass ceiling, Golden Gate Park, Google Hangouts, Google X / Alphabet X, Grace Hopper, Guido van Rossum, Hacker Ethic, hockey-stick growth, HyperCard, Ian Bogost, illegal immigration, ImageNet competition, Internet Archive, Internet of things, Jane Jacobs, John Markoff, Jony Ive, Julian Assange, Kickstarter, Larry Wall, lone genius, Lyft, Marc Andreessen, Mark Shuttleworth, Mark Zuckerberg, Menlo Park, microservices, Minecraft, move fast and break things, Nate Silver, Network effects, neurotypical, Nicholas Carr, Oculus Rift, PageRank, pattern recognition, Paul Graham, paypal mafia, Peter Thiel, pink-collar, planetary scale, profit motive, ransomware, recommendation engine, Richard Stallman, ride hailing / ride sharing, Rubik’s Cube, Ruby on Rails, Sam Altman, Satoshi Nakamoto, Saturday Night Live, self-driving car, side project, Silicon Valley, Silicon Valley ideology, Silicon Valley startup, single-payer health, Skype, smart contracts, Snapchat, social software, software is eating the world, sorting algorithm, South of Market, San Francisco, speech recognition, Steve Wozniak, Steven Levy, TaskRabbit, the 
High Line, Travis Kalanick, Uber and Lyft, Uber for X, uber lyft, universal basic income, urban planning, Wall-E, Watson beat the top human players on Jeopardy!, WeWork, WikiLeaks, women in the workforce, Y Combinator, Zimmermann PGP, éminence grise
In 2012, the field saw a seismic breakthrough. Up at the University of Toronto, the British computer scientist Geoff Hinton had been beavering away for two decades on improving neural networks. That year, he and a team of students showed off the most impressive neural net yet by soundly beating competitors at an annual AI shootout: the ImageNet challenge, a competition among AI researchers to see whose system is best at recognizing images. Hinton’s deep-learning neural net got only 15.3 percent of the images wrong. The next-best competitor’s error rate, 26.2 percent, was almost twice as high. It was an AI moon shot.
Attention Factory: The Story of TikTok and China's ByteDance by Matthew Brennan
Airbnb, AltaVista, augmented reality, computer vision, coronavirus, Covid-19, COVID-19, Donald Trump, en.wikipedia.org, Google X / Alphabet X, ImageNet competition, income inequality, invisible hand, Kickstarter, Mark Zuckerberg, Menlo Park, natural language processing, Netflix Prize, Network effects, paypal mafia, Pearl River Delta, pre–internet, recommendation engine, ride hailing / ride sharing, Silicon Valley, Snapchat, social graph, Steve Jobs, Travis Kalanick, WeWork, Y Combinator
66 https://techcrunch.com/2012/12/05/prismatic/ 67 http://yingdudasha.cn/ 68 Image source: https://m.weibo.cn/2745813247/3656157740605616 * “Real stuff” is my imperfect translation of 干货 gānhuò, which could also be translated as “the real McCoy” or “something of substance” Chapter 3 Recommendation, From YouTube to TikTok Chapter Timeline 2009 – Netflix awards a $1 million prize for an algorithm that increased the accuracy of their video recommendation by 10% 2011 – YouTube introduces machine learning algorithmic recommendation engine, Sibyl, with immediate impact 2012 Aug – ByteDance launches news aggregation app Toutiao 2012 Sept – AlexNet breakthrough at the ImageNet challenge triggers a global explosion of interest in AI 2013 Mar – Facebook changes its newsfeed to a “personalized newspaper” 2014 Apr – Instagram begins using an “explore” tab of personalized content 2015 – Google Brain’s deep learning algorithms begin supercharging a wide variety of Google products, including YouTube recommendations It was 2010, and YouTube had a big problem.
Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems by Aurélien Géron
Amazon Mechanical Turk, Anton Chekhov, backpropagation, combinatorial explosion, computer vision, constrained optimization, correlation coefficient, crowdsourcing, don't repeat yourself, Elon Musk, en.wikipedia.org, friendly AI, ImageNet competition, information retrieval, iterative process, John von Neumann, Kickstarter, natural language processing, Netflix Prize, NP-complete, optical character recognition, P = NP, p-value, pattern recognition, pull request, recommendation engine, self-driving car, sentiment analysis, SpamAssassin, speech recognition, stochastic process
You can often get the same effect as a 5 × 5 kernel by stacking two 3 × 3 kernels on top of each other (their combined receptive field spans 5 × 5), for a lot less compute. Over the years, variants of this fundamental architecture have been developed, leading to amazing advances in the field. A good measure of this progress is the error rate in competitions such as the ILSVRC ImageNet challenge. In this competition the top-5 error rate for image classification fell from over 26% to barely over 3% in just five years. The top-5 error rate is the fraction of test images for which the system’s top 5 predictions did not include the correct answer. The images are large (256 pixels high) and there are 1,000 classes, some of which are really subtle (try distinguishing 120 dog breeds).
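The top-5 metric defined above is simple to state in code. A minimal sketch (the helper name, toy class labels, and predictions are invented for illustration, not taken from the actual ILSVRC data):

```python
def top5_error_rate(predictions, truths):
    """Fraction of examples whose true label is absent from the top-5 guesses.

    `predictions` holds, per example, the 5 highest-scoring class guesses
    ordered best first; `truths` holds the correct label for each example.
    """
    misses = sum(1 for top5, truth in zip(predictions, truths) if truth not in top5)
    return misses / len(truths)

# Toy example with 4 "images" and made-up class names:
preds = [
    ["beagle", "basset", "bluetick", "coonhound", "foxhound"],
    ["tabby", "tiger cat", "lynx", "cougar", "leopard"],
    ["airliner", "warplane", "wing", "airship", "balloon"],
    ["espresso", "cup", "eggnog", "soup bowl", "ladle"],
]
truth = ["basset", "lynx", "speedboat", "espresso"]

print(top5_error_rate(preds, truth))  # 1 of 4 images missed -> 0.25
```

The metric is forgiving by design: with 1,000 classes, several of them near-identical dog breeds, a system is credited as long as the right answer appears anywhere in its five best guesses.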
Pac-Man Using Deep Q-Learning GRU (Gated Recurrent Unit) cell, GRU Cell-GRU Cell H hailstone sequence, Efficient Data Representations hard margin classification, Soft Margin Classification-Soft Margin Classification hard voting classifiers, Voting Classifiers-Voting Classifiers harmonic mean, Precision and Recall He initialization, Vanishing/Exploding Gradients Problems-Xavier and He Initialization Heaviside step function, The Perceptron Hebb's rule, The Perceptron, Hopfield Networks Hebbian learning, The Perceptron hidden layers, Multi-Layer Perceptron and Backpropagation hierarchical clustering, Unsupervised learning hinge loss function, Online SVMs histograms, Take a Quick Look at the Data Structure-Take a Quick Look at the Data Structure hold-out sets, Stacking (see also blenders) Hopfield Networks, Hopfield Networks-Hopfield Networks hyperbolic tangent (tanh activation function), Multi-Layer Perceptron and Backpropagation, Activation Functions, Vanishing/Exploding Gradients Problems, Xavier and He Initialization, Recurrent Neurons hyperparameters, Overfitting the Training Data, Custom Transformers, Grid Search-Grid Search, Evaluate Your System on the Test Set, Gradient Descent, Polynomial Kernel, Computational Complexity, Fine-Tuning Neural Network Hyperparameters (see also neural network hyperparameters) hyperplane, Decision Function and Predictions, Manifold Learning-PCA, Projecting Down to d Dimensions, Other Dimensionality Reduction Techniques hypothesis, Select a Performance Measure; manifold, Manifold Learning hypothesis boosting (see boosting) hypothesis function, Linear Regression hypothesis, null, Regularization Hyperparameters I identity matrix, Ridge Regression, Quadratic Programming ILSVRC ImageNet challenge, CNN Architectures image classification, CNN Architectures impurity measures, Making Predictions, Gini Impurity or Entropy? 
in-graph replication, In-Graph Versus Between-Graph Replication inception modules, GoogLeNet Inception-v4, ResNet incremental learning, Online learning, Incremental PCA inequality constraints, SVM Dual Problem inference, Model-based learning, Exercises, Memory Requirements, An Encoder–Decoder Network for Machine Translation info(), Take a Quick Look at the Data Structure information gain, Gini Impurity or Entropy?
Architects of Intelligence by Martin Ford
3D printing, agricultural Revolution, AI winter, algorithmic bias, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, barriers to entry, basic income, Baxter: Rethink Robotics, Bayesian statistics, bitcoin, business intelligence, business process, call centre, cloud computing, cognitive bias, Colonization of Mars, computer vision, correlation does not imply causation, crowdsourcing, DARPA: Urban Challenge, deskilling, disruptive innovation, Donald Trump, Douglas Hofstadter, Elon Musk, Erik Brynjolfsson, Ernest Rutherford, Fellow of the Royal Society, Flash crash, future of work, gig economy, Google X / Alphabet X, Gödel, Escher, Bach, Hans Rosling, ImageNet competition, income inequality, industrial robot, information retrieval, job automation, John von Neumann, Law of Accelerating Returns, life extension, Loebner Prize, Mark Zuckerberg, Mars Rover, means of production, Mitch Kapor, natural language processing, new economy, optical character recognition, pattern recognition, phenotype, Productivity paradox, Ray Kurzweil, recommendation engine, Robert Gordon, Rodney Brooks, Sam Altman, self-driving car, sensor fusion, sentiment analysis, Silicon Valley, smart cities, social intelligence, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, Ted Kaczynski, The Rise and Fall of American Growth, theory of mind, Thomas Bayes, Travis Kalanick, Turing test, universal basic income, Wall-E, Watson beat the top human players on Jeopardy!, women in the workforce, working-age population, zero-sum game, Zipcar
In the end, science won out, and two of my students won a big public competition, and they won it dramatically. They got almost half the error rate of the best computer vision systems, and they were using mainly techniques developed in Yann LeCun’s lab but mixed in with a few of our own techniques as well. MARTIN FORD: This was the ImageNet competition? GEOFFREY HINTON: Yes, and what happened then was what should happen in science. One method that people used to think of as complete nonsense had now worked much better than the method they believed in, and within two years, they all switched. So, for things like object classification, nobody would dream of trying to do it without using a neural network now.
We released the entire 15 million images to the world and started to run international competitions for researchers to work on the ImageNet problems: not on the tiny small-scale problems but on the problems that mattered to humans and applications. Fast-forward to 2012, and I think we see the turning point in object recognition for a lot of people. The winner of the 2012 ImageNet competition created a convergence of ImageNet, GPU computing power, and convolutional neural networks as an algorithm. Geoffrey Hinton wrote a seminal paper that, for me, was Phase One in achieving the holy grail of object recognition. MARTIN FORD: Did you continue this project? FEI-FEI LI: For the next two years, I worked on taking object recognition a step further.
Driverless: Intelligent Cars and the Road Ahead by Hod Lipson, Melba Kurman
AI winter, Air France Flight 447, Amazon Mechanical Turk, autonomous vehicles, backpropagation, barriers to entry, butterfly effect, carbon footprint, Chris Urmson, cloud computing, computer vision, connected car, creative destruction, crowdsourcing, DARPA: Urban Challenge, digital map, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Google Earth, Google X / Alphabet X, high net worth, hive mind, ImageNet competition, income inequality, industrial robot, intermodal, Internet of things, job automation, Joseph Schumpeter, lone genius, Lyft, megacity, Network effects, New Urbanism, Oculus Rift, pattern recognition, performance metric, precision agriculture, RFID, ride hailing / ride sharing, Second Machine Age, self-driving car, Silicon Valley, smart cities, speech recognition, statistical model, Steve Jobs, technoutopianism, Tesla Model S, Travis Kalanick, Uber and Lyft, uber lyft, Unsafe at Any Speed
See also Mid-level controls Consumer acceptance, 11–13 Controls engineering Overview of, 47, 75–77 See also Low-level controls; Mid-level controls; High-level controls Convolutional neural networks (CNNs), 214–218 Corner cases, 4, 5, 89, 154 Creative destruction, 261–263 Crime, 273, 274 DARPA Challenges, 149, 150 DARPA Grand Challenge 2004 DARPA Grand Challenge 2005, 151, 152 DARPA Urban Challenge 2007, 156–158 Data CAN bus protocol, 193, 194 Data collection, 239, 240 Training data for deep learning, 218–220 See also Machine learning; Route-planning software; Traffic prediction software Deep learning History of, 197, 199–202, 219, 223–226 How deep learning works, 7, 8, 226–231 See also ImageNet competition; Neocognitron; Perceptron; SuperVision Demo ’97, 134, 135 Digital cameras, 173–175 Disney Hall, Los Angeles, 36 Disney’s Magic Highway U.S.A. Dog of War, 79 Downtowns, 32–37 Drive by wire, 191–194 Driver assist, 55–58. See also Human in the loop Driverless-car reliability, 98–104, 195–196 Drive-PX, 225 E-commerce, 271, 272 Edge detectors, 229 Electronic Highway History of, 116–120 Reasons for demise, 123, 124 See also General Motors Corporation (GM) Environment.
Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It by Azeem Azhar
23andMe, 3D printing, A Declaration of the Independence of Cyberspace, Ada Lovelace, additive manufacturing, Airbnb, algorithmic trading, Amazon Mechanical Turk, autonomous vehicles, basic income, Berlin Wall, Bernie Sanders, Boeing 737 MAX, Boris Johnson, Bretton Woods, carbon footprint, Chris Urmson, Clayton Christensen, cloud computing, collective bargaining, computer age, computer vision, coronavirus, Covid-19, COVID-19, creative destruction, crowdsourcing, cryptocurrency, cuban missile crisis, Daniel Kahneman / Amos Tversky, David Graeber, David Ricardo: comparative advantage, decarbonisation, deglobalization, deindustrialization, dematerialisation, Diane Coyle, digital map, disinformation, Dissolution of the Soviet Union, Donald Trump, Double Irish / Dutch Sandwich, drone strike, Elon Musk, energy security, Fall of the Berlin Wall, Firefox, Frederick Winslow Taylor, future of work, Garrett Hardin, gender pay gap, gig economy, global pandemic, global supply chain, global value chain, global village, happiness index / gross national happiness, hiring and firing, hockey-stick growth, ImageNet competition, income inequality, independent contractor, industrial robot, intangible asset, Jane Jacobs, Jeff Bezos, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Just-in-time delivery, Kickstarter, knowledge worker, Kodak vs Instagram, Law of Accelerating Returns, low skilled workers, lump of labour, Lyft, manufacturing employment, Mark Zuckerberg, megacity, Mitch Kapor, Network effects, new economy, offshore financial centre, Panopticon Jeremy Bentham, Peter Thiel, price anchoring, RAND corporation, ransomware, Ray Kurzweil, remote working, RFC: Request For Comment, Richard Florida, ride hailing / ride sharing, Robert Bork, Ronald Coase, Ronald Reagan, Sam Altman, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, Social Responsibility of Business Is to Increase Its 
Profits, software as a service, Steve Ballmer, Steve Jobs, Stuxnet, subscription business, TaskRabbit, The Death and Life of Great American Cities, The Future of Employment, The Nature of the Firm, Thomas Malthus, Tragedy of the Commons, Turing machine, Uber and Lyft, Uber for X, uber lyft, universal basic income, uranium enrichment, winner-take-all economy, Yom Kippur War
In 2012, a group of leading AI researchers – Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton – developed a ‘deep convolutional neural network’ which applied deep learning to the kinds of image-sorting tasks that AIs had long struggled with. It was rooted in extraordinary computing clout. The neural network contained 650,000 neurons and 60 million ‘parameters’, settings you could use to tune the system. It was a game-changer. Before AlexNet, as Krizhevsky’s team’s invention was called, most AIs that took on the ImageNet competition had stumbled, for years never scoring higher than 74 per cent. AlexNet had a success rate as high as 87 per cent. Deep learning worked. The triumph of deep learning sparked an AI feeding frenzy. Scientists rushed to build artificial intelligence systems, applying deep neural networks and their derivatives to a vast array of problems: from spotting manufacturing defects to translating between languages; from voice recognition to detecting credit card fraud; from discovering new medicines to recommending the next video we should watch.
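A headline figure like AlexNet's 60 million parameters is just layer-by-layer bookkeeping: each convolutional filter and each fully connected unit contributes a fixed number of tunable settings. A sketch of that arithmetic (the layer sizes below are a small made-up network, not AlexNet's actual configuration):

```python
def conv_params(kernel_h, kernel_w, channels_in, channels_out):
    """Weights in one convolutional layer: a kernel_h x kernel_w x channels_in
    filter per output channel, plus one bias per output channel."""
    return (kernel_h * kernel_w * channels_in + 1) * channels_out

def dense_params(units_in, units_out):
    """Weights in one fully connected layer, plus one bias per output unit."""
    return (units_in + 1) * units_out

# Totals for an invented three-layer network, to show how the counts add up:
total = (
    conv_params(3, 3, 3, 64)            # 3x3 conv over an RGB input
    + conv_params(3, 3, 64, 128)        # 3x3 conv over 64 feature maps
    + dense_params(128 * 8 * 8, 1000)   # classifier over flattened 8x8 maps
)
print(total)
```

Note how the fully connected layer dominates the total, one reason early deep networks like AlexNet concentrated most of their millions of parameters in the final dense layers.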
Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again by Eric Topol
23andMe, Affordable Care Act / Obamacare, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic bias, artificial general intelligence, augmented reality, autonomous vehicles, backpropagation, bioinformatics, blockchain, cloud computing, cognitive bias, Colonization of Mars, computer age, computer vision, conceptual framework, creative destruction, crowdsourcing, Daniel Kahneman / Amos Tversky, dark matter, David Brooks, digital twin, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, fault tolerance, George Santayana, Google Glasses, ImageNet competition, Jeff Bezos, job automation, job satisfaction, Joi Ito, Mark Zuckerberg, medical residency, meta-analysis, microbiome, natural language processing, new economy, Nicholas Carr, nudge unit, pattern recognition, performance metric, personalized medicine, phenotype, placebo effect, randomized controlled trial, recommendation engine, Rubik’s Cube, Sam Altman, self-driving car, Silicon Valley, speech recognition, Stephen Hawking, text mining, the scientific method, Tim Cook: Apple, War on Poverty, Watson beat the top human players on Jeopardy!, working-age population
IMAGES ImageNet exemplified an adage about AI: datasets—not algorithms—might be the key limiting factor of human-level artificial intelligence.39 When Fei-Fei Li, a computer scientist now at Stanford and half time at Google, started ImageNet in 2007, she bucked the idea that algorithms ideally needed nurturing from Big Data and instead pursued the in-depth annotation of images. She recognized it wasn’t about Big Data; it was about carefully, extensively labeled Big Data. A few years ago, she said, “I consider the pixel data in images and video to be the dark matter of the Internet.”40 Many different convolutional DNNs were used to classify the images, with annual ImageNet Challenge contests to recognize the best (such as AlexNet, GoogLeNet, VGG Net, and ResNet). Figure 4.6 shows the progress in reducing the error rate over several years; by the time the ImageNet Challenge wrapped up in 2017, machine performance in image recognition was significantly better than human performance. The error rate fell from 30 percent in 2010 to 4 percent in 2016.