19 results
A Declaration of the Independence of Cyberspace, AI winter, airport security, Apple II, artificial general intelligence, augmented reality, autonomous vehicles, Baxter: Rethink Robotics, Bill Duvall, bioinformatics, Brewster Kahle, Burning Man, call centre, cellular automata, Chris Urmson, Claude Shannon: information theory, Clayton Christensen, clean water, cloud computing, collective bargaining, computer age, computer vision, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data acquisition, Dean Kamen, deskilling, don't be evil, Douglas Engelbart, Douglas Hofstadter, Dynabook, Edward Snowden, Elon Musk, Erik Brynjolfsson, factory automation, From Mathematics to the Technologies of Life and Death, future of work, Galaxy Zoo, Google Glasses, Google X / Alphabet X, Grace Hopper, Gödel, Escher, Bach, Hacker Ethic, haute couture, hive mind, hypertext link, indoor plumbing, industrial robot, information retrieval, Internet Archive, Internet of things, invention of the wheel, Jacques de Vaucanson, Jaron Lanier, Jeff Bezos, job automation, John Conway, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, knowledge worker, Kodak vs Instagram, labor-force participation, loose coupling, Mark Zuckerberg, Marshall McLuhan, medical residency, Menlo Park, Mother of all demos, natural language processing, new economy, Norbert Wiener, PageRank, pattern recognition, pre–internet, RAND corporation, Ray Kurzweil, Richard Stallman, Robert Gordon, Rodney Brooks, Sand Hill Road, Second Machine Age, self-driving car, semantic web, shareholder value, side project, Silicon Valley, Silicon Valley startup, Singularitarianism, skunkworks, Skype, social software, speech recognition, stealth mode startup, Stephen Hawking, Steve Ballmer, Steve Jobs, Steve Wozniak, Steven Levy, Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Nelson, telemarketer, telepresence, 
telepresence robot, Tenerife airport disaster, The Coming Technological Singularity, the medium is the message, Thorstein Veblen, Turing test, Vannevar Bush, Vernor Vinge, Watson beat the top human players on Jeopardy!, Whole Earth Catalog, William Shockley: the traitorous eight
Winograd walked away from AI in part because of a series of challenging conversations with a group of philosophers at the University of California. He was part of a small group of AI researchers who met in weekly seminars with Berkeley philosophers Hubert Dreyfus and John Searle. The philosophers convinced him that there were real limits to the capabilities of intelligent machines. Winograd’s conversion coincided with the collapse of the nascent artificial intelligence industry, a downturn that became known as the “AI Winter.” Several decades later Winograd, who was faculty advisor to Google cofounder Larry Page at Stanford, famously counseled the young graduate student to focus on the problem of Web search rather than on self-driving cars. In the intervening decades Winograd had become acutely aware of the importance of the designer’s point of view. The separation of the fields of AI and human-computer interaction, or HCI, is partly a question of approach, but it is also an ethical stance about designing humans either into or out of the systems we create.
When the commercial market for artificial intelligence software failed to materialize quickly enough, Breiner struggled inside the company, most bitterly with board member Pierre Lamond, a venture capitalist who was a veteran of the semiconductor industry with no software experience. Ultimately Breiner lost his battle, and Lamond brought in an outside corporate manager who moved the company headquarters to Texas, where the manager lived. Syntelligence itself would confront directly what would become known as the “AI Winter.” One by one, the artificial intelligence firms of the early 1980s were eclipsed, either because they failed financially or because they returned to their roots as experimental efforts or consulting companies. The market failure became an enduring narrative that came to define artificial intelligence: a repeated cycle of hype and failure, fueled by overly ambitious scientific claims that are inevitably followed by performance and market disappointments.
A generation of true believers, steeped in the technocratic and optimistic artificial intelligence literature of the 1960s, clearly played an early part in the collapse. Since then the same boom-and-bust cycle has continued for decades, even as AI has advanced.38 Today the cycle is likely to repeat itself as a new wave of artificial intelligence technologies is heralded by some as being on the cusp of offering “thinking machines.” The first AI Winter had actually come a decade earlier in Europe. Sir Michael James Lighthill, a British applied mathematician, led a study in 1973 that excoriated the field for not delivering on its promises and predictions, such as the early SAIL prediction of a working artificial intelligence within a decade. Although it had little impact in the United States, the Lighthill report, “Artificial Intelligence: A General Survey,” led to the curtailment of funding in England and a dispersal of British researchers from the field.
3D printing, Ada Lovelace, AI winter, Airbnb, artificial general intelligence, augmented reality, barriers to entry, bitcoin, blockchain, brain emulation, Buckminster Fuller, cloud computing, computer age, computer vision, correlation does not imply causation, credit crunch, cryptocurrency, cuban missile crisis, dematerialisation, discovery of the americas, disintermediation, don't be evil, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, everywhere but in the productivity statistics, Flash crash, friendly AI, Google Glasses, industrial robot, Internet of things, invention of agriculture, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, life extension, low skilled workers, Mahatma Gandhi, means of production, mutually assured destruction, Nicholas Carr, pattern recognition, Peter Thiel, Ray Kurzweil, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, Silicon Valley ideology, Skype, South Sea Bubble, speech recognition, Stanislav Petrov, Stephen Hawking, Steve Jobs, strong AI, technological singularity, theory of mind, Turing machine, Turing test, universal basic income, Vernor Vinge, wage slave, Wall-E
Herbert Simon said in 1965 that “machines will be capable, within twenty years, of doing any work a man can do,” (5) and Marvin Minsky said two years later that “Within a generation . . . the problem of creating ‘artificial intelligence’ will substantially be solved.” (6) These were hugely over-optimistic claims which, with hindsight, look like hubris. But hindsight is a wonderful thing, and it is unfair to criticise the pioneers of AI harshly for under-estimating the difficulty of replicating the feats of which the human brain is capable.

AI winters and springs

When it became apparent that AI was going to take much longer to achieve its goals than originally expected, the funding taps were turned off. There were rumblings of discontent among funding bodies from the late 1960s, and they crystallised in a report written in 1973 by mathematician James Lighthill for the British Science Research Council. A particular problem identified in the Lighthill report is the “combinatorial problem”, whereby a simple problem involving two or three variables becomes vast and possibly intractable when the number of variables is increased.
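The combinatorial problem is easy to make concrete. The sketch below is my own illustration, not an example from the Lighthill report: an exhaustive search over n yes/no variables must examine 2^n assignments, so every added variable doubles the work.

```python
from itertools import product

def assignments(n_vars: int) -> int:
    """Count the truth assignments an exhaustive search over n binary
    variables would have to examine: 2 ** n_vars of them."""
    return sum(1 for _ in product([False, True], repeat=n_vars))

# Each extra variable doubles the search space:
for n in (3, 10, 20):
    print(n, assignments(n))  # 3 -> 8, 10 -> 1024, 20 -> 1048576
```

Three variables are trivial; twenty already mean over a million cases, which is why programs that looked impressive on toy problems stalled on real-world ones.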
Thus simple AI applications which looked impressive in laboratory settings became useless when applied to real-world situations. From 1974 until around 1980 it was very hard for AI researchers to obtain funding, and this period of relative inactivity became known as the first AI winter. This bust was followed in the 1980s by another boom, thanks to the advent of expert systems and the Japanese Fifth Generation Computer Systems project. Expert systems limit themselves to solving narrowly-defined problems from single domains of expertise (for instance, litigation) using vast data banks. They avoid the messy complications of everyday life, and do not tackle the perennial problem of trying to inculcate common sense.
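The expert-systems approach described above can be sketched in a few lines: encode narrow-domain knowledge as IF-THEN rules and apply them by forward chaining until nothing new can be concluded. The rules here are invented toy examples (equipment diagnosis), standing in for the thousands of hand-written rules a real 1980s system carried.

```python
# Invented toy rules in one narrow domain (equipment diagnosis).
# Each rule: (set of required facts, fact to conclude).
RULES = [
    ({"no_power"}, "check_fuse"),
    ({"check_fuse", "fuse_intact"}, "check_cable"),
]

def forward_chain(facts: set) -> set:
    """Repeatedly fire any rule whose conditions all hold,
    until no rule adds a new fact."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"no_power", "fuse_intact"}))
```

The narrowness is visible in the code: the system knows nothing outside its rule base, which is exactly why such programs avoided the messy complications of everyday life.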
The reason was (again) the under-estimation of the difficulties of the tasks being addressed, and also the fact that desktop computers and what we now call servers overtook mainframes in speed and power, rendering very expensive legacy machines redundant. The boom and bust phenomenon was familiar to economists, with famous examples being Tulipmania in 1637 and the South Sea Bubble in 1720. It has also been a feature of technology introduction since the industrial revolution, seen in canals, railways and telecoms, as well as in the dot-com bubble of the late 1990s. The second AI winter thawed in the early 1990s, and AI research has been increasingly well funded since then. Some people are worried that the present excitement (and concern) about the progress in AI is merely the latest boom phase, characterised by hype and alarmism, and will shortly be followed by another damaging bust, in which thousands of AI researchers will find themselves out of a job, promising projects will be halted, and important knowledge and insights lost.
The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil
additive manufacturing, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anthropic principle, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, augmented reality, autonomous vehicles, Benoit Mandelbrot, Bill Joy: nanobots, bioinformatics, brain emulation, Brewster Kahle, Brownian motion, business intelligence, c2.com, call centre, carbon-based life, cellular automata, Claude Shannon: information theory, complexity theory, conceptual framework, Conway's Game of Life, cosmological constant, cosmological principle, cuban missile crisis, data acquisition, Dava Sobel, David Brooks, Dean Kamen, disintermediation, double helix, Douglas Hofstadter, en.wikipedia.org, epigenetics, factory automation, friendly AI, George Gilder, Gödel, Escher, Bach, informal economy, information retrieval, invention of the telephone, invention of the telescope, invention of writing, Isaac Newton, iterative process, Jaron Lanier, Jeff Bezos, job automation, job satisfaction, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, linked data, Loebner Prize, Louis Pasteur, mandelbrot fractal, Mikhail Gorbachev, mouse model, Murray Gell-Mann, mutually assured destruction, natural language processing, Network effects, new economy, Norbert Wiener, oil shale / tar sands, optical character recognition, pattern recognition, phenotype, premature optimization, randomized controlled trial, Ray Kurzweil, remote working, reversible computing, Richard Feynman, Rodney Brooks, Search for Extraterrestrial Intelligence, semantic web, Silicon Valley, Singularitarianism, speech recognition, statistical model, stem cell, Stephen Hawking, Stewart Brand, strong AI, superintelligent machines, technological singularity, Ted Kaczynski, telepresence, The Coming Technological Singularity, transaction costs, Turing machine, Turing test, Vernor Vinge, Y2K, Yogi Berra
Duane Rettig wrote: "... companies rode the great AI wave in the early 80's, when large corporations poured billions of dollars into the AI hype that promised thinking machines in 10 years. When the promises turned out to be harder than originally thought, the AI wave crashed, and Lisp crashed with it because of its association with AI. We refer to it as the AI Winter." Duane Rettig quoted in "AI Winter," http://c2.com/cgi/wiki?AiWinter. 163. The General Problem Solver (GPS) computer program, written in 1957, was able to solve problems through rules that allowed the GPS to divide a problem's goals into subgoals, and then check if obtaining a particular subgoal would bring the GPS closer to solving the overall goal. In the early 1960s Thomas Evans wrote ANALOGY, a "program [that] solves geometric-analogy problems of the form A:B::C:? taken from IQ tests and college entrance exams." Boicho Kokinov and Robert M.
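The GPS strategy described in that note, splitting a goal into subgoals and working on whichever subgoal closes the gap, is essentially means-ends analysis. A toy sketch in that spirit follows; the operators, state names, and tea-making example are my own inventions for illustration, not GPS's actual rules or code.

```python
# Invented toy operators: name -> (preconditions, effects).
OPERATORS = {
    "boil_water": ({"have_water"}, {"hot_water"}),
    "add_leaves": ({"hot_water"}, {"tea"}),
}

def achieve(state: frozenset, goals: set, plan: list) -> frozenset:
    """Means-ends sketch: for each unmet subgoal, pick an operator whose
    effects include it, recursively achieve that operator's preconditions
    (they become subgoals), then apply it."""
    for g in goals - state:
        name, (pre, add) = next(
            (n, op) for n, op in OPERATORS.items() if g in op[1]
        )
        state = achieve(state, pre, plan)  # preconditions become subgoals
        state = state | add                # apply the operator
        plan.append(name)
    return state

plan: list = []
achieve(frozenset({"have_water"}), {"tea"}, plan)
print(plan)  # -> ['boil_water', 'add_leaves']
```

Each operator applied brings the state measurably closer to the goal, which is the "closer to solving the overall goal" check the note describes.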
Shaw, and Herbert Simon, which was able to find proofs for theorems that had stumped mathematicians such as Bertrand Russell, and early programs from the MIT Artificial Intelligence Laboratory, which could answer SAT questions (such as analogies and story problems) at the level of college students.163 A rash of AI companies sprang up in the 1970s, but when profits did not materialize there was an AI "bust" in the 1980s, which has become known as the "AI winter." Many observers still think that the AI winter was the end of the story and that nothing has since come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry. Most of these applications were research projects ten to fifteen years ago. People who ask, "Whatever happened to AI?" remind me of travelers to the rain forest who wonder, "Where are all the many species that are supposed to live here?"
Heather Havenstein writes that the "inflated notions spawned by science fiction writers about the convergence of humans and machines tarnished the image of AI in the 1980s because AI was perceived as failing to live up to its potential." Heather Havenstein, "Spring Comes to AI Winter: A Thousand Applications Bloom in Medicine, Customer Service, Education and Manufacturing," Computerworld, February 14, 2005, http://www.computerworld.com/softwaretopics/software/story/0,10801,99691,00.html. This tarnished image led to "AI Winter," defined as "a term coined by Richard Gabriel for the (circa 1990–94?) crash of the wave of enthusiasm for the AI language Lisp and AI itself, following a boom in the 1980s." Duane Rettig wrote: "... companies rode the great AI wave in the early 80's, when large corporations poured billions of dollars into the AI hype that promised thinking machines in 10 years.
Nerds on Wall Street: Math, Machines and Wired Markets by David J. Leinweber
AI winter, algorithmic trading, asset allocation, banking crisis, barriers to entry, Big bang: deregulation of the City of London, butterfly effect, buttonwood tree, buy low sell high, capital asset pricing model, citizen journalism, collateralized debt obligation, corporate governance, Craig Reynolds: boids flock, credit crunch, Credit Default Swap, credit default swaps / collateralized debt obligations, Danny Hillis, demand response, disintermediation, distributed generation, diversification, diversified portfolio, Emanuel Derman, en.wikipedia.org, experimental economics, financial innovation, Gordon Gekko, implied volatility, index arbitrage, index fund, information retrieval, Internet Archive, John Nash: game theory, Khan Academy, load shedding, Long Term Capital Management, Machine translation of "The spirit is willing, but the flesh is weak." to Russian and back, market fragmentation, market microstructure, Mars Rover, moral hazard, mutually assured destruction, natural language processing, Network effects, optical character recognition, paper trading, passive investing, pez dispenser, phenotype, prediction markets, quantitative hedge fund, quantitative trading / quantitative ﬁnance, QWERTY keyboard, RAND corporation, random walk, Ray Kurzweil, Renaissance Technologies, Richard Stallman, risk tolerance, risk-adjusted returns, risk/return, Ronald Reagan, semantic web, Sharpe ratio, short selling, Silicon Valley, Small Order Execution System, smart grid, smart meter, social web, South Sea Bubble, statistical arbitrage, statistical model, Steve Jobs, Steven Levy, Tacoma Narrows Bridge, the scientific method, The Wisdom of Crowds, time value of money, too big to fail, transaction costs, Turing machine, Upton Sinclair, value at risk, Vernor Vinge, yield curve, Yogi Berra
Figure 2.4 Overly exuberant Wall Street Computer Review covers. Source: Wall Street Computer Review (now Wall Street & Technology), June 1987 and June 1990. Figure 2.5 The AI industry apologizes to the world, sort of. Source: Published with permission from Parallel Simulation Technology, LLC. All rights reserved. Marvin Minsky, MIT’s AI übermaven, declared an “AI winter” at the major AI conference of 1987. It got so bad that one AI vendor used the image in Figure 2.5 at the same venue. I don’t mean to pick on expert systems or any particular technology here. Neural nets, wavelets, chaos, genetic algorithms, fuzzy logic, and any number of others could be tarred with the same brush used here (though without the charming illustrations). Nor do I mean to imply that these ideas are utterly without merit.
The ability to change large, complex data structures on the fly allowed LISP to deal with the complexity of problems like symbolic integration, but the need to clean up after those changes created the need for garbage collection.2 When we ran our first, very simple LISP trading systems demonstrations (crossover rules, for the most part) using recorded data for our visitors from Wall Street, we saw their eyes glaze over when, in the middle of the simulated run, the machine would take a break for a few minutes and we would offer more coffee. My colleague Dale Prouty, a brilliant Caltech Ph.D. physicist whose metabolism seemed to make his own caffeine, and I quickly realized there was no way LISP systems would fit in trading. Similar realizations, in other contexts, contributed to the AI winter, described in Chapter 2. Dale had heard that PaineWebber’s equity block desk was looking for proposals for an “intelligent hedging advisory system” for the desk. Ideally, the block traders would “go home flat,” with no net long or short exposure to the market, to sectors, or to other common equity factors. This was not always possible, so the firm had more overnight risk exposure than it wanted.
Tight integration with both market data and electronic execution channels, combined with an appropriate, accessible user interface and a high level of support, contributed to a major AI success story with MarketMind/QuantEx. The transactions flowing through these systems produced more revenue on a busy day than many other AI applications generated over their entire operational lifetimes.

Prehistory of Artificial Intelligence on Wall Street

Summer 1987. AI godfather Marvin Minsky warns American Association for Artificial Intelligence (AAAI) Conference attendees in Seattle that “the AI winter will soon be upon us.” This isn’t news to most of them. Many of the pioneer firms have been pared down to near invisibility. AI stocks have dropped so low that Ferraris are being traded in for Yugos in Palo Alto and Cambridge. On Wall Street, the expert systems that were last year’s breakthrough of the century are this year’s R&D write-off. LISP3 machines can be had in lower Manhattan for 10 cents on the dollar.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, anti-communist, artificial general intelligence, autonomous vehicles, barriers to entry, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, demographic transition, Douglas Hofstadter, Drosophila, Elon Musk, en.wikipedia.org, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, Gödel, Escher, Bach, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John von Neumann, knowledge worker, Menlo Park, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Norbert Wiener, NP-complete, nuclear winter, optical character recognition, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, strong AI, superintelligent machines, supervolcano, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, transaction costs, Turing machine, Vernor Vinge, Watson beat the top human players on Jeopardy!, World Values Survey
The Fifth-Generation Project failed to meet its objectives, as did its counterparts in the United States and Europe. A second AI winter descended. At this point, a critic could justifiably bemoan “the history of artificial intelligence research to date, consisting always of very limited success in particular areas, followed immediately by failure to reach the broader goals at which these initial successes seem at first to hint.”21 Private investors began to shun any venture carrying the brand of “artificial intelligence.” Even among academics and their funders, “AI” became an unwanted epithet.22 Technical work continued apace, however, and by the 1990s, the second AI winter gradually thawed. Optimism was rekindled by the introduction of new techniques, which seemed to offer alternatives to the traditional logicist paradigm (often referred to as “Good Old-Fashioned Artificial Intelligence,” or “GOFAI” for short), which had focused on high-level symbol manipulation and which had reached its apogee in the expert systems of the 1980s.
The performance of these early systems also suffered because of poor methods for handling uncertainty, reliance on brittle and ungrounded symbolic representations, data scarcity, and severe hardware limitations on memory capacity and processor speed. By the mid-1970s, there was a growing awareness of these problems. The realization that many AI projects could never make good on their initial promises led to the onset of the first “AI winter”: a period of retrenchment, during which funding decreased and skepticism increased, and AI fell out of fashion. A new springtime arrived in the early 1980s, when Japan launched its Fifth-Generation Computer Systems Project, a well-funded public–private partnership that aimed to leapfrog the state of the art by developing a massively parallel computing architecture that would serve as a platform for artificial intelligence.
The brain-like qualities of neural networks contrasted favorably with the rigidly logic-chopping but brittle performance of traditional rule-based GOFAI systems—enough so to inspire a new “-ism,” connectionism, which emphasized the importance of massively parallel sub-symbolic processing. More than 150,000 academic papers have since been published on artificial neural networks, and they continue to be an important approach in machine learning. Evolution-based methods, such as genetic algorithms and genetic programming, constitute another approach whose emergence helped end the second AI winter. It made perhaps a smaller academic impact than neural nets but was widely popularized. In evolutionary models, a population of candidate solutions (which can be data structures or programs) is maintained, and new candidate solutions are generated randomly by mutating or recombining variants in the existing population. Periodically, the population is pruned by applying a selection criterion (a fitness function) that allows only the better candidates to survive into the next generation.
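The evolutionary loop described above, maintain a population, generate new candidates by mutation and recombination, and prune with a fitness function, fits in a page of code. A minimal sketch follows; the OneMax task (maximize the number of 1-bits) and all parameters are my own illustrative choices, not drawn from any particular genetic-algorithm system.

```python
import random

random.seed(0)
BITS = 20  # length of each candidate solution

def fitness(bits):                 # the selection criterion
    return sum(bits)               # OneMax: count the 1-bits

def mutate(bits):                  # random variation of one candidate
    child = bits[:]
    child[random.randrange(BITS)] ^= 1
    return child

def crossover(a, b):               # recombine two parents at a random cut
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(BITS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # new candidates via mutation and recombination of existing ones
        children = [mutate(random.choice(pop)) for _ in range(pop_size)]
        children += [crossover(random.choice(pop), random.choice(pop))
                     for _ in range(pop_size)]
        # prune: only the fitter candidates survive to the next generation
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

On this toy task the population climbs quickly toward the all-ones string; real applications differ only in the encoding of candidates and the fitness function.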
Affordable Care Act / Obamacare, AI winter, Airbnb, Atul Gawande, Captain Sullenberger Hudson, Checklist Manifesto, Clayton Christensen, collapse of Lehman Brothers, computer age, crowdsourcing, deskilling, en.wikipedia.org, Erik Brynjolfsson, everywhere but in the productivity statistics, Firefox, Frank Levy and Richard Murnane: The New Division of Labor, Google Glasses, Ignaz Semmelweis: hand washing, Internet of things, job satisfaction, Joseph Schumpeter, knowledge worker, medical malpractice, medical residency, Menlo Park, minimum viable product, natural language processing, Network effects, Nicholas Carr, obamacare, pattern recognition, personalized medicine, pets.com, Productivity paradox, Ralph Nader, RAND corporation, Second Machine Age, self-driving car, Silicon Valley, Silicon Valley startup, six sigma, Skype, Snapchat, software as a service, Steve Jobs, Steven Levy, the payments system, The Wisdom of Crowds, Toyota Production System, Uber for X, Watson beat the top human players on Jeopardy!, Yogi Berra
Locations are approximate in e-readers, and you may need to page down one or more times after clicking a link to get to the indexed material. accountable care organizations, 59, 188 ACOs. See accountable care organizations Adams, Timothy, 231 Adler-Milstein, Julia, 248 Affordable Care Act (ACA), 15, 16, 239 See also Obamacare AI. See artificial intelligence (AI) AI winter, 102, 107 AIDS activists, 195–196 alerts, 134, 143–153, 251 ignoring, 135–137 ways to safely reduce number of alerts, 145–146 Althaus, Deb, 86 Altmann, Erik, 83 American College of Surgeons, 36 APIs, 192–193, 216 application programming interfaces. See APIs Arenson, Ron, 59, 61 Arizona General Hospital, 73 artificial intelligence (AI) AI winter, 102, 107 computers replacing the physician’s brain, 93–104 little AI, 113 See also medical AI athenahealth, 89, 215, 217, 226–231, 233 Augmedix, 180, 240, 241, 242 auscultation, 32, 33 automation hazards of overreliance on, 162–163 irony of, 162 Avrin, David, 50, 51 Bainbridge, Lisanne, 162 Baker, Stephen, 53 bar-coded medication administration (BCMA), 130 Baron, Richard, 68–69, 76–77 Batalden, Paul, 19 Bates, David, 222, 224 Baucus, Max, 16 Bayes, Thomas, 99 Bayes’ theorem, 99 bedside teaching, 35 Bell, Joseph, 97 Benioff, Marc, 233 Berwick, Don, 232 Beth Israel Deaconess Medical Center, 172, 176, 178, 186, 231 big data, 7, 115–123 biometric identifiers, 190 Birkmeyer, John, 79 Blair, Tony, 10, 17 blood tests, 32 bloodletting, 33 Bloom, Paul, 156 Blumenfeld, Barry, 67 Blumenthal, David, 15, 205–208, 235–236, 243–244, 268 Bolten, Josh, 11–12 Boston Children’s Hospital, 144 Brailer, David, 10–14, 18–19, 68, 207, 212 Brigham & Women’s Hospital, 87, 88, 187 Brown, Eric, 103, 118–119, 123 Brynjolfsson, Erik, 94, 250 productivity paradox, 244–245 bundled payments, 59 Burton, Matthew, 1–8, 10, 113 Bush, George W., 9–10, 11–12, 17 Bush, Jonathan, 89, 226–233 Carr, Nicholas, 275 case-mix adjustment, 40 Cedars-Sinai Medical Center, 67–68 CellScope, 240–241, 242 
Cerner, 8, 86, 187, 222, 231 Chan, Benjamin, 139–141, 149–153, 155–157 Chang, Paul, 53, 62 the chart, 44–45 The Checklist Manifesto (Gawande), 121–122 Christensen, Clay, 12, 61, 217, 229 clinical research, 263–264 clinical trials, 33 clinicopathologic correlation, 31 Clinton, Hillary, 11 Clinton, William “Bill”, 9, 189 Code Blue, 2–4 Codman, Ernest, 36 cognitive computing, 146 cognitive load, 150–151 complementary innovations, 245 computer systems, replacing the physician’s brain, 93–104 computerized decision support for clinicians, 248, 251, 260 computerized provider order entry (CPOE), 130 “Connecting for Health” initiative, 10, 17 cookbook medicine, 120 Cramer, Jim, 233 creative destruction, 250–251 The Creative Destruction of Medicine (Topol), 250 CT scans, 50–51 quality of images, 52–53 stacking, 53 data.
The attitude was captured by one early computing pioneer in a 1971 paean to his computer: “It is immune from fatigue and carelessness; and it works day and night, weekends and holidays, without coffee breaks, overtime, fringe benefits or human courtesy.” By the mid-1980s, disappointment had set in. The tools that had seemed so promising a decade earlier were, by and large, unable to manage the complexity of clinical medicine, and they garnered few clinician advocates and minuscule commercial adoption. The medical AI movement skidded to a halt, marking the start of a 20-year period that insiders still refer to as the “AI winter.” Ted Shortliffe, one of the field’s longstanding leaders, has said that the early experience with programs like INTERNIST, DXplain, and MYCIN reminded him of a cartoon that showed an obviously distressed patient who had just been interviewed by a physician. Evidently the poor man had come from an archery field, because protruding from his back was a two-foot-long arrow. The doctor had turned to his office computer and, after examining the screen, proclaimed, “Rapid pulse, sweating, shallow breathing… .
First, during Isabel’s hospitalization, he was amazed that the doctors weren’t using computers to prompt them to consider all possible diagnoses. Second, “medicine is beautifully written down. There are very few industries that have textbooks, journals, and all this data on the Internet.” And third, “from my background in finance, I knew about some clever searching software.” I asked Maude whether he was aware of the AI winter—the junkyard of failed computerized diagnostic programs built in the 1970s and 1980s—when he decided to build a medical AI program. He was, but he tried using one of the existing programs and thought, “Well, it’s pretty rubbish.” “I think I’m sort of naturally bloody-minded,” he said, using the British slang for cantankerous. “I thought, ‘I’m just going to make Isabel better.’” Joining the long line of people who have underappreciated the complexity of this problem, he predicted that his quest would take “maybe three to five years.”
3D printing, AI winter, Amazon Web Services, artificial general intelligence, Automated Insights, Bernie Madoff, Bill Joy: nanobots, brain emulation, cellular automata, cloud computing, cognitive bias, computer vision, cuban missile crisis, Daniel Kahneman / Amos Tversky, Danny Hillis, data acquisition, don't be evil, Extropian, finite state, Flash crash, friendly AI, friendly fire, Google Glasses, Google X / Alphabet X, Isaac Newton, Jaron Lanier, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, Loebner Prize, lone genius, mutually assured destruction, natural language processing, Nicholas Carr, optical character recognition, PageRank, pattern recognition, Peter Thiel, prisoner's dilemma, Ray Kurzweil, Rodney Brooks, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, smart grid, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, Stuxnet, superintelligent machines, technological singularity, The Coming Technological Singularity, traveling salesman, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, zero day
Or could something stop it? AGI defeaters cluster around two ideas: economics and software complexity. The first, economics, considers that funds won’t be available to get from narrow AI to the far more complex and powerful cognitive architectures of AGI. Few AGI efforts are well funded. This prompts a subset of researchers to feel that their field is stuck in the endless stall of a so-called AI winter. They’ll escape if the government or a corporation like IBM or Google considers AGI a priority of the first order, and undertakes a Manhattan Project–sized effort to achieve it. During World War II, fast-tracking atomic weapons development cost the U.S. government about $2 billion in today’s valuation, and employed around 130,000 people. The Manhattan Project frequently comes up among researchers who want to achieve AGI soon.
Barring some other bottleneck, the world’s economy will be driven by the creation of strong artificial intelligence, and fueled by the growing global apprehension of all the ways it will change our lives. Up ahead we’ll explore another critical roadblock—software complexity. We’ll find out if the challenge of creating software architectures that match human-level intelligence is just too difficult to conquer, and whether or not all that stretches out ahead is a perpetual AI winter. Chapter Twelve The Last Complication How can we be so confident that we will build superintelligent machines? Because the progress of neuroscience makes it clear that our wonderful minds have a physical basis, and we should have learned by now that our technology can do anything that’s physically possible. IBM’s Watson, playing Jeopardy! as skillfully as human champions, is a significant milestone and illustrates the progress of machine language processing.
But today, if you suddenly removed all AI from these industries, you couldn’t get a loan, your electricity wouldn’t work, your car wouldn’t go, and most trains and subways would stop. Drug manufacturing would creak to a halt, faucets would run dry, and commercial jets would drop from the sky. Grocery stores wouldn’t be stocked, and stocks couldn’t be bought. And when were all these AI systems implemented? During the last thirty years, the so-called AI winter, a term used to describe a long decline in investor confidence, after early, overly optimistic AI predictions proved false. But there was no real winter. To avoid the stigma of the label “artificial intelligence,” scientists used more technical names like machine learning, intelligent agents, probabilistic inference, advanced neural networks, and more. And still the attribution problem continues.
3D printing, additive manufacturing, Affordable Care Act / Obamacare, AI winter, algorithmic trading, Amazon Mechanical Turk, artificial general intelligence, autonomous vehicles, banking crisis, Baxter: Rethink Robotics, Bernie Madoff, Bill Joy: nanobots, call centre, Capital in the Twenty-First Century by Thomas Piketty, Chris Urmson, Clayton Christensen, clean water, cloud computing, collateralized debt obligation, computer age, debt deflation, deskilling, diversified portfolio, Erik Brynjolfsson, factory automation, financial innovation, Flash crash, Fractional reserve banking, Freestyle chess, full employment, Goldman Sachs: Vampire Squid, High speed trading, income inequality, indoor plumbing, industrial robot, informal economy, iterative process, Jaron Lanier, job automation, John Maynard Keynes: technological unemployment, John von Neumann, Khan Academy, knowledge worker, labor-force participation, labour mobility, liquidity trap, low skilled workers, low-wage service sector, Lyft, manufacturing employment, McJob, moral hazard, Narrative Science, Network effects, new economy, Nicholas Carr, Norbert Wiener, optical character recognition, passive income, performance metric, Peter Thiel, plutocrats, post scarcity, precision agriculture, price mechanism, Ray Kurzweil, rent control, rent-seeking, reshoring, RFID, Richard Feynman, Rodney Brooks, secular stagnation, self-driving car, Silicon Valley, Silicon Valley startup, single-payer health, software is eating the world, sovereign wealth fund, speech recognition, Spread Networks laid a new fibre optics cable between New York and Chicago, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steven Levy, Steven Pinker, strong AI, Stuxnet, technological singularity, telepresence, telepresence robot, The Bell Curve by Richard Herrnstein and Charles Murray, The Coming Technological Singularity, Thomas L Friedman, too big to fail, Tyler Cowen: Great Stagnation, union organizing, Vernor Vinge, very high income, Watson beat the top human players on Jeopardy!, women in the workforce
Fascination with the idea of building a true thinking machine traces its origin at least as far back as 1950, when Alan Turing published the paper that ushered in the field of artificial intelligence. In the decades that followed, AI research was subjected to a boom-and-bust cycle in which expectations repeatedly soared beyond any realistic technical foundation, especially given the speed of the computers available at the time. When disappointment inevitably followed, investment and research activity collapsed and long, stagnant periods that have come to be called “AI winters” ensued. Spring has once again arrived, however. The extraordinary power of today’s computers combined with advances in specific areas of AI research, as well as in our understanding of the human brain, are generating a great deal of optimism. James Barrat, the author of a recent book on the implications of advanced AI, conducted an informal survey of about two hundred researchers in human-level, rather than merely narrow, artificial intelligence.
Never before have such deep-pocketed corporations viewed artificial intelligence as absolutely central to their business models—and never before has AI research been positioned so close to the nexus of competition between such powerful entities. A similar competitive dynamic is unfolding among nations. AI is becoming indispensable to militaries, intelligence agencies, and the surveillance apparatus in authoritarian states.* Indeed, an all-out AI arms race might well be looming in the near future. The real question, I think, is not whether the field as a whole is in any real danger of another AI winter but, rather, whether progress remains limited to narrow AI or ultimately expands to Artificial General Intelligence as well. If AI researchers do eventually manage to make the leap to AGI, there is little reason to believe that the result will be a machine that simply matches human-level intelligence. Once AGI is achieved, Moore’s Law alone would likely soon produce a computer that exceeded human intellectual capability.
INDEX ABB Group, 10 accelerometer, 4–5 ACFR. See Australian Centre for Field Robotics (ACFR) adaptive learning systems, 143 Adenhart, Nick, 83 Ad Hoc Committee on the Triple Revolution, 30–31 administrative costs, higher education, 140–141 Aethon, Inc., 154 Affordable Care Act, 151, 165, 167n, 168n, 204, 279 aging populations, xvii, 220–223, 224 agriculture, ix, x–xi, 23–26 AI. See artificial intelligence (AI) “AI winters,” 231 Alaska, annual dividend, 268 algorithms acceleration in development of, 71 automated trading, 56, 113–115 increasing efficiency of, 64 machine learning, 89, 93, 100–101, 107–115, 130–131 threat to jobs, xv, 85–86 alien invasion parable, 194–196, 240 “All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines” (Carr), 254 all-payer ceiling, 168–169 all-payer rates, 167–169 Amazon.com, 16–17, 76, 89 artificial intelligence and, 231 cloud computing and, 104–105, 107 delivery model, 190, 190n “Mechanical Turk” service, 125n AMD (Advanced Micro Devices), 70n American Airlines, 179 American Hospital Association, 168 American Motors, 76 Andreesen, Marc, 107 Android, 6, 21, 79, 121 Apple, Inc., 17, 20, 51, 92, 106–107, 279 Apple Watch, 160 apps, difficulty in monetizing, 79 Arai, Noriko, 127–128 Aramco, 68 Ariely, Dan, 47n Arrow, Kenneth, 162, 169 art, machines creating, 111–113 Artificial General Intelligence (AGI), 231–233 dark side of, 238–241 the Singularity and, 233–238 artificial intelligence (AI), xiv arms race and, 232, 239–240 in medicine, 147–153 narrow, 229–230 offshoring and, 118–119 warnings concerning dangers of, 229 See also Artificial General Intelligence (AGI); automation; information technology Artificial Intelligence Laboratory (Stanford University), 6 artificial neural networks, 90–92.
Only Humans Need Apply: Winners and Losers in the Age of Smart Machines by Thomas H. Davenport, Julia Kirby
AI winter, Andy Kessler, artificial general intelligence, asset allocation, Automated Insights, autonomous vehicles, Baxter: Rethink Robotics, business intelligence, business process, call centre, carbon-based life, Clayton Christensen, clockwork universe, conceptual framework, dark matter, David Brooks, deliberate practice, deskilling, Edward Lloyd's coffeehouse, Elon Musk, Erik Brynjolfsson, estate planning, follow your passion, Frank Levy and Richard Murnane: The New Division of Labor, Freestyle chess, game design, general-purpose programming language, Google Glasses, Hans Lippershey, haute cuisine, income inequality, index fund, industrial robot, information retrieval, intermodal, Internet of things, inventory management, Isaac Newton, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Khan Academy, knowledge worker, labor-force participation, loss aversion, Mark Zuckerberg, Narrative Science, natural language processing, Norbert Wiener, nuclear winter, pattern recognition, performance metric, Peter Thiel, precariat, quantitative trading / quantitative finance, Ray Kurzweil, Richard Feynman, risk tolerance, Robert Shiller, Rodney Brooks, Second Machine Age, self-driving car, Silicon Valley, six sigma, Skype, speech recognition, spinning jenny, statistical model, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, superintelligent machines, supply-chain management, transaction costs, Tyler Cowen: Great Stagnation, Watson beat the top human players on Jeopardy!, Works Progress Administration, Zipcar
Tracing all the steps between where we are now and that great convergence is a valuable exercise, because it helps to reveal the workplace realities that are most likely to emerge within the spans of our own careers. As we’ll see, there will remain plenty of opportunities to work with smart machines that don’t yet have it all. Ode to AI Spring For the sellers of smart machines, if we may slightly paraphrase Gerard Manley Hopkins, nothing is so beautiful as AI spring. The observation that artificial intelligence has its seasons of enthusiasm and also (in AI winter) of despair has become commonplace; by most accounts, the term “AI winter” was coined as an allusion to nuclear winter, a level of devastation that seemed analogous when a slew of AI-related companies that had been founded in the 1970s all went bust in the early 1980s. By later in that same decade, a thaw was beginning. (In 1988, for example, Time magazine had AI back on its cover with an in-depth story called “Putting Knowledge to Work.”)
Accenture, 83, 102, 134, 183 Adrià, Ferran, 122 Aetna, 83 AI (film), 125 Ainge, Danny, 117 Allen, Robbie, 97 Allstate, 94, 103–4 Amazon Echo, 167 Amazon Robotics (Kiva Systems), 2–3 Amplify, 20 “Analytics 3.0,” 42–43 Analytics Revolution, The (Franks), 43 AnalytixInsight, 22 Anders, George, 120 Anthem, 15, 84 Aplin, Ken, 153–54 Apollo Guidance Computer, 67 Apple, 63 Archilochus, 171 architect jobs, 23, 24–25, 151 Ariely, Dan, 113 Armstrong, Stuart, 226, 249 Arnett, Thomas, 84 artificial intelligence, 7, 26, 33–58, 141, 163, 189. See also cognitive technologies; computers “AI winter,” 36 Apollo flights and, 67 Asimov’s three laws for, 244 augmentation of soft skills and, 123 automating administrative tasks and, 216 broadening the base of methods, 194 cancer cure and, 60–61 consequences of, 243–46, 249–50 cost to build, 155 in education, 230 endowing with emotions, 246 expert systems, 37 fighter pilots and, 66 human attributes, 124–27, 245 human control of, 244–45 investing by, 92–93 move from “narrow” to “general,” 35 programmers, 178 regulatory oversight of, 246–49 researchers and, 181 Ruby programming language, 222 self-awareness and AI spring, 54–57 social memory, 123 society-level decisions about, 226–28 term use and branches of, 37–38 Thiel on, 243 universe model and KIGEN, 59 warnings and predictions about, 225–26 Warren, 20 (see also IBM Watson) artists, 24, 119, 238 Asimov, Isaac, 244 Associated Press (AP) automated journalism, 96–98, 103, 104, 121, 222 investing in automated publishing, 97 Atlas, David, 121 Auerbach, Red, 116–17 Auerswald, Phil, 128 augmentation, 31–32, 59–88, 201–24, 234 “automation leader” for, 221–23 automation vs., 61–63, 128–29, 204–8, 223–24 cutting both ways, 70–74 defined, 64–65 example, spreadsheets, 69–70 example, underwriting, 77–84, 218–19 five options for, 76–77, 218 forms of, 65–69, 209 as goal, 228–29 in governance, 249–51 government policies and, 229–43 human-machine partnership, 68, 203, 234, 235, 237, 239, 250, 251 
implementing, steps for, 208–23 key capabilities of humans and, 71–73 less work not likely with, 69–70 “moon shots,” 210, 215–16 as organizational priority, 203–4 preparing employees for, 219 proofs of concept or pilots, 220–21 reasons for augmentation, 204–8 smart machines and job security, 59–61 Stepping Aside, 77, 81, 85, 87, 108–30 Stepping Forward, 77, 83–86, 88, 176–200 Stepping In, 77, 81–82, 85, 87, 131–52 Stepping Narrowly, 77, 82, 85, 87–88, 153–75 Stepping Up, 76–77, 80, 84–86, 89–107 three forms, for specialization, 166–69 as wheels for the mind, 63–65 Augmentation Research Center, 64 “Augmenting Human Intellect” (Engelbart), 64 Automated Insights, 22, 97 Wordsmith, 96 automation, 1, 3–4, 5, 6, 8, 12–13 augmentation vs., 61–63, 128–29, 204–8, 223–24 business process management, 40 codified tasks and, 12–13, 14, 27–28, 30 content transmission and, 19–20 eras of, 2–5 government policies and, 229–43 income inequality and, 228–29 “isolation syndrome,” 24 job losses and, 1–6, 8, 30, 78, 150–51, 167, 223–24, 226, 227, 238 jobs resistant to, 153–75 process automation, 48–49 “race against the machine,” 8, 29 reductions in cost and time, 48, 49 regulated sectors and legal constraints on, 213–15 repetitive task, 42, 47–48, 49, 50 robotic process, 48–49, 187, 221, 222–23 “rule engines,” 47 sectors using, 1, 11–12, 13, 18, 74, 201–3 (see also specific industries) signs of coming automation, 19–22 Stepping Forward with, 176–200 Stepping In with, 134–52 Stepping Up and, 91–95 strategy of, as self-defeating, 204–8 strongest evidence of job threat, 19 Automation Anywhere, 48, 216 automotive sector, 1 Autor, David, 70–71 Balaporia, Zahir, 189–91 Bankrate.com, 96 Bathgate, Alastair, 156, 157 Baylor College of Medicine, 212 Beaudry, Paul, 6, 24 Belmont, Chris, 209 Berg company, 60–61 Berlin, Isaiah, 171 Bernanke, Ben, 28, 42, 73 Bernaski, Michael, 79, 80, 81, 82, 187 Bessen, James, 133, 233 Betterment, 86–87, 198 big-picture perspective, 71, 75, 76–77, 84, 91, 92, 99, 
100, 155 Stepping Up and, 98–100 Binsted, Kim, 125 “black box,” 95, 134, 139, 148, 192, 198 Blanke, Jennifer, 7 Blue Prism, 49, 156, 216, 221 Bohrer, Abram, 159 Bostrom, Nick, 226, 227 Brackett, Glenn, 128 Braverman, Harry, 15–16 Breaking Bad (TV show), 172 Brem, Rachel, 181–82 Bridgewater Associates, 92–93 Brooks, David, 241 Brooks, Rodney, 170, 182 Brown, John Seely, 237 Brynjolfsson, Erik, 6, 8, 27, 74 Bryson, Joanna J., 226 Buehner, Carl, 120 Buffett, Warren, 244 Bush, Vannevar, 64, 248 Bustarret, Claire, 154 BYOD (Bring Your Own Device), 13 Cameron, James, 165–66 Carey, Greg, 154, 156, 172–73 Carr, Nick, 162 CastingWords, 168 Catanzaro, Sandro, 179–80, 193 Cathcart, Ron, 89–91, 95 Cerf, Vint, 248 Chambers, Joshua, 250 Charles Schwab, 88 chess, 74–76 Chi, Michelene, 163 Chicago Mercantile Exchange, 11–12 Chilean miners, 201–2 China, 239 Chiriac, Marcel, 217 Circle (Internet start-up), 146 Cisco, 43 Civilian Conservation Corps (CCC), 238 “Claiming our Humanity in the Digital Age,” 248 Class Dojo, 141 Cleveland Clinic, 54 Clifton, Jim, 8 Clinton, Bill, 108 Clockwork Universe, The (Dolnick), 169–70 Codelco/Codelco Digital, 40, 201–3 Cognex, 47 CognitiveScale, 45, 194, 209 cognitive technologies, 4–5, 32, 33–58.
Pandora's Brain by Calum Chace
3D printing, AI winter, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, brain emulation, Extropian, friendly AI, hive mind, Ray Kurzweil, self-driving car, Silicon Valley, Singularitarianism, Skype, speech recognition, stealth mode startup, Stephen Hawking, strong AI, technological singularity, theory of mind, Turing test, Wall-E
We are currently experiencing the third wave of optimism. The first wave was largely funded by the US military, and one of its champions, Herbert Simon, claimed in 1965 that ‘machines will be capable, within twenty years, of doing any work a man can do.’ Claims like this turned out to be wildly unrealistic, and disappointment was crystallised by a damning government report in 1974. Funding was cut off, causing the first ‘AI winter’. Interest was sparked again in the early 1980s, when Japan announced its ‘fifth generation’ computer research programme. ‘Expert systems’, which captured and deployed the specialised knowledge of human experts, were also showing considerable promise. This second boom was extinguished in the late 1980s when the expensive, specialised computers which drove it were overtaken by smaller, general-purpose desktop machines manufactured by IBM and others.
But impressed by the continued progress of Moore’s Law, which observes that computer processing power is doubling every 18 months, more and more scientists now believe that humans may create an artificial intelligence sometime this century. One of the more optimistic, Ray Kurzweil, puts the date as close as 2029.’ As the lights came back up, Ross was standing again, poised in front of the seated guests. ‘So, Professor Montaubon. Since David and Matt’s dramatic adventure the media has been full of talk about artificial intelligence. Are we just seeing the hype again? Will we shortly be heading into another AI winter?’ ‘I don’t think so,’ replied Montaubon, cheerfully. ‘It is almost certain that artificial intelligence will arrive much sooner than most people think. Before long we will have robots which carry out our domestic chores. And people will notice that as each year’s model becomes more eerily intelligent than the last, they are progressing towards a genuine, conscious artificial intelligence. It will happen first in the military space, because that is where the big money is.’
His work was highly confidential and it would have been very controversial – if anyone knew about it. But that didn’t bother him: he was following his passion. He had been fascinated by the human brain for as long as he could remember, and developing artificial intelligence seemed the best way to understand our own intelligence. His career had begun in the early 1990s, just as interest in the field was picking up – recovering from the ‘AI winter’ brought on by the failure of the Japanese Fifth Generation programme in the 1980s. He had benefited enormously from the influx of funding, which provided superb facilities and equipment, and rapid promotion opportunities for anyone who was ambitious and pliable. Which he was. Thanks to hard work and talent, his progress up the academic ladder had been swift, and he felt his efforts had been rewarded two years ago when he was recruited by the South Korean army for a senior role in a top-secret project – developing the country’s most advanced artificial intelligence software.
3D printing, A Declaration of the Independence of Cyberspace, AI winter, Airbnb, Albert Einstein, Amazon Web Services, augmented reality, bank run, barriers to entry, Baxter: Rethink Robotics, bitcoin, blockchain, book scanning, Brewster Kahle, Burning Man, cloud computing, computer age, connected car, crowdsourcing, dark matter, dematerialisation, Downton Abbey, Edward Snowden, Elon Musk, Filter Bubble, Freestyle chess, game design, Google Glasses, hive mind, Howard Rheingold, index card, indoor plumbing, industrial robot, Internet Archive, Internet of things, invention of movable type, invisible hand, Jaron Lanier, Jeff Bezos, job automation, Kevin Kelly, Kickstarter, linked data, Lyft, M-Pesa, Marshall McLuhan, means of production, megacity, Minecraft, multi-sided market, natural language processing, Netflix Prize, Network effects, new economy, Nicholas Carr, peer-to-peer lending, personalized medicine, placebo effect, planetary scale, postindustrial economy, recommendation engine, RFID, ride hailing / ride sharing, Rodney Brooks, self-driving car, sharing economy, Silicon Valley, slashdot, Snapchat, social graph, social web, software is eating the world, speech recognition, Stephen Hawking, Steven Levy, Ted Nelson, the scientific method, transport as a service, two-sided market, Uber for X, Watson beat the top human players on Jeopardy!, Whole Earth Review
Betterment or Wealthfront: Rob Berger, “7 Robo Advisors That Make Investing Effortless,” Forbes, February 5, 2015. 80 percent of its revenue: Rick Summer, “By Providing Products That Consumers Use Across the Internet, Google Can Dominate the Ad Market,” Morningstar, July 17, 2015. 3 billion queries that Google conducts: Danny Sullivan, “Google Still Doing at Least 1 Trillion Searches Per Year,” Search Engine Land, January 16, 2015. Google CEO Sundar Pichai stated: James Niccolai, “Google Reports Strong Profit, Says It’s ‘Rethinking Everything’ Around Machine Learning,” ITworld, October 22, 2015. the AI winter: “AI Winter,” Wikipedia, accessed July 24, 2015. Billions of neurons in our brain: Frederico A. C. Azevedo, Ludmila R. B. Carvalho, Lea T. Grinberg, et al., “Equal Numbers of Neuronal and Non-Neuronal Cells Make the Human Brain an Isometrically Scaled-up Primate Brain,” Journal of Comparative Neurology 513, no. 5 (2009): 532–41. run neural networks in parallel: Rajat Raina, Anand Madhavan, and Andrew Y.
My prediction: By 2026, Google’s main product will not be search but AI. This is the point where it is entirely appropriate to be skeptical. For almost 60 years, AI researchers have predicted that AI is right around the corner, yet until a few years ago it seemed as stuck in the future as ever. There was even a term coined to describe this era of meager results and even more meager research funding: the AI winter. Has anything really changed? Yes. Three recent breakthroughs have unleashed the long-awaited arrival of artificial intelligence: 1. Cheap Parallel Computation Thinking is an inherently parallel process. Billions of neurons in our brain fire simultaneously to create synchronous waves of computation. To build a neural network—the primary architecture of AI software—also requires many different processes to take place simultaneously.
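That parallelism is easiest to see in code: in practice, a whole layer of artificial neurons is evaluated as a single matrix operation, so every neuron "fires" in the same step rather than one at a time in a loop — which is exactly the workload GPUs spread across thousands of cores. A minimal NumPy sketch (layer sizes and random weights are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 4, 3

# One weight column per neuron; a single matrix multiply computes
# every neuron's weighted sum simultaneously.
W = rng.standard_normal((n_inputs, n_neurons))
b = np.zeros(n_neurons)

def layer_forward(x):
    """Activate all neurons of the layer at once (ReLU nonlinearity)."""
    return np.maximum(0.0, x @ W + b)

x = rng.standard_normal(n_inputs)
out = layer_forward(x)
print(out.shape)  # one activation per neuron: (3,)
```

The same matrix-multiply form is why neural networks map so naturally onto the cheap parallel hardware the passage describes.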
Final Jeopardy: Man vs. Machine and the Quest to Know Everything by Stephen Baker
23andMe, AI winter, Albert Einstein, artificial general intelligence, business process, call centre, clean water, computer age, Frank Gehry, information retrieval, Iridium satellite, Isaac Newton, job automation, pattern recognition, Ray Kurzweil, Silicon Valley, Silicon Valley startup, statistical model, theory of mind, thinkpad, Turing test, Vernor Vinge, Wall-E, Watson beat the top human players on Jeopardy!
Machines, it seemed, would soon master language, recognize faces, and maneuver, as robots, in factories, hospitals, and homes. In short, computer scientists would create a superendowed class of electronic servants. This led, of course, to failed promises, to such a point that Artificial Intelligence became a term of derision. Bold projects to build bionic experts and conversational computers lost their sponsors. A long AI winter ensued, lasting through much of the ’80s and ’90s. What went wrong? In retrospect, it seems almost inconceivable that leading scientists, including Nobel laureates like Simon, believed it would be so easy. They certainly appreciated the complexity of the human brain. But they also realized that a lot of that complexity was tied up in dreams, memories, guilt, regrets, faith, desires, along with the controls to maintain the physical body.
Someone who has won the Boston Marathon might be contentedly weary. Another, in a divorce hearing, is anything but. One person may slack his jaw in an exaggerated way, as if to say “Know what I mean?” In this tiny negotiation, far beyond the range and capabilities of machines, two people can bridge the gap between the formal definition of a word and what they really want to say. It’s hard to nail down the exact end of AI winter. A certain thaw set in when IBM’s computer Deep Blue bested Garry Kasparov in their epic 1997 showdown. Until that match, human intelligence, with its blend of historical knowledge, pattern recognition, and the ability to understand and anticipate the behavior of the person across the board, ruled the game. Human grandmasters pondered a rich set of knowledge, jewels that had been handed down through the decades—from Bobby Fischer’s use of the Sozin Variation in his 1972 match with Boris Spassky to the history of the Queen’s Gambit Declined.
agricultural Revolution, AI winter, Albert Einstein, augmented reality, Bill Joy: nanobots, bioinformatics, blue-collar work, British Empire, Brownian motion, cloud computing, Colonization of Mars, DARPA: Urban Challenge, delayed gratification, double helix, Douglas Hofstadter, en.wikipedia.org, friendly AI, Gödel, Escher, Bach, hydrogen economy, I think there is a world market for maybe five computers, industrial robot, invention of movable type, invention of the telescope, Isaac Newton, John von Neumann, life extension, Louis Pasteur, Mahatma Gandhi, Mars Rover, megacity, Murray Gell-Mann, new economy, oil shale / tar sands, optical character recognition, pattern recognition, planetary scale, postindustrial economy, Ray Kurzweil, refrigerator car, Richard Feynman, Rodney Brooks, Ronald Reagan, Search for Extraterrestrial Intelligence, Silicon Valley, Simon Singh, speech recognition, stem cell, Stephen Hawking, Steve Jobs, telepresence, The Wealth of Nations by Adam Smith, Thomas L Friedman, Thomas Malthus, trade route, Turing machine, uranium enrichment, Vernor Vinge, Wall-E, Walter Mischel, Whole Earth Review, X Prize
Chess-playing machines could not win against a human expert, and could play only chess, nothing more. These early robots were one-trick ponies, each performing just one simple task. In fact, in the 1950s, real breakthroughs were made in AI, but because the progress was vastly overstated and overhyped, a backlash set in. In 1974, under a chorus of rising criticism, the U.S. and British governments cut off funding. The first AI winter set in. Today, AI researcher Paul Abrahams shakes his head when he looks back at those heady times in the 1950s when he was a graduate student at MIT and anything seemed possible. He recalled, “It’s as though a group of people had proposed to build a tower to the moon. Each year they point with pride at how much higher the tower is than it was the previous year. The only trouble is that the moon isn’t getting much closer.”
The Fifth Generation Project’s goal was, among others, to have a computer system that could speak conversational language, have full reasoning ability, and even anticipate what we want, all by the 1990s. Unfortunately, the only thing that the smart truck did was get lost. And the Fifth Generation Project, after much fanfare, was quietly dropped without explanation. Once again, the rhetoric far outpaced the reality. In fact, there were real gains made in AI in the 1980s, but because progress was again overhyped, a second backlash set in, creating the second AI winter, in which funding again dried up and disillusioned people left the field in droves. It became painfully clear that something was missing. In 1992 AI researchers had mixed feelings holding a special celebration in honor of the movie 2001, in which a computer called HAL 9000 runs amok and slaughters the crew of a spaceship. The movie, filmed in 1968, predicted that by 1992 there would be robots that could freely converse with any human on almost any topic and also command a spaceship.
AI researcher Richard Heckler says, “Today, you can buy chess programs for $49 that will beat all but world champions, yet no one thinks they’re intelligent.” But with Moore’s law spewing out new generations of computers every eighteen months, sooner or later the old pessimism of the past generation will be gradually forgotten and a new generation of bright enthusiasts will take over, creating renewed optimism and energy in the once-dormant field. Thirty years after the last AI winter set in, computers have advanced enough so that the new generation of AI researchers are again making hopeful predictions about the future. The time has finally come for AI, say its supporters. This time, it’s for real. The third try is the lucky charm. But if they are right, are humans soon to be obsolete? IS THE BRAIN A DIGITAL COMPUTER? One fundamental problem, as mathematicians now realize, is that they made a crucial error fifty years ago in thinking the brain was analogous to a large digital computer.
AI winter, call centre, carbon footprint, crowdsourcing, demand response, discovery of DNA, Erik Brynjolfsson, future of work, Geoffrey West, Santa Fe Institute, global supply chain, Internet of things, John von Neumann, Mars Rover, natural language processing, optical character recognition, pattern recognition, planetary scale, RAND corporation, RFID, Richard Feynman, smart grid, smart meter, speech recognition, Turing test, Von Neumann architecture, Watson beat the top human players on Jeopardy!
John McCarthy, a professor at Stanford University, coined the term “artificial intelligence” in 1955, and Marvin Minsky, a professor at MIT, has produced a long series of advances in the field since the 1950s. He’s now focusing on giving machines the ability to perform humanlike commonsense reasoning.6 Today, Andrew Ng of Stanford is leading a team of scientists in an attempt to create algorithms that can learn based on principles that the brain might also employ. The field of artificial intelligence has advanced in starts and stops. Periods of soaring optimism have been followed by so-called AI winters, when seemingly promising avenues of research failed to produce the anticipated results. Put simply, this is hard stuff. So it’s no surprise that when Charles Lickel proposed Jeopardy! as the next grand challenge, his suggestion was met initially with reactions ranging from skepticism to outright derision. But he quickly won Paul Horn’s support. Paul thought the project could be very exciting—both to computer scientists and the public at large.7 In mid-2006 Charles gave the go-ahead to researcher David Ferrucci, who was an enthusiastic evangelist for the project, to explore whether building a machine that could win on Jeopardy!
The Future of the Professions: How Technology Will Transform the Work of Human Experts by Richard Susskind, Daniel Susskind
23andMe, 3D printing, additive manufacturing, AI winter, Albert Einstein, Amazon Mechanical Turk, Amazon Web Services, Andrew Keen, Atul Gawande, Automated Insights, autonomous vehicles, Big bang: deregulation of the City of London, big data - Walmart - Pop Tarts, Bill Joy: nanobots, business process, business process outsourcing, Cass Sunstein, Checklist Manifesto, Clapham omnibus, Clayton Christensen, clean water, cloud computing, computer age, computer vision, conceptual framework, corporate governance, crowdsourcing, Daniel Kahneman / Amos Tversky, death of newspapers, disintermediation, Douglas Hofstadter, en.wikipedia.org, Erik Brynjolfsson, Filter Bubble, Frank Levy and Richard Murnane: The New Division of Labor, full employment, future of work, Google Glasses, Google X / Alphabet X, Hacker Ethic, industrial robot, informal economy, information retrieval, interchangeable parts, Internet of things, Isaac Newton, James Hargreaves, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, Joseph Schumpeter, Khan Academy, knowledge economy, lump of labour, Marshall McLuhan, Narrative Science, natural language processing, Network effects, optical character recognition, personalized medicine, pre–internet, Ray Kurzweil, Richard Feynman, Second Machine Age, self-driving car, semantic web, Skype, social web, speech recognition, spinning jenny, strong AI, supply-chain management, telepresence, the market place, The Wealth of Nations by Adam Smith, The Wisdom of Crowds, transaction costs, Turing test, Watson beat the top human players on Jeopardy!, young professional
What was most remarkable, although we say so ourselves, was that we managed to build a legal problem-solver that was in significant respects a better performer than the lawyer (the domain expert) on the basis of whose knowledge it was built.105 After that project we broadened our interest and worked on expert systems in tax as well as systems that were for use by auditors. Here again, we were heavily involved in the development of systems that could undertake expert tasks at a high level.106 At the same time, we also kept close to parallel advances in medicine where substantial progress was being made. These early successes generated much excitement. Then came what is often referred to as the ‘AI winter’, the period during which AI seemed to stall. In the professions, certainly, thirty years on, there are far fewer operational expert systems of the sort we developed than we had expected. What went wrong? Why have so few expert systems in law, tax, and audit emerged since then? Why was this great early promise not fulfilled?107 One reason for the lack of uptake was commercial—these systems were very costly to develop (hugely time-consuming for the experts whose knowledge went into the systems), at a time when law and accounting firms were increasingly profitable and saw no reason to embrace innovative technologies that might undermine their winning streak.
These systems will provide high-quality advice and guidance, but not by reasoning or working in the same way as skilled specialists; nor by seeking to model human thoughts and reasoning processes; nor again by having common sense or general knowledge. These systems are high-performing but are not intelligent in the way that human beings are (we expand on this in section 7.1). On this view, we need to reappraise AI. For many commentators, the AI winter was a euphemism for AI’s demise. But it transpires that AI has not been expiring. It has instead been hibernating, conserving its energy, as it were, ticking over quietly in the background, waiting for enabling technologies to emerge and catch up with some of the original aspirations of the early AI scientists. In the thaw that has followed the winter, over the past few years, we have seen a series of significant developments—Big Data, Watson, robotics, and affective computing—that we believe point to a second wave of AI.
3D printing, additive manufacturing, agricultural Revolution, AI winter, Airbnb, artificial general intelligence, augmented reality, autonomous vehicles, banking crisis, Baxter: Rethink Robotics, Berlin Wall, Bernie Sanders, bitcoin, blockchain, call centre, Chris Urmson, congestion charging, credit crunch, David Ricardo: comparative advantage, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Flynn Effect, full employment, future of work, gender pay gap, gig economy, Google Glasses, Google X / Alphabet X, income inequality, industrial robot, Internet of things, invention of the telephone, invisible hand, James Watt: steam engine, Jaron Lanier, Jeff Bezos, job automation, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, knowledge worker, lump of labour, Lyft, Mark Zuckerberg, Martin Wolf, McJob, means of production, Milgram experiment, Narrative Science, natural language processing, new economy, Occupy movement, Oculus Rift, PageRank, pattern recognition, post scarcity, post-industrial society, precariat, prediction markets, QWERTY keyboard, railway mania, RAND corporation, Ray Kurzweil, RFID, Rodney Brooks, Satoshi Nakamoto, Second Machine Age, self-driving car, sharing economy, Silicon Valley, Skype, software is eating the world, speech recognition, Stephen Hawking, Steve Jobs, TaskRabbit, technological singularity, Thomas Malthus, transaction costs, Tyler Cowen: Great Stagnation, Uber for X, universal basic income, Vernor Vinge, working-age population, Y Combinator, young professional
Herbert Simon said in 1965 that “machines will be capable, within twenty years, of doing any work a man can do,”[lxii] and two years later Marvin Minsky said that “Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved.”[lxiii] These early claims turned out to be ill-founded, and later generations of researchers found their sources of funding dried up in so-called AI winters. Some leading figures in the field today are worried that a similar fate may befall them because, they say, excessive claims are being made about the capabilities of AI systems today, and what can be achieved in the short term. This seems to me an ungrounded fear. Machine intelligence is the target of enormous investments – by technology giants like Google and Facebook, by startups, by traditional companies like the automotive manufacturers, and by governments.
Successful Lisp - About by Unknown
If your own Lisp experience predates 1985 or so, you probably share this view. But in 1984, the year Big Brother never really became a reality (did it?), the year that the first bleeding-edge (but pathetic by today's standards) Macintosh started volume shipments, the Lisp world started changing. Unfortunately, most programmers never noticed; Lisp's fortune was tied to AI, which was undergoing a precipitous decline -- The AI Winter -- just as Lisp was coming of age. Some say this was bad luck for Lisp. I look at the resurgence of interest in other dynamic languages and the problems wrestled with by practitioners and vendors alike, and wonder whether Lisp wasn't too far ahead of its time. I changed my opinion of Lisp over the years, to the point where it's not only my favorite programming language, but also a way of structuring much of my thinking about programming.
AI winter, artificial general intelligence, bioinformatics, brain emulation, combinatorial explosion, complexity theory, computer vision, conceptual framework, correlation coefficient, epigenetics, friendly AI, information retrieval, Isaac Newton, John Conway, Loebner Prize, Menlo Park, natural language processing, Occam's razor, p-value, pattern recognition, performance metric, Ray Kurzweil, Rodney Brooks, semantic web, statistical model, strong AI, theory of mind, traveling salesman, Turing machine, Turing test, Von Neumann architecture, Y2K
Probably most people will do both. For example, I’m a little surprised at this workshop that there is not a larger contingent of the brain modeling crowd. I hope we will have one by next year’s conference, or workshop. For example, Henry Markram, the Blue Brain project with IBM in Switzerland, that kind of thing. So I hope at the next workshop there will be a bigger contingent of that aspect. Obstacles? Well, you just talked about AI Winters and Summers. I don’t know how many cycles it’s been through. Is it three, five? Is there a lesson to be learnt there? I guess one obvious one is just ignorance. Even if you take a purely engineering approach, how the hell do you build an intelligent machine? My sense is we don’t even know what the target is; we don’t even know how difficult it is to do that thing. So it’s very hard to figure out how long it’s going to take to get there.
3D printing, agricultural Revolution, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic trading, artificial general intelligence, augmented reality, autonomous vehicles, bitcoin, blockchain, clean water, cognitive dissonance, Colonization of Mars, complexity theory, computer age, computer vision, constrained optimization, corporate personhood, cosmological principle, cryptocurrency, cuban missile crisis, Danny Hillis, dark matter, discrete time, Elon Musk, Emanuel Derman, endowment effect, epigenetics, Ernest Rutherford, experimental economics, Flash crash, friendly AI, Google Glasses, hive mind, income inequality, information trail, Internet of things, invention of writing, iterative process, Jaron Lanier, job automation, John von Neumann, Kevin Kelly, knowledge worker, loose coupling, microbiome, Moneyball by Michael Lewis explains big data, natural language processing, Network effects, Norbert Wiener, pattern recognition, Peter Singer: altruism, phenotype, planetary scale, Ray Kurzweil, recommendation engine, Republic of Letters, RFID, Richard Thaler, Rory Sutherland, Search for Extraterrestrial Intelligence, self-driving car, sharing economy, Silicon Valley, Skype, smart contracts, speech recognition, statistical model, stem cell, Stephen Hawking, Steve Jobs, Steven Pinker, Stewart Brand, strong AI, Stuxnet, superintelligent machines, supervolcano, the scientific method, The Wisdom of Crowds, theory of mind, Thorstein Veblen, too big to fail, Turing machine, Turing test, Von Neumann architecture, Watson beat the top human players on Jeopardy!, Y2K
However, in recent years the climate for ambitious artificial intelligence research has much improved, no doubt due to a string of stunning successes in the field. Not only have a number of longstanding challenges finally been met but there’s a growing sense among the community that the best is yet to come. We see this in our interactions with a wide range of researchers, and it can also be seen from the way in which media articles about artificial intelligence have changed in tone. If you hadn’t already noticed, the AI Winter is over and the AI Spring has begun. As with many trends, some people are a little too optimistic about the rate of progress, going as far as predicting that a solution to human-level artificial intelligence might be just around the corner. It’s not. Furthermore, given the negative portrayals of futuristic artificial intelligence in Hollywood, it’s perhaps not surprising that doomsday images still appear with some frequency in the media.