WebSocket

17 results


pages: 210 words: 42,271

Programming HTML5 Applications by Zachary Kessin


barriers to entry, continuous integration, fault tolerance, Firefox, Google Chrome, mandelbrot fractal, QWERTY keyboard, web application, WebSocket

As of this writing (August 2011), web sockets are supported by Chrome version 8 and later and Safari version 5. As of Firefox version 6, web sockets are available, but the constructor is MozWebSocket. Opera has implemented the web sockets spec but leaves it turned off by default, pending work on security issues. For browsers that do not support web sockets, fallbacks using classic HTTP or Flash can work. There are also libraries such as socket.io that provide a consistent interface for web sockets and fall back to older-style HTTP communications in browsers that do not support web sockets. It is also possible to emulate web sockets via Flash for browsers that support Flash but not web sockets. The Web Sockets specification document also appears to be a work in progress. While web sockets have been deployed in several browsers, there is still very little documentation on how to implement them.
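As a rough sketch (not from the book) of the feature detection and fallback described above, with a placeholder URL:

    // Pick whichever WebSocket constructor the browser exposes; Firefox 6-era
    // releases used the MozWebSocket prefix mentioned in the text.
    var WS = window.WebSocket || window.MozWebSocket;

    if (WS) {
      var socket = new WS("ws://example.com/socket");  // placeholder endpoint
    } else {
      // No native support: fall back to a library such as socket.io, which can
      // emulate the connection over plain HTTP or Flash.
    }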

In recent years Erlang has moved into the web space, because all of the traits that make it useful in phone switches are very useful in a web server. The Erlang Yaws web server also supports web sockets right out of the box. The documentation can be found at the Web Sockets in Yaws web page, along with code for a simple echo server.

Example 9-5. Erlang Yaws web socket handler

    out(A) ->
        case get_upgrade_header(A#arg.headers) of
            undefined ->
                {content, "text/plain", "You're not a web sockets client! Go away!"};
            "WebSocket" ->
                WebSocketOwner = spawn(fun() -> websocket_owner() end),
                {websocket, WebSocketOwner, passive}
        end.

    websocket_owner() ->
        receive
            {ok, WebSocket} ->
                %% This is how we read messages (plural!!) from websockets on passive mode
                case yaws_api:websocket_receive(WebSocket) of
                    {error,closed} ->
                        io:format("The websocket got disconnected right from the start. "
                                  "This wasn't supposed to happen!!

There have also been several earlier versions of the web sockets standard that are not always compatible.

The Web Sockets Interface

To use a web socket, start by creating a WebSocket object. As a parameter, pass a web socket URL. Unlike an HTTP URL, a web socket URL will start with ws or wss. The latter is a secure web socket that will use SSL, similar to HTTPS under Ajax:

    var socket = new WebSocket("ws://example.com/socket");

Once a socket connection is opened, the socket’s socket.onopen() callback will be called to let the program know that everything is ready. When the socket closes, the socket.onclose() method will be called. If the browser wishes to close the socket, it should call socket.close(). To send data over the socket, use the socket.send("data") method.
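Putting those pieces together, a minimal sketch (not from the book; the URL and messages are placeholders) of the full open/send/receive/close cycle:

    var socket = new WebSocket("ws://example.com/socket");

    socket.onopen = function () {
      socket.send("hello");              // safe to send once the connection is open
    };

    socket.onmessage = function (event) {
      console.log("server said:", event.data);
    };

    socket.onclose = function () {
      console.log("connection closed");
    };

    // Later, when the page is done with the connection:
    // socket.close();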


pages: 136 words: 20,501

Introduction to Tornado by Michael Dory, Adam Parrish, Brendan Berg


don't repeat yourself, Firefox, social web, web application, WebSocket

The HTML 5 spec not only describes the communication protocol itself, but also the browser APIs that are required to write client-side code that uses WebSockets. Since WebSocket support is already available in some of the latest browsers, and since Tornado helpfully provides a module for it, it’s worth seeing how to implement applications that use WebSockets.

Tornado’s WebSocket Module

Tornado provides a WebSocketHandler class as part of the websocket module. The class provides hooks for WebSocket events and methods to communicate with the connected client. The open method is called when a new WebSocket connection is opened, and the on_message and on_close methods are called when the connection receives a new message or is closed by the client. Additionally, the WebSocketHandler class provides the write_message method to send messages to the client and the close method to close the connection.

Since we’re still using the HTTP API calls in the CartHandler class, we don’t listen for new messages on the WebSocket connection, so the on_message implementation is empty. (We override the default implementation of on_message to prevent Tornado from raising a NotImplementedError if we happen to receive a message.) Finally, the callback method writes the message contents to the WebSocket connection when the inventory changes. The JavaScript code in this version is virtually identical. We just need to change the requestInventory function. Instead of making an AJAX request for the long polling resource, we use the HTML 5 WebSocket API. See Example 5-8.

Example 5-8. Web Sockets: The new requestInventory function from inventory.js

    function requestInventory() {
        var host = 'ws://localhost:8000/cart/status';
        var websocket = new WebSocket(host);

        websocket.onopen = function (evt) { };
        websocket.onmessage = function (evt) {
            $('#count').html($.parseJSON(evt.data)['inventoryCount']);
        };
        websocket.onerror = function (evt) { };
    }

After creating a new WebSocket connection to the URL ws://localhost:8000/cart/status, we add handler functions for each of the events we want to respond to.

    class EchoHandler(tornado.websocket.WebSocketHandler):
        def open(self):
            self.write_message('connected!')

        def on_message(self, message):
            self.write_message(message)

As you can see in our EchoHandler implementation, the open method simply sends the string “connected!” back to the client using the write_message method provided by the WebSocketHandler base class. The on_message method is invoked every time the handler receives a new message from the client, and our implementation echoes the same message back to the client. That’s all there is to it! Let’s take a look at a complete example to see how easy this protocol is to implement.

Example: Live Inventory with WebSockets

In this section, we will see how easy it is to update the HTTP long polling example we saw previously to use WebSockets. Keep in mind, however, that WebSockets are a new standard and are only supported by the very latest browser versions.
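For context, a browser could exercise an echo handler like this with a few lines of JavaScript; this sketch is not from the book, and the host, port, and URL path are assumptions:

    var ws = new WebSocket('ws://localhost:8000/echo');   // assumed host and path

    ws.onopen = function () {
      ws.send('ping');                 // the handler will echo this back
    };

    ws.onmessage = function (event) {
      console.log(event.data);         // first 'connected!', then 'ping'
    };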


pages: 435 words: 62,013

HTML5 Cookbook by Christopher Schmitt, Kyle Simpson


Firefox, Internet Archive, security theater, web application, WebSocket

This would be useful if, for instance, you wanted to have a worker running in the background every so often, notifying the page each time it runs:

    self.onmessage = function(evt) {
      setInterval(function(){
        self.postMessage(Math.random()); // send a random number back
      }, 60*60*1000); // execute once per hour
    };

See Also

The W3C specification for Web Workers at http://dev.w3.org/html5/workers/.

10.5. Web Sockets

Problem

You want to create persistent, two-way communication between your web application and the server, so that both the browser and the server can send and receive data to and from each other as needed.

Solution

Most browsers now have the native ability to establish a bidirectional socket connection between themselves and the server, using the WebSocket API. This means that both sides (browser and server) can send and receive data. Common use cases for Web Sockets are live online games, stock tickers, chat clients, etc.

To test if the browser supports Web Sockets, use the following feature-detect for the WebSocket API:

    var websockets_support = !!window.WebSocket;

Now, let’s build a simple application with chat room–type functionality, where a user may read the current list of messages and add her own message to the room.

It’s similarly undesirable to instantiate a memory-heavy Flash instance to use socket communication. So, Web Sockets are understandably a welcomed addition to the “HTML5 & Friends” family of technologies. The message sending and receiving in Web Sockets is like a sensible mix between XHR and Web Workers, which we looked at in the previous recipe. Note Web Sockets require both the browser and the server to speak a standardized and agreed-upon protocol (much like HTTP is for normal web pages). However, this protocol has undergone quite a lot of experimentation and change as it has developed over the last couple of years. While things are beginning to stabilize, Web Sockets are still quite volatile, and you have to make sure that your server is speaking the most up-to-date version of the protocol so that the browser can communicate properly with it. The WebSocket object instance has, similar to XHR, a readyState property that lets you examine the state of the connection.
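As a quick illustration of that readyState property, a sketch (not from the book) that checks the numeric states defined by the WebSocket API before sending:

    var socket = new WebSocket('ws://example.com/updates');  // placeholder URL

    // readyState values defined by the WebSocket API:
    // 0 CONNECTING, 1 OPEN, 2 CLOSING, 3 CLOSED
    function sendIfOpen(msg) {
      if (socket.readyState === 1 /* WebSocket.OPEN */) {
        socket.send(msg);
      }
    }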

If Web Sockets are not supported, you’ll need to provide some fallback functionality for your application, or at least gracefully notify the user that his browser doesn’t support the required functionality. Fortunately, there’s a very easy way to do that. Because consistent browser support for Web Sockets has been elusive, the best practice suggestion for using Web Sockets is to use a library like Socket.io (http://socket.io), which attempts to use Web Sockets if available, and falls back to a variety of other techniques for communication if Web Sockets are not present. You should also be aware of how Web Sockets usage scales in terms of server resources. Traditional web requests only take up dedicated resources from the server for a split second at a time, which means you can serve a lot of web traffic from your server without having too much overlap and thus running out of resources. Sockets, on the other hand, tend to be more dedicated, so there can be issues with resource availability under high load.


pages: 325 words: 85,599

Professional Node.js: Building Javascript Based Scalable Software by Pedro Teixeira


en.wikipedia.org, Firefox, Google Chrome, node package manager, platform as a service, web application, WebSocket

To do this, the browser sends a special HTTP/1.1 request to the server, asking it to turn the connection of this request into a WebSockets connection:

    GET /chat HTTP/1.1
    Host: server.example.com
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
    Origin: http://example.com
    Sec-WebSocket-Protocol: chat, superchat
    Sec-WebSocket-Version: 13

Although this starts out as a regular HTTP connection, the client asks to “upgrade” this connection to a WebSocket connection. If the server supports the WebSocket protocol, it answers like this:

    HTTP/1.1 101 Switching Protocols
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
    Sec-WebSocket-Protocol: chat

This marks the end of the handshake, and the connection switches to data transfer mode. Both sides can now send messages back and forth without any HTTP overhead or additional handshakes, making it a bi-directional full-duplex communication connection in which both client and server can send messages to one another at any time without having to wait for one another.
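The Sec-WebSocket-Accept value in that response is not arbitrary: per the WebSocket protocol (RFC 6455), the server appends a fixed GUID to the client's Sec-WebSocket-Key and returns the Base64-encoded SHA-1 digest. A small Node.js sketch (not from the book) reproduces the value shown above:

    // Derive Sec-WebSocket-Accept from the client's Sec-WebSocket-Key (RFC 6455).
    var crypto = require('crypto');

    function acceptKey(secWebSocketKey) {
      var GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11'; // fixed by the protocol
      return crypto.createHash('sha1')
                   .update(secWebSocketKey + GUID)
                   .digest('base64');
    }

    console.log(acceptKey('dGhlIHNhbXBsZSBub25jZQ=='));
    // prints: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=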

USING SOCKET.IO TO BUILD WEBSOCKET APPLICATIONS Although implementing your own WebSocket server for Node.js is possible, it’s not necessary. Many low-level details need to be taken care of before you can implement an actual application on top of it, which makes using a library a lot more practical. The de facto standard library for building WebSocket Node.js applications is Socket.IO. Not only is it a wrapper library that makes building WebSocket servers very convenient, it also provides transparent fallback mechanisms like long polling for clients that don’t support the WebSocket protocol. Furthermore, it ships with a client-side library that provides a convenient API for developing the browser part of the application. Using Socket.IO, you never need to grapple with the low-level implementation details of a WebSocket server or client. You get a clean and expressive API on both sides, which allows writing real-time applications with ease.

As you can see, both sides use a similar vocabulary. On the server, the first step is to bind to a TCP port, 4000 in this case. As soon as the server is running, it listens to incoming WebSocket connections. The connection event is triggered as soon as a new client connects. The server then listens for incoming messages on this connection, logging the content of any message it receives. NOTE As you can see, Socket.IO provides a message-oriented communication mechanism. This is an improvement over the underlying WebSockets protocol, which doesn’t provide message framing. You can also see that Socket.IO does not distinguish between a client that is connected using WebSockets or any other type of mechanism: It provides a unified API that abstracts away those implementation details. The event name my event is arbitrary – it’s just a label the client gives the messages it sends, and it is used to distinguish among different types of messages within an application.
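As a hedged sketch of what such code might look like with the 0.x-era Socket.IO API the book targets (the event name my event and port 4000 come from the text above; the host and payload are assumptions):

    // Server side (Node.js)
    var io = require('socket.io').listen(4000);     // bind to TCP port 4000

    io.sockets.on('connection', function (socket) {
      // a new client connected, over WebSockets or a fallback transport
      socket.on('my event', function (data) {
        console.log(data);                          // log the content of any message
      });
    });

    // Client side (browser)
    var socket = io.connect('http://localhost:4000');  // assumed host
    socket.emit('my event', { hello: 'world' });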


pages: 1,038 words: 137,468

JavaScript Cookbook by Shelley Powers


Firefox, Google Chrome, hypertext link, p-value, semantic web, web application, WebSocket

Demonstration of updates from polled Ajax calls

A new JavaScript API called WebSockets, currently only implemented in Chrome, enables bidirectional communication between server and client by using the send method on the WebSocket object for communicating to the server, and then attaching a function to WebSocket’s onmessage event handler to get messages back from the server, as demonstrated in the following code from the Chromium Blog:

    if ("WebSocket" in window) {
      var ws = new WebSocket("ws://example.com/service");
      ws.onopen = function() {
        // Web Socket is connected. You can send data by send() method.
        ws.send("message to send");
        ....
      };
      ws.onmessage = function (evt) {
        var received_msg = evt.data;
        ...
      };
      ws.onclose = function() {
        // websocket is closed.
      };
    } else {
      // the browser doesn't support WebSocket.
    }

Another approach is a concept known as long polling.

Instead, it holds the connection open and does not respond until it has the requested data, or until a waiting time is exceeded.

See Also

See Recipe 14.8 for a demonstration of using this same functionality with an ARIA live region to ensure the application is accessible for those using screen readers. The W3C WebSockets API specification is located at http://dev.w3.org/html5/websockets/, and the Chrome introduction of support for WebSockets is at http://blog.chromium.org/2009/12/web-sockets-now-available-in-google.html.

18.10 Communicating Across Windows with PostMessage

Problem

Your application needs to communicate with a widget that’s located in an iFrame. However, you don’t want to have to send the communication through the network.

Solution

Use the new HTML5 postMessage to enable back-and-forth communication with the iFrame widget, bypassing network communication altogether.
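To make the Solution concrete, a minimal sketch of postMessage between a page and an iframe widget (not from the book; the element id and origin are hypothetical):

    var frame = document.getElementById('widget');

    // Send a message to the widget, naming the origin we expect it to have.
    frame.contentWindow.postMessage('hello widget', 'https://widgets.example.com');

    // Receive messages coming back from the widget.
    window.addEventListener('message', function (event) {
      if (event.origin !== 'https://widgets.example.com') return; // ignore other origins
      console.log('widget said:', event.data);
    }, false);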

This new functionality originated with HTML5, though it has since split off to its own specification. It’s an uncomplicated functionality that allows for easy communication between a parent and child window, even if the child window is located in another domain. There are two other new communication APIs in the works: Cross Origin Resource Sharing (CORS) and the Web Sockets API. Both are being developed in the W3C, and both are currently in Working Draft state: CORS at http://www.w3.org/TR/access-control/ and Web Sockets at http://dev.w3.org/html5/websockets/. CORS is a way of doing cross-domain Ajax calls, and is currently implemented in Firefox 3.5 and up, and Safari 4.x and up. The Web Sockets API is a bidirectional communication mechanism, implemented only in Chrome at this time.

18.1 Accessing the XMLHttpRequest Object

Problem

You want to access an instance of the XMLHttpRequest object.


pages: 266 words: 38,397

Mastering Ember.js by Mitchel Kelonye


Firefox, MVC pattern, Ruby on Rails, single page application, web application, WebRTC, WebSocket

Ensure that you also attempt the given exercises in order to understand the following:

- Making Ajax requests
- Understanding Ember-data
- Creating data stores
- Defining models
- Declaring model relations
- Creating records
- Updating records
- Deleting records
- Persisting data
- Finding records
- Defining a store's adapter
- Creating REST APIs
- Customizing a store's serializer

Making Ajax requests

Most web applications communicate with backend services through either of the following technologies:

- Web sockets
- Ajax

This chapter will mainly deal with Ajax, which enables client applications to send asynchronous requests to remote services through the use of XMLHttpRequests. Web sockets will be handled in a later chapter, but we'll find that many concepts used will be related. Here's an example of a POST request to a music catalog endpoint:

    var data = JSON.stringify({
      album: 'Folie A Deux',
      artiste: 'Fall Out Boy'
    });

    function onreadystatechange(event){
      if (event.target.readyState != 4) return;
      console.log('POST /albums %s', event.target.status);
    }

    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = onreadystatechange;
    xhr.open('POST', '/albums');
    xhr.setRequestHeader('Content-type', 'application/json');
    xhr.send(data);

This is obviously boilerplate code, and jQuery makes this as simple as:

    $
      .post('/albums', data)
      .then(function(album){
        console.log('POST /albums 200');
      });

There are numerous ways we could integrate this into an Ember.js application.

For example, the first sample defines its adapter as follows:

    App.ApplicationAdapter = DS.FixtureAdapter;

All adapters need to implement the following methods: find, findAll, findQuery, createRecord, updateRecord, and deleteRecord. These adapters enable applications to stay in sync with various data stores such as:

- Local caches
- A browser's local storage or IndexedDB
- Remote databases through REST
- Remote databases through RPC
- Remote databases through WebSockets

These adapters are, therefore, swappable in case applications need to use different data providers. Ember-data comes with two built-in adapters: the fixtures-adapter and the rest-adapter. The fixtures adapter uses an in-browser cache to store the application's records. This adapter is especially useful when the backend service of the project is either inaccessible for testing or is still being developed.
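A hedged sketch, assuming the pre-1.0 Ember Data API of that era, of what swapping in the REST adapter can look like (the namespace and host below are hypothetical, not from the book):

    // Replace the fixture adapter with the REST adapter and point it at a
    // hypothetical backend that serves its API under /api.
    App.ApplicationAdapter = DS.RESTAdapter.extend({
      namespace: 'api',
      host: 'https://backend.example.com'
    });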

We have learned how to create records from defined models as well as updating and deleting them. We have also learned the different customizations we would need to make in order to consume existing APIs as much as possible. We should, therefore, be comfortable enough to start writing any client-side applications backed by REST APIs. As we proceed to the other exciting chapters, we should start thinking of how web sockets, JSONP, and RPC can be integrated with Ember-data seamlessly.

Chapter 9. Logging, Debugging, and Error Management

Until now, we have learned the basics of architecting and building Ember.js applications. In this chapter, we will learn how to debug these applications in order to not only reduce development time, but also to make development more fun. We will, therefore, cover the following topics:

- Logging
- Tracing events
- Debugging errors
- Using the Ember.js inspector

Logging and debugging

Ember.js can be downloaded in two formats that are meant to be used in development and production environments accordingly.

Exploring ES6 - Upgrade to the next version of JavaScript by Axel Rauschmayer


anti-pattern, domain-specific language, en.wikipedia.org, Firefox, Google Chrome, MVC pattern, web application, WebSocket

The following code demonstrates how to download the content pointed to by url as an ArrayBuffer (via the Fetch API, https://fetch.spec.whatwg.org/):

    fetch(url)
    .then(request => request.arrayBuffer())
    .then(arrayBuffer => ···);

20.6.4 Canvas

Quoting the HTML5 specification (http://www.w3.org/TR/html5/scripting-1.html#the-canvas-element):

The canvas element provides scripts with a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, art, or other visual images on the fly.

The 2D Context of canvas (http://www.w3.org/TR/2dcontext/) lets you retrieve the bitmap data as an instance of Uint8ClampedArray:

    let canvas = document.getElementById('my_canvas');
    let context = canvas.getContext('2d');
    let imageData = context.getImageData(0, 0, canvas.width, canvas.height);
    let uint8ClampedArray = imageData.data;

20.6.5 WebSockets

WebSockets (http://www.w3.org/TR/websockets/) let you send and receive binary data via ArrayBuffers:

    let socket = new WebSocket('ws://127.0.0.1:8081');
    socket.binaryType = 'arraybuffer';

    // Wait until socket is open
    socket.addEventListener('open', function (event) {
        // Send binary data
        let typedArray = new Uint8Array(4);
        socket.send(typedArray.buffer);
    });

    // Receive binary data
    socket.addEventListener('message', function (event) {
        let arrayBuffer = event.data;
        ···
    });

20.6.6 Other APIs

- WebGL (https://www.khronos.org/registry/webgl/specs/latest/2.0/) uses the Typed Array API for: accessing buffer data, specifying pixels for texture mapping, reading pixel data, and more.
- The Web Audio API (http://www.w3.org/TR/webaudio/) lets you decode audio data (http://www.w3.org/TR/webaudio/#dfn-decodeAudioData) submitted via an ArrayBuffer.
- Media Source Extensions: The HTML media elements are currently <audio> and <video>.

Typed Arrays
    20.1 Overview
    20.2 Introduction
        20.2.1 Element types
        20.2.2 Handling overflow and underflow
        20.2.3 Endianness
        20.2.4 Negative indices
    20.3 ArrayBuffers
        20.3.1 ArrayBuffer constructor
        20.3.2 Static ArrayBuffer methods
        20.3.3 ArrayBuffer.prototype properties
    20.4 Typed Arrays
        20.4.1 Typed Arrays versus normal Arrays
        20.4.2 Typed Arrays are iterable
        20.4.3 Converting Typed Arrays to and from normal Arrays
        20.4.4 The Species pattern for Typed Arrays
        20.4.5 The inheritance hierarchy of Typed Arrays
        20.4.6 Static TypedArray methods
        20.4.7 TypedArray.prototype properties
        20.4.8 «ElementType»Array constructor
        20.4.9 Static «ElementType»Array properties
        20.4.10 «ElementType»Array.prototype properties
        20.4.11 Concatenating Typed Arrays
    20.5 DataViews
        20.5.1 DataView constructor
        20.5.2 DataView.prototype properties
    20.6 Browser APIs that support Typed Arrays
        20.6.1 File API
        20.6.2 XMLHttpRequest
        20.6.3 Fetch API
        20.6.4 Canvas
        20.6.5 WebSockets
        20.6.6 Other APIs
    20.7 Extended example: JPEG SOF0 decoder
        20.7.1 The JPEG file format
        20.7.2 The JavaScript code
    20.8 Availability

Two kinds of views are used to access the data:

- Typed Arrays (Uint8Array, Int16Array, Float32Array, etc.) interpret the ArrayBuffer as an indexed sequence of elements of a single type.
- Instances of DataView let you access data as elements of several types (Uint8, Int16, Float32, etc.), at any byte offset inside an ArrayBuffer.

The following browser APIs support Typed Arrays (details are mentioned later):

- File API
- XMLHttpRequest
- Fetch API
- Canvas
- WebSockets
- And more

20.2 Introduction

Much data one encounters on the web is text: JSON files, HTML files, CSS files, JavaScript code, etc. For handling such data, JavaScript’s built-in string data type works well. However, until a few years ago, JavaScript was not well equipped to handle binary data. On 8 February 2011, the Typed Array Specification 1.0¹ standardized facilities for handling binary data.
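To make the distinction between the two kinds of views concrete, a small sketch (not from the book) that puts a Typed Array and a DataView over the same ArrayBuffer:

    let buffer = new ArrayBuffer(8);

    let uint8 = new Uint8Array(buffer);  // indexed sequence of a single element type
    uint8[0] = 255;

    let view = new DataView(buffer);     // typed access at arbitrary byte offsets
    view.setInt16(2, 256, true);         // write a little-endian Int16 at byte offset 2

    console.log(view.getInt16(2, true)); // 256
    console.log(uint8[0]);               // 255 (both views share the same buffer)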


pages: 834 words: 180,700

The Architecture of Open Source Applications by Amy Brown, Greg Wilson


8-hour work day, anti-pattern, bioinformatics, c2.com, cloud computing, collaborative editing, combinatorial explosion, computer vision, continuous integration, create, read, update, delete, David Heinemeier Hansson, Debian, domain-specific language, Donald Knuth, en.wikipedia.org, fault tolerance, finite state, Firefox, friendly fire, Guido van Rossum, linked data, load shedding, locality of reference, loose coupling, Mars Rover, MVC pattern, peer-to-peer, Perl 6, premature optimization, recommendation engine, revision control, Ruby on Rails, side project, Skype, slashdot, social web, speech recognition, the scientific method, The Wisdom of Crowds, web application, WebSocket

However, it is both specific to the Mozilla/XPCOM browser platform, as well as to the D-Bus/Telepathy messaging platform.

19.7.1. Cross-browser Transport

To make this work across browsers and operating systems, we use the Web::Hippie [4] framework, a high-level abstraction of JSON-over-WebSocket with convenient jQuery bindings, with MXHR (Multipart XML HTTP Request [5]) as the fallback transport mechanism if WebSocket is not available. For browsers with Adobe Flash plugin installed but without native WebSocket support, we use the web_socket.js [6] project's Flash emulation of WebSocket, which is often faster and more reliable than MXHR. The operation flow is shown in Figure 19.17.

Figure 19.17: Cross-Browser Flow

The client-side SocialCalc.Callbacks.broadcast function is defined as:

    var hpipe = new Hippie.Pipe();

    SocialCalc.Callbacks.broadcast = function(type, data) {
      hpipe.send({ type: type, data: data });
    };

    $(hpipe).bind("message.execute", function (e, d) {
      var sheet = SocialCalc.CurrentSpreadsheetControlObject.context.sheetobj;
      sheet.ScheduleSheetCommands(
        d.data.cmdstr, d.data.saveundo, true // isRemote = true
      );
      break;
    });

Although this works quite well, there are still two remaining issues to resolve.

The server needs to be configured as a proxy so that it can intercept any requests that are made to it without causing the calling Javascript to fall foul of the "Single Host Origin" policy, which states that only resources from the same server that the script was served from can be requested via Javascript. This is in place as a security measure, but from the point of view of a browser automation framework developer, it's pretty frustrating and requires a hack such as this. The reason for making an XmlHttpRequest call to the server is two-fold. Firstly, and most importantly, until WebSockets, a part of HTML5, become available in the majority of browsers there is no way to start up a server process reliably within a browser. That means that the server had to live elsewhere. Secondly, an XMLHttpRequest calls the response callback asynchronously, which means that while we're waiting for the next command the normal execution of the browser is unaffected. The other two ways to wait for the next command would have been to poll the server on a regular basis to see if there was another command to execute, which would have introduced latency to the user's tests, or to put the Javascript into a busy loop which would have pushed CPU usage through the roof and would have prevented other Javascript from executing in the browser (since there is only ever one Javascript thread executing in the context of a single window).
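To illustrate the shape of that approach (a hedged sketch, not Selenium's actual code; the endpoint and dispatcher names are hypothetical): the page issues an XMLHttpRequest, and the asynchronous response callback runs the command and immediately asks for the next one, so no busy loop or fixed-interval polling is needed.

    // Wait for the next command from the (proxying) server, run it, repeat.
    function waitForCommand() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/automation/nextCommand');   // hypothetical endpoint
      xhr.onload = function () {
        executeCommand(xhr.responseText);           // hypothetical command dispatcher
        waitForCommand();                           // immediately wait for the next one
      };
      xhr.send();
    }

    waitForCommand();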

There are many interesting possibilities with this open-source spreadsheet engine, and if you can find a way to embed SocialCalc into your favorite project, we'd definitely love to hear about it.

Footnotes

[1] https://github.com/audreyt/wikiwyg-js
[2] http://one.laptop.org/
[3] http://seeta.in/wiki/index.php?title=Collaboration_in_SocialCalc
[4] http://search.cpan.org/dist/Web-Hippie/
[5] http://about.digg.com/blog/duistream-and-mxhr
[6] https://github.com/gimite/web-socket-js
[7] http://perlcabal.org/syn/S02.html
[8] http://fit.c2.com/
[9] http://search.cpan.org/dist/Test-WWW-Mechanize/
[10] http://search.cpan.org/dist/Test-WWW-Selenium/
[11] https://www.socialtext.net/open/?cpal
[12] http://opensource.org/
[13] http://www.fsf.org
[14] https://github.com/facebook/platform
[15] https://github.com/reddit/reddit


pages: 570 words: 115,722

The Tangled Web: A Guide to Securing Modern Web Applications by Michal Zalewski


barriers to entry, business process, defense in depth, easy for humans, difficult for computers, fault tolerance, finite state, Firefox, Google Chrome, information retrieval, RFC: Request For Comment, semantic web, Steve Jobs, telemarketer, Turing test, Vannevar Bush, web application, WebRTC, WebSocket

In their view, any sites that need a form of authentication should instead rely on explicitly exchanged authentication tokens.[79] The other, more pragmatic criticism of CORS is that the scheme is needlessly complicated: It extends an already problematic and error-prone API without clearly explaining the benefits of some of the tweaks. In particular, it is not clear if the added complexity of preflight requests is worth the peripheral benefit of being able to issue cross-domain requests with unorthodox methods or random headers. The last of the weak complaints hinges on the fact that CORS is susceptible to header injection. Unlike some other recently proposed browser features, such as WebSockets (Chapter 17), CORS does not require the server to echo back an unpredictable challenge string to complete the handshake. Particularly in conjunction with preflight caching, this may worsen the impact of certain header-splitting vulnerabilities in the server-side code.

XDomainRequest

Microsoft’s objection to CORS appears to stem from the aforementioned concerns over the use of ambient authority, but it also bears subtle overtones of their dissatisfaction with interactions with W3C.

Binary HTTP

SPDY[258] (“Speedy”) is a simple, encrypted drop-in replacement for HTTP that preserves the protocol’s key design principles (including the layout and function of most headers). At the same time, it minimizes the overhead associated with delivering concurrent requests or with the parsing of text-based requests and response data. The protocol is currently supported only in Chrome, and other than select Google services, it is not commonly encountered on the Web. It may be coming to Firefox soon, too, however.

HTTP-less networking

WebSocket[259] is a still-evolving API designed for negotiating largely unconstrained, bidirectional TCP streams for when the transactional nature of TCP gets in the way (e.g., in the case of a low-latency chat application). The protocol is bootstrapped using a keyed challenge-response handshake, which looks sort of like HTTP and which is (quite remarkably) impossible to spoof by merely exploiting a header-splitting flaw in the destination site.

[256] “navigator.registerProtocolHandler,” Mozilla Developer Network, https://developer.mozilla.org/en/DOM/window.navigator.registerProtocolHandler.
[257] “Manipulating the Browser History,” Mozilla Developer Network, https://developer.mozilla.org/en/DOM/Manipulating_the_browser_history/.
[258] A. Langley and M. Belsche, “SPDY: An Experimental Protocol for a Faster Web,” The Chromium Projects, http://www.chromium.org/spdy/spdy-whitepaper/.
[259] I. Fette and A. Melnikov, “The WebSocket Protocol,” IETF Request for Comments draft (2011), http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-10/.
[260] J. Rosenberg, M. Kaufman, M. Hiie, and F. Audet, “An Architectural Framework for Browser Based Real-Time Communications,” IETF Request for Comments draft (2011), http://tools.ietf.org/html/draft-rosenberg-rtcweb-framework-00/.
[261] I. Hickson, “HTML5: 5.6—Offline Web Applications,” World Wide Web Consortium (2011), http://www.w3.org/TR/html5/offline.html


pages: 196 words: 58,122

AngularJS by Brad Green, Shyam Seshadri


combinatorial explosion, continuous integration, Firefox, Google Chrome, MVC pattern, node package manager, single page application, web application, WebSocket

Generically, we call these dependencies services, as they provide specific services to our application. For example, if in our shopping website a controller needs to get a list of items for sale from the server, we’d want some object—let’s call it Items—to take care of getting the items from the server. The Items object, in turn, needs some way to communicate with the database on the server over XHR or WebSockets. Doing this without modules looks something like this:

    function ItemsViewController($scope) {
      // make request to server
      ...
      // parse response into Item objects
      ...
      // set Items array on $scope so the view can display it
      ...
    }

While this would certainly work, it has a number of potential problems. If some other controller also needs to get Items from the server, we now have to replicate this code.
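For contrast, a hedged sketch of the module-and-service approach the passage is building toward (the module, URL, and service names are hypothetical, not the book's):

    // An Items service owns the server communication; controllers receive it
    // through dependency injection instead of talking to the server themselves.
    var app = angular.module('shopApp', []);

    app.factory('Items', function ($http) {
      return {
        query: function () {
          return $http.get('/items').then(function (response) {
            return response.data;   // parsed Item objects
          });
        }
      };
    });

    app.controller('ItemsViewController', function ($scope, Items) {
      Items.query().then(function (items) {
        $scope.items = items;       // the view can now display them
      });
    });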

[Back-of-book index excerpt. The entries relevant to this topic read: "WebSockets, Organizing Dependencies with Modules" and "XHR, Organizing Dependencies with Modules".]


pages: 193 words: 46,550

Twisted Network Programming Essentials by Jessica McKellar, Abe Fettig


continuous integration, WebSocket

Protocols and transports are decoupled, which makes transport reuse and protocol testing easy. The Twisted Core examples directory has many additional examples of basic servers and clients, including implementations for UDP and SSL. The Twisted Core HOWTO index has an extended “Twisted from Scratch” tutorial that builds a finger service from scratch. One real-world example of building a protocol in Twisted is AutobahnPython, a WebSockets implementation. Twisted has been developing a new higher-level endpoints API for creating a connection between a client and server. The endpoints API wraps lower-level APIs like listenTCP and connectTCP, and provides greater flexibility because it decouples constructing a connection from initiating use of the connection, allowing parameterization of the endpoint.

[Back-of-book index excerpt. The entry relevant to this topic reads: "AutobahnPython, Web-Sockets implementation, 22".]

About the Authors

Jessica McKellar is a software engineer from Cambridge, Massachusetts.


pages: 141 words: 9,896

Pragmatic Guide to JavaScript by Christophe Porteneuve


barriers to entry, commoditize, domain-specific language, en.wikipedia.org, Firefox, web application, WebSocket

Using Dynamic Multiple File Uploads

The file upload feature currently built into HTML (as in, pre-HTML5) basically blows. It’s single-file, it has no upload progress feedback, it cannot filter on size or file type constraints, and so on. And it uses Base64 encoding, which means every file sent is blown up by 33 percent. Unless we use stuff like WebSockets or SWFUpload, we are stuck with most of these limitations. However, we can improve the user experience a bit by letting users pick multiple files in a nice way. When I say “nice” here, I basically mean “without as many visible file controls as there are files.” I like how 37signals presents lists of files-to-be-uploaded in their products: a flat, icon-decorated list of filenames with the option to remove them from the upload “queue.”
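A hedged sketch of the general idea (the element ids and markup are hypothetical, and this is not the book's code): each time the user picks a file, hide that input, list the chosen filename with a remove link, and append a fresh file control, so only one visible control exists at a time.

    function addFileInput(container, list) {
      var input = document.createElement('input');
      input.type = 'file';
      input.name = 'attachments[]';
      input.onchange = function () {
        var name = input.value.split(/[\\\/]/).pop(); // strip any fake path
        input.style.display = 'none';                 // keep it in the form for submit

        var item = document.createElement('li');
        item.appendChild(document.createTextNode(name + ' '));
        var remove = document.createElement('a');
        remove.href = '#';
        remove.appendChild(document.createTextNode('remove'));
        remove.onclick = function (e) {
          e.preventDefault();
          container.removeChild(input);               // drop it from the upload "queue"
          list.removeChild(item);
        };
        item.appendChild(remove);
        list.appendChild(item);

        addFileInput(container, list);                // offer a fresh, visible control
      };
      container.appendChild(input);
    }

    addFileInput(document.getElementById('upload-form'),
                 document.getElementById('file-queue'));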


Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable and Maintainable Systems (O’Reilly, 2017) by Martin Kleppmann

active measures, Amazon Web Services, bitcoin, blockchain, business intelligence, business process, c2.com, cloud computing, collaborative editing, commoditize, conceptual framework, cryptocurrency, database schema, DevOps, distributed ledger, Donald Knuth, Edward Snowden, ethereum blockchain, fault tolerance, finite state, Flash crash, full text search, general-purpose programming language, informal economy, information retrieval, Internet of things, iterative process, John von Neumann, loose coupling, Marc Andreessen, natural language processing, Network effects, packet switching, peer-to-peer, performance metric, place-making, premature optimization, recommendation engine, Richard Feynman, self-driving car, semantic web, Shoshana Zuboff, social graph, social web, software as a service, software is eating the world, sorting algorithm, source of truth, SPARQL, speech recognition, statistical model, web application, WebSocket, wikimedia commons

The browser only reads the data at one point in time, assuming that it is static—it does not subscribe to updates from the server. Thus, the state on the device is a stale cache that is not updated unless you explicitly poll for changes. (HTTP-based feed subscription protocols like RSS are really just a basic form of polling.) More recent protocols have moved beyond the basic request/response pattern of HTTP: server-sent events (the EventSource API) and WebSockets provide communication channels by which a web browser can keep an open TCP connection to a server, and the server can actively push messages to the browser as long as it remains connected. This provides an opportunity for the server to actively inform the end-user client about any changes to the state it has stored locally, reducing the staleness of the client-side state. In terms of our model of write path and read path, actively pushing state changes all the way to client devices means extending the write path all the way to the end user.
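In browser terms, the push model the passage describes needs only a few lines. A rough sketch follows, assuming a hypothetical /changes endpoint (and ws://example.com/changes for the WebSocket variant) that delivers one JSON-encoded change per message; neither the URLs nor the {key, value} message shape comes from the book.

// Keep locally stored state fresh by subscribing to server-pushed changes
// instead of polling for them.
var cache = {};

function applyChange(change) {
  // Assumed message shape: {"key": "...", "value": ...}
  cache[change.key] = change.value;
}

// Server-sent events: the browser keeps an open HTTP connection and the
// server writes a new event onto it whenever the state changes.
var events = new EventSource('/changes');
events.onmessage = function (event) {
  applyChange(JSON.parse(event.data));
};

// A WebSocket works the same way for pushes, and additionally lets the
// client send messages back over the same TCP connection.
var socket = new WebSocket('ws://example.com/changes');
socket.onmessage = function (event) {
  applyChange(JSON.parse(event.data));
};

The read path is unchanged: the application still renders from its local cache, but that cache now follows the server's state instead of going stale between explicit polls.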

The opposite of bounded. 558 | Glossary Index A aborts (transactions), 222, 224 in two-phase commit, 356 performance of optimistic concurrency con‐ trol, 266 retrying aborted transactions, 231 abstraction, 21, 27, 222, 266, 321 access path (in network model), 37, 60 accidental complexity, removing, 21 accountability, 535 ACID properties (transactions), 90, 223 atomicity, 223, 228 consistency, 224, 529 durability, 226 isolation, 225, 228 acknowledgements (messaging), 445 active/active replication (see multi-leader repli‐ cation) active/passive replication (see leader-based rep‐ lication) ActiveMQ (messaging), 137, 444 distributed transaction support, 361 ActiveRecord (object-relational mapper), 30, 232 actor model, 138 (see also message-passing) comparison to Pregel model, 425 comparison to stream processing, 468 Advanced Message Queuing Protocol (see AMQP) aerospace systems, 6, 10, 305, 372 aggregation data cubes and materialized views, 101 in batch processes, 406 in stream processes, 466 aggregation pipeline query language, 48 Agile, 22 minimizing irreversibility, 414, 497 moving faster with confidence, 532 Unix philosophy, 394 agreement, 365 (see also consensus) Airflow (workflow scheduler), 402 Ajax, 131 Akka (actor framework), 139 algorithms algorithm correctness, 308 B-trees, 79-83 for distributed systems, 306 hash indexes, 72-75 mergesort, 76, 402, 405 red-black trees, 78 SSTables and LSM-trees, 76-79 all-to-all replication topologies, 175 AllegroGraph (database), 50 ALTER TABLE statement (SQL), 40, 111 Amazon Dynamo (database), 177 Amazon Web Services (AWS), 8 Kinesis Streams (messaging), 448 network reliability, 279 postmortems, 9 RedShift (database), 93 S3 (object storage), 398 checking data integrity, 530 amplification of bias, 534 of failures, 364, 495 Index | 559 of tail latency, 16, 207 write amplification, 84 AMQP (Advanced Message Queuing Protocol), 444 (see also messaging systems) comparison to log-based messaging, 448, 451 message ordering, 446 analytics, 90 comparison to transaction processing, 91 data warehousing (see data warehousing) parallel query execution in MPP databases, 415 predictive (see predictive analytics) relation to batch processing, 411 schemas for, 93-95 snapshot isolation for queries, 238 stream analytics, 466 using MapReduce, analysis of user activity events (example), 404 anti-caching (in-memory databases), 89 anti-entropy, 178 Apache ActiveMQ (see ActiveMQ) Apache Avro (see Avro) Apache Beam (see Beam) Apache BookKeeper (see BookKeeper) Apache Cassandra (see Cassandra) Apache CouchDB (see CouchDB) Apache Curator (see Curator) Apache Drill (see Drill) Apache Flink (see Flink) Apache Giraph (see Giraph) Apache Hadoop (see Hadoop) Apache HAWQ (see HAWQ) Apache HBase (see HBase) Apache Helix (see Helix) Apache Hive (see Hive) Apache Impala (see Impala) Apache Jena (see Jena) Apache Kafka (see Kafka) Apache Lucene (see Lucene) Apache MADlib (see MADlib) Apache Mahout (see Mahout) Apache Oozie (see Oozie) Apache Parquet (see Parquet) Apache Qpid (see Qpid) Apache Samza (see Samza) Apache Solr (see Solr) Apache Spark (see Spark) 560 | Index Apache Storm (see Storm) Apache Tajo (see Tajo) Apache Tez (see Tez) Apache Thrift (see Thrift) Apache ZooKeeper (see ZooKeeper) Apama (stream analytics), 466 append-only B-trees, 82, 242 append-only files (see logs) Application Programming Interfaces (APIs), 5, 27 for batch processing, 403 for change streams, 456 for distributed transactions, 361 for graph processing, 425 for services, 131-136 (see also services) 
evolvability, 136 RESTful, 133 SOAP, 133 application state (see state) approximate search (see similarity search) archival storage, data from databases, 131 arcs (see edges) arithmetic mean, 14 ASCII text, 119, 395 ASN.1 (schema language), 127 asynchronous networks, 278, 553 comparison to synchronous networks, 284 formal model, 307 asynchronous replication, 154, 553 conflict detection, 172 data loss on failover, 157 reads from asynchronous follower, 162 Asynchronous Transfer Mode (ATM), 285 atomic broadcast (see total order broadcast) atomic clocks (caesium clocks), 294, 295 (see also clocks) atomicity (concurrency), 553 atomic increment-and-get, 351 compare-and-set, 245, 327 (see also compare-and-set operations) replicated operations, 246 write operations, 243 atomicity (transactions), 223, 228, 553 atomic commit, 353 avoiding, 523, 528 blocking and nonblocking, 359 in stream processing, 360, 477 maintaining derived data, 453 for multi-object transactions, 229 for single-object writes, 230 auditability, 528-533 designing for, 531 self-auditing systems, 530 through immutability, 460 tools for auditable data systems, 532 availability, 8 (see also fault tolerance) in CAP theorem, 337 in service level agreements (SLAs), 15 Avro (data format), 122-127 code generation, 127 dynamically generated schemas, 126 object container files, 125, 131, 414 reader determining writer’s schema, 125 schema evolution, 123 use in Hadoop, 414 awk (Unix tool), 391 AWS (see Amazon Web Services) Azure (see Microsoft) B B-trees (indexes), 79-83 append-only/copy-on-write variants, 82, 242 branching factor, 81 comparison to LSM-trees, 83-85 crash recovery, 82 growing by splitting a page, 81 optimizations, 82 similarity to dynamic partitioning, 212 backpressure, 441, 553 in TCP, 282 backups database snapshot for replication, 156 integrity of, 530 snapshot isolation for, 238 use for ETL processes, 405 backward compatibility, 112 BASE, contrast to ACID, 223 bash shell (Unix), 70, 395, 503 batch processing, 28, 389-431, 553 combining with stream processing lambda architecture, 497 unifying technologies, 498 comparison to MPP databases, 414-418 comparison to stream processing, 464 comparison to Unix, 413-414 dataflow engines, 421-423 fault tolerance, 406, 414, 422, 442 for data integration, 494-498 graphs and iterative processing, 424-426 high-level APIs and languages, 403, 426-429 log-based messaging and, 451 maintaining derived state, 495 MapReduce and distributed filesystems, 397-413 (see also MapReduce) measuring performance, 13, 390 outputs, 411-413 key-value stores, 412 search indexes, 411 using Unix tools (example), 391-394 Bayou (database), 522 Beam (dataflow library), 498 bias, 534 big ball of mud, 20 Bigtable data model, 41, 99 binary data encodings, 115-128 Avro, 122-127 MessagePack, 116-117 Thrift and Protocol Buffers, 117-121 binary encoding based on schemas, 127 by network drivers, 128 binary strings, lack of support in JSON and XML, 114 BinaryProtocol encoding (Thrift), 118 Bitcask (storage engine), 72 crash recovery, 74 Bitcoin (cryptocurrency), 532 Byzantine fault tolerance, 305 concurrency bugs in exchanges, 233 bitmap indexes, 97 blockchains, 532 Byzantine fault tolerance, 305 blocking atomic commit, 359 Bloom (programming language), 504 Bloom filter (algorithm), 79, 466 BookKeeper (replicated log), 372 Bottled Water (change data capture), 455 bounded datasets, 430, 439, 553 (see also batch processing) bounded delays, 553 in networks, 285 process pauses, 298 broadcast hash joins, 409 Index | 561 
brokerless messaging, 442 Brubeck (metrics aggregator), 442 BTM (transaction coordinator), 356 bulk synchronous parallel (BSP) model, 425 bursty network traffic patterns, 285 business data processing, 28, 90, 390 byte sequence, encoding data in, 112 Byzantine faults, 304-306, 307, 553 Byzantine fault-tolerant systems, 305, 532 Byzantine Generals Problem, 304 consensus algorithms and, 366 C caches, 89, 553 and materialized views, 101 as derived data, 386, 499-504 database as cache of transaction log, 460 in CPUs, 99, 338, 428 invalidation and maintenance, 452, 467 linearizability, 324 CAP theorem, 336-338, 554 Cascading (batch processing), 419, 427 hash joins, 409 workflows, 403 cascading failures, 9, 214, 281 Cascalog (batch processing), 60 Cassandra (database) column-family data model, 41, 99 compaction strategy, 79 compound primary key, 204 gossip protocol, 216 hash partitioning, 203-205 last-write-wins conflict resolution, 186, 292 leaderless replication, 177 linearizability, lack of, 335 log-structured storage, 78 multi-datacenter support, 184 partitioning scheme, 213 secondary indexes, 207 sloppy quorums, 184 cat (Unix tool), 391 causal context, 191 (see also causal dependencies) causal dependencies, 186-191 capturing, 191, 342, 494, 514 by total ordering, 493 causal ordering, 339 in transactions, 262 sending message to friends (example), 494 562 | Index causality, 554 causal ordering, 339-343 linearizability and, 342 total order consistent with, 344, 345 consistency with, 344-347 consistent snapshots, 340 happens-before relationship, 186 in serializable transactions, 262-265 mismatch with clocks, 292 ordering events to capture, 493 violations of, 165, 176, 292, 340 with synchronized clocks, 294 CEP (see complex event processing) certificate transparency, 532 chain replication, 155 linearizable reads, 351 change data capture, 160, 454 API support for change streams, 456 comparison to event sourcing, 457 implementing, 454 initial snapshot, 455 log compaction, 456 changelogs, 460 change data capture, 454 for operator state, 479 generating with triggers, 455 in stream joins, 474 log compaction, 456 maintaining derived state, 452 Chaos Monkey, 7, 280 checkpointing in batch processors, 422, 426 in high-performance computing, 275 in stream processors, 477, 523 chronicle data model, 458 circuit-switched networks, 284 circular buffers, 450 circular replication topologies, 175 clickstream data, analysis of, 404 clients calling services, 131 pushing state changes to, 512 request routing, 214 stateful and offline-capable, 170, 511 clocks, 287-299 atomic (caesium) clocks, 294, 295 confidence interval, 293-295 for global snapshots, 294 logical (see logical clocks) skew, 291-294, 334 slewing, 289 synchronization and accuracy, 289-291 synchronization using GPS, 287, 290, 294, 295 time-of-day versus monotonic clocks, 288 timestamping events, 471 cloud computing, 146, 275 need for service discovery, 372 network glitches, 279 shared resources, 284 single-machine reliability, 8 Cloudera Impala (see Impala) clustered indexes, 86 CODASYL model, 36 (see also network model) code generation with Avro, 127 with Thrift and Protocol Buffers, 118 with WSDL, 133 collaborative editing multi-leader replication and, 170 column families (Bigtable), 41, 99 column-oriented storage, 95-101 column compression, 97 distinction between column families and, 99 in batch processors, 428 Parquet, 96, 131, 414 sort order in, 99-100 vectorized processing, 99, 428 writing to, 101 comma-separated values (see CSV) command query 
responsibility segregation (CQRS), 462 commands (event sourcing), 459 commits (transactions), 222 atomic commit, 354-355 (see also atomicity; transactions) read committed isolation, 234 three-phase commit (3PC), 359 two-phase commit (2PC), 355-359 commutative operations, 246 compaction of changelogs, 456 (see also log compaction) for stream operator state, 479 of log-structured storage, 73 issues with, 84 size-tiered and leveled approaches, 79 CompactProtocol encoding (Thrift), 119 compare-and-set operations, 245, 327 implementing locks, 370 implementing uniqueness constraints, 331 implementing with total order broadcast, 350 relation to consensus, 335, 350, 352, 374 relation to transactions, 230 compatibility, 112, 128 calling services, 136 properties of encoding formats, 139 using databases, 129-131 using message-passing, 138 compensating transactions, 355, 461, 526 complex event processing (CEP), 465 complexity distilling in theoretical models, 310 hiding using abstraction, 27 of software systems, managing, 20 composing data systems (see unbundling data‐ bases) compute-intensive applications, 3, 275 concatenated indexes, 87 in Cassandra, 204 Concord (stream processor), 466 concurrency actor programming model, 138, 468 (see also message-passing) bugs from weak transaction isolation, 233 conflict resolution, 171, 174 detecting concurrent writes, 184-191 dual writes, problems with, 453 happens-before relationship, 186 in replicated systems, 161-191, 324-338 lost updates, 243 multi-version concurrency control (MVCC), 239 optimistic concurrency control, 261 ordering of operations, 326, 341 reducing, through event logs, 351, 462, 507 time and relativity, 187 transaction isolation, 225 write skew (transaction isolation), 246-251 conflict-free replicated datatypes (CRDTs), 174 conflicts conflict detection, 172 causal dependencies, 186, 342 in consensus algorithms, 368 in leaderless replication, 184 Index | 563 in log-based systems, 351, 521 in nonlinearizable systems, 343 in serializable snapshot isolation (SSI), 264 in two-phase commit, 357, 364 conflict resolution automatic conflict resolution, 174 by aborting transactions, 261 by apologizing, 527 convergence, 172-174 in leaderless systems, 190 last write wins (LWW), 186, 292 using atomic operations, 246 using custom logic, 173 determining what is a conflict, 174, 522 in multi-leader replication, 171-175 avoiding conflicts, 172 lost updates, 242-246 materializing, 251 relation to operation ordering, 339 write skew (transaction isolation), 246-251 congestion (networks) avoidance, 282 limiting accuracy of clocks, 293 queueing delays, 282 consensus, 321, 364-375, 554 algorithms, 366-368 preventing split brain, 367 safety and liveness properties, 365 using linearizable operations, 351 cost of, 369 distributed transactions, 352-375 in practice, 360-364 two-phase commit, 354-359 XA transactions, 361-364 impossibility of, 353 membership and coordination services, 370-373 relation to compare-and-set, 335, 350, 352, 374 relation to replication, 155, 349 relation to uniqueness constraints, 521 consistency, 224, 524 across different databases, 157, 452, 462, 492 causal, 339-348, 493 consistent prefix reads, 165-167 consistent snapshots, 156, 237-242, 294, 455, 500 (see also snapshots) 564 | Index crash recovery, 82 enforcing constraints (see constraints) eventual, 162, 322 (see also eventual consistency) in ACID transactions, 224, 529 in CAP theorem, 337 linearizability, 324-338 meanings of, 224 monotonic reads, 164-165 of secondary indexes, 231, 241, 
354, 491, 500 ordering guarantees, 339-352 read-after-write, 162-164 sequential, 351 strong (see linearizability) timeliness and integrity, 524 using quorums, 181, 334 consistent hashing, 204 consistent prefix reads, 165 constraints (databases), 225, 248 asynchronously checked, 526 coordination avoidance, 527 ensuring idempotence, 519 in log-based systems, 521-524 across multiple partitions, 522 in two-phase commit, 355, 357 relation to consensus, 374, 521 relation to event ordering, 347 requiring linearizability, 330 Consul (service discovery), 372 consumers (message streams), 137, 440 backpressure, 441 consumer offsets in logs, 449 failures, 445, 449 fan-out, 11, 445, 448 load balancing, 444, 448 not keeping up with producers, 441, 450, 502 context switches, 14, 297 convergence (conflict resolution), 172-174, 322 coordination avoidance, 527 cross-datacenter, 168, 493 cross-partition ordering, 256, 294, 348, 523 services, 330, 370-373 coordinator (in 2PC), 356 failure, 358 in XA transactions, 361-364 recovery, 363 copy-on-write (B-trees), 82, 242 CORBA (Common Object Request Broker Architecture), 134 correctness, 6 auditability, 528-533 Byzantine fault tolerance, 305, 532 dealing with partial failures, 274 in log-based systems, 521-524 of algorithm within system model, 308 of compensating transactions, 355 of consensus, 368 of derived data, 497, 531 of immutable data, 461 of personal data, 535, 540 of time, 176, 289-295 of transactions, 225, 515, 529 timeliness and integrity, 524-528 corruption of data detecting, 519, 530-533 due to pathological memory access, 529 due to radiation, 305 due to split brain, 158, 302 due to weak transaction isolation, 233 formalization in consensus, 366 integrity as absence of, 524 network packets, 306 on disks, 227 preventing using write-ahead logs, 82 recovering from, 414, 460 Couchbase (database) durability, 89 hash partitioning, 203-204, 211 rebalancing, 213 request routing, 216 CouchDB (database) B-tree storage, 242 change feed, 456 document data model, 31 join support, 34 MapReduce support, 46, 400 replication, 170, 173 covering indexes, 86 CPUs cache coherence and memory barriers, 338 caching and pipelining, 99, 428 increasing parallelism, 43 CRDTs (see conflict-free replicated datatypes) CREATE INDEX statement (SQL), 85, 500 credit rating agencies, 535 Crunch (batch processing), 419, 427 hash joins, 409 sharded joins, 408 workflows, 403 cryptography defense against attackers, 306 end-to-end encryption and authentication, 519, 543 proving integrity of data, 532 CSS (Cascading Style Sheets), 44 CSV (comma-separated values), 70, 114, 396 Curator (ZooKeeper recipes), 330, 371 curl (Unix tool), 135, 397 cursor stability, 243 Cypher (query language), 52 comparison to SPARQL, 59 D data corruption (see corruption of data) data cubes, 102 data formats (see encoding) data integration, 490-498, 543 batch and stream processing, 494-498 lambda architecture, 497 maintaining derived state, 495 reprocessing data, 496 unifying, 498 by unbundling databases, 499-515 comparison to federated databases, 501 combining tools by deriving data, 490-494 derived data versus distributed transac‐ tions, 492 limits of total ordering, 493 ordering events to capture causality, 493 reasoning about dataflows, 491 need for, 385 data lakes, 415 data locality (see locality) data models, 27-64 graph-like models, 49-63 Datalog language, 60-63 property graphs, 50 RDF and triple-stores, 55-59 query languages, 42-48 relational model versus document model, 28-42 data protection regulations, 
542 data systems, 3 about, 4 Index | 565 concerns when designing, 5 future of, 489-544 correctness, constraints, and integrity, 515-533 data integration, 490-498 unbundling databases, 499-515 heterogeneous, keeping in sync, 452 maintainability, 18-22 possible faults in, 221 reliability, 6-10 hardware faults, 7 human errors, 9 importance of, 10 software errors, 8 scalability, 10-18 unreliable clocks, 287-299 data warehousing, 91-95, 554 comparison to data lakes, 415 ETL (extract-transform-load), 92, 416, 452 keeping data systems in sync, 452 schema design, 93 slowly changing dimension (SCD), 476 data-intensive applications, 3 database triggers (see triggers) database-internal distributed transactions, 360, 364, 477 databases archival storage, 131 comparison of message brokers to, 443 dataflow through, 129 end-to-end argument for, 519-520 checking integrity, 531 inside-out, 504 (see also unbundling databases) output from batch workflows, 412 relation to event streams, 451-464 (see also changelogs) API support for change streams, 456, 506 change data capture, 454-457 event sourcing, 457-459 keeping systems in sync, 452-453 philosophy of immutable events, 459-464 unbundling, 499-515 composing data storage technologies, 499-504 designing applications around dataflow, 504-509 566 | Index observing derived state, 509-515 datacenters geographically distributed, 145, 164, 278, 493 multi-tenancy and shared resources, 284 network architecture, 276 network faults, 279 replication across multiple, 169 leaderless replication, 184 multi-leader replication, 168, 335 dataflow, 128-139, 504-509 correctness of dataflow systems, 525 differential, 504 message-passing, 136-139 reasoning about, 491 through databases, 129 through services, 131-136 dataflow engines, 421-423 comparison to stream processing, 464 directed acyclic graphs (DAG), 424 partitioning, approach to, 429 support for declarative queries, 427 Datalog (query language), 60-63 datatypes binary strings in XML and JSON, 114 conflict-free, 174 in Avro encodings, 122 in Thrift and Protocol Buffers, 121 numbers in XML and JSON, 114 Datomic (database) B-tree storage, 242 data model, 50, 57 Datalog query language, 60 excision (deleting data), 463 languages for transactions, 255 serial execution of transactions, 253 deadlocks detection, in two-phase commit (2PC), 364 in two-phase locking (2PL), 258 Debezium (change data capture), 455 declarative languages, 42, 554 Bloom, 504 CSS and XSL, 44 Cypher, 52 Datalog, 60 for batch processing, 427 recursive SQL queries, 53 relational algebra and SQL, 42 SPARQL, 59 delays bounded network delays, 285 bounded process pauses, 298 unbounded network delays, 282 unbounded process pauses, 296 deleting data, 463 denormalization (data representation), 34, 554 costs, 39 in derived data systems, 386 materialized views, 101 updating derived data, 228, 231, 490 versus normalization, 462 derived data, 386, 439, 554 from change data capture, 454 in event sourcing, 458-458 maintaining derived state through logs, 452-457, 459-463 observing, by subscribing to streams, 512 outputs of batch and stream processing, 495 through application code, 505 versus distributed transactions, 492 deterministic operations, 255, 274, 554 accidental nondeterminism, 423 and fault tolerance, 423, 426 and idempotence, 478, 492 computing derived data, 495, 526, 531 in state machine replication, 349, 452, 458 joins, 476 DevOps, 394 differential dataflow, 504 dimension tables, 94 dimensional modeling (see star schemas) directed acyclic graphs (DAGs), 424 
dirty reads (transaction isolation), 234 dirty writes (transaction isolation), 235 discrimination, 534 disks (see hard disks) distributed actor frameworks, 138 distributed filesystems, 398-399 decoupling from query engines, 417 indiscriminately dumping data into, 415 use by MapReduce, 402 distributed systems, 273-312, 554 Byzantine faults, 304-306 cloud versus supercomputing, 275 detecting network faults, 280 faults and partial failures, 274-277 formalization of consensus, 365 impossibility results, 338, 353 issues with failover, 157 limitations of distributed transactions, 363 multi-datacenter, 169, 335 network problems, 277-286 quorums, relying on, 301 reasons for using, 145, 151 synchronized clocks, relying on, 291-295 system models, 306-310 use of clocks and time, 287 distributed transactions (see transactions) Django (web framework), 232 DNS (Domain Name System), 216, 372 Docker (container manager), 506 document data model, 30-42 comparison to relational model, 38-42 document references, 38, 403 document-oriented databases, 31 many-to-many relationships and joins, 36 multi-object transactions, need for, 231 versus relational model convergence of models, 41 data locality, 41 document-partitioned indexes, 206, 217, 411 domain-driven design (DDD), 457 DRBD (Distributed Replicated Block Device), 153 drift (clocks), 289 Drill (query engine), 93 Druid (database), 461 Dryad (dataflow engine), 421 dual writes, problems with, 452, 507 duplicates, suppression of, 517 (see also idempotence) using a unique ID, 518, 522 durability (transactions), 226, 554 duration (time), 287 measurement with monotonic clocks, 288 dynamic partitioning, 212 dynamically typed languages analogy to schema-on-read, 40 code generation and, 127 Dynamo-style databases (see leaderless replica‐ tion) E edges (in graphs), 49, 403 property graph model, 50 edit distance (full-text search), 88 effectively-once semantics, 476, 516 Index | 567 (see also exactly-once semantics) preservation of integrity, 525 elastic systems, 17 Elasticsearch (search server) document-partitioned indexes, 207 partition rebalancing, 211 percolator (stream search), 467 usage example, 4 use of Lucene, 79 ElephantDB (database), 413 Elm (programming language), 504, 512 encodings (data formats), 111-128 Avro, 122-127 binary variants of JSON and XML, 115 compatibility, 112 calling services, 136 using databases, 129-131 using message-passing, 138 defined, 113 JSON, XML, and CSV, 114 language-specific formats, 113 merits of schemas, 127 representations of data, 112 Thrift and Protocol Buffers, 117-121 end-to-end argument, 277, 519-520 checking integrity, 531 publish/subscribe streams, 512 enrichment (stream), 473 Enterprise JavaBeans (EJB), 134 entities (see vertices) epoch (consensus algorithms), 368 epoch (Unix timestamps), 288 equi-joins, 403 erasure coding (error correction), 398 Erlang OTP (actor framework), 139 error handling for network faults, 280 in transactions, 231 error-correcting codes, 277, 398 Esper (CEP engine), 466 etcd (coordination service), 370-373 linearizable operations, 333 locks and leader election, 330 quorum reads, 351 service discovery, 372 use of Raft algorithm, 349, 353 Ethereum (blockchain), 532 Ethernet (networks), 276, 278, 285 packet checksums, 306, 519 568 | Index Etherpad (collaborative editor), 170 ethics, 533-543 code of ethics and professional practice, 533 legislation and self-regulation, 542 predictive analytics, 533-536 amplifying bias, 534 feedback loops, 536 privacy and tracking, 536-543 consent and freedom of 
choice, 538 data as assets and power, 540 meaning of privacy, 539 surveillance, 537 respect, dignity, and agency, 543, 544 unintended consequences, 533, 536 ETL (extract-transform-load), 92, 405, 452, 554 use of Hadoop for, 416 event sourcing, 457-459 commands and events, 459 comparison to change data capture, 457 comparison to lambda architecture, 497 deriving current state from event log, 458 immutability and auditability, 459, 531 large, reliable data systems, 519, 526 Event Store (database), 458 event streams (see streams) events, 440 deciding on total order of, 493 deriving views from event log, 461 difference to commands, 459 event time versus processing time, 469, 477, 498 immutable, advantages of, 460, 531 ordering to capture causality, 493 reads as, 513 stragglers, 470, 498 timestamp of, in stream processing, 471 EventSource (browser API), 512 eventual consistency, 152, 162, 308, 322 (see also conflicts) and perpetual inconsistency, 525 evolvability, 21, 111 calling services, 136 graph-structured data, 52 of databases, 40, 129-131, 461, 497 of message-passing, 138 reprocessing data, 496, 498 schema evolution in Avro, 123 schema evolution in Thrift and Protocol Buffers, 120 schema-on-read, 39, 111, 128 exactly-once semantics, 360, 476, 516 parity with batch processors, 498 preservation of integrity, 525 exclusive mode (locks), 258 eXtended Architecture transactions (see XA transactions) extract-transform-load (see ETL) F Facebook Presto (query engine), 93 React, Flux, and Redux (user interface libra‐ ries), 512 social graphs, 49 Wormhole (change data capture), 455 fact tables, 93 failover, 157, 554 (see also leader-based replication) in leaderless replication, absence of, 178 leader election, 301, 348, 352 potential problems, 157 failures amplification by distributed transactions, 364, 495 failure detection, 280 automatic rebalancing causing cascading failures, 214 perfect failure detectors, 359 timeouts and unbounded delays, 282, 284 using ZooKeeper, 371 faults versus, 7 partial failures in distributed systems, 275-277, 310 fan-out (messaging systems), 11, 445 fault tolerance, 6-10, 555 abstractions for, 321 formalization in consensus, 365-369 use of replication, 367 human fault tolerance, 414 in batch processing, 406, 414, 422, 425 in log-based systems, 520, 524-526 in stream processing, 476-479 atomic commit, 477 idempotence, 478 maintaining derived state, 495 microbatching and checkpointing, 477 rebuilding state after a failure, 478 of distributed transactions, 362-364 transaction atomicity, 223, 354-361 faults, 6 Byzantine faults, 304-306 failures versus, 7 handled by transactions, 221 handling in supercomputers and cloud computing, 275 hardware, 7 in batch processing versus distributed data‐ bases, 417 in distributed systems, 274-277 introducing deliberately, 7, 280 network faults, 279-281 asymmetric faults, 300 detecting, 280 tolerance of, in multi-leader replication, 169 software errors, 8 tolerating (see fault tolerance) federated databases, 501 fence (CPU instruction), 338 fencing (preventing split brain), 158, 302-304 generating fencing tokens, 349, 370 properties of fencing tokens, 308 stream processors writing to databases, 478, 517 Fibre Channel (networks), 398 field tags (Thrift and Protocol Buffers), 119-121 file descriptors (Unix), 395 financial data, 460 Firebase (database), 456 Flink (processing framework), 421-423 dataflow APIs, 427 fault tolerance, 422, 477, 479 Gelly API (graph processing), 425 integration of batch and stream processing, 495, 498 machine 
learning, 428 query optimizer, 427 stream processing, 466 flow control, 282, 441, 555 FLP result (on consensus), 353 FlumeJava (dataflow library), 403, 427 followers, 152, 555 (see also leader-based replication) foreign keys, 38, 403 forward compatibility, 112 forward decay (algorithm), 16 Index | 569 Fossil (version control system), 463 shunning (deleting data), 463 FoundationDB (database) serializable transactions, 261, 265, 364 fractal trees, 83 full table scans, 403 full-text search, 555 and fuzzy indexes, 88 building search indexes, 411 Lucene storage engine, 79 functional reactive programming (FRP), 504 functional requirements, 22 futures (asynchronous operations), 135 fuzzy search (see similarity search) G garbage collection immutability and, 463 process pauses for, 14, 296-299, 301 (see also process pauses) genome analysis, 63, 429 geographically distributed datacenters, 145, 164, 278, 493 geospatial indexes, 87 Giraph (graph processing), 425 Git (version control system), 174, 342, 463 GitHub, postmortems, 157, 158, 309 global indexes (see term-partitioned indexes) GlusterFS (distributed filesystem), 398 GNU Coreutils (Linux), 394 GoldenGate (change data capture), 161, 170, 455 (see also Oracle) Google Bigtable (database) data model (see Bigtable data model) partitioning scheme, 199, 202 storage layout, 78 Chubby (lock service), 370 Cloud Dataflow (stream processor), 466, 477, 498 (see also Beam) Cloud Pub/Sub (messaging), 444, 448 Docs (collaborative editor), 170 Dremel (query engine), 93, 96 FlumeJava (dataflow library), 403, 427 GFS (distributed file system), 398 gRPC (RPC framework), 135 MapReduce (batch processing), 390 570 | Index (see also MapReduce) building search indexes, 411 task preemption, 418 Pregel (graph processing), 425 Spanner (see Spanner) TrueTime (clock API), 294 gossip protocol, 216 government use of data, 541 GPS (Global Positioning System) use for clock synchronization, 287, 290, 294, 295 GraphChi (graph processing), 426 graphs, 555 as data models, 49-63 example of graph-structured data, 49 property graphs, 50 RDF and triple-stores, 55-59 versus the network model, 60 processing and analysis, 424-426 fault tolerance, 425 Pregel processing model, 425 query languages Cypher, 52 Datalog, 60-63 recursive SQL queries, 53 SPARQL, 59-59 Gremlin (graph query language), 50 grep (Unix tool), 392 GROUP BY clause (SQL), 406 grouping records in MapReduce, 406 handling skew, 407 H Hadoop (data infrastructure) comparison to distributed databases, 390 comparison to MPP databases, 414-418 comparison to Unix, 413-414, 499 diverse processing models in ecosystem, 417 HDFS distributed filesystem (see HDFS) higher-level tools, 403 join algorithms, 403-410 (see also MapReduce) MapReduce (see MapReduce) YARN (see YARN) happens-before relationship, 340 capturing, 187 concurrency and, 186 hard disks access patterns, 84 detecting corruption, 519, 530 faults in, 7, 227 sequential write throughput, 75, 450 hardware faults, 7 hash indexes, 72-75 broadcast hash joins, 409 partitioned hash joins, 409 hash partitioning, 203-205, 217 consistent hashing, 204 problems with hash mod N, 210 range queries, 204 suitable hash functions, 203 with fixed number of partitions, 210 HAWQ (database), 428 HBase (database) bug due to lack of fencing, 302 bulk loading, 413 column-family data model, 41, 99 dynamic partitioning, 212 key-range partitioning, 202 log-structured storage, 78 request routing, 216 size-tiered compaction, 79 use of HDFS, 417 use of ZooKeeper, 370 HDFS (Hadoop Distributed File System), 
398-399 (see also distributed filesystems) checking data integrity, 530 decoupling from query engines, 417 indiscriminately dumping data into, 415 metadata about datasets, 410 NameNode, 398 use by Flink, 479 use by HBase, 212 use by MapReduce, 402 HdrHistogram (numerical library), 16 head (Unix tool), 392 head vertex (property graphs), 51 head-of-line blocking, 15 heap files (databases), 86 Helix (cluster manager), 216 heterogeneous distributed transactions, 360, 364 heuristic decisions (in 2PC), 363 Hibernate (object-relational mapper), 30 hierarchical model, 36 high availability (see fault tolerance) high-frequency trading, 290, 299 high-performance computing (HPC), 275 hinted handoff, 183 histograms, 16 Hive (query engine), 419, 427 for data warehouses, 93 HCatalog and metastore, 410 map-side joins, 409 query optimizer, 427 skewed joins, 408 workflows, 403 Hollerith machines, 390 hopping windows (stream processing), 472 (see also windows) horizontal scaling (see scaling out) HornetQ (messaging), 137, 444 distributed transaction support, 361 hot spots, 201 due to celebrities, 205 for time-series data, 203 in batch processing, 407 relieving, 205 hot standbys (see leader-based replication) HTTP, use in APIs (see services) human errors, 9, 279, 414 HyperDex (database), 88 HyperLogLog (algorithm), 466 I I/O operations, waiting for, 297 IBM DB2 (database) distributed transaction support, 361 recursive query support, 54 serializable isolation, 242, 257 XML and JSON support, 30, 42 electromechanical card-sorting machines, 390 IMS (database), 36 imperative query APIs, 46 InfoSphere Streams (CEP engine), 466 MQ (messaging), 444 distributed transaction support, 361 System R (database), 222 WebSphere (messaging), 137 idempotence, 134, 478, 555 by giving operations unique IDs, 518, 522 idempotent operations, 517 immutability advantages of, 460, 531 Index | 571 deriving state from event log, 459-464 for crash recovery, 75 in B-trees, 82, 242 in event sourcing, 457 inputs to Unix commands, 397 limitations of, 463 Impala (query engine) for data warehouses, 93 hash joins, 409 native code generation, 428 use of HDFS, 417 impedance mismatch, 29 imperative languages, 42 setting element styles (example), 45 in doubt (transaction status), 358 holding locks, 362 orphaned transactions, 363 in-memory databases, 88 durability, 227 serial transaction execution, 253 incidents cascading failures, 9 crashes due to leap seconds, 290 data corruption and financial losses due to concurrency bugs, 233 data corruption on hard disks, 227 data loss due to last-write-wins, 173, 292 data on disks unreadable, 309 deleted items reappearing, 174 disclosure of sensitive data due to primary key reuse, 157 errors in transaction serializability, 529 gigabit network interface with 1 Kb/s throughput, 311 network faults, 279 network interface dropping only inbound packets, 279 network partitions and whole-datacenter failures, 275 poor handling of network faults, 280 sending message to ex-partner, 494 sharks biting undersea cables, 279 split brain due to 1-minute packet delay, 158, 279 vibrations in server rack, 14 violation of uniqueness constraint, 529 indexes, 71, 555 and snapshot isolation, 241 as derived data, 386, 499-504 572 | Index B-trees, 79-83 building in batch processes, 411 clustered, 86 comparison of B-trees and LSM-trees, 83-85 concatenated, 87 covering (with included columns), 86 creating, 500 full-text search, 88 geospatial, 87 hash, 72-75 index-range locking, 260 multi-column, 87 partitioning and secondary indexes, 
206-209, 217 secondary, 85 (see also secondary indexes) problems with dual writes, 452, 491 SSTables and LSM-trees, 76-79 updating when data changes, 452, 467 Industrial Revolution, 541 InfiniBand (networks), 285 InfiniteGraph (database), 50 InnoDB (storage engine) clustered index on primary key, 86 not preventing lost updates, 245 preventing write skew, 248, 257 serializable isolation, 257 snapshot isolation support, 239 inside-out databases, 504 (see also unbundling databases) integrating different data systems (see data integration) integrity, 524 coordination-avoiding data systems, 528 correctness of dataflow systems, 525 in consensus formalization, 365 integrity checks, 530 (see also auditing) end-to-end, 519, 531 use of snapshot isolation, 238 maintaining despite software bugs, 529 Interface Definition Language (IDL), 117, 122 intermediate state, materialization of, 420-423 internet services, systems for implementing, 275 invariants, 225 (see also constraints) inversion of control, 396 IP (Internet Protocol) unreliability of, 277 ISDN (Integrated Services Digital Network), 284 isolation (in transactions), 225, 228, 555 correctness and, 515 for single-object writes, 230 serializability, 251-266 actual serial execution, 252-256 serializable snapshot isolation (SSI), 261-266 two-phase locking (2PL), 257-261 violating, 228 weak isolation levels, 233-251 preventing lost updates, 242-246 read committed, 234-237 snapshot isolation, 237-242 iterative processing, 424-426 J Java Database Connectivity (JDBC) distributed transaction support, 361 network drivers, 128 Java Enterprise Edition (EE), 134, 356, 361 Java Message Service (JMS), 444 (see also messaging systems) comparison to log-based messaging, 448, 451 distributed transaction support, 361 message ordering, 446 Java Transaction API (JTA), 355, 361 Java Virtual Machine (JVM) bytecode generation, 428 garbage collection pauses, 296 process reuse in batch processors, 422 JavaScript in MapReduce querying, 46 setting element styles (example), 45 use in advanced queries, 48 Jena (RDF framework), 57 Jepsen (fault tolerance testing), 515 jitter (network delay), 284 joins, 555 by index lookup, 403 expressing as relational operators, 427 in relational and document databases, 34 MapReduce map-side joins, 408-410 broadcast hash joins, 409 merge joins, 410 partitioned hash joins, 409 MapReduce reduce-side joins, 403-408 handling skew, 407 sort-merge joins, 405 parallel execution of, 415 secondary indexes and, 85 stream joins, 472-476 stream-stream join, 473 stream-table join, 473 table-table join, 474 time-dependence of, 475 support in document databases, 42 JOTM (transaction coordinator), 356 JSON Avro schema representation, 122 binary variants, 115 for application data, issues with, 114 in relational databases, 30, 42 representing a résumé (example), 31 Juttle (query language), 504 K k-nearest neighbors, 429 Kafka (messaging), 137, 448 Kafka Connect (database integration), 457, 461 Kafka Streams (stream processor), 466, 467 fault tolerance, 479 leader-based replication, 153 log compaction, 456, 467 message offsets, 447, 478 request routing, 216 transaction support, 477 usage example, 4 Ketama (partitioning library), 213 key-value stores, 70 as batch process output, 412 hash indexes, 72-75 in-memory, 89 partitioning, 201-205 by hash of key, 203, 217 by key range, 202, 217 dynamic partitioning, 212 skew and hot spots, 205 Kryo (Java), 113 Kubernetes (cluster manager), 418, 506 L lambda architecture, 497 Lamport timestamps, 345 Index | 573 Large 
Hadron Collider (LHC), 64 last write wins (LWW), 173, 334 discarding concurrent writes, 186 problems with, 292 prone to lost updates, 246 late binding, 396 latency instability under two-phase locking, 259 network latency and resource utilization, 286 response time versus, 14 tail latency, 15, 207 leader-based replication, 152-161 (see also replication) failover, 157, 301 handling node outages, 156 implementation of replication logs change data capture, 454-457 (see also changelogs) statement-based, 158 trigger-based replication, 161 write-ahead log (WAL) shipping, 159 linearizability of operations, 333 locking and leader election, 330 log sequence number, 156, 449 read-scaling architecture, 161 relation to consensus, 367 setting up new followers, 155 synchronous versus asynchronous, 153-155 leaderless replication, 177-191 (see also replication) detecting concurrent writes, 184-191 capturing happens-before relationship, 187 happens-before relationship and concur‐ rency, 186 last write wins, 186 merging concurrently written values, 190 version vectors, 191 multi-datacenter, 184 quorums, 179-182 consistency limitations, 181-183, 334 sloppy quorums and hinted handoff, 183 read repair and anti-entropy, 178 leap seconds, 8, 290 in time-of-day clocks, 288 leases, 295 implementation with ZooKeeper, 370 574 | Index need for fencing, 302 ledgers, 460 distributed ledger technologies, 532 legacy systems, maintenance of, 18 less (Unix tool), 397 LevelDB (storage engine), 78 leveled compaction, 79 Levenshtein automata, 88 limping (partial failure), 311 linearizability, 324-338, 555 cost of, 335-338 CAP theorem, 336 memory on multi-core CPUs, 338 definition, 325-329 implementing with total order broadcast, 350 in ZooKeeper, 370 of derived data systems, 492, 524 avoiding coordination, 527 of different replication methods, 332-335 using quorums, 334 relying on, 330-332 constraints and uniqueness, 330 cross-channel timing dependencies, 331 locking and leader election, 330 stronger than causal consistency, 342 using to implement total order broadcast, 351 versus serializability, 329 LinkedIn Azkaban (workflow scheduler), 402 Databus (change data capture), 161, 455 Espresso (database), 31, 126, 130, 153, 216 Helix (cluster manager) (see Helix) profile (example), 30 reference to company entity (example), 34 Rest.li (RPC framework), 135 Voldemort (database) (see Voldemort) Linux, leap second bug, 8, 290 liveness properties, 308 LMDB (storage engine), 82, 242 load approaches to coping with, 17 describing, 11 load testing, 16 load balancing (messaging), 444 local indexes (see document-partitioned indexes) locality (data access), 32, 41, 555 in batch processing, 400, 405, 421 in stateful clients, 170, 511 in stream processing, 474, 478, 508, 522 location transparency, 134 in the actor model, 138 locks, 556 deadlock, 258 distributed locking, 301-304, 330 fencing tokens, 303 implementation with ZooKeeper, 370 relation to consensus, 374 for transaction isolation in snapshot isolation, 239 in two-phase locking (2PL), 257-261 making operations atomic, 243 performance, 258 preventing dirty writes, 236 preventing phantoms with index-range locks, 260, 265 read locks (shared mode), 236, 258 shared mode and exclusive mode, 258 in two-phase commit (2PC) deadlock detection, 364 in-doubt transactions holding locks, 362 materializing conflicts with, 251 preventing lost updates by explicit locking, 244 log sequence number, 156, 449 logic programming languages, 504 logical clocks, 293, 343, 494 for read-after-write consistency, 
164 logical logs, 160 logs (data structure), 71, 556 advantages of immutability, 460 compaction, 73, 79, 456, 460 for stream operator state, 479 creating using total order broadcast, 349 implementing uniqueness constraints, 522 log-based messaging, 446-451 comparison to traditional messaging, 448, 451 consumer offsets, 449 disk space usage, 450 replaying old messages, 451, 496, 498 slow consumers, 450 using logs for message storage, 447 log-structured storage, 71-79 log-structured merge tree (see LSMtrees) replication, 152, 158-161 change data capture, 454-457 (see also changelogs) coordination with snapshot, 156 logical (row-based) replication, 160 statement-based replication, 158 trigger-based replication, 161 write-ahead log (WAL) shipping, 159 scalability limits, 493 loose coupling, 396, 419, 502 lost updates (see updates) LSM-trees (indexes), 78-79 comparison to B-trees, 83-85 Lucene (storage engine), 79 building indexes in batch processes, 411 similarity search, 88 Luigi (workflow scheduler), 402 LWW (see last write wins) M machine learning ethical considerations, 534 (see also ethics) iterative processing, 424 models derived from training data, 505 statistical and numerical algorithms, 428 MADlib (machine learning toolkit), 428 magic scaling sauce, 18 Mahout (machine learning toolkit), 428 maintainability, 18-22, 489 defined, 23 design principles for software systems, 19 evolvability (see evolvability) operability, 19 simplicity and managing complexity, 20 many-to-many relationships in document model versus relational model, 39 modeling as graphs, 49 many-to-one and many-to-many relationships, 33-36 many-to-one relationships, 34 MapReduce (batch processing), 390, 399-400 accessing external services within job, 404, 412 comparison to distributed databases designing for frequent faults, 417 diversity of processing models, 416 diversity of storage, 415 Index | 575 comparison to stream processing, 464 comparison to Unix, 413-414 disadvantages and limitations of, 419 fault tolerance, 406, 414, 422 higher-level tools, 403, 426 implementation in Hadoop, 400-403 the shuffle, 402 implementation in MongoDB, 46-48 machine learning, 428 map-side processing, 408-410 broadcast hash joins, 409 merge joins, 410 partitioned hash joins, 409 mapper and reducer functions, 399 materialization of intermediate state, 419-423 output of batch workflows, 411-413 building search indexes, 411 key-value stores, 412 reduce-side processing, 403-408 analysis of user activity events (exam‐ ple), 404 grouping records by same key, 406 handling skew, 407 sort-merge joins, 405 workflows, 402 marshalling (see encoding) massively parallel processing (MPP), 216 comparison to composing storage technolo‐ gies, 502 comparison to Hadoop, 414-418, 428 master-master replication (see multi-leader replication) master-slave replication (see leader-based repli‐ cation) materialization, 556 aggregate values, 101 conflicts, 251 intermediate state (batch processing), 420-423 materialized views, 101 as derived data, 386, 499-504 maintaining, using stream processing, 467, 475 Maven (Java build tool), 428 Maxwell (change data capture), 455 mean, 14 media monitoring, 467 median, 14 576 | Index meeting room booking (example), 249, 259, 521 membership services, 372 Memcached (caching server), 4, 89 memory in-memory databases, 88 durability, 227 serial transaction execution, 253 in-memory representation of data, 112 random bit-flips in, 529 use by indexes, 72, 77 memory barrier (CPU instruction), 338 MemSQL (database) in-memory storage, 89 
read committed isolation, 236 memtable (in LSM-trees), 78 Mercurial (version control system), 463 merge joins, MapReduce map-side, 410 mergeable persistent data structures, 174 merging sorted files, 76, 402, 405 Merkle trees, 532 Mesos (cluster manager), 418, 506 message brokers (see messaging systems) message-passing, 136-139 advantages over direct RPC, 137 distributed actor frameworks, 138 evolvability, 138 MessagePack (encoding format), 116 messages exactly-once semantics, 360, 476 loss of, 442 using total order broadcast, 348 messaging systems, 440-451 (see also streams) backpressure, buffering, or dropping mes‐ sages, 441 brokerless messaging, 442 event logs, 446-451 comparison to traditional messaging, 448, 451 consumer offsets, 449 replaying old messages, 451, 496, 498 slow consumers, 450 message brokers, 443-446 acknowledgements and redelivery, 445 comparison to event logs, 448, 451 multiple consumers of same topic, 444 reliability, 442 uniqueness in log-based messaging, 522 Meteor (web framework), 456 microbatching, 477, 495 microservices, 132 (see also services) causal dependencies across services, 493 loose coupling, 502 relation to batch/stream processors, 389, 508 Microsoft Azure Service Bus (messaging), 444 Azure Storage, 155, 398 Azure Stream Analytics, 466 DCOM (Distributed Component Object Model), 134 MSDTC (transaction coordinator), 356 Orleans (see Orleans) SQL Server (see SQL Server) migrating (rewriting) data, 40, 130, 461, 497 modulus operator (%), 210 MongoDB (database) aggregation pipeline, 48 atomic operations, 243 BSON, 41 document data model, 31 hash partitioning (sharding), 203-204 key-range partitioning, 202 lack of join support, 34, 42 leader-based replication, 153 MapReduce support, 46, 400 oplog parsing, 455, 456 partition splitting, 212 request routing, 216 secondary indexes, 207 Mongoriver (change data capture), 455 monitoring, 10, 19 monotonic clocks, 288 monotonic reads, 164 MPP (see massively parallel processing) MSMQ (messaging), 361 multi-column indexes, 87 multi-leader replication, 168-177 (see also replication) handling write conflicts, 171 conflict avoidance, 172 converging toward a consistent state, 172 custom conflict resolution logic, 173 determining what is a conflict, 174 linearizability, lack of, 333 replication topologies, 175-177 use cases, 168 clients with offline operation, 170 collaborative editing, 170 multi-datacenter replication, 168, 335 multi-object transactions, 228 need for, 231 Multi-Paxos (total order broadcast), 367 multi-table index cluster tables (Oracle), 41 multi-tenancy, 284 multi-version concurrency control (MVCC), 239, 266 detecting stale MVCC reads, 263 indexes and snapshot isolation, 241 mutual exclusion, 261 (see also locks) MySQL (database) binlog coordinates, 156 binlog parsing for change data capture, 455 circular replication topology, 175 consistent snapshots, 156 distributed transaction support, 361 InnoDB storage engine (see InnoDB) JSON support, 30, 42 leader-based replication, 153 performance of XA transactions, 360 row-based replication, 160 schema changes in, 40 snapshot isolation support, 242 (see also InnoDB) statement-based replication, 159 Tungsten Replicator (multi-leader replica‐ tion), 170 conflict detection, 177 N nanomsg (messaging library), 442 Narayana (transaction coordinator), 356 NATS (messaging), 137 near-real-time (nearline) processing, 390 (see also stream processing) Neo4j (database) Cypher query language, 52 graph data model, 50 Nephele (dataflow engine), 421 netcat (Unix tool), 397 
web services (see services)
Web Services Description Language (WSDL), 133
webhooks, 443
webMethods (messaging), 137
WebSocket (protocol), 512
windows (stream processing), 466, 468-472


pages: 540 words: 103,101

Building Microservices by Sam Newman

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

airport security, Amazon Web Services, anti-pattern, business process, call centre, continuous integration, create, read, update, delete, defense in depth, don't repeat yourself, Edward Snowden, fault tolerance, index card, information retrieval, Infrastructure as a Service, inventory management, job automation, load shedding, loose coupling, platform as a service, premature optimization, pull request, recommendation engine, social graph, software as a service, source of truth, the built environment, web application, WebSocket, x509 certificate

REST over HTTP payloads can actually be more compact than SOAP because it supports alternative formats like JSON or even binary, but it will still be nowhere near as lean as a binary protocol such as Thrift. The per-request overhead of HTTP may also be a concern for low-latency requirements. While HTTP can be well suited to large volumes of traffic, it isn't great for low-latency communications compared to alternative protocols built directly on top of Transmission Control Protocol (TCP) or other networking technology. Despite the name, WebSockets, for example, have very little to do with the Web. After the initial HTTP handshake, a WebSocket is just a TCP connection between client and server, but it can be a much more efficient way for you to stream data to a browser. If this is something you're interested in, note that you aren't really using much of HTTP, let alone anything to do with REST. For server-to-server communications, if extremely low latency or small message size is important, HTTP communications in general may not be a good idea.
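
To make the streaming point concrete, here is a minimal sketch that opens a WebSocket, sends one message, and then just reads frames off what is, at that point, a plain TCP connection. It assumes the third-party Python websockets library, with a Python client standing in for the browser; the endpoint URL and the subscribe message are illustrative, not anything from the book.

import asyncio
import websockets  # third-party library, assumed installed


async def consume(url):
    # connect() performs the HTTP upgrade handshake; after that the
    # connection is just a long-lived TCP stream of frames.
    async with websockets.connect(url) as ws:
        await ws.send("subscribe:ticks")
        async for message in ws:  # connections are async-iterable
            print("received:", message)


# Hypothetical endpoint; nothing in the excerpt names a real service.
asyncio.run(consume("ws://localhost:8765/stream"))

Nothing after the handshake is HTTP, which is where the efficiency comes from: the single connection stays open and frames flow in either direction without per-request headers.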

Industry 4.0: The Industrial Internet of Things by Alasdair Gilchrist

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, additive manufacturing, Amazon Web Services, augmented reality, autonomous vehicles, barriers to entry, business intelligence, business process, chief data officer, cloud computing, connected car, cyber-physical system, deindustrialization, fault tolerance, global value chain, Google Glasses, hiring and firing, industrial robot, inflight wifi, Infrastructure as a Service, Internet of things, inventory management, job automation, low skilled workers, millennium bug, pattern recognition, peer-to-peer, platform as a service, pre–internet, race to the bottom, RFID, Skype, smart cities, smart grid, smart meter, smart transportation, software as a service, stealth mode startup, supply-chain management, trade route, web application, WebRTC, WebSocket, Y2K

IPv6 offers approximately 5 x 10^28 addresses for every person in the world, enabling any embedded object or device to have its own unique IP address and connect to the Internet. Especially designed for home or building automation, for example, IPv6 provides a basic transport mechanism for building complex control systems and for communicating with devices cost-effectively over a low-power wireless network. Designed to send IPv6 packets over IEEE 802.15.4-based networks and to implement open IP standards including TCP, UDP, HTTP, CoAP, MQTT, and web sockets, the standard offers end-to-end addressable nodes, which allows a router to connect the 6LoWPAN network to the wider IP network. 6LoWPAN is a mesh network that is robust, scalable, and self-healing: mesh router devices can route data destined for other devices, while hosts are able to sleep for long periods of time. RPL Routing is very challenging for low-power networks such as 6LoWPAN, because devices operate over poor, lossy radio links with only the limited power available to battery-supplied nodes.
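
As a rough illustration of what end-to-end addressable nodes make possible, the sketch below sends one sensor reading as a UDP datagram over IPv6 using only the Python standard library. The collector address (on the 2001:db8:: documentation prefix), the port (5683, the default CoAP/UDP port), and the JSON payload are all hypothetical placeholders.

import json
import socket

# Hypothetical collector / edge-router address and port; the address uses
# the IPv6 documentation prefix and 5683 is the default CoAP UDP port.
COLLECTOR = ("2001:db8::1", 5683)

reading = json.dumps({"sensor": "temp-42", "celsius": 21.5}).encode()

# Plain UDP over IPv6: the kind of lightweight, end-to-end IP transport
# that 6LoWPAN compresses onto IEEE 802.15.4 radio links.
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    sock.sendto(reading, COLLECTOR)
finally:
    sock.close()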


pages: 462 words: 172,671

Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

continuous integration, database schema, domain-specific language, don't repeat yourself, Donald Knuth, en.wikipedia.org, Eratosthenes, finite state, Ignaz Semmelweis: hand washing, iterative process, place-making, Rubik’s Cube, web application, WebSocket

For example, consider a single-threaded information aggregator that acquires information from many different Web sites and merges that information into a daily summary. Because this system is single-threaded, it hits each Web site in turn, always finishing one before starting the next. The daily run needs to execute in less than 24 hours. However, as more and more Web sites are added, the time grows until it takes more than 24 hours to gather all the data. The single thread spends much of its time waiting on Web sockets for I/O to complete. We could improve performance by using a multithreaded algorithm that hits more than one Web site at a time. Or consider a system that handles one user at a time and requires only one second of time per user. This system is fairly responsive for a few users, but as the number of users increases, the system's response time increases. No user wants to get in line behind 150 others!
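
To illustrate the multithreaded alternative, here is a small sketch that overlaps the socket waits by fetching several sites at once from a thread pool. It uses only the Python standard library, and the feed URLs are placeholders rather than anything from the book.

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical list of sites the aggregator would summarize.
SITES = [
    "https://example.com/feed/a",
    "https://example.com/feed/b",
    "https://example.com/feed/c",
]


def fetch(url):
    # Each call blocks on socket I/O, so running the calls from a pool of
    # threads lets the waits overlap instead of happening one after another.
    with urlopen(url, timeout=10) as response:
        return response.read()


def gather(urls, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))


if __name__ == "__main__":
    pages = gather(SITES)
    print(f"fetched {len(pages)} pages")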


pages: 1,025 words: 150,187

ZeroMQ by Pieter Hintjens

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

anti-pattern, carbon footprint, cloud computing, Debian, distributed revision control, domain-specific language, factory automation, fault tolerance, fear of failure, finite state, Internet of things, iterative process, premature optimization, profit motive, pull request, revision control, RFC: Request For Comment, Richard Stallman, Skype, smart transportation, software patent, Steve Jobs, Valgrind, WebSocket

There are thousands of IETF specifications, each solving part of the puzzle. For application developers, HTTP is perhaps the one solution to have been simple enough to work, but it arguably makes the problem worse by encouraging developers and architects to think in terms of big servers and thin, stupid clients. So today people are still connecting applications using raw UDP and TCP, proprietary protocols, HTTP, and WebSockets. It remains painful, slow, hard to scale, and essentially centralized. Distributed peer-to-peer architectures are mostly for play, not work. How many applications use Skype or BitTorrent to exchange data? Which brings us back to the science of programming. To fix the world, we needed to do two things. One, to solve the general problem of “how to connect any code to any code, anywhere.” Two, to wrap that up in the simplest possible building blocks that people could understand and use easily.
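
For a sense of what "the simplest possible building blocks" look like in practice, here is a minimal request/reply pair written against the pyzmq binding. The book's own examples are in C, so the Python rendering, the tcp://*:5555 endpoint, and the echo payload are illustrative assumptions rather than code from the text.

import zmq  # pyzmq binding, assumed installed


def reply_server():
    # Run in one process: bind a REP socket and answer each request.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://*:5555")
    while True:
        message = sock.recv()
        sock.send(b"echo: " + message)


def request_client():
    # Run in another process, anywhere that can reach the server:
    # connect, send a request, and block until the reply arrives.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect("tcp://localhost:5555")
    sock.send(b"hello")
    print(sock.recv().decode())

The point is less the echo itself than that the same connect and bind calls work unchanged over tcp://, ipc://, or inproc:// endpoints, which is part of what "connect any code to any code, anywhere" is getting at.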