finite state

39 results

Syntactic Structures by Noam Chomsky

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

finite state, statistical model

The point is that there are processes of sentence formation that finite state grammars are intrinsically not equipped to handle. If these processes have no finite limit, we can prove the literal inapplicability of this elementary theory. If the processes have a limit, then the construction of a finite state grammar will not be literally out of the question, since it will be possible to list the sentences, and a list is essentially a trivial finite state grammar. But this grammar will be so complex that it will be of little use or interest. In general, the assumption that languages are infinite is made in order to simplify the description of these languages. If a grammar does not have recursive devices (closed loops, as in (8), in the finite state grammar) it will be prohibitively complex.

We can also define the grammatical relations in these languages in a formal way in terms of the associated diagrams. 4.2 In § 3 we considered languages, called "finite state languages", which were generated by finite state Markov processes. Now we are considering terminal languages that are generated by systems of the form [Σ, F]. These two types of languages are related in the following way. Theorem: Every finite state language is a terminal language, but there are terminal languages which are not finite state languages. The import of this theorem is that description in terms of phrase structure is essentially more powerful than description in terms of the elementary theory presented above in § 3. As examples of terminal languages that are not finite state languages we have the languages (10 i), (10 ii) discussed in § 3. Thus the language (10 i), consisting of all and only the strings ab, aabb, aaabbb, ..., can be produced by the [Σ, F] grammar (15).
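The recursion is easy to exhibit in miniature. Here is a small Python sketch (mine, not the book's) of the two rewriting rules behind a grammar of this kind, S -> aSb and S -> ab, which generate exactly the strings ab, aabb, aaabbb, ... that no finite state grammar can produce:

def derive(n):
    """Apply S -> aSb (n - 1 times), then S -> ab."""
    s = "S"
    for _ in range(n - 1):
        s = s.replace("S", "aSb")
    return s.replace("S", "ab")

print([derive(n) for n in range(1, 4)])   # ['ab', 'aabb', 'aaabbb']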

Suppose that the machine begins in the initial state, runs through a sequence of states (producing a word with each transition), and ends in the final state. Then we call the sequence of words that has been produced a "sentence". Each such machine thus defines a certain language; namely, the set of sentences that can be produced in this way. Any language that can be produced by a machine of this sort we call a finite state language; and we can call the machine itself a finite state grammar. A finite state grammar can be represented graphically in the form of a "state diagram". For example, the grammar that produces just the two sentences "the man comes" and "the men come" can be represented by the following state diagram (7). We can extend this grammar to produce an infinite number of sentences by adding closed loops. Thus the finite grammar of the subpart of English containing the above sentences in addition to "the old man comes", "the old old man comes", ..., "the old men come", "the old old men come", ..., can be represented by the following state diagram (8).
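The grammar of diagram (8) is small enough to write down directly. The following Python sketch (mine, not the book's) encodes the states and word-emitting transitions, including the closed "old" loop that makes the language infinite:

import random

transitions = {
    "start": [("the", "det")],
    "det":   [("old", "det"),            # closed loop: "old" can repeat
              ("man", "sg"), ("men", "pl")],
    "sg":    [("comes", "final")],
    "pl":    [("come", "final")],
}

def generate(rng=random):
    state, words = "start", []
    while state != "final":
        word, state = rng.choice(transitions[state])
        words.append(word)
    return " ".join(words)

def accepts(sentence):
    state = "start"
    for word in sentence.split():
        for w, nxt in transitions.get(state, []):
            if w == word:
                state = nxt
                break
        else:
            return False
    return state == "final"

print(generate())                        # e.g. "the old old men come"
print(accepts("the old man comes"))      # True
print(accepts("the man come"))           # False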


pages: 496 words: 70,263

Erlang Programming by Francesco Cesarini

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

cloud computing, fault tolerance, finite state, loose coupling, revision control, RFC: Request For Comment, sorting algorithm, Turing test, type inference, web application

• If the stop message is received, terminate is called, destroying the window and all the widgets associated with it. The loop function is not called, allowing the process to terminate normally.

Finite State Machines

Erlang processes can be used to implement finite state machines. A finite state machine, or FSM for short, is a model that consists of a finite number of states and events. You can think of an FSM as a model of the world that abstracts away the details of the real system. At any one time, the FSM is in a specific state. Depending on the incoming event and the current state of the FSM, a set of actions and a transition to a new state will occur (see Figure 5-5).

Figure 5-5. A finite state machine

In Erlang, each state is represented as a tail-recursive function, and each event is represented as an incoming message.

Behaviors we have not covered but which we briefly introduced in this chapter include finite state machines, event handlers, and special processes. All of these behavior library modules have manual pages that you can reference. In addition, the Erlang documentation has a section on OTP design principles that provides more details and examples of these behaviors. Finite state machines are a crucial component of telecom systems. In Chapter 5, we introduced the idea of modeling a phone as a finite state machine. If the phone is not being used, it is in state idle. If an incoming call arrives, it goes to state ringing. This does not have to be a phone; it could instead be an ATM cross-connect or the handling of data in a protocol stack. The gen_fsm module provides you with a finite state machine behavior that you can use to solve these problems.

State machines are very common in all sorts of processing applications. In telecom systems, they are used not only to handle the state of equipment, as in the preceding example, but also in complex protocol stacks. The fact that Erlang handles them graciously is not a surprise. When prototyping with the early versions of Erlang between 1987 and 1991, it was the Plain Old Telephony System (POTS) finite state machines described in this section that the development team used to test their ideas of what Erlang should look like.

Figure 5-6. Fixed-line phone finite state machine

With a tail-recursive function for every state, actions implemented as function calls, and events represented as messages, this is what the code for the idle state would look like:

idle() ->
    receive
        {Number, incoming} ->
            start_ringing(),
            ringing(Number);
        off_hook ->
            start_tone(),
            dial()
    end.

ringing(Number) ->
    receive
        {Number, other_on_hook} ->
            stop_ringing(),
            idle();
        {Number, off_hook} ->
            stop_ringing(),
            connected(Number)
    end.


pages: 229 words: 67,599

The Logician and the Engineer: How George Boole and Claude Shannon Created the Information Age by Paul J. Nahin

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, Any sufficiently advanced technology is indistinguishable from magic, Claude Shannon: information theory, conceptual framework, Fellow of the Royal Society, finite state, four colour theorem, Georg Cantor, Grace Hopper, Isaac Newton, John von Neumann, knapsack problem, New Journalism, reversible computing, Richard Feynman, Schrödinger's Cat, Steve Jobs, Steve Wozniak, thinkpad, Turing machine, Turing test, V2 rocket

Depending both on its present state and the symbol just read, the fourth and final operation of a machine cycle occurs when the finite-state machine transitions to a new state (which may, in fact, be the present state). Then a new machine cycle begins. Figure 9.1.1 shows the connection of a finite-state machine, the read/write head, and the tape. The entire arrangement, all together, is what we call a Turing machine. When placed into operation we’ll imagine that the tape is initially blank (that is, the symbol 0 is on all of the tape’s squares)—except for some finite number of squares that have 1s. By convention, we’ll always take the finite-state machine as initially in state 1. And finally, we must specify over which square on the tape the read/write head is initially placed. The finite-state machine and the read/write head then move along the tape (we imagine the tape is motionless) according to the internal details of the finite-state machine and the particular sequence of symbols encountered on the tape.

—Claude Shannon, in a March 1952 talk at Bell Labs on creativity, during which he explained how he arrived at the logic circuitry for a machine that plays a perfect game of Nim 9.1 THE FIRST MODERN COMPUTER A Turing machine is the combination of a sequential, finite-state machine plus an external read/write memory storage medium called the tape (think of a ribbon of magnetic tape). The tape is a linear sequence of squares, with each square holding one of several possible symbols. Most generally, a Turing machine can have any number of different symbols it can recognize, but I’ll assume here that we are discussing the 2-symbol case (0 or 1). In 1956, Shannon showed that this in no way limits the power of what a Turing machine can do. The tape is arbitrarily long in at least one, perhaps both, directions. The finite-state machine is connected to a read/write head, which at each machine cycle (I’ll define what that is in just a moment) is located over a square on the tape.

At some time after we turn the Turing machine on, it presumably completes its task (whatever that might be), and the finite-state machine enters state 0 (called the halting state) and stops. Figure 9.1.1. A Turing machine. As far as I know, nobody has ever actually constructed a Turing machine. It is a purely theoretical concept. Described in 1936 by the English mathematician Alan Turing (see Shannon’s mini-biography in Chapter 3)— that is, ten years before the first actual electronic computers began to be constructed—Turing’s machines are nevertheless as powerful as any modern machine in what they can compute. Compared to a modern, electronic-speed computer, a Turing machine is really, really slow (think of a race between a snail and a photon), but time is irrelevant.
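Nahin's description translates almost line for line into a simulator. Below is a minimal Python sketch (mine; the rule table is hypothetical) of a Turing machine with state 1 as the start state and state 0 as the halting state:

def run_turing_machine(rules, tape, head=0, state=1):
    tape = dict(enumerate(tape))       # sparse tape; blank squares hold 0
    while state != 0:                  # state 0 is the halting state
        symbol = tape.get(head, 0)                    # read
        write, move, state = rules[(state, symbol)]   # one machine cycle
        tape[head] = write                            # write
        head += move                                  # move: +1 right, -1 left
    return tape

# Hypothetical rule table: scan right over a block of 1s, append one 1, halt.
rules = {
    (1, 1): (1, +1, 1),    # keep moving right over the 1s
    (1, 0): (1, +1, 0),    # write a 1 at the first blank square, then halt
}
print(run_turing_machine(rules, [1, 1, 1]))   # {0: 1, 1: 1, 2: 1, 3: 1}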


pages: 1,025 words: 150,187

ZeroMQ by Pieter Hintjens

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

anti-pattern, carbon footprint, cloud computing, Debian, distributed revision control, domain-specific language, factory automation, fault tolerance, fear of failure, finite state, Internet of things, iterative process, premature optimization, profit motive, pull request, revision control, RFC: Request For Comment, Richard Stallman, Skype, smart transportation, software patent, Steve Jobs, Valgrind, WebSocket

#include "czmq.h"

// States in which we can be at any point in time
typedef enum {
    STATE_PRIMARY = 1,   // Primary, waiting for peer to connect
    STATE_BACKUP  = 2,   // Backup, waiting for peer to connect
    STATE_ACTIVE  = 3,   // Active - accepting connections
    STATE_PASSIVE = 4    // Passive - not accepting connections
} state_t;

// Events, which start with the states our peer can be in
typedef enum {
    PEER_PRIMARY   = 1,  // HA peer is pending primary
    PEER_BACKUP    = 2,  // HA peer is pending backup
    PEER_ACTIVE    = 3,  // HA peer is active
    PEER_PASSIVE   = 4,  // HA peer is passive
    CLIENT_REQUEST = 5   // Client makes request
} event_t;

// Our finite-state machine
typedef struct {
    state_t state;        // Current state
    event_t event;        // Current event
    int64_t peer_expiry;  // When peer is considered "dead"
} bstar_t;

// We send state information this often
// If peer doesn't respond in two heartbeats, it is "dead"
#define HEARTBEAT 1000    // In msec

The heart of the Binary Star design is its finite-state machine (FSM). The FSM runs one event at a time. We apply an event to the current state, which checks if the event is accepted, and if so sets a new state (Example 4-62).

Example 4-62. Binary Star server (bstarsrv.c): Binary Star state machine

static Bool
s_state_machine (bstar_t *fsm)
{
    Bool exception = FALSE;
    // These are the PRIMARY and BACKUP states; we're waiting to become
    // ACTIVE or PASSIVE depending on events we get from our peer
    if (fsm->state == STATE_PRIMARY) {
        if (fsm->event == PEER_BACKUP) {
            printf ("I: connected to backup (passive), ready as active\n");
            fsm->state = STATE_ACTIVE;
        }
        else
        if (fsm->event == PEER_ACTIVE) {
            printf ("I: connected to backup (active), ready as passive\n");
            fsm->state = STATE_PASSIVE;
        }
        // Accept client connections
    }
    else
    if (fsm->state == STATE_BACKUP) {
        if (fsm->event == PEER_ACTIVE) {
            printf ("I: connected to primary (active), ready as passive\n");
            fsm->state = STATE_PASSIVE;
        }
        else
        // Reject client connections when acting as backup
        if (fsm->event == CLIENT_REQUEST)
            exception = TRUE;
    }
    else

The ACTIVE and PASSIVE states are laid out in Example 4-63.

\n", server [server_nbr]);
            client = zsocket_new (ctx, ZMQ_REQ);
            zsocket_connect (client, server [server_nbr]);
            // Send request again, on new socket
            zstr_send (client, request);
        }
    }
}
zctx_destroy (&ctx);
return 0;
}

To test our Binary Star implementation, start the servers and client in any order:

bstarsrv -p     # Start primary
bstarsrv -b     # Start backup
bstarcli

You can then provoke failover by killing the primary server, and recovery by restarting the primary and killing the backup. Note how it’s the client vote that triggers failover and recovery. Binary Star is driven by a finite-state machine (Figure 4-8). States in white accept client requests, and states in gray refuse them. Events are the peer state, so “Peer Active” means the other server has told us it’s active. “Client Request” means we’ve received a client request. “Client Vote” means we’ve received a client request and our peer has been inactive for two heartbeats.

Figure 4-8. Binary Star finite-state machine

Note that the servers use PUB-SUB sockets for state exchange. No other socket combination will work here. PUSH and DEALER block if there is no peer ready to receive a message. PAIR does not reconnect if the peer disappears and comes back.

This section isn’t really about GSL at all, but about a useful and little-known trick that’s handy for ambitious architects who want to scale themselves, as well as their work. Once you learn the trick, you can whip up your own code generators in a short time. The code generators most software engineers know about come with a single hard-coded model. For instance, Ragel “compiles executable finite state machines from regular languages” (i.e., Ragel’s model is a regular language). This certainly works for a good set of problems, but it’s far from universal. How do you describe an API in Ragel? Or a project makefile? Or even a finite-state machine like the one we used to design the Binary Star pattern in Chapter 4? All of these would benefit from code generation, but there’s no universal model. So, the trick is to design your own models as you need them, and then make code generators as cheap compilers for those models.


pages: 232

A Discipline of Programming by E. Dijkstra

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

finite state, Turing machine, Y Combinator

He recalls, only when the transition table has been completed, that he has only binary variables (he calls them "flip-flops") at his disposal and that, if the number of states is, say, between 33 and 64 (bounds included), he needs at least six of them. Those six binary variables span a state space of 64 points, and the designer is free in his choice of how to identify the different states of his finite-state automaton with points in the 64-point state space. That this choice is not irrelevant becomes clear as soon as we realize that a state transition of the finite-state automaton has to be translated into combinations of boolean variables being operated upon under control of boolean values. To circuit designers this choice is known as the "state assignment problem", and technical constraints or optimization goals may make it a hairy one; so hairy, as a matter of fact, that one may be tempted to challenge the adequacy of the design methodology evoking it.
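The arithmetic behind "at least six of them" is a base-2 logarithm: n binary variables span 2^n points, so covering all states takes the smallest n with 2^n at least the state count. A one-line Python check (mine, not Dijkstra's):

import math

def flip_flops_needed(n_states):
    # n flip-flops span 2**n points, so we need the smallest n
    # with 2**n >= n_states
    return math.ceil(math.log2(n_states))

print(flip_flops_needed(33), flip_flops_needed(64))   # 6 6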

From a very etheric point of view, we could dismiss the question, saying that this is none of our business, that this is the implementer's concern, and that as long as we don't want our programs to be executed the question is irrelevant anyhow. But to dismiss this question is not my intention. Actual machines have stores of a finite capacity and, left to themselves, they behave in principle like finite state automata which after a sufficiently large number of steps must return to a state in which they have been before. (If, in addition, the machine is deterministic, history will from then on repeat itself.) For the overall behaviour of today's modern computers, the theory of finite state automata is, however, of limited significance, because their number of possible states is so incredibly huge that they could go on working for years without returning to a state in which they have been before. As an alternative we can study "the infinite state automaton" and drop the restriction of a finite store capacity.

T is the predicate that is true in all points of the state space concerned: the corresponding set is the universe. F is the predicate that is false in all points of the state space: it corresponds to the empty set.

3 THE CHARACTERIZATION OF SEMANTICS

We are primarily interested in systems that, when started in an "initial state", will end up in a "final state" which, as a rule, depends on the choice of the initial state. This is a view that is somewhat different from the idea of the finite state automaton that on the one hand absorbs a stream of input characters and on the other hand produces a stream of output characters. To translate that in our picture we must assume that the value of the input (i.e. the argument) is reflected in the choice of the initial state and that the value of the output (i.e. the answer) is reflected in the final state. Our view relieves us from all sorts of peripheral complications.


pages: 647 words: 43,757

Types and Programming Languages by Benjamin C. Pierce

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, combinatorial explosion, experimental subject, finite state, Henri Poincaré, recommendation engine, sorting algorithm, Turing complete, Turing machine, type inference, Y Combinator

To describe the class, we need some additional terminology.

21.5.10 Definition: Given an invertible generating function F and an element x ∈ U, the set pred_F(x) (or just pred(x)) of immediate predecessors of x is

    pred(x) = ∅            if support(x)↑
    pred(x) = support(x)   if support(x)↓

and its extension to sets X ⊆ U is

    pred(X) = ⋃_{x∈X} pred(x).

The set reachable_F(X) (or just reachable(X)) of all elements reachable from a set X via support is defined as

    reachable(X) = ⋃_{n≥0} pred^n(X),

and its extension to single elements x ∈ U is reachable(x) = reachable({x}). An element y ∈ U is reachable from an element x if y ∈ reachable(x).

21.5.11 Definition: An invertible generating function F is said to be finite state if reachable(x) is finite for each x ∈ U.

For a finite-state generating function, the search space explored by gfp is finite and gfp always terminates:

21.5.12 Theorem: If reachable_F(X) is finite, then gfp_F(X) is defined. Consequently, if F is finite state, then gfp_F(X) terminates for any finite X ⊆ U.

Proof: For each recursive call gfp(Y) in the call graph generated by the original invocation gfp(X), we have Y ⊆ reachable(X). Moreover, Y strictly increases on each call. Since reachable(X) is finite, m(Y) = |reachable(X)| − |Y| serves as a termination measure for gfp.

21.5.13 Exercise [«««]: Suppose F is an invertible generating function.

For part (2), we argue (by an easy induction on the depth of a run of the gfp^t_F algorithm, using Lemma 21.5.8) that if, for some A, we have gfp^t_F(A, x) = fail, then x ∉ νF. Since all of the algorithms in this section examine the reachable set, a sufficient termination condition for all of them is the same as that of the original gfp algorithm: they terminate on all inputs when F is finite state.

21.7 Regular Trees

At this point, we have developed generic algorithms for checking membership in a set defined as the greatest fixed point of a generating function F, assuming that F is invertible and finite state; separately, we have shown how to define subtyping between infinite trees as the greatest fixed point of a particular generating function S. The obvious next step is to instantiate one of our algorithms with S. Of course, this concrete algorithm will not terminate on all inputs, since in general the set of states reachable from a given pair of infinite types can be infinite.
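The gfp algorithm these passages refer to is short enough to sketch. Here is a Python version (mine; the example support function is a hypothetical stand-in, with None marking an undefined support) that saturates the input set with predecessors and succeeds once it is closed; when the generating function is finite state, the reachable set bounds the recursion, so it terminates:

support = {
    "a": {"b"},
    "b": {"a"},     # a and b support each other: a consistent cycle
    "c": None,      # undefined support: c cannot be in the gfp
}

def pred(X):
    out = set()
    for x in X:
        s = support[x]
        if s is None:
            return None        # some element's support is undefined
        out |= s
    return out

def gfp(X):
    P = pred(X)
    if P is None:
        return False
    if P <= X:
        return True            # X is closed under pred
    return gfp(X | P)

print(gfp({"a"}))   # True
print(gfp({"c"}))   # False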

It is easy to check that, if y ∈ support(x) and both height(x) and height(y) are defined, then height(y) < height(x). 2. Observe that the definition of height can easily be rephrased as the least fixed point of a monotone function on relations representing partial functions. Now, if F is finite state and finite height, then lfp(X) terminates for any finite input set X ⊆ U. To see this, observe that, since F is finite state, for every recursive call lfp(Y) descended from the original call lfp(X), the set Y is finite. Since F is finite height, h(Y) = max{height(y) | y ∈ Y} is well defined. Since h(Y) decreases with each recursive call and is always non-negative, it serves as a termination measure for lfp. 21.8.5 Solution: The definition of S_d is the same as that of S_m, except that the last clause does not contain the conditions T ≠ µX.T₁ and T ≠ Top.

On Language: Chomsky's Classic Works Language and Responsibility and Reflections on Language in One Volume by Noam Chomsky, Mitsou Ronat

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

conceptual framework, finite state, theory of mind

N.C.: An intuition again founded on the same anti-empiricism. In my view, a finite state Markov source model might reasonably be considered as characterizing something like the outer limits of empiricist learning theory. In fact, a mathematical psychologist and logician, Patrick Suppes, gave precise expression to this intuition, or one version of it, a few years ago. He proved that a certain very rich version of stimulus-response learning theory must remain within the limits of finite state sources of the kind we have been discussing. He considered this to be a positive result. To me it seemed to be a negative result. The reason is this. As has been known for a long time, even elementary systems of knowledge cannot be represented in terms of finite state Markov sources—for example, our knowledge of English, or even much simpler systems, such as propositional calculus.

The viewpoint is very different.

M.R.: Certain mathematical theories have seduced many linguists. I’m thinking of their “historic” encounters with telecommunications engineers . . .

N.C.: Well, at the end of the forties and the beginning of the fifties, there were important developments in the mathematical theory of communication, information theory, and the theory of automata. Technically, models such as finite state Markov sources were proposed . . . Very often it was supposed that these models were appropriate for the description of language. Jakobson referred to this vaguely, but Hockett utilized them quite explicitly. In 1955 he proposed a theory of language structure based on a Markov source model borrowed from the mathematical theory of communication. Similar theories were developed by psychologists, engineers, mathematicians.

N.C.: Yes, and I hope that these studies will continue to be pursued, as well as the mathematical investigation of transformational grammars. There has been some interesting recent work by Stanley Peters and Robert Ritchie on this latter topic. Returning to the earlier point, it seems clear that empiricist learning theories are much too limited to be adequate; and it is even possible to demonstrate this if we accept the assumption that finite state Markov sources are the richest systems that can be attained by such theories, at the limit. This conclusion does not seem to be unreasonable to me, although naturally it is not a precise conclusion because the notion “empiricist theory” is not well defined.

M.R.: Did you link your critique of these theories right away to the critique of structural linguistics?

N.C.: Well, indirectly. These theories were then very much in fashion, and they even aroused a certain degree of euphoria, I think it is fair to say.


pages: 834 words: 180,700

The Architecture of Open Source Applications by Amy Brown, Greg Wilson

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

8-hour work day, anti-pattern, bioinformatics, c2.com, cloud computing, collaborative editing, combinatorial explosion, computer vision, continuous integration, create, read, update, delete, Debian, domain-specific language, en.wikipedia.org, fault tolerance, finite state, Firefox, friendly fire, linked data, load shedding, locality of reference, loose coupling, Mars Rover, MVC pattern, premature optimization, recommendation engine, revision control, side project, Skype, slashdot, social web, speech recognition, the scientific method, The Wisdom of Crowds, web application, WebSocket

Other Worker Behaviors

A large number of other worker behaviors can and have been implemented using these same ideas.

15.4.1. Finite State Machines

Finite state machines (FSMs), implemented in the gen_fsm behavior module, are a crucial component when implementing protocol stacks in telecom systems (the problem domain Erlang was originally invented for). States are defined as callback functions named after the state that return a tuple containing the next State and the updated loop data. You can send events to these states synchronously and asynchronously. The finite state machine callback module should also export the standard callback functions such as init, terminate, and handle_info. Of course, finite state machines are not telecom specific. In Riak, they are used in the request handlers. When a client issues a request such as get, put, or delete, the process listening to that request will spawn a process implementing the corresponding gen_fsm behavior.

If a process terminates, it sends an EXIT signal to processes in its link set. If the termination reason is non-normal, the process terminates itself, propagating the EXIT signal further. By calling the process_flag(trap_exit, true) BIF, processes can receive the EXIT signals as Erlang messages in their mailbox instead of terminating. Riak uses EXIT signals to monitor the well-being of helper processes performing non-critical work initiated by the request-driving finite state machines. When these helper processes terminate abnormally, the EXIT signal allows the parent to either ignore the error or restart the process. 15.2. Process Skeletons We previously introduced the notion that processes follow a common pattern regardless of the particular purpose for which the process was created. To start off, a process has to be spawned and then, optionally, have its alias registered.

When in the body of the receive-evaluate loop, processes will receive different messages and handle them in different ways. Finally, on termination, the cleanup will vary from process to process. So, even if a skeleton of generic actions exists, these actions are complemented by specific ones that are directly related to the tasks assigned to the process. Using this skeleton as a template, programmers can create Erlang processes that act as servers, finite state machines, event handlers and supervisors. But instead of re-implementing these patterns every time, they have been placed in library modules referred to as behaviors. They come as part of the OTP middleware.

15.3. OTP Behaviors

The core team of developers committing to Riak is spread across nearly a dozen geographical locations. Without very tight coordination and templates to work from, the result would consist of different client/server implementations not handling special borderline cases and concurrency-related errors.


pages: 480 words: 99,288

Mastering ElasticSearch by Rafal Kuc, Marek Rogozinski

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Amazon Web Services, create, read, update, delete, en.wikipedia.org, fault tolerance, finite state, full text search, information retrieval

This codec may give you a performance boost on commonly used fields, but it should be used with caution, as it is very memory intensive, because the terms and postings arrays need to be stored in memory. Note: since all the terms are held in the byte array, you can have up to 2.1 GB of memory used for this per segment. memory: As its name suggests, this codec writes all the data to disk, but reads the terms and postings lists into memory, using a structure called an FST (Finite State Transducer). More information about this structure can be found in a great post by Mike McCandless at http://blog.mikemccandless.com/2010/12/using-finite-state-transducers-in.html. Because it stores the data in memory, this codec may result in a performance boost for commonly used terms. bloom_default: It is an extension of the default codec that adds the functionality of a bloom filter that is written to disk. When reading, the bloom filter is read and held in memory to allow very fast checking of whether a given value exists.

This allows us to create the autocomplete functionality in a very effective way, because complicated structures are stored in the index instead of being calculated during query time. Although this suggester is not about correcting user spelling mistakes, we thought it would be good to show at least a simple example of this highly efficient suggester. The logic behind the completion suggester: The prefix suggester is based on the data structure called an FST (Finite State Transducer) (http://en.wikipedia.org/wiki/Finite_state_transducer). Although it is highly efficient, it may require significant resources to build on systems with large amounts of data in them, systems that ElasticSearch is perfectly suitable for. If we had to build these structures on the nodes after each restart or cluster state change, we might lose performance. Because of that, ElasticSearch's creators decided to create FST-like structures during index time and store them in the index so that they can be loaded into memory when needed.
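To see why a prefix-oriented structure built at index time makes completion cheap, here is a toy Python trie (a simplification of what an FST does; a real FST additionally shares suffixes and can carry weights):

class Trie:
    def __init__(self):
        self.children = {}
        self.terminal = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.terminal = True

    def complete(self, prefix):
        node = self
        for ch in prefix:               # walk down to the prefix node
            if ch not in node.children:
                return
            node = node.children[ch]
        stack = [(node, prefix)]        # enumerate everything below it
        while stack:
            node, word = stack.pop()
            if node.terminal:
                yield word
            for ch, child in node.children.items():
                stack.append((child, word + ch))

index = Trie()
for term in ["star wars", "star trek", "stargate"]:
    index.insert(term)
print(sorted(index.complete("star")))   # all three suggestions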

Principles of Protocol Design by Robin Sharp

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

accounting loophole / creative accounting, business process, discrete time, fault tolerance, finite state, Gödel, Escher, Bach, information retrieval, loose coupling, packet switching, RFC: Request For Comment, stochastic process, x509 certificate

For each of the N parties to an N-peer communication, the protocol defines a language, whose sentences are the legal sequences of messages received by that party, and whose alphabet of symbols is the set of all possible messages. A machine to obey the rules of the protocol must thus essentially be a recogniser for the protocol language. For simple protocols, this is a useful abstraction, as the language is regular or at most context-free, and standard compiler techniques [124] can be used to implement the machine as a finite state machine or push-down automaton respectively. A trivial example of a simple protocol described in this way is given below. It is a type of stop-and-wait protocol. The sender requests the receiver to indicate when it is ready to receive data, and waits for this indication. On receipt of the indication, the sender sends the data and waits for an acknowledgment. The exchange of messages is as shown in Figure 1.1.

The exchange of messages is as shown in Figure 1.1. The languages to be recognised by the sender and receiver respectively (and of course to be generated by the receiver and sender respectively) are defined by the BNF:

    sender   ::= readyindication acknowledge sender
    receiver ::= requesttoaccept data receiver

Each party must generate and recognise sentences of a regular language. This is a simple task for a finite state machine. Unfortunately, there are some important objections to this language-oriented view of a protocol. The first is a practical objection: simple languages generally do not correspond to protocols which can tolerate faults, such as missing or duplicated messages. Protocols which are fault-tolerant often require the use of state machines with enormous numbers of states, or they may define context-dependent languages.
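As a concrete illustration, the sender's regular language above can be recognized by a two-state machine. A small Python sketch (mine, not the book's; it treats any whole number of completed exchanges as acceptable):

SENDER_FSM = {
    ("wait_ready", "readyindication"): "wait_ack",
    ("wait_ack",   "acknowledge"):     "wait_ready",
}

def sender_accepts(messages):
    state = "wait_ready"
    for msg in messages:
        if (state, msg) not in SENDER_FSM:
            return False               # illegal message in this state
        state = SENDER_FSM[(state, msg)]
    return state == "wait_ready"       # only whole exchanges accepted

print(sender_accepts(["readyindication", "acknowledge"]))   # True
print(sender_accepts(["acknowledge"]))                      # False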

There exists no local state of the participant such that its concurrency set contains both an abort and a commit state. 2. All states whose concurrency set contains a commit state are committable states. A committable state is one in which the participant knows with certainty that all the other participants have committed or will do so. The major states of Protocol 16 can be illustrated by the finite-state machines shown in Figure 5.9(a). The concurrency set of the state w_c in the FSM for the coordinator is ⋃_i {q_i, w_i, a_i}, while the concurrency set of the states w_i in the FSMs for the slaves is {w_c, a_c, c_c} ∪ ⋃_{j≠i} {q_j, w_j, a_j, c_j}, so the states w fail to fulfil either of the conditions of Theorem 5.1. Thus Protocol 16 is a blocking protocol. The simplest way to create a non-blocking protocol is then to introduce an extra round of message exchange: after receiving only votes for COMMIT from the slaves, the coordinator sends a PREPARE, rather than a COMMIT, message to all of them.


pages: 661 words: 187,613

The Language Instinct: How the Mind Creates Language by Steven Pinker

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Albert Einstein, cloud computing, David Attenborough, double helix, Drosophila, elephant in my pajamas, finite state, illegal immigration, Loebner Prize, Maui Hawaii, meta analysis, meta-analysis, natural language processing, out of africa, P = NP, phenotype, rolodex, Ronald Reagan, Saturday Night Live, speech recognition, Steven Pinker, theory of mind, transatlantic slave trade, Turing machine, Turing test, Yogi Berra

Goldwasser shut his eyes to draw the next card. It turned out to read in these days when. He pondered whether to select it is fashionable to scoff at the traditional morality of marriage and family life or it is no longer fashionable to scoff at the traditional morality of marriage and family life. The latter had more of the form’s authentic baroque splendor, he decided. Let’s call this a word-chain device (the technical name is a “finite-state” or “Markov” model). A word-chain device is a bunch of lists of words (or prefabricated phrases) and a set of directions for going from list to list. A processor builds a sentence by selecting a word from one list, then a word from another list, and so on. (To recognize a sentence spoken by another person, one just checks the words against each list in order.) Word-chain systems are commonly used in satires like Frayn’s, usually as do-it-yourself recipes for composing examples of a kind of verbiage.
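Frayn's form-composing machine is a few lines of code. A tiny Python sketch (mine, using the phrase lists from the passage above): pick one entry from each list in turn and the device emits a sentence.

import random

lists = [
    ["in these days when"],
    ["it is fashionable to scoff at",
     "it is no longer fashionable to scoff at"],
    ["the traditional morality of marriage and family life"],
]

def compose(rng=random):
    # one choice per list, in order: a word-chain (finite-state) device
    return " ".join(rng.choice(options) for options in lists)

print(compose())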

Sleeping esophagus: Twain, “Double-Barreled Detective Story.” Example from Lederer, 1990. Pobbles: Edward Lear, “The Pobble Who Has No Toes.” Jabberwocky: Carroll, 1871/1981. Colorless green ideas: Chomsky, 1957. Automated news story: Frayn, 1965. Example from Miller, 1967. Gobbledygook generators: Brandreth, 1980; Bolinger, 1980; Spy magazine, January 1993. Approximations to English: Miller & Selfridge, 1950. Finite-state devices and their problems: Chomsky, 1957; Miller & Chomsky, 1963; Miller, 1967. TV Guide example from Gleitman, 1981. Cook with round bottom: Columbia Journalism Review, 1980; Lederer, 1987. Impenetrable Chomsky: Chomsky, 1986, p. 79. Textbooks on modern grammatical theory: Friedin, 1992; Radford, 1988; Riemsdijk & Williams, 1986. Sex between parked cars: Columbia Journalism Review, 1980.

A set of affixes, positions, or word forms that a language uses to distinguish the different roles of the participants in some event or state. Cases typically correspond to the subject, object, indirect object, and the objects of various kinds of prepositions. In English, case is what distinguishes between I, he, she, we, they, which are used for subjects, and me, him, her, us, them, which are used for objects of verbs, objects of prepositions, and everywhere else. chain device. See finite-state device. chromosome. A long strand of DNA, containing thousands of genes, in a protective package. There are twenty-three chromosomes in a human sperm or egg; there are twenty-three pairs of chromosomes (one from the mother, one from the father) in all other human cells. clause. A kind of phrase that is generally the same thing as a sentence, except that some kinds of clause can never occur on their own but only inside a bigger sentence: THE CAT IS ON THE MAT; John arranged FOR MARY TO GO; The spy WHO LOVED ME disappeared; He said THAT SHE LEFT.


pages: 706 words: 120,784

The Joy of Clojure by Michael Fogus, Chris Houser

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

cloud computing, domain-specific language, Douglas Hofstadter, en.wikipedia.org, finite state, Gödel, Escher, Bach, haute couture, Paul Graham, rolodex, traveling salesman

This benefit will likely cause recur to live on, even should the JVM acquire TCO. 16 The Scala 2.8 compiler recognizes a @tailrec annotation and triggers an error whenever a marked function or method can’t be optimized. The final benefit of recur is that it allows the forms fn and loop to act as anonymous recursion points. Why recur indeed. 7.3.3. Don’t forget your trampoline We touched briefly on the fact that Clojure can also optimize a mutually recursive function relationship, but like the tail-recursive case, it’s done explicitly. Mutually recursive functions are nice for implementing finite state machines (FSA), and in this section we’ll show an example of a simple state machine modeling the operation of an elevator (Mozgovoy 2009) for a two-story building. There are only four states that the elevator FSA allows: on the first floor with the doors open or closed and on the second floor with the door open or closed. The elevator can also take four distinct commands: open doors, close doors, go up, and go down.

We’d like to create a function elevator that starts in the ff-open state, takes a sequence of commands, and returns true or false if they correspond to a legal schedule according to the FSA. For example, the sequence [:close :open :done] would be legal, if not pointless, whereas [:open :open :done] wouldn’t be legal, because an open door can’t be reopened. The function elevator could be implemented as shown next.

Listing 7.6. Using mutually recursive functions to implement a finite state machine

Using letfn in this way allows you to create local functions that reference each other, whereas (let [ff-open #(...)] ...) wouldn’t, because it executes its bindings serially. Each state function contains a case macro that dispatches to the next state based on a contextually valid command. For example, the sf-open state will transition to the sf-closed state given a :close command, will return true on a :done command (corresponding to a legal schedule), or will otherwise return false.
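The same design reads naturally in Python (a sketch of mine, not the book's Clojure): each state is a function that returns either a final true/false or the next state function, and a trampoline loop keeps the stack flat, which is the point of Clojure's trampoline as well.

def elevator(commands):
    cmds = iter(commands)

    def ff_open():                        # first floor, doors open
        c = next(cmds, None)
        if c == ":close": return ff_closed
        if c == ":done":  return True     # a legal schedule ends here
        return False

    def ff_closed():                      # first floor, doors closed
        c = next(cmds, None)
        if c == ":open": return ff_open
        if c == ":up":   return sf_closed
        return False

    def sf_closed():                      # second floor, doors closed
        c = next(cmds, None)
        if c == ":down": return ff_closed
        if c == ":open": return sf_open
        return False

    def sf_open():                        # second floor, doors open
        c = next(cmds, None)
        if c == ":close": return sf_closed
        if c == ":done":  return True
        return False

    state = ff_open
    while callable(state):                # the trampoline loop
        state = state()
    return state

print(elevator([":close", ":open", ":done"]))   # True
print(elevator([":open", ":open", ":done"]))    # False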


Monte Carlo Simulation and Finance by Don L. McLeish

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Black-Scholes formula, Brownian motion, capital asset pricing model, compound rate of return, discrete time, distributed generation, finite state, frictionless, frictionless market, implied volatility, incomplete markets, invention of the printing press, martingale, p-value, random walk, Sharpe ratio, short selling, stochastic process, stochastic volatility, the market place, transaction costs, value at risk, Wiener process, zero-coupon bond

This uses the simplest form (1) of the binomial generator and is not computationally efficient for large n. In R, rbinom(m,n,p) will generate a vector of length m of Binomial(n, p) variates. Random Samples Associated with Markov Chains: Consider a finite state Markov chain, a sequence of (discrete) random variables X₁, X₂, ..., each of which takes integer values 1, 2, ..., N (called states). The number of states of a Markov chain may be large or even infinite, and it is not always convenient to label them with the positive integers, so it is common to define the state space as the set of all possible states of a Markov chain; we will give some examples of this later. For the present we restrict attention to the case of a finite state space. The transition probability matrix is a matrix P describing the conditional probability of moving between possible states of the chain, so that P[X_{n+1} = j | X_n = i] = P_ij, i = 1, ...
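Simulating such a chain needs only the rows of P. A small Python sketch (mine, not the book's): draw each transition by comparing a uniform variate against the cumulative row probabilities, i.e. by inverse CDF.

import numpy as np

def simulate_chain(P, x0, n_steps, rng):
    cum = np.cumsum(P, axis=1)     # cumulative transition probabilities
    path = [x0]
    for _ in range(n_steps):
        u = rng.random()
        path.append(int(np.searchsorted(cum[path[-1]], u)))
    return path

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(simulate_chain(P, 0, 10, np.random.default_rng(0)))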

Of course the real power of Gibbs Sampling is achieved in problems that are not two-dimensional such as the example above, but have dimension sufficiently high that calculating the sums or integrals in the denominator of expressions like (3.31) is not computationally feasible. Coupling From the Past: Sampling from the stationary distribution of a Markov Chain. All of the above methods generate from the stationary distribution of a Markov chain by the "until Hell freezes over" method: run the chain from an arbitrary starting value for a long time and then delete the initial transient. An alternative elegant method that is feasible at least for some finite state Markov chains is the method of "coupling from the past" due to Propp and Wilson (1996). We assume that we are able to generate transitions in the Markov chain. In other words, if the chain is presently in state i at time n, we are able to generate a random variable X_{n+1} from the distribution proportional to P_ij, j = 1, ..., K. Suppose F(x|i) is the cumulative distribution function P(X_{n+1} ≤ x | X_n = i) and let us denote its inverse by F⁻¹(y|i).
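A sketch of the idea in Python (assumptions mine, following Propp and Wilson's construction for a small finite state space): start a copy of the chain in every state at some time -T, drive all copies with the same uniforms at each step, and accept the common value at time 0 once every copy has coalesced; otherwise look twice as far into the past, reusing the old uniforms.

import numpy as np

def cftp(P, rng):
    K = P.shape[0]
    cum = np.cumsum(P, axis=1)
    us = []                   # us[k] drives the step at time -(k+1)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        x = list(range(K))    # one chain started in every state
        for t in range(T, 0, -1):      # run from time -T up to time 0
            u = us[t - 1]              # the SAME uniform for all copies
            x = [int(np.searchsorted(cum[s], u)) for s in x]
        if len(set(x)) == 1:
            return x[0]       # coalesced: an exact stationary draw
        T *= 2                # not yet: restart further in the past

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
rng = np.random.default_rng(1)
print([cftp(P, rng) for _ in range(5)])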

Writing Effective Use Cases by Alistair Cockburn

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

business process, c2.com, create, read, update, delete, finite state, index card, information retrieval, iterative process, recommendation engine, Silicon Valley, web application

While it is not for the requirements document to describe the UI design, it is useful to augment the requirements document with samples of the UI design as it evolves. This design information can add to the readability of the requirements document, since it gives both textual (abstract) and visual (concrete) renditions of the system’s behavior. The UI design has three levels of precision: low, medium, and high.

• The low-precision description of the user interface is a screen-navigation diagram, drawn as a finite state machine or statechart. Each state is the name of a screen the user will encounter. The finite state machine shows what user events cause movement from one screen to another. (A small sketch follows this list.)

• The medium-precision description is a drawing or reduced-size snapshot of the screen. Place this at the end of the use case, so that readers can both see and read what design is being nominated.

• The high-precision description lists all the field types, lengths and validation checks of each screen, and does not belong in the requirements document at all!
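A screen-navigation diagram reduces to a transition table from (screen, user event) to screen. A toy Python sketch (the screens and events are mine, not Cockburn's):

SCREEN_NAV = {
    ("login",        "submit_ok"):  "dashboard",
    ("login",        "submit_bad"): "login",
    ("dashboard",    "open_order"): "order_detail",
    ("order_detail", "back"):       "dashboard",
}

def navigate(start, events):
    screen = start
    for event in events:
        # unknown events leave the user on the current screen
        screen = SCREEN_NAV.get((screen, event), screen)
    return screen

print(navigate("login", ["submit_ok", "open_order"]))   # order_detail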

The programmer and the user interface designer need to know what exactly is meant by address, which fields it contains, the lengths of the fields, the validation rules for addresses, zip codes, phone numbers, and the like. All of this information belongs in the requirements somewhere - and not in the use case! Use cases are only "chapter three" of the requirements document, the behavioral requirements. They do not contain performance requirements, business rules, user interface design, data descriptions, finite state machine behavior, priority, and probably some other information. "Well, where are those requirements?!" the system developers cry! It is all very well to say the use cases shouldn’t contain those, but they have to get documented sometime. Some of the information can, in fact, be attached to each use case as associated information. This might include:

* use case priority
* expected frequency of occurrence
* performance needs
* delivery date
* list of secondary actors
* business rules (possibly)
* open issues

Different projects adjust this collection of information to contain whatever they consider important.


pages: 130 words: 11,880

Optimization Methods in Finance by Gerard Cornuejols, Reha Tutuncu

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

asset allocation, call centre, constrained optimization, correlation coefficient, diversification, finite state, fixed income, frictionless, frictionless market, index fund, linear programming, Long Term Capital Management, passive investing, Sharpe ratio, transaction costs, value at risk, Y2K

Furthermore, since we have a strictly feasible (and optimal) dual solution, any optimal solution of the primal must have tight constraints, indicating that there is no type-B arbitrage.

3.2 Arbitrage Detection Using Linear Programming

The linear programming problem (3.4) and its dual, which we formulated for the proof of Theorem 3.2, can naturally be used for the detection of arbitrage opportunities. However, as we discussed above, this argument works only for finite state spaces. In this section, we discuss how LP formulations can be used to detect arbitrage opportunities without limiting consideration to finite state spaces. The price we pay for this flexibility is the restriction on the selection of the securities: we only consider the prices of a set of derivative securities written on the same underlying with the same maturity. This discussion is based on [7]. Consider an underlying security whose current (time 0) price is given by S0 and its (random) price at time 1 is denoted by S1.
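Without reproducing the book's formulation (3.4), the flavor of LP-based arbitrage detection over a finite state space can be sketched as follows (formulation and numbers are mine): look for a portfolio that costs less than nothing today yet pays off non-negatively in every state.

import numpy as np
from scipy.optimize import linprog

p = np.array([1.0, 0.9])           # current security prices
A = np.array([[1.0, 1.2],          # payoffs in state 1 (rows = states,
              [1.0, 1.5]])         #  columns = securities)

# Minimize the portfolio cost p @ theta subject to A @ theta >= 0
# (no losses in any state); bounds keep the LP from being unbounded.
res = linprog(c=p,
              A_ub=-A, b_ub=np.zeros(A.shape[0]),
              bounds=[(-1, 1)] * len(p))
if res.status == 0 and res.fun < -1e-9:
    print("arbitrage portfolio:", res.x)   # negative cost, no downside
else:
    print("no arbitrage detected")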


pages: 496 words: 174,084

Masterminds of Programming: Conversations With the Creators of Major Programming Languages by Federico Biancuzzi, Shane Warden

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

business intelligence, business process, cellular automata, cloud computing, complexity theory, conceptual framework, continuous integration, data acquisition, domain-specific language, Douglas Hofstadter, Fellow of the Royal Society, finite state, Firefox, follow your passion, Frank Gehry, general-purpose programming language, HyperCard, information retrieval, iterative process, John von Neumann, linear programming, loose coupling, Mars Rover, millennium bug, NP-complete, Paul Graham, performance metric, QWERTY keyboard, RAND corporation, randomized controlled trial, Renaissance Technologies, Silicon Valley, slashdot, software as a service, software patent, sorting algorithm, Steve Jobs, traveling salesman, Turing complete, type inference, Valgrind, Von Neumann architecture, web application

We didn’t take it out of that domain.

How do you make the idea of syntax-driven transformations accessible to users who might not know very much or anything at all about finite-state machines and push-down automata?

Al: Certainly as a user of AWK, you don’t need to know about these concepts. On the other hand, if you’re into language design and implementation, knowledge of finite-state machines and context-free grammars is essential.

Should a user of lex or yacc understand the context-free grammar even if the programs they produce don’t require their users to understand them?

Al: Most users of lex can use lex without understanding what a finite-state machine is. A user of yacc is really writing a context-free grammar, so from that perspective, the user of yacc certainly gets to appreciate grammars, but the user doesn’t have to become a formal language theorist to use yacc.

Al: Perhaps the greatest surprise has been its broad applicability. Let me interpret automata theory as formal languages and the automata that recognize them. Automata theory provides useful notations, particularly regular expressions and context-free grammars, for describing the important syntactic features of programming languages. The automata that recognize these formal languages, such as finite-state machines and push-down automata, can serve as models for the algorithms used by compilers to scan and parse programs. Perhaps the greatest benefit of automata theory to compiling comes from being able to build compiler-construction tools such as lex and yacc that automate the construction of efficient scanners and parsers based on these automata.

What is preventing us from building a compiler (and/or a language) that identifies all potential bugs?

My long-term vision is that through the use of stronger languages, more powerful verification tools, and better software-engineering practices, software will improve in reliability and quality.

How can we design pattern-matching algorithms that take advantage of concurrency in multicore hardware?

Al: This is currently an active research area. Many researchers are exploring parallel hardware and software implementations of pattern-matching algorithms like the Aho-Corasick algorithm or finite-state algorithms. Some of the strong motivators are genomic analyses and intrusion detection systems.

What motivated you and Corasick to develop the Aho-Corasick algorithm?

Al: The origin has a very interesting story behind it. I was working on the book The Design and Analysis of Computer Algorithms [Addison-Wesley] with John Hopcroft and Jeffrey Ullman back in the early 70s. I was giving a lecture at Bell Labs on algorithm design techniques.
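For readers curious what the algorithm looks like, here is a compact Python sketch of Aho-Corasick (mine, not the original paper's presentation): a trie of the patterns, failure links computed breadth-first, and a single left-to-right scan of the text.

from collections import deque

def build(patterns):
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                       # build the trie of patterns
        s = 0
        for c in p:
            if c not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][c] = len(goto) - 1
            s = goto[s][c]
        out[s].add(p)
    q = deque(goto[0].values())              # depth-1 nodes fail to the root
    while q:                                 # breadth-first failure links
        r = q.popleft()
        for c, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and c not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(c, 0)
            out[s] |= out[fail[s]]           # inherit matches via failure
    return goto, fail, out

def search(text, goto, fail, out):
    s, hits = 0, []
    for i, c in enumerate(text):
        while s and c not in goto[s]:
            s = fail[s]
        s = goto[s].get(c, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits

tables = build(["he", "she", "his", "hers"])
print(search("ushers", *tables))   # (1,'she'), (2,'he'), (2,'hers'), in some order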


pages: 239 words: 64,812

Geek Sublime: The Beauty of Code, the Code of Beauty by Vikram Chandra

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Apple II, barriers to entry, Berlin Wall, British Empire, business process, conceptual framework, create, read, update, delete, crowdsourcing, East Village, European colonialism, finite state, Firefox, Flash crash, glass ceiling, Grace Hopper, haute couture, iterative process, Jaron Lanier, John von Neumann, land reform, London Whale, Paul Graham, pink-collar, revision control, Silicon Valley, Silicon Valley ideology, Skype, Steve Jobs, Steve Wozniak, theory of mind, Therac-25, Turing machine, wikimedia commons, women in the workforce

The embodied language of websites, apps, and networks writes itself into us. One of the pioneers of the newly minted field of Aesthetic Computing, Paul Fishwick, points out that digital watches and DVRs surface the abstract computer-science notion of “finite state machines” in their menus—each time you press the “mode” button you move from one state to another: The way in which our thinking is changing culturally surfaces deep abstract concepts in computing to us as we use these devices: from number, to information structure, to process … It is not just that the finite state machine is embedded within the watch’s silicon, but also that the human wearing the watch becomes aware of this virtual machine’s structure and its components through the experience of using the watch. The state machine changes how the wearer thinks, even though the wearer is probably unaware of the formal mathematical notation of a state machine.
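The mode button's virtual machine is almost trivially small, which is Fishwick's point. A toy sketch in Python (mine):

from itertools import cycle

# each press of "mode" advances the watch to its next state
modes = cycle(["time", "alarm", "stopwatch", "timer"])
state = next(modes)         # the watch starts in the "time" state
for _ in range(3):          # press the mode button three times
    state = next(modes)
print(state)                # timer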


pages: 593 words: 118,995

Relevant Search: With Examples Using Elasticsearch and Solr by Doug Turnbull, John Berryman

Amazon: amazon.com, amazon.co.uk, amazon.de, amazon.fr

crowdsourcing, domain-specific language, finite state, fudge factor, full text search, information retrieval, natural language processing, premature optimization, recommendation engine, sentiment analysis

Building fast completions via specialized search indexes Because completion is such an important element of relevance feedback, Elasticsearch introduced a specialized component called the completion suggester. This component circumvents the performance problem referenced previously and allows for custom sorting of completion results (rather than sorting by occurrence). Effectively, the completion suggester is a specialized search index that’s stored in parallel with the normal search index. It’s backed by a compact data structure (a finite state transducer) that provides a fast prefix-lookup capability. In many ways, this approach is the ideal solution for completion, but as you’ll see in a moment, it introduces a couple of problems of its own. Setting up the completion suggester is simple: you just declare one of the fields to be of type completion. Listing 8.5. Setting up Elasticsearch’s completion suggester In principle, you could copy the title field to the completion field and index the documents in the same way as demonstrated in the preceding section.

The completion implementation then moves on to find completions beginning with just star. Here, based on the context of the entire query, it would be unreasonable to return anything but Star Trek movies. But in the case of the Elasticsearch completion suggester, the top completion result is Star Wars: Episode IV—A New Hope. This demonstrates that the completion suggester is unaware of the search context. Another unfortunate consequence of using a finite state transducer to back completion is that the data structure is immutable. If a document is deleted from the index, the completions for that document will still exist. Currently, the only remedy for this situation is a full-index optimization, which effectively rebuilds the completion index from scratch. Despite the possible pitfalls, the completion suggester is an important tool for relevance feedback.


The Science of Language by Noam Chomsky

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alfred Russel Wallace, British Empire, Brownian motion, dark matter, Drosophila, epigenetics, finite state, Howard Zinn, phenotype, statistical model, stem cell, Steven Pinker, theory of mind

I briefly address two questions: (i) where do linguistic hierarchical structures ‘come from,’ and (ii) what is their significance for understanding the way that language ‘works’? On the first question, it is important to recognize that it is difficult to avoid hierarchy. The successor function for generating the natural numbers introduces it, and in fact any finite state grammar introduces hierarchy: each additional element added in generating a string of words or other elements yields a larger set of which the prior-generated set is a subset. If one adds the operation of associativity, one can make the hierarchy disappear and end up with a string, but that requires an additional operation – and justification for introducing it. I do not mention a finite state grammar because it introduces the right hierarchy for natural languages. Chomsky demonstrated long ago that it did not (1957; see also Chomsky & Miller 1963). The issue is not how some kind of hierarchy is introduced, but rather how the ‘right’ kind is – the kind of hierarchy found in natural languages.
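A toy illustration of that point (mine, not Chomsky's formalism): generating a string one element at a time induces a trivial nested structure, since each step contains the previous step as a subpart, and assuming associativity flattens it back to a plain string.

    def generate(words):
        # Each step wraps the previously generated structure, so every prefix
        # is contained in the next step: a trivial, left-branching hierarchy.
        structure = None
        for w in words:
            structure = w if structure is None else (structure, w)
        return structure

    def flatten(structure):
        # The analogue of adding associativity: the hierarchy disappears.
        if isinstance(structure, tuple):
            return flatten(structure[0]) + flatten(structure[1])
        return [structure]

    s = generate(["the", "old", "old", "man"])
    print(s)           # ((('the', 'old'), 'old'), 'man')
    print(flatten(s))  # ['the', 'old', 'old', 'man']

The point of the passage, of course, is that this is not the right kind of hierarchy for natural language; it is merely the unavoidable kind.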

[Back-of-book index, flattened by the excerpt extraction; among its entries: grammar (finite state 232; generative 63, 85, 91, 96, 99; phrase structure 233, 235; structure and hierarchy 236), I-language, language faculty, Merge, Minimalist Program, parameters, and related topics.]


pages: 463 words: 118,936

Darwin Among the Machines by George Dyson

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Ada Lovelace, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, anti-communist, British Empire, carbon-based life, cellular automata, Claude Shannon: information theory, combinatorial explosion, computer age, Danny Hillis, fault tolerance, Fellow of the Royal Society, finite state, IFF: identification friend or foe, invention of the telescope, invisible hand, Isaac Newton, Jacquard loom, James Watt: steam engine, John Nash: game theory, John von Neumann, Menlo Park, Nash equilibrium, Norbert Wiener, On the Economy of Machinery and Manufactures, packet switching, pattern recognition, phenotype, RAND corporation, Richard Feynman, spectrum auction, strong AI, the scientific method, The Wealth of Nations by Adam Smith, Turing machine, Von Neumann architecture

Newman said, “was like catching mice just as they were entering a hole in the wall.”38 A series of electrical pulses, about a microsecond apart, were converted to a train of sound waves circulating in a long tube of mercury equipped with crystal transducers at both ends. About a thousand digits could be stored in the millisecond it took a train of pulses to travel the length of a five-foot “tank.” Viewed as part of a finite-state Turing machine, the delay line represented a continuous loop of tape, a thousand squares in length and making a thousand complete passes per second under the read-write head. Turing specified some two hundred tubes, each storing thirty-two words of 32 bits, for a total, “comparable with the memory capacity of a minnow,” of about 200,000 bits.39 “The property of being digital,” announced Turing to the London Mathematical Society in a 1947 lecture on his design, “should be of more interest than that of being electronic.”40 Whether memory took the form of paper tape, vacuum-tube flip-flops, mercury pulse trains, or even papyrus scrolls did not matter as long as discrete symbols could be freely read, written, relocated, and, when so instructed, erased.
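The figures quoted in the passage check out under a quick back-of-the-envelope calculation; the unit assumptions below come straight from the text.

    tubes, words_per_tube, bits_per_word = 200, 32, 32
    print(tubes * words_per_tube * bits_per_word)   # 204800 bits, "about 200,000"

    pulse_interval = 1e-6   # pulses roughly a microsecond apart
    transit_time = 1e-3     # one pass through a five-foot tank, per the text
    print(transit_time / pulse_interval)            # ~1000 digits in flight per tank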

[Back-of-book index, flattened by the excerpt extraction; among its entries: IAS (Institute for Advanced Study) computer; IBM; ideas; IFF (Identification Friend or Foe) radar; infinity, and finite-state machines, 10, 35, 43, 56, 130, 190; information; insects; Instinct and Reason (Smee).]


pages: 249 words: 45,639

Learn Python the Hard Way by Zed Shaw

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

complexity theory, finite state, index card, web application

Write the room description as doc comments, and change the runner to print them. Once you have doc comments as the room description, do you even need to have the function prompt? Have the runner prompt the user, and pass that in to each function. Your functions should just be if-statements printing the result and returning the next room. This is actually a small version of something called a "finite state machine". Read about them. They might not make sense but try anyway.

Exercise 42: Gothons Are Getting Classy

While it's fun to put functions inside of dictionaries, you'd think there'd be something in Python that does this for you. There is: the class keyword. Using class is how you create an even more awesome "dict with functions" than the one you made in the last exercise.
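A minimal sketch of the "dict with functions" state machine both exercises describe; the room names, descriptions, and choices here are invented, not Shaw's.

    def bridge(action):
        """You are on the bridge. There is a bomb here."""
        return "escape_pod" if action == "take bomb" else "death"

    def escape_pod(action):
        """You sprint to the pod bay."""
        return "win" if action == "enter pod" else "death"

    ROOMS = {"bridge": bridge, "escape_pod": escape_pod}

    def runner(room):
        while room in ROOMS:
            func = ROOMS[room]
            print(func.__doc__)         # the doc comment is the description
            room = func(input("> "))    # the runner prompts, not the room
        print("Final state:", room)

    # runner("bridge")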


pages: 259 words: 67,456

The Mythical Man-Month by Brooks, Jr. Frederick P.

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

finite state, HyperCard, Menlo Park, sorting algorithm, speech recognition, Steve Jobs, Tacoma Narrows Bridge, Turing machine

Harel usefully defines a prototype as [A version of a program that] reflects only the design decisions made in the process of preparing the conceptual model, and not decisions driven by implementation concerns.12 It is possible to build a prototype that is not at all part of a product growing toward shipment. For example, one may build an interface prototype that has no real program function behind it, merely the finite-state machine that makes it appear to go through its paces. One can even prototype and test interfaces by the Wizard-of-Oz technique, with a concealed human simulating the system's responses. Such prototyping can be very useful for getting early user feedback, but it is quite apart from testing a product growing toward shipment. Similarly, implementers may well undertake to build a vertical slice of a product, in which a very limited function set is constructed in full, so as to let early sunlight into places where performance snakes may lurk.


pages: 504 words: 89,238

Natural language processing with Python by Steven Bird, Ewan Klein, Edward Loper

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

bioinformatics, business intelligence, conceptual framework, elephant in my pajamas, en.wikipedia.org, finite state, Firefox, information retrieval, Menlo Park, natural language processing, P = NP, search inside the book, speech recognition, statistical model, text mining, Turing test

Check http://www.nltk.org/ for news about developments after the publication date of this book. Contributions in the following areas are particularly encouraged:

Phonology and morphology
Computational approaches to the study of sound patterns and word structures typically use a finite-state toolkit. Phenomena such as suppletion and non-concatenative morphology are difficult to address using the string-processing methods we have been studying. The technical challenge is not only to link NLTK to a high-performance finite-state toolkit, but to avoid duplication of lexical data and to link the morphosyntactic features needed by morph analyzers and syntactic parsers.

High-performance components
Some NLP tasks are too computationally intensive for pure Python implementations to be feasible. However, in some cases the expense arises only when training models, not when using them to label inputs.


pages: 999 words: 194,942

Clojure Programming by Chas Emerick, Brian Carper, Christophe Grand

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Amazon Web Services, Benoit Mandelbrot, cloud computing, continuous integration, database schema, domain-specific language, en.wikipedia.org, failed state, finite state, Firefox, game design, general-purpose programming language, mandelbrot fractal, Paul Graham, platform as a service, premature optimization, random walk, Schrödinger's Cat, semantic web, software as a service, sorting algorithm, Turing complete, type inference, web application

This count will be stored in a map of words to their respective counts, held in an atom called word-freqs:

    (def url-queue (LinkedBlockingQueue.))
    (def crawled-urls (atom #{}))
    (def word-freqs (atom {}))

We'll set up a bunch of agents in order to fully utilize all the resources we have available,[157] but we need to think through what state they'll hold and what actions will be used to transition those states. In many cases, it is useful to think about agent state and the actions applied to it as forming a finite state machine; we've already walked through the workflow of our crawler, but we should formalize it.

Figure 4-8. A web crawler's primary state transitions

The state of an agent at each point in this process should be obvious: prior to retrieving a URL, an agent will need a URL to retrieve (or some source of such URLs); prior to scraping, an agent will need a page's content to scrape; and, prior to updating the cumulative crawler state, it will need the results of scraping.

It will leave the state of the agent set to a map containing the URL it pulls off the queue and its content:

    (declare run process handle-results)

    (defn ^::blocking get-url [{:keys [^BlockingQueue queue] :as state}]
      (let [url (as-url (.take queue))]
        (try
          (if (@crawled-urls url)
            state
            {:url url
             :content (slurp url)
             ::t #'process})
          (catch Exception e
            ;; skip any URL we failed to load
            state)
          (finally (run *agent*)))))

We'll show run and explain what it's doing in a little bit. If we've already crawled the URL pulled off the queue (or if we encounter an error while loading the URL's content), we leave the agent's state untouched. This implementation detail in our finite state machine adds a cycle to it where get-url will sometimes be invoked on a single agent multiple times before it transitions states.

process will parse a URL's content, using the links-from and words-from functions to obtain the URL's content's links and build a map containing the frequencies of each word found in the content. It will leave the state of the agent set to a map containing these values as well as the originating URL:

    (defn process [{:keys [url content]}]
      (try
        (let [html (enlive/html-resource (java.io.StringReader. content))]
          {::t #'handle-results
           :url url
           :links (links-from url html)
           :words (reduce (fn [m word]
                            (update-in m [word] (fnil inc 0)))
                          {}
                          (words-from html))})
        (finally (run *agent*))))

The :words map associates found words with their count within the retrieved page, which we produce by reducing a map through the seq of those words. fnil is a HOF that returns a function that uses some default value(s) (here, 0) in place of any nil arguments when calling the function provided to fnil; this keeps us from having to explicitly check if the value in the words map is nil, and returning 1 if so.
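The same shape, sketched in Python rather than the chapter's Clojure: each handler returns the agent's next state together with the next handler to run, which plays the role of the ::t key above. Fetching and parsing are stubbed out; only the state-machine structure is shown.

    def get_url(state):
        url = state["queue"].pop()
        return {"queue": state["queue"], "url": url,
                "content": "<html>" + url + "</html>",   # stands in for slurp
                "next": process}

    def process(state):
        freqs = {}
        for word in state["content"].split():            # stands in for words-from
            freqs[word] = freqs.get(word, 0) + 1
        return {**state, "words": freqs, "next": handle_results}

    def handle_results(state):
        print(state["url"], state["words"])
        return {"queue": state["queue"], "next": get_url}

    state = {"queue": ["http://example.com"], "next": get_url}
    for _ in range(3):        # one full get-url -> process -> handle-results cycle
        state = state["next"](state)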


pages: 336 words: 88,320

Being Geek: The Software Developer's Career Handbook by Michael Lopp

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

finite state, game design, job satisfaction, John Gruber, knowledge worker, remote working, rolodex, Silicon Valley, Silicon Valley startup, Skype, sorting algorithm, web application

It's your inner dialog, and it's often full of shit. I'm not saying you deliberately lie to yourself. OK, maybe I am, but we're all doing it. We're all gathering data through the course of the day and creating a story based on that data, our experience, and our moods. It's a perfectly natural phenomenon to guide the narrative in our favor. We see the world how we want. A carpenter sees all problems as a nail. I see problems as finite state machines. As we edit our days into these stories, there is always a risk of fiction. This is why you need to identify and nurture Your People. You tell these stories to Your People without reservation. Your People love your stories—fiction and all. They love how you tell them, they laugh about the lies you tell yourself, and then they stop and they tell you the truth. Networking is the art of finding those who are willing to listen to and critique your stories, so go look at your Inbox.


pages: 502 words: 107,510

Natural Language Annotation for Machine Learning by James Pustejovsky, Amber Stubbs

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Amazon Mechanical Turk, bioinformatics, cloud computing, computer vision, crowdsourcing, easy for humans, difficult for computers, finite state, game design, information retrieval, iterative process, natural language processing, pattern recognition, performance metric, sentiment analysis, social web, speech recognition, statistical model, text mining

It has been used successfully in Speech Recognition, handwriting recognition, POS tagging, and many other NLP tasks. Because the number of possible label sequences gets unmanageable with long sequences, the model makes the same assumption that the sequence predictor from Chapter 3 did; namely, look at only the previous n labels for calculating the probability (see Language Models). The underlying assumption is that the elements transition from state to state via a finite-state automaton behavior, with probabilistic transitions between states. Then, this can be transformed into a new model, with hidden states, where all observation tokens are emitted from each state with a finite probability. Of particular interest to us is the fact that the richer the encoding of features (e.g., from annotations) the better one can calculate the transition probabilities from state to state in the model.
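A toy model in that spirit, with the forward recursion summing over all hidden state sequences; the states, tokens, and probabilities below are invented for illustration.

    states = ["Noun", "Verb"]
    start = {"Noun": 0.6, "Verb": 0.4}
    trans = {"Noun": {"Noun": 0.3, "Verb": 0.7},
             "Verb": {"Noun": 0.8, "Verb": 0.2}}
    emit  = {"Noun": {"dog": 0.7, "runs": 0.3},
             "Verb": {"dog": 0.1, "runs": 0.9}}

    def forward(tokens):
        # alpha[s] = probability of the tokens so far, ending in hidden state s
        alpha = {s: start[s] * emit[s][tokens[0]] for s in states}
        for t in tokens[1:]:
            alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][t]
                     for s in states}
        return sum(alpha.values())

    print(forward(["dog", "runs"]))   # total probability of observing "dog runs"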


pages: 722 words: 90,903

Practical Vim: Edit Text at the Speed of Thought by Drew Neil

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

en.wikipedia.org, fault tolerance, finite state, place-making, QWERTY keyboard, web application

Meet Operator-Pending Mode

Normal, Insert, and Visual modes are readily identified, but Vim has other modes that are easy to overlook. Operator-Pending mode is a case in point. We use it dozens of times daily, but it usually lasts for just a fraction of a second. For example, we invoke it when we run the command dw. It lasts during the brief interval between pressing the d and w keys. Blink and you'll miss it! If we think of Vim as a finite-state machine, then Operator-Pending mode is a state that accepts only motion commands. It is activated when we invoke an operator command, and then nothing happens until we provide a motion, which completes the operation. While Operator-Pending mode is active, we can return to Normal mode in the usual manner by pressing escape, which aborts the operation. Many commands are invoked by two or more keystrokes (for examples, look up g, z, ctrl-w, or [), but in most cases, the first keystroke merely acts as a prefix for the second.


pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

3D printing, AI winter, Amazon Web Services, artificial general intelligence, Automated Insights, Bernie Madoff, Bill Joy: nanobots, brain emulation, cellular automata, cloud computing, cognitive bias, computer vision, cuban missile crisis, Daniel Kahneman / Amos Tversky, Danny Hillis, data acquisition, don't be evil, Extropian, finite state, Flash crash, friendly AI, friendly fire, Google Glasses, Google X / Alphabet X, Isaac Newton, Jaron Lanier, John von Neumann, Kevin Kelly, Law of Accelerating Returns, life extension, Loebner Prize, lone genius, mutually assured destruction, natural language processing, Nicholas Carr, optical character recognition, PageRank, pattern recognition, Peter Thiel, prisoner's dilemma, Ray Kurzweil, Rodney Brooks, Search for Extraterrestrial Intelligence, self-driving car, semantic web, Silicon Valley, Singularitarianism, Skype, smart grid, speech recognition, statistical model, stealth mode startup, stem cell, Stephen Hawking, Steve Jobs, Steve Wozniak, strong AI, Stuxnet, superintelligent machines, technological singularity, The Coming Technological Singularity, traveling salesman, Turing machine, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, zero day

He said the halting problem is unsolvable because if the debugger encounters a halting problem in the target program, it will succumb to the infinite loop while analyzing it, and never determine if the halting problem was there. You, the programmer, would be waiting for it to come up with the answer for the same amount of time you’d wait for the original program to halt. That is, a very long time, perhaps forever. Marvin Minsky, one of the fathers of artificial intelligence, pointed out that “any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine.” Translated, that means a computer of average memory, while running a program with a halting problem, would take a very long time to fall into a pattern of repetition, which could then be detected by a diagnostic program.
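Minsky's observation is easy to make concrete: iterate any deterministic update over a finite state set and record what you see; a state must repeat within (number of states) + 1 steps, after which the behavior is exactly periodic. The ten-state update rule below is invented for illustration.

    def find_cycle(step, state):
        seen = {}                       # state -> time first visited
        t = 0
        while state not in seen:
            seen[state] = t
            state = step(state)
            t += 1
        return seen[state], t - seen[state]   # (cycle start, period)

    # A machine with 10 internal states; its period cannot exceed 10.
    start, period = find_cycle(lambda s: (3 * s + 1) % 10, 7)
    print(start, period)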


pages: 366 words: 107,145

Fuller Memorandum by Stross, Charles

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Any sufficiently advanced technology is indistinguishable from magic, Beeching cuts, British Empire, cognitive dissonance, complexity theory, congestion charging, dumpster diving, finite state, Firefox, HyperCard, invisible hand, land reform, linear programming, peak oil, security theater, sensible shoes, side project, telemarketer, Turing machine

He's been in his grave for nearly half a century, and would doubtless be spinning in it fast enough to qualify for carbon credits as an environmentally friendly power source if he could see us today in all our multi-ethnic anti-discriminatory splendor. But who cares? That is, indeed, the big-ticket question. Before the Laundry, things were a bit confused. You can do magic by hand, without computers, but magic performed by ritual without finite state automata in the loop--calculating machines, in other words--tends to be haphazard, unreliable, uncontrollable, prone to undesirable side effects, and difficult to repeat. It also tends to fuck with causality, the logical sequence of events, in a most alarming way. We've unintentionally rewritten our history over the centuries, would-be sorcerers unwinding chaos and pinning down events with the dead hand of consistency--always tending towards a more stable ground state because chaos is unstable; entropy is magic's great enemy.

Language and Mind by Noam Chomsky

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Alfred Russel Wallace, finite state, John von Neumann, pattern recognition, phenotype, theory of mind

In fact, it can be argued that the somewhat similar but not formally related concept of real-time deterministic automaton is far more “natural” in terms of time and space conditions on computation.16 However, it is pointless to pursue this topic, because what is at stake is not the “simplicity” of phrase structure grammars but rather of transformational grammars with a phrase structure component that plays a role in generating deep structures. And there is absolutely no mathematical concept of “ease of computation” or “simplicity of algorithm” that even vaguely suggests that such systems may have some advantage over the kinds of automata that have been seriously investigated from this point of view – for example, finite state automata, linear bounded automata, and so on. The basic concept of “structure-dependent operation” has never even been considered in a strictly mathematical context. The source of this confusion is a misconception on Putnam’s part as to the nature of grammatical transformations. They are not rules that “abbreviate” sentences; rather, they are operations that form surface structures from underlying deep structures, in such ways as are illustrated in the preceding lecture and the references there cited.17 Hence, to show that transformational grammars are the “simplest possible,” one would have to demonstrate that the “optimal” computing system would take a string of symbols as input and determine its surface structure, its underlying deep structure, and the sequence of transformational operations that relates them.


pages: 462 words: 172,671

Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

continuous integration, database schema, domain-specific language, en.wikipedia.org, Eratosthenes, finite state, Ignaz Semmelweis: hand washing, iterative process, place-making, web application, WebSocket

G7: Base Classes Depending on Their Derivatives

The most common reason for partitioning concepts into base and derivative classes is so that the higher level base class concepts can be independent of the lower level derivative class concepts. Therefore, when we see base classes mentioning the names of their derivatives, we suspect a problem. In general, base classes should know nothing about their derivatives. There are exceptions to this rule, of course. Sometimes the number of derivatives is strictly fixed, and the base class has code that selects between the derivatives. We see this a lot in finite state machine implementations. However, in that case the derivatives and base class are strongly coupled and always deploy together in the same jar file. In the general case we want to be able to deploy derivatives and bases in different jar files. Deploying derivatives and bases in different jar files and making sure the base jar files know nothing about the contents of the derivative jar files allow us to deploy our systems in discrete and independent components.
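A sketch of that exception, as a turnstile-style machine (mine, not the book's Java): the state set is fixed, so the base class may name its derivatives, and all of them must ship together as one unit.

    class State:
        def on(self, event):
            raise NotImplementedError

        @staticmethod
        def initial():
            return Locked()   # base naming a derivative: the coupling at issue

    class Locked(State):
        def on(self, event):
            return Unlocked() if event == "coin" else self

    class Unlocked(State):
        def on(self, event):
            return Locked() if event == "push" else self

    state = State.initial()
    for event in ["coin", "push"]:
        state = state.on(event)
    print(type(state).__name__)   # Locked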

Pragmatic.Programming.Erlang.Jul.2007 by Unknown

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Debian, en.wikipedia.org, fault tolerance, finite state, full text search, RFC: Request For Comment, sorting algorithm

Module:terminate(Arg, State) -> term()
    Clean up before deletion.
add_handler(EventMgrRef, Handler, Args) -> Result
    Add an event handler to a generic event manager.
add_sup_handler(EventMgrRef, Handler, Args) -> Result
    Add a supervised event handler to a generic event manager.
call(EventMgrRef, Handler, Request, Timeout) -> Result
    Make a synchronous call to a generic event manager.
delete_handler(EventMgrRef, Handler, Args) -> Result
    Delete an event handler from a generic event manager.
start(EventMgrName) -> Result
    Create a stand-alone event manager process.
start_link(EventMgrName) -> Result
    Create a generic event manager process in a supervision tree.
stop(EventMgrRef) -> ok
    Terminate a generic event manager.
swap_handler(EventMgrRef, {Handler1,Args1}, {Handler2,Args2}) -> Result
    Replace an event handler in a generic event manager.
swap_sup_handler(EventMgrRef, {Handler1,Args1}, {Handler2,Args2}) -> Result
    Replace an event handler in a generic event manager.
sync_notify(EventMgrRef, Event) -> ok
    Notify an event manager about an event.
which_handlers(EventMgrRef) -> [Handler]
    Return all event handlers installed in a generic event manager.

F.29 Module: gen_fsm

Generic finite state machine behavior.

Module:StateName(Event, StateData) -> Result
    Handle an asynchronous event.
Module:StateName(Event, From, StateData) -> Result
    Handle a synchronous event.
Module:code_change(OldVsn, StateName, StateData, Extra) -> {ok, NextStateName, NewStateData}
    Update the internal state data during upgrade/downgrade.
Module:handle_event(Event, StateName, StateData) -> Result
    Handle an asynchronous event.


pages: 489 words: 148,885

Accelerando by Stross, Charles

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

call centre, carbon-based life, cellular automata, cognitive dissonance, Conway's Game of Life, dark matter, dumpster diving, Extropian, finite state, Flynn Effect, glass ceiling, gravity well, John von Neumann, knapsack problem, Kuiper Belt, Magellanic Cloud, mandelbrot fractal, market bubble, means of production, packet switching, performance metric, phenotype, planetary scale, Pluto: dwarf planet, reversible computing, Richard Stallman, SETI@home, Silicon Valley, Singularitarianism, slashdot, South China Sea, stem cell, technological singularity, telepresence, The Chicago School, theory of mind, Turing complete, Turing machine, Turing test, upwardly mobile, Vernor Vinge, Von Neumann architecture, web of trust, Y2K

"Oh, one of Johnny's toys – a micromechanical digital phonograph player," Gianni says dismissively. "He used to design Babbage engines for the Pentagon – stealth computers. (No van Eck radiation, you know.) Look." He carefully pulls a fabric-bound document out of the obsolescent data wall and shows the spine to Manfred: "On the Theory of Games, by John von Neumann. Signed first edition." Aineko meeps and dumps a slew of confusing purple finite state automata into Manfred's left eye. The hardback is dusty and dry beneath his fingertips as he remembers to turn the pages gently. "This copy belonged to the personal library of Oleg Kordiovsky. A lucky man is Oleg: He bought it in 1952, while on a visit to New York, and the MVD let him to keep it." "He must be –" Manfred pauses. More data, historical time lines. "Part of GosPlan?" "Correct."

Wireless by Stross, Charles

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

anthropic principle, back-to-the-land, Benoit Mandelbrot, Buckminster Fuller, Cepheid variable, cognitive dissonance, colonial exploitation, cosmic microwave background, epigenetics, finite state, Georg Cantor, gravity well, hive mind, jitney, Khyber Pass, Magellanic Cloud, mandelbrot fractal, peak oil, phenotype, Pluto: dwarf planet, security theater, sensible shoes, Turing machine

There’s an odd kind of electrical tingle you get when you make contact with certain types of summoning grid. I get a powerful jolt of it right now, sizzling up my arm and locking my fingers in place around the head of the chess piece. I try to pull it away from the board, but it’s no good: it only wants to move up or down, left or right . . . Left or right? I blink. It’s a state machine all right: one that’s locked by the law of sympathy to some other finite state automaton, one that grinds down slow and hard. I move the piece forward one square. It’s surprisingly heavy, the magnet a solid weight in its base—but more than magnetism holds it in contact with the board. As soon as I stop moving I feel a sharp sting in my fingertips. “Ouch!” I raise them to my mouth just as there’s a crash from outside. “InMATE! InMATE!” I begin to turn as a shadow falls across the board.


pages: 570 words: 115,722

The Tangled Web: A Guide to Securing Modern Web Applications by Michal Zalewski

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

barriers to entry, business process, defense in depth, easy for humans, difficult for computers, fault tolerance, finite state, Firefox, Google Chrome, information retrieval, RFC: Request For Comment, semantic web, Steve Jobs, telemarketer, Turing test, Vannevar Bush, web application, WebRTC, WebSocket

One could argue that practitioners are not the ones to be asked for nuanced definitions, but go ahead and pose the same question to a group of academics and they’ll offer you roughly the same answer. For example, the following common academic definition traces back to the Bell-La Padula security model, published in the 1960s. (This was one of about a dozen attempts to formalize the requirements for secure systems, in this case in terms of a finite state machine;[86] it is also one of the most notable ones.) A system is secure if and only if it starts in a secure state and cannot enter an insecure state. Definitions along these lines are fundamentally true, of course, and may serve as the basis for dissertations or even a couple of government grants. But in practice, models built on these foundations are bound to be nearly useless for generalized, real-world software engineering for at least three reasons: There is no way to define desirable behavior for a sufficiently complex computer system.
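Read literally, the definition is a reachability property of a finite state machine, which is easy to check on a toy transition graph; the states and transitions below are invented.

    def secure(initial, transitions, insecure):
        seen, frontier = set(), [initial]
        while frontier:
            s = frontier.pop()
            if s in insecure:
                return False             # an insecure state is reachable
            if s not in seen:
                seen.add(s)
                frontier.extend(transitions.get(s, []))
        return True

    transitions = {"logged_out": ["logged_in"],
                   "logged_in": ["admin_panel", "logged_out"],
                   "admin_panel": ["logged_in"]}
    print(secure("logged_out", transitions, insecure={"root_shell"}))   # True

The catch, as the passage goes on to argue, is that for a realistically complex system there is no practical way to enumerate the states or to decide which of them count as insecure.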


pages: 818 words: 153,952

C++ Concurrency in Action: Practical Multithreading by Anthony Williams

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

car-free, finite state, SETI@home

Synchronizing operations with message passing

The idea of CSP is simple: if there's no shared data, each thread can be reasoned about entirely independently, purely on the basis of how it behaves in response to the messages that it receives. Each thread is therefore effectively a state machine: when it receives a message, it updates its state in some manner and maybe sends one or more messages to other threads, with the processing performed depending on the initial state. One way to write such threads would be to formalize this and implement a Finite State Machine model, but this isn't the only way; the state machine can be implicit in the structure of the application. Which method works better in any given scenario depends on the exact behavioral requirements of the situation and the expertise of the programming team. However you choose to implement each thread, the separation into independent processes has the potential to remove much of the complication from shared-data concurrency and therefore make programming easier, lowering the bug rate.
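A minimal sketch of that style in Python, standing in for the book's C++: the thread below shares nothing, holds its state privately, and reacts only to the messages on its queue.

    import threading
    import queue

    def counter(inbox, outbox):
        count = 0                       # private state: never shared, no locks
        while True:
            msg = inbox.get()
            if msg == "increment":
                count += 1
            elif msg == "read":
                outbox.put(count)
            elif msg == "stop":
                return

    inbox, outbox = queue.Queue(), queue.Queue()
    threading.Thread(target=counter, args=(inbox, outbox)).start()
    for _ in range(3):
        inbox.put("increment")
    inbox.put("read")
    print(outbox.get())                 # 3
    inbox.put("stop")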


pages: 1,085 words: 219,144

Solr in Action by Trey Grainger, Timothy Potter

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

business intelligence, cloud computing, conceptual framework, crowdsourcing, data acquisition, en.wikipedia.org, failed state, fault tolerance, finite state, full text search, glass ceiling, information retrieval, natural language processing, performance metric, premature optimization, recommendation engine, web application

In a large index, there can be many terms matching a prefix, so we also need a way to sort the suggestions by popularity, as we don’t want low-frequency, rare terms polluting our suggestion lists. Think of it in terms of doing the most good for the most users; if a term only occurs in a few documents out of millions, it’s probably not a good suggestion. In listing 10.11, we’re using the org.apache.solr.spelling.suggest.fst.FSTLookup class, which uses a data structure based on a finite state automaton (FST) to provide fast, constant-time lookups regardless of prefix length. The FSTLookupFactory implementation is slower to build but requires a smaller memory footprint, which makes it a good choice for large-term dictionaries. The suggestion dictionary must be rebuilt to incorporate new terms as new documents are indexed. In our configuration, we chose to rebuild the dictionary after every commit (buildOnCommit=true).
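The contract is easy to state even without the FST machinery: given a prefix, return the matching terms ranked by how often they occur, so rare terms don't pollute the list. A sketch with invented frequencies follows; a real FST is far more compact than this sorted list, but the lookup behavior is the same.

    from bisect import bisect_left

    term_freqs = {"solar": 4200, "solr": 9100, "solaris": 310, "solder": 25}
    terms = sorted(term_freqs)          # the term dictionary, in sorted order

    def suggest(prefix, k=3):
        i = bisect_left(terms, prefix)  # jump straight to the prefix range
        matches = []
        while i < len(terms) and terms[i].startswith(prefix):
            matches.append(terms[i])
            i += 1
        return sorted(matches, key=term_freqs.get, reverse=True)[:k]

    print(suggest("sol"))               # ['solr', 'solar', 'solaris']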


pages: 846 words: 232,630

Darwin's Dangerous Idea: Evolution and the Meanings of Life by Daniel C. Dennett

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Albert Einstein, Alfred Russel Wallace, anthropic principle, buy low sell high, cellular automata, combinatorial explosion, complexity theory, computer age, conceptual framework, Conway's Game of Life, Danny Hillis, double helix, Douglas Hofstadter, Drosophila, finite state, Gödel, Escher, Bach, In Cold Blood by Truman Capote, invention of writing, Isaac Newton, Johann Wolfgang von Goethe, John von Neumann, Murray Gell-Mann, New Journalism, non-fiction novel, Peter Singer: altruism, phenotype, price mechanism, prisoner's dilemma, QWERTY keyboard, random walk, Richard Feynman, Richard Feynman, Rodney Brooks, Schrödinger's Cat, Stephen Hawking, Steven Pinker, strong AI, the scientific method, theory of mind, Thomas Malthus, Turing machine, Turing test

Chomsky's work followed closely in the path of Turing's purely logical investigations into the powers of what we now call computers. Chomsky eventually defined an ascending scale of types of grammars or types of languages — the Chomsky Hierarchy, on which all students of computation theory still cut their teeth — and showed how these grammars were interdefinable with an ascending scale of types of automata or computers — from "finite state machines" through "push-down automata" and "linear bounded machines" to "Turing machines." I can vividly remember the shock wave that rolled through philosophy when Chomsky's work first came to our attention a few years later. In 1960, my sophomore year at Harvard, I asked Professor Quine what critics of his views I should be reading. (I considered myself at the time to be an anti-Quinian of ferocious conviction, and was already beginning to develop the arguments for my senior thesis, attacking him.

Data Mining: Concepts and Techniques: Concepts and Techniques by Jiawei Han, Micheline Kamber, Jian Pei

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

bioinformatics, business intelligence, business process, Claude Shannon: information theory, cloud computing, computer vision, correlation coefficient, cyber-physical system, database schema, discrete time, distributed generation, finite state, information retrieval, iterative process, knowledge worker, linked data, natural language processing, Netflix Prize, Occam's razor, pattern recognition, performance metric, phenotype, random walk, recommendation engine, RFID, semantic web, sentiment analysis, speech recognition, statistical model, stochastic process, supply-chain management, text mining, thinkpad, web application

If the behavior attribute values of the object significantly deviate from the values predicted by the model, then the object can be declared a contextual outlier. By using a prediction model that links the contexts and behavior, these methods avoid the explicit identification of specific contexts. A number of classification and prediction techniques can be used to build such models, such as regression, Markov models, and finite state automata. Interested readers are referred to Chapter 8 and Chapter 9 on classification and the bibliographic notes for further details (Section 12.11). In summary, contextual outlier detection enhances conventional outlier detection by considering contexts, which are important in many applications. We may be able to detect outliers that cannot be detected otherwise. Consider a credit card user whose income level is low but whose expenditure patterns are similar to those of millionaires.
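A sketch of the idea, with plain least squares standing in for the richer models the text mentions; the context and behavior numbers below are invented to mirror the credit card example.

    import numpy as np

    context = np.array([20.0, 35.0, 50.0, 65.0, 80.0])   # e.g. income
    behavior = np.array([1.8, 3.1, 4.2, 5.9, 7.1])       # e.g. monthly spend

    # Fit a model linking context to expected behavior.
    slope, intercept = np.polyfit(context, behavior, 1)
    residuals = behavior - (slope * context + intercept)
    threshold = 3 * residuals.std()

    # A new object: low income but millionaire-like spending.
    new_context, new_behavior = 22.0, 6.5
    deviation = new_behavior - (slope * new_context + intercept)
    print(abs(deviation) > threshold)                    # flagged as a contextual outlier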