locality of reference

7 results


pages: 894 words: 190,485

Write Great Code, Volume 1 by Randall Hyde

AltaVista, business process, Donald Knuth, John von Neumann, level 1 cache, locality of reference, machine readable, Von Neumann architecture, Y2K

The technical names given to these phenomena are temporal locality of reference and spatial locality of reference. When exhibiting spatial locality, a program accesses neighboring memory locations within a short period after the initial memory access. When displaying temporal locality of reference, a program accesses the same memory location repeatedly during a short time. Both forms of locality occur in the following Pascal code segment:

    for i := 0 to 10 do A[i] := 0;

There are two occurrences each of spatial and temporal locality of reference within this loop. Let’s consider the obvious ones first.

The loop-control code reads and updates i on every iteration, and the assignment statement also uses i as an array index. This shows temporal locality of reference in action because the CPU accesses i at three points in a short time period. This program also exhibits spatial locality of reference. The loop itself zeros out the elements of array A by writing a zero to the first location in A, then to the second location in A, and so on. Because Pascal stores the elements of A in consecutive memory locations, each loop iteration accesses adjacent memory locations. There is an additional example of temporal and spatial locality of reference in this Pascal example: machine instructions also reside in memory, and the CPU fetches the loop's instructions sequentially from memory (spatial locality) and executes them repeatedly, once for each loop iteration (temporal locality).
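As a rough illustration of the same point, here is a C++ rendering of that loop with the locality effects marked in comments; this is a sketch added for this summary, not code from the book:

    #include <cstddef>

    int main() {
        double A[11] = {};                       // 11 consecutive doubles in memory
        for (std::size_t i = 0; i <= 10; ++i) {  // i is tested, used, and incremented every
                                                 // iteration: temporal locality
            A[i] = 0.0;                          // consecutive elements touched on consecutive
                                                 // iterations: spatial locality
        }
        // The loop's machine instructions are themselves fetched from adjacent addresses
        // (spatial locality) and re-executed on every iteration (temporal locality).
        return 0;
    }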

Some systems actually provide this functionality, and others do not for economic reasons. 11.4.7 Cache Use and Software A cache subsystem is not a panacea for slow memory access. In order for a cache system to be effective, software must exhibit locality of reference (either spatial or temporal). Fortunately, real-world programs tend to exhibit locality of reference, so most programs will benefit from the presence of a cache in the memory subsystem. But while programs do exhibit locality of reference, this is often accidental; programmers rarely consider the memory-access patterns of their software when designing and coding. Unfortunately, application programmers who work under the assumption that the cache will magically improve the performance of their applications are missing one important point — a cache can actually hurt the performance of an application.
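The kind of access-pattern decision the passage alludes to can be sketched as follows; this is an illustrative example under the usual assumptions of a row-major layout and cache lines larger than one element, not an example from the book:

    #include <cstddef>
    #include <vector>

    // Sum a rows x cols matrix stored row-major in a flat vector.
    double sum_row_by_row(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
        double s = 0.0;
        for (std::size_t r = 0; r < rows; ++r)
            for (std::size_t c = 0; c < cols; ++c)
                s += m[r * cols + c];   // adjacent accesses: good spatial locality
        return s;
    }

    // Same arithmetic, but the traversal strides by cols elements between accesses,
    // so for large matrices nearly every access can miss the cache.
    double sum_column_by_column(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
        double s = 0.0;
        for (std::size_t c = 0; c < cols; ++c)
            for (std::size_t r = 0; r < rows; ++r)
                s += m[r * cols + c];
        return s;
    }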


pages: 474 words: 91,222

Effective STL: 50 Specific Ways to Improve Your Use of the Standard Template Library by Scott Meyers

functional programming, locality of reference, sorting algorithm, Y2K

Or you discover that allocator<T> takes precautions to be thread-safe, but you’re interested only in single-threaded execution and you don’t want to pay for the synchronization overhead you don’t need. Or you know that objects in certain containers are typically used together, so you’d like to place them near one another in a special heap to maximize locality of reference. Or you’d like to set up a unique heap that corresponds to shared memory, then put one or more containers in that memory so they can be shared by other processes. Congratulations! Each of these scenarios corresponds to a situation where custom allocators are well suited to the problem. For example, suppose you have special routines modeled after malloc and free for managing a heap of shared memory,

    void* mallocShared(size_t bytesNeeded);
    void freeShared(void *ptr);

and you’d like to make it possible to put the contents of STL containers in that shared memory.
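A minimal sketch of what such an allocator could look like with a C++11-style minimal allocator interface is shown below. Meyers's own treatment uses the older, fuller allocator interface; the SharedMemoryAllocator name and the placeholder bodies for mallocShared/freeShared are assumptions made here so the example is self-contained:

    #include <cstddef>
    #include <cstdlib>
    #include <new>
    #include <vector>

    // Placeholder bodies; a real version would carve memory out of a shared-memory
    // segment rather than calling malloc/free.
    void* mallocShared(std::size_t bytesNeeded) { return std::malloc(bytesNeeded); }
    void  freeShared(void* ptr)                 { std::free(ptr); }

    template <typename T>
    struct SharedMemoryAllocator {
        using value_type = T;

        SharedMemoryAllocator() = default;
        template <typename U>
        SharedMemoryAllocator(const SharedMemoryAllocator<U>&) {}

        T* allocate(std::size_t n) {
            void* p = mallocShared(n * sizeof(T));
            if (!p) throw std::bad_alloc();
            return static_cast<T*>(p);
        }
        void deallocate(T* p, std::size_t) { freeShared(p); }
    };

    template <typename T, typename U>
    bool operator==(const SharedMemoryAllocator<T>&, const SharedMemoryAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const SharedMemoryAllocator<T>&, const SharedMemoryAllocator<U>&) { return false; }

    // The vector's elements (though not the vector object itself) now come from
    // the "shared" heap.
    std::vector<double, SharedMemoryAllocator<double>> sharedDoubles;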

I didn’t read the source code that closely, sorry.) I hope it’s obvious that the various implementations’ policies on minimum allocations might be important to you if you expect to have lots of short strings and either (1) your release environment has very little memory or (2) you are concerned about locality of reference and want to cluster strings on as few pages as possible. Clearly, string implementations have more degrees of freedom than are apparent at first glance, and equally clearly, different implementers have taken advantage of this design flexibility in different ways. Let’s summarize the things that vary: • string values may or may not be reference counted.
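A small probe along these lines shows how much a given library sets aside for short strings; the output is implementation-dependent by design, which is exactly the variability the passage is describing (this snippet is not from the book):

    #include <iostream>
    #include <string>

    int main() {
        std::string s;
        std::cout << "sizeof(std::string): " << sizeof(std::string) << '\n';
        std::cout << "capacity when empty: " << s.capacity() << '\n';
        s = "hi";
        std::cout << "capacity for \"hi\":  " << s.capacity() << '\n';
        // Libraries differ: some always reserve a minimum heap block, others keep
        // short strings inside the string object itself (the small-string optimization).
        return 0;
    }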

(see Item 34). But why should a binary search through a (sorted) vector offer better performance than a binary search through a binary search tree? Because some things are trite but true, and one of them is that size matters. Others are less trite but no less true, and one of those is that locality of reference matters, too. Consider first the size issue. Suppose we need a container to hold Widget objects, and, because lookup speed is important to us, we are considering both an associative container of Widgets and a sorted vector<Widget>. If we choose an associative container, we’ll almost certainly be using a balanced binary tree.
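The contrast the item goes on to develop can be sketched as follows; Widget here is just a stand-in type, and the code is a simplified illustration rather than Meyers's:

    #include <algorithm>
    #include <set>
    #include <vector>

    struct Widget {
        int id;
        bool operator<(const Widget& rhs) const { return id < rhs.id; }
    };

    bool inSet(const std::set<Widget>& s, const Widget& w) {
        return s.find(w) != s.end();          // probes tree nodes scattered across the heap
    }

    bool inSortedVector(const std::vector<Widget>& v, const Widget& w) {
        // v must already be sorted. Its elements are contiguous and carry no per-node
        // overhead, so the probes of a binary search touch far fewer pages and cache lines.
        return std::binary_search(v.begin(), v.end(), w);
    }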


pages: 828 words: 205,338

Write Great Code, Volume 2 by Randall Hyde

complexity theory, Donald Knuth, Free Software Foundation, G4S, locality of reference, NP-complete, premature optimization

For example:

    #include <stdlib.h>
    #include <stdio.h>

    static double OnePointZero_c = 1.0;

    int main( int argc, char **argv, char **envp )
    {
        static double i;
        i = OnePointZero_c;
    }

In this example, of course, you gain absolutely nothing by treating the floating-point constants as static variables. However, in more complex situations where you have several floating-point constants, you can analyze your program to determine which constants you access often and place the variables for those constants at adjacent memory locations. Because of the way most CPUs handle spatial locality of reference (that is, accessing nearby variables; see Write Great Code, Volume 1), when you access one of these constant objects, the cache line will be filled with the values of the adjacent objects as well. Therefore, when you access those other objects within a short period of time, it’s likely that their values will be in the cache.
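One way to act on that advice, sketched here rather than taken from the book, is to group the frequently used constant objects in a single static structure so they occupy adjacent memory and tend to land on the same cache line:

    // Frequently accessed floating-point "constants" kept adjacent in memory.
    // Reading one of them pulls the surrounding bytes into the cache, so the
    // others are likely to be cache hits shortly afterward.
    static const struct {
        double onePointZero;
        double pi;       // illustrative values chosen for this sketch
        double e;
    } fpConsts = { 1.0, 3.14159265358979323846, 2.71828182845904523536 };

    double scale(double x) {
        // Both constants are read here; after the first access the second is very
        // likely already in cache because it sits in the same line (or the next one).
        return x * fpConsts.pi + fpConsts.onePointZero;
    }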

From the book's index: Spatial locality of reference, 7.7 Floating-Point Constants


pages: 680 words: 157,865

Beautiful Architecture: Leading Thinkers Reveal the Hidden Beauty in Software Design by Diomidis Spinellis, Georgios Gousios

Albert Einstein, barriers to entry, business intelligence, business logic, business process, call centre, continuous integration, corporate governance, database schema, Debian, domain-specific language, don't repeat yourself, Donald Knuth, duck typing, en.wikipedia.org, fail fast, fault tolerance, financial engineering, Firefox, Free Software Foundation, functional programming, general-purpose programming language, higher-order functions, iterative process, linked data, locality of reference, loose coupling, meta-analysis, MVC pattern, Neal Stephenson, no silver bullet, peer-to-peer, premature optimization, recommendation engine, Richard Stallman, Ruby on Rails, semantic web, smart cities, social graph, social web, SPARQL, Steve Jobs, Stewart Brand, Strategic Defense Initiative, systems thinking, the Cathedral and the Bazaar, traveling salesman, Turing complete, type inference, web application, zero-coupon bond

Efficient indexing is basically impossible, as that would require the contents of the data fields to be parsed. Consequently, querying such fields would not perform well. The expected access patterns also do not favor a database; a mechanism that handles continuous streaming of data with a lot of locality of reference might well be preferable. Not using a database for the data items would mean that transactional semantics on the operations on the store would have to be implemented manually. A proposed solution for this was the extension of the principles employed by the maildir standard, which essentially allows lock-free ACID access by relying on atomic rename operations on the filesystem.[65] For the first pass implementation, it was decided that the database would store both data and metadata, with the intent of optimizing this at a later point, when the requirements of searching in particular would be better defined.
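A rough sketch of the maildir-style delivery step the passage refers to, written here with C++17's std::filesystem as a simplified illustration of the atomic-rename idea rather than the project's actual code:

    #include <filesystem>
    #include <fstream>
    #include <string>

    namespace fs = std::filesystem;

    // Write the message under tmp/, then rename it into new/. A rename within one
    // POSIX filesystem is atomic, so readers never observe a partially written
    // message and no locking is required.
    void deliver(const fs::path& maildir, const std::string& uniqueName,
                 const std::string& message) {
        const fs::path tmp = maildir / "tmp" / uniqueName;
        const fs::path dst = maildir / "new" / uniqueName;
        {
            std::ofstream out(tmp, std::ios::binary);
            out << message;
        }                       // file is flushed and closed before it becomes visible
        fs::rename(tmp, dst);
    }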


pages: 834 words: 180,700

The Architecture of Open Source Applications by Amy Brown, Greg Wilson

8-hour work day, anti-pattern, bioinformatics, business logic, c2.com, cloud computing, cognitive load, collaborative editing, combinatorial explosion, computer vision, continuous integration, Conway's law, create, read, update, delete, David Heinemeier Hansson, Debian, domain-specific language, Donald Knuth, en.wikipedia.org, fault tolerance, finite state, Firefox, Free Software Foundation, friendly fire, functional programming, Guido van Rossum, Ken Thompson, linked data, load shedding, locality of reference, loose coupling, Mars Rover, MITM: man-in-the-middle, MVC pattern, One Laptop per Child (OLPC), peer-to-peer, Perl 6, premature optimization, recommendation engine, revision control, Ruby on Rails, side project, Skype, slashdot, social web, speech recognition, the scientific method, The Wisdom of Crowds, web application, WebSocket

The Access Methods: Btree, Hash, Recno, Queue The Berkeley DB access methods provide both keyed lookup of, and iteration over, variable and fixed-length byte strings. Btree and Hash support variable-length key/value pairs. Recno and Queue support record-number/value pairs (where Recno supports variable-length values and Queue supports only fixed-length values). The main difference between Btree and Hash access methods is that Btree offers locality of reference for keys, while Hash does not. This implies that Btree is the right access method for almost all data sets; however, the Hash access method is appropriate for data sets so large that not even the Btree indexing structures fit into memory. At that point, it's better to use the memory for data than for indexing structures.
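The practical consequence of that difference can be shown in miniature with the standard containers; this is an analogy, not the Berkeley DB API: an ordered structure keeps nearby keys together, so a range (or prefix) scan is cheap, while a hash scatters keys and only supports point lookups efficiently:

    #include <map>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Ordered (Btree-like): keys that sort near each other are stored near each
    // other, so a prefix scan walks them in order.
    std::vector<std::string> keysWithPrefix(const std::map<std::string, int>& db,
                                            const std::string& prefix) {
        std::vector<std::string> out;
        for (auto it = db.lower_bound(prefix);
             it != db.end() && it->first.compare(0, prefix.size(), prefix) == 0;
             ++it)
            out.push_back(it->first);
        return out;
    }

    // Hashed: equality lookups are fast, but keys have no usable order, so a
    // "range" query would degenerate into a scan of the whole table.
    bool contains(const std::unordered_map<std::string, int>& db, const std::string& key) {
        return db.find(key) != db.end();
    }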


pages: 923 words: 516,602

The C++ Programming Language by Bjarne Stroustrup

combinatorial explosion, conceptual framework, database schema, Dennis Ritchie, distributed generation, Donald Knuth, fault tolerance, functional programming, general-purpose programming language, higher-order functions, index card, iterative process, job-hopping, L Peter Deutsch, locality of reference, Menlo Park, no silver bullet, Parkinson's law, premature optimization, sorting algorithm

Exactly the same process can be seen in single-rooted class hierarchies, in which "interesting" data and functions tend to bubble up toward a root class (§24.4). Focussing on the specification of classes and the encapsulation of data addresses this problem by making the dependencies between different parts of a program explicit and tractable. More important, though, it reduces the number of dependencies in a system by improving locality of reference to data. However, some problems are best solved by writing a set of procedures.


The Art of Computer Programming: Sorting and Searching by Donald Ervin Knuth

card file, Charles Babbage, Claude Shannon: information theory, complexity theory, correlation coefficient, Donald Knuth, double entry bookkeeping, Eratosthenes, Fermat's Last Theorem, G4S, information retrieval, iterative process, John von Neumann, linked data, locality of reference, Menlo Park, Norbert Wiener, NP-complete, p-value, Paul Erdős, RAND corporation, refrigerator car, sorting algorithm, Vilfredo Pareto, Yogi Berra, Zipf's Law

A3) (Equality is allowed.) b) Subfiles of M or fewer elements are left unsorted until the very end of the procedure; then a single pass of straight insertion is used to produce the final ordering. Here M > 1 is a parameter that should be chosen as described in the text below. (This idea, due to R. Sedgewick, saves some of the overhead that would be necessary if we applied straight insertion directly to each small subfile, unless locality of reference is significant.) c) Records with equal keys are exchanged, although it is not strictly necessary to do so. (This idea, due to R. C. Singleton, keeps the inner loops fast and helps to split subfiles nearly in half when equal elements are present; see exercise 18.) Q1. [Initialize.] If N ≤ M, go to step Q9.
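A compact C++ sketch of the two ideas described in (b): recursion simply ignores subfiles of M or fewer elements, and one straight-insertion pass at the end produces the final ordering. This is an illustration of the technique, not a transcription of Knuth's Algorithm Q:

    #include <cstddef>
    #include <utility>
    #include <vector>

    const std::size_t M = 9;   // small-subfile cutoff; the text explains how to choose it

    // Partition a[lo..hi] around a[hi] (Lomuto scheme, chosen for brevity).
    std::size_t partition(std::vector<int>& a, std::size_t lo, std::size_t hi) {
        const int pivot = a[hi];
        std::size_t i = lo;
        for (std::size_t j = lo; j < hi; ++j)
            if (a[j] <= pivot) std::swap(a[i++], a[j]);   // equal keys get exchanged too
        std::swap(a[i], a[hi]);
        return i;
    }

    // Quicksort that leaves subfiles of M or fewer elements unsorted.
    void quicksortCoarse(std::vector<int>& a, std::size_t lo, std::size_t hi) {
        if (hi - lo + 1 <= M) return;                 // small subfile: postpone sorting
        const std::size_t p = partition(a, lo, hi);
        if (p > lo) quicksortCoarse(a, lo, p - 1);
        if (p + 1 < hi) quicksortCoarse(a, p + 1, hi);
    }

    void sortWithFinalInsertion(std::vector<int>& a) {
        if (a.size() > 1) quicksortCoarse(a, 0, a.size() - 1);
        // One straight-insertion pass finishes the job: every element is already
        // within M-1 positions of its final place, so this pass is nearly linear.
        for (std::size_t i = 1; i < a.size(); ++i) {
            const int key = a[i];
            std::size_t j = i;
            while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
            a[j] = key;
        }
    }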