database schema

49 results

Catalyst 5.8: The Perl MVC Framework by Antano Solar John, Jonathan Rockway


c2.com, create, read, update, delete, database schema, Debian, en.wikipedia.org, Firefox, MVC pattern, Ruby on Rails, web application

Installing SQLite
The first step is to install SQLite from the SQLite website at http://sqlite.org/ (or through your distribution's package manager; the package is usually called "sqlite3"). You'll also need to install the Perl version of SQLite, called DBD::SQLite, and DBIx::Class itself, as well as an adapter that will link our database schema definition to the Catalyst application. You can install these, except for SQLite, using cpan with the following:

$ cpan -i DBD::SQLite DBIx::Class Catalyst::Model::DBIC::Schema

Creating a database schema
Once the dependencies are installed, we'll create a sample database with the sqlite3 command-line utility:

$ sqlite3 tmp/database
SQLite version 3.3.8
Enter ".help" for instructions
sqlite> CREATE TABLE test (id INTEGER PRIMARY KEY, subject TEXT, message TEXT, date DATETIME);
sqlite> INSERT INTO test (id, subject, message, date) VALUES(NULL, "test message", "this is a test message", "2005-01-01 00:00");
sqlite> INSERT INTO test (id, subject, message, date) VALUES(NULL, "another test message", "this is a another test message, hooray", "2005-01-01 00:01");
sqlite> SELECT * FROM test;
1|test message|this is a test message|2005-01-01 00:00
2|another test message|this is a another test message, hooray|2005-01-01 00:01
sqlite> .quit
$

Here we created a test table in a database file called tmp/database and added two rows to it.

The header, footer, messages, and stylesheets will all be handled automatically and will be easy to customize later. Database design The first real step will be to think about what kind of data we need to store and then design a database schema to efficiently store that data. To keep things simple (but realistic), let's set our specification as follows: • The address book should keep track of multiple addresses for a person • Each person can have a first and last name • Each address can have a street address, a phone number, and an e-mail address Translated into a relational database schema, that means we'll have a people table that will assign a unique identifier to each firstname and lastname pair. Then we'll have an addresses table that will allow each person to have multiple addresses, each consisting of a unique ID (so it's easy to reference later), a location ("Office", "Home"), a free-form postal address (so we don't have to worry about the complexity of having a city, country, state, postal code, and so on), a phone number, and an e-mail address.
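As a rough sketch of that design (using Python's sqlite3 module for brevity rather than the book's DBIx::Class setup; the table and column names are illustrative assumptions, not the book's exact definitions), the two tables might look like this:

```python
import sqlite3

conn = sqlite3.connect("addressbook.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS people (
    id        INTEGER PRIMARY KEY,            -- unique ID per first/last name pair
    firstname TEXT,
    lastname  TEXT
);
CREATE TABLE IF NOT EXISTS addresses (
    id       INTEGER PRIMARY KEY,             -- unique ID, easy to reference later
    person   INTEGER REFERENCES people(id),   -- each person can have many addresses
    location TEXT,                            -- "Office", "Home", ...
    postal   TEXT,                            -- free-form postal address
    phone    TEXT,
    email    TEXT
);
""")
conn.commit()
```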

See  CPAN config method 68 configuration management 192 Content-type: header 157 Controller 182 Controller class 8 count_users function 142 COUNT function 117 CPAN about 10 Catalyst, installing from 12 disadvantages 12 modules, installing 41 CRUD 55 D Data::Page object 82 database installing 198 database, accessing ways Object-relational mapper (ORM) 35 database, skeleton application database model, creating 37 [ 223 ] database model, using 37-39 database schema, installing 36 SQLite, installing 36 database model creating 37, 39, 141, 142, 143 using 37, 39 database schema creating 36 data putting on web, ChatStat application _controversial function 126 action, adding 136 actions, adding to display ups or downs on page 139 actions, modifying 136, 138 all_things method 132 custom ResultSets method, writing 124 data, putting on web 121-123 everything query 131 everything ResultSet method 131 having attribute 127 highest_rated, custom ResultSets method 124, 125 index controller method, changing 134, 135 issues 128, 130 least_controversial, custom ResultSets method 124, 126 list_things macro, reusing 136 lowest_rated, custom ResultSets method 124, 125 macros, creating 121 macros, using 123 main page 120 main page, writing 122 most_controversial, custom ResultSets method 124, 126 one_thing method 132 person() links 134 person, macros 121 ResultSet, adding 130 root/src/thing.tt2, template 133, 134 score, macros 121 template root/src/pair.tt2, creating 121, 122 thing, macros 121 uri_for_action, using for link 130 data types 186 dbh_do method 144, 145 DBI::Class 69 dbh_do method 144 DBIC 35 DBIC schema ($self) 113 DBIx::Class 15.


pages: 458 words: 46,761

Essential SQLAlchemy by Jason Myers, Rick Copeland


create, read, update, delete, database schema, Ruby on Rails, side project, web application

Generic type representations

SQLAlchemy    Python               SQL
BigInteger    int                  BIGINT
Boolean       bool                 BOOLEAN or SMALLINT
Date          datetime.date        DATE (SQLite: STRING)
DateTime      datetime.datetime    DATETIME (SQLite: STRING)
Enum          str                  ENUM or VARCHAR
Float         float or Decimal     FLOAT or REAL
Integer       int                  INTEGER
Interval      datetime.timedelta   INTERVAL or DATE from epoch
LargeBinary   byte                 BLOB or BYTEA
Numeric       decimal.Decimal      NUMERIC or DECIMAL
Unicode       unicode              UNICODE or VARCHAR
Text          str                  CLOB or TEXT
Time          datetime.time        DATETIME

Note It is important to learn these generic types, as you will need to use and define them regularly. In addition to the generic types listed in Table 1-1, both SQL standard and vendor-specific types are available and are often used when a generic type will not operate as needed within the database schema due to its type or the specific type specified in an existing schema. A few good illustrations of this are the CHAR and NVARCHAR types, which benefit from using the proper SQL type instead of just the generic type. If we are working with a database schema that was defined prior to using SQLAlchemy, we would want to match types exactly. It’s important to keep in mind that SQL standard type behavior and availability can vary from database to database. The SQL standard types are available within the sqlalchemy.types module.
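As a quick illustration of how those generic types appear in a table definition (the table and columns here are invented for the example):

```python
from sqlalchemy import (MetaData, Table, Column,
                        Integer, Unicode, Numeric, Boolean, DateTime)

metadata = MetaData()

# Each generic type is rendered as a suitable backend-specific SQL type,
# e.g. Numeric -> NUMERIC/DECIMAL, Boolean -> BOOLEAN or SMALLINT.
products = Table(
    "products", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", Unicode(255), nullable=False),
    Column("price", Numeric(10, 2)),
    Column("in_stock", Boolean),
    Column("updated_at", DateTime),
)
```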

Reflection with SQLAlchemy ORM and Automap As you learned in Chapter 5, reflection lets you populate a SQLAlchemy object from an existing database; reflection works on tables, views, indexes, and foreign keys. But what if you want to reflect a database schema into ORM-style classes? Fortunately, the handy SQLAlchemy extension automap lets you do just that. Reflection via automap is a very useful tool; however, as of version 1.0 of SQLAlchemy we cannot reflect CheckConstraints, comments, or triggers. You also can’t reflect client-side defaults or an association between a sequence and a column. However, it is possible to add them manually using the methods we learned in Chapter 6. Just like in Chapter 5, we are going to use the Chinook database for testing. We’ll be using the SQLite version, which is available in the CH11/ folder of this book’s sample code. That folder also contains an image of the database schema so you can visualize the schema we’ll be working with throughout this chapter.
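A minimal automap sketch in that spirit, assuming the Chinook SQLite file sits in the working directory under the name shown (the class names follow Chinook's CamelCase table names, and the reflect=True spelling matches the SQLAlchemy 1.x API the book covers):

```python
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine = create_engine("sqlite:///Chinook_Sqlite.sqlite")  # filename assumed

Base = automap_base()
Base.prepare(engine, reflect=True)      # reflect tables into ORM-style classes

Artist = Base.classes.Artist            # classes are named after the tables
Album = Base.classes.Album

session = Session(engine)
for artist in session.query(Artist).limit(3):
    print(artist.ArtistId, artist.Name)
```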

SQLAlchemy also provides a lot of flexibility by supplying two major modes of usage: SQL Expression Language (commonly referred to as Core) and ORM. These modes can be used separately or together depending on your preference and the needs of your application. SQLAlchemy Core and the SQL Expression Language The SQL Expression Language is a Pythonic way of representing common SQL statements and expressions, and is only a mild abstraction from the typical SQL language. It is focused on the actual database schema; however, it is standardized in such a way that it provides a consistent language across a large number of backend databases. The SQL Expression Language also acts as the foundation for the SQLAlchemy ORM. ORM The SQLAlchemy ORM is similar to many other object relational mappers (ORMs) you may have encountered in other languages. It is focused around the domain model of the application and leverages the Unit of Work pattern to maintain object state.
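A small side-by-side sketch of the two modes, with an invented cookies table, written against the 1.x-era API described in the book:

```python
from sqlalchemy import Column, Integer, Unicode, MetaData, Table, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")             # in-memory database

# --- Core / SQL Expression Language: centered on the schema itself ---
metadata = MetaData()
cookies = Table("cookies", metadata,
                Column("id", Integer, primary_key=True),
                Column("name", Unicode(50)))
metadata.create_all(engine)

with engine.begin() as conn:                    # commits on exit
    conn.execute(cookies.insert(), {"name": "chocolate chip"})
with engine.connect() as conn:
    print(conn.execute(cookies.select()).fetchall())

# --- ORM: centered on the domain model, using the Unit of Work session ---
Base = declarative_base()

class Cookie(Base):
    __tablename__ = "cookies"
    id = Column(Integer, primary_key=True)
    name = Column(Unicode(50))

session = Session(engine)
print(session.query(Cookie).filter_by(name="chocolate chip").all())
```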


pages: 357 words: 63,071

Essential SQLAlchemy by Rick Copeland


database schema, defense in depth, domain-specific language, web application

For instance, to retrieve all the users created in 2007, you would write:

statement = user_table.select(and_(
    user_table.c.created >= date(2007,1,1),
    user_table.c.created < date(2008,1,1)))
result = statement.execute()

In order to use the SQL expression language, you need to provide SQLAlchemy with information about your database schema. For instance, if you are using the user table mentioned previously, your schema definition might be the following:

metadata = MetaData('sqlite://') # use an in-memory SQLite database
user_table = Table(
    'tf_user', metadata,
    Column('id', Integer, primary_key=True),
    Column('user_name', Unicode(16), unique=True, nullable=False),
    Column('email_address', Unicode(255), unique=True, nullable=False),
    Column('password', Unicode(40), nullable=False),
    Column('first_name', Unicode(255), default=''),
    Column('last_name', Unicode(255), default=''),
    Column('created', DateTime, default=datetime.now))

If you would rather use an existing database schema definition, you still need to tell SQLAlchemy which tables you have, but SQLAlchemy can reflect the tables using the database server’s introspection capabilities.
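That reflection looks roughly like this (the newer autoload_with spelling is shown; the edition quoted above used autoload=True, and the database path is an assumption):

```python
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("sqlite:///existing.db")   # path assumed
metadata = MetaData()

# Columns are introspected from the live database instead of being declared.
user_table = Table("tf_user", metadata, autoload_with=engine)
print([c.name for c in user_table.columns])
```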

The ORM is covered in more detail in Chapters 6, 7, and 8. Chapter 3. Engines and MetaData This chapter introduces SQLAlchemy’s Engine and MetaData classes. The Engine class provides database connectivity, including a connection pool with various strategies for acquiring connections from the pool. The MetaData class maintains information about your database schema, including any tables and indexes defined. In this chapter, you will learn how to define a new database schema using MetaData as well as how to connect a MetaData instance to an existing schema. Engines and Connectables The SQLAlchemy-provided Engine class is responsible for managing the connection to the database. It does this by incorporating a database connection pool and a database-specific Dialect layer to translate the SQL expression language (Chapter 5) into database-specific SQL.
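For a feel of what that looks like in code, here is a hedged sketch: the URL and pool numbers are placeholders, and the pool keywords shown apply to the default QueuePool used for server databases:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, Unicode

engine = create_engine(
    "postgresql://user:password@localhost/appdb",  # placeholder URL
    pool_size=5,         # connections the pool keeps open
    max_overflow=10,     # extra connections allowed under load
    pool_recycle=3600,   # recycle connections older than an hour
)

metadata = MetaData()    # holds the schema definition
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", Unicode(100)))

metadata.create_all(engine)   # emit CREATE TABLE statements against the engine
```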

To use these types, you must import them directly from the appropriate module in the sqlalchemy.databases package:

from sqlalchemy.databases.mysql import MSEnum, MSBigInteger
user_table = Table('tf_user', meta,
    Column('id', MSBigInteger),
    Column('honorific', MSEnum('Mr', 'Mrs', 'Ms', 'Miss', 'Dr',
                               'Prof')))

MetaData Management
The MetaData object in SQLAlchemy is used to collect and organize information about your table layout (i.e., your database schema). We alluded to MetaData management before in describing how to create tables. A MetaData object must be created before any tables are defined, and each table must be associated with a MetaData object. MetaData objects can be created “bound” or “unbound,” based on whether they are associated with an engine. The following is an example of the different ways you can create MetaData objects:

# create an unbound MetaData
unbound_meta = MetaData()

# create an Engine and bind the MetaData to it
db1 = create_engine('sqlite://')
unbound_meta.bind = db1

# Create an engine and then a bound MetaData
db2 = create_engine('sqlite:///test1.db')
bound_meta1 = MetaData(db2)

# Create a bound MetaData with an implicitly created engine
bound_meta2 = MetaData('sqlite:///test2.db')

Although tables can be defined against unbound MetaData, it is often more convenient to eventually bind the metadata to an engine, as this allows the MetaData and the Table objects defined for it to access the database directly:

# Create a bound MetaData
meta = MetaData('sqlite://')

# Define a couple of tables
user_table = Table(
    'tf_user', meta,
    Column('id', Integer, primary_key=True),
    Column('user_name', Unicode(16), unique=True, nullable=False),
    Column('password', Unicode(40), nullable=False))
group_table = Table(
    'tf_group', meta,
    Column('id', Integer, primary_key=True),
    Column('group_name', Unicode(16), unique=True, nullable=False))

# Create all the tables in the (empty) database
meta.create_all()

# Select all the groups from the tf_group table
result_set = group_table.select().execute()

As mentioned previously, you can also reflect your schema by setting the autoload parameter to True in your Table creation.


pages: 509 words: 92,141

The Pragmatic Programmer by Andrew Hunt, Dave Thomas


A Pattern Language, Broken windows theory, business process, buy low sell high, c2.com, combinatorial explosion, continuous integration, database schema, domain-specific language, don't repeat yourself, Donald Knuth, general-purpose programming language, George Santayana, Grace Hopper, if you see hoof prints, think horses—not zebras, index card, loose coupling, Menlo Park, MVC pattern, premature optimization, Ralph Waldo Emerson, revision control, Schrödinger's Cat, slashdot, sorting algorithm, speech recognition, traveling salesman, urban decay, Y2K

With Perl you can manipulate text, interact with programs, talk over networks, drive Web pages, perform arbitrary precision arithmetic, and write programs that look like Snoopy swearing. Tip 28 Learn a Text Manipulation Language To show the wide-ranging applicability of text manipulation languages, here's a sample of some applications we've developed over the last few years. Database schema maintenance. A set of Perl scripts took a plain text file containing a database schema definition and from it generated: – The SQL statements to create the database – Flat data files to populate a data dictionary – C code libraries to access the database – Scripts to check database integrity – Web pages containing schema descriptions and diagrams – An XML version of the schema Java property access. It is good OO programming style to restrict access to an object's properties, forcing external classes to get and set them via methods.
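The flavor of such a script, transposed into Python (the book's versions were Perl) with an invented one-line-per-table input format, might be:

```python
# Toy generator: one plain-text schema file drives several generated artifacts.
SCHEMA = """
person  id:INTEGER  first_name:TEXT  last_name:TEXT
address id:INTEGER  person_id:INTEGER  postal:TEXT  phone:TEXT
"""

def parse(text):
    for line in text.strip().splitlines():
        table, *cols = line.split()
        yield table, [tuple(col.split(":")) for col in cols]

for table, cols in parse(SCHEMA):
    # 1. SQL statements to create the database
    print("CREATE TABLE %s (%s);" %
          (table, ", ".join("%s %s" % col for col in cols)))
    # 2. Flat rows for a data dictionary
    for name, sqltype in cols:
        print("DICT\t%s\t%s\t%s" % (table, name, sqltype))
```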

Of course, this scheme works only if you make the code generation part of the build process itself.[9] [9] Just how do you go about building code from a database schema? There are several ways. If the schema is held in a flat file (for example, as create table statements), then a relatively simple script can parse it and generate the source. Alternatively, if you use a tool to create the schema directly in the database, then you should be able to extract the information you need directly from the database's data dictionary. Perl provides libraries that give you access to most major databases. Figure 3.3. Active code generator creates code from a database schema Another example of melding environments using code generators happens when different programming languages are used in the same application.
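A sketch of the second approach, reading SQLite's data dictionary and emitting simple accessor classes (the database path is assumed; a real generator would template out real access code):

```python
import sqlite3

conn = sqlite3.connect("app.db")   # path assumed

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

for table in tables:
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    columns = [row[1] for row in conn.execute("PRAGMA table_info(%s)" % table)]
    print("class %s:" % table.title())
    print("    def __init__(self, %s):" % ", ".join(columns))
    for col in columns:
        print("        self.%s = %s" % (col, col))
    print()
```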

The same information is repeated three times. Change any one of these three sources, and the other two are immediately out of date. This is a clear violation of the DRY principle. To correct this problem, we need to choose the authoritative source of information. This may be the specification, it may be a database schema tool, or it may be some third source altogether. Let's choose the specification document as the source. It's now our model for this process. We then need to find a way to export the information it contains as different views—a database schema and a high-level language record, for example.[10] [10] See It's Just a View, page 157, for more on models and views. If your document is stored as plain text with markup commands (using HTML, LATEX, or troff, for example), then you can use tools such as Perl to extract the schema and reformat it automatically.


pages: 378 words: 67,804

Learning Android by Marko Gargenta


create, read, update, delete, database schema, Firefox, loose coupling, slashdot, web application

buttons, Programmatic User Interface, The StatusActivity Layout, Creating Your Application-Specific Object and Initialization Code, Update StatusActivity to Load the Menu, Add Menu Items, Toggle Service, Intent Service C C implementation, The APK, The Native Development Kit (NDK), Documentation and Standardized Headers, The JNI Header File, C Implementation Canvas, Custom Rose Widget classpaths, Adding the jtwitter.jar Library close() method, Refactoring Status Data coarse location permissions, The manifest file color, adding, Adding Color and Graphics, Adding Color Compass demo, Compass Demo, Custom Rose Widget compiling code, Compiling Code and Building Your Projects: Saving Files, Updating the Manifest File for Internet Permission comprehensive platform, Comprehensive, Yamba Project Overview content providers, Content Providers, Project Design, Creating a Content Provider, Getting the Data Type, Using Content Providers Through Widgets, Summary creating, Creating a Content Provider, Getting the Data Type overview, Content Providers, Project Design using through widgets, Using Content Providers Through Widgets, Summary CRUD principle, Content Providers cursors, Cursors, Creating the TimelineActivity Class, Querying Data custom permissions, Adding Custom Permissions to Send and Receive Broadcasts, Declaring Permissions in the Manifest File D .d() severity level, Logging in Android Dalvik, Dalvik databases, The Database, The Database Schema and Its Creation, Database Constraints, Creating the TimelineActivity Class constraints on, Database Constraints overview, The Database schema, creating, The Database Schema and Its Creation working with, Creating the TimelineActivity Class DbHelper, DbHelper DDMS, Setting Up Android Development Tools, LogCat from the Eclipse DDMS perspective, Exploring the Filesystem, The User Data Partition, Verify that the database was created debugging, Debugging Android Apps declarative user interface, Declarative User Interface delete() method, Four Major Operations, Creating a Content Provider design philosophy, Design Philosophy Destroyed State, Destroyed state development tools, Setting Up Android Development Tools distribution of applications, Application Distribution doInBackground() method, AsyncTask E .e() severity level, Logging in Android Eclipse Android Development Tools (ADT), Declarative User Interface Eclipse IDE, Installing Eclipse, Logging in Android, Using the WYSIWYG editor in Eclipse, Prefs Resource, PrefsActivity editing code in, Prefs Resource, PrefsActivity installing, Installing Eclipse Organize Imports tool, Logging in Android WYSIWYG Editor, Using the WYSIWYG editor in Eclipse emulators, The Emulator events, Other UI Events, Other UI Events execSQL() method, Four Major Operations, First Example F fetchStatusUpdates() method, Refactoring Status Data, Broadcasting Intents Fibonacci demo, An NDK Example: Fibonacci, Summary file system, Filesystem, The Filesystem Explained, Summary files, saving, Compiling Code and Building Your Projects: Saving Files fine location permissions, The manifest file format() method, Creating the TimelineActivity Class FrameLayout, FrameLayout G garbage collector, Creating the TimelineActivity Class Geocoder, Where Am I?

To use File Explorer in Eclipse, either open the DDMS perspective in the top-right corner of your Eclipse or go to Windows→Show View→Other…→Android→File Explorer. This will open the view of the filesystem of the device you are currently looking at. So far, you know that the database file is there, but don’t really know whether the database schema was created properly. The next section addresses that. Using sqlite3 Android ships with the command-line tool sqlite3. This tool gives you access to the database itself. To see whether your database schema was created properly: Open up your terminal or command-line window. Type adb shell to connect to your running emulator or physical phone. Change the directory to the location of your database file by typing cd /data/data/com.marakana.yamba/databases/. Connect to the database with the sqlite3 timeline.db command.

startActivity() method, Update StatusActivity to Handle Menu Events, Initial App Setup, Pending intents Starting State, Starting state startManagingCursor() method, Creating the TimelineActivity Class, Querying Data startService() method, Update the Options Menu Handling, Pending intents Status activity, Updating the Status Activity status data, Looping in the Service, Update UpdaterService, Refactoring Status Data, Refactoring Status Data, Defining the URI, Implementing the YambaWidget class status updates, The StatusActivity Layout, Looping in the Service, The Database, Broadcast Receivers, The TimelineReceiver, Updating the Services to Enforce Permissions, Using Content Providers Through Widgets, Summary, Intent Service, Sending Notifications, Summary checking for new, Looping in the Service using IntentService to run, Intent Service notification of, The TimelineReceiver, Sending Notifications, Summary permissions for, Updating the Services to Enforce Permissions screen, The StatusActivity Layout sending, Broadcast Receivers storing locally, The Database widget for displaying, Using Content Providers Through Widgets, Summary status.xml, The StatusActivity Layout StatusActivity, The StatusActivity Layout, The StatusActivity Java Class, AsyncTask, Update StatusActivity to Load the Menu, Simplifying StatusActivity Stopped State, Stopped state stopService() method, Update the Options Menu Handling, The Network Receiver strings resource, Important Widget Properties Stub() method, Implementing the Service, Binding to the Remote Service subscribers, About Broadcast Receivers system services, Services, Project Design, Services, UpdaterService, Update the Manifest File, Add Menu Items, Testing the Service, Looping in the Service, Testing the Service, System Services, Compass Demo, Custom Rose Widget, Common Steps in Using System Services, Location Service, Updating the Status Activity, Intent Service, Intent Service, Sending Notifications adding and handling menus, Add Menu Items Android Services vs. 
native services, Services common steps in using, Common Steps in Using System Services Compass demo, Compass Demo, Custom Rose Widget creating, UpdaterService defining in manifest file, Update the Manifest File intents, Intent Service, Intent Service location service, Location Service, Updating the Status Activity looping in, Looping in the Service notification service, Sending Notifications overview, Services, System Services and project design, Project Design testing, Testing the Service, Testing the Service T TableLayout, TableLayout TableRow, TableLayout testing, Testing the Service, Testing the Service, Testing the Service, Testing That It All Works text property, Important Widget Properties TextWatcher, Other UI Events Thread.sleep() method, Looping in the Service, Looping in the Service threading, Threading in Android timeline activity example, TimelineActivity, Creating an Adapter in TimelineActivity.java, Toggle Service, The TimelineReceiver timeline adapter example, TimelineAdapter, ViewBinder: A Better Alternative to TimelineAdapter timeline receivers, The TimelineReceiver, The TimelineReceiver, Updating the Services to Enforce Permissions toggle service, Toggle Service Twitter, A Real-World Example, Broadcast Receivers, The Yamba Application, Creating Your Application-Specific Object and Initialization Code, Updating the Manifest File for Internet Permission, AsyncTask, Other UI Events, Other UI Events, Other UI Events, Shared Preferences, UpdaterService, Pulling Data from Twitter, Pulling Data from Twitter, The Database Schema and Its Creation, Update UpdaterService, Lists and Adapters 140-character counter, Other UI Events, Other UI Events creating compatible apps, Creating Your Application-Specific Object and Initialization Code, Updating the Manifest File for Internet Permission, AsyncTask, Other UI Events, Shared Preferences, Lists and Adapters example of app, A Real-World Example, Broadcast Receivers pulling data from, UpdaterService, Pulling Data from Twitter, Pulling Data from Twitter, The Database Schema and Its Creation, Update UpdaterService and Yamba, The Yamba Application U UI (user interface), Part 1: Android User Interface, Two Ways to Create a User Interface, Creating Your Application-Specific Object and Initialization Code, Optimizing the User Interface Android objects, Creating Your Application-Specific Object and Initialization Code optimizing, Optimizing the User Interface two ways to create, Two Ways to Create a User Interface Uniform Resource Identifier (URI), Defining the URI unmarshaling, The Android Interface Definition Language update() method, Four Major Operations, Creating a Content Provider updateAppWidget() method, Implementing the YambaWidget class UpdaterService, UpdaterService, Pulling Data from Twitter, Update UpdaterService, Update UpdaterService, BootReceiver, The TimelineReceiver, Broadcasting Intents, Updating the Services to Enforce Permissions updateStatus() method, Single Thread URI (Uniform Resource Identifier), Defining the URI user data partition, Filesystem Partitions, The User Data Partition user interface, Two Ways to Create a User Interface (see UI) user preferences, Preferences (see preferences) username, The User Data Partition V ViewBinder, ViewBinder: A Better Alternative to TimelineAdapter views and layouts, Views and Layouts viruses, What about viruses, malware, spyware, and other bad things?


pages: 90 words: 17,297

Deploying OpenStack by Ken Pepple


Amazon Web Services, cloud computing, database schema, Infrastructure as a Service, Ruby on Rails, web application, x509 certificate

Here is an example of the queues created in RabbitMQ for a simple all-in-one node installation:

$ sudo rabbitmqctl list_queues
Listing queues ...
scheduler_fanout_15b1731c5ac34aae8970369911f04542 0
volume 0
volume_fanout_e42438faedb84ab8aad8d85e29916424 0
compute_fanout_38a37d3dc7564b66a5a540a1e222b12b 0
compute.cactus 0
volume_fanout_d62eb016a76341f4899c91d5a8fbb0a9 0
volume_fanout_dcaebd5edb3045ff8b86636040d34071 0
volume.cactus 0
network_fanout_64b9cb80b2c34c7a8da983219c787147 0
compute 0
network_fanout_362393151e7c465a8e3ed003ac6dbc1b 0
compute_fanout_74165ee38c9d4c1ea1003ccd88a91c22 0
scheduler 0
network.cactus 0
network 0
scheduler_fanout_9444b4c8d5d6497b9b5e2df4eca33b0d 0
scheduler.cactus 0

As you can see from the example, topic, fanout, and host queues have been created for each service (nova-scheduler, nova-compute, nova-volume, nova-network).

Database
The database stores most of the configuration and run-time state for a cloud infrastructure. This includes the instance types that are available for use, instances in use, networks available, and projects. Table 4-4 details all the tables in the current Nova database schema.

Table 4-4. Nova Database Schema

Table Name       Description
migrate_version  Stores current version of the database schema as well as other migration-related info. Only used internally and by developers during upgrades.
migrations       Used for running host-to-host migration.
auth_tokens      Maps authorization tokens (for all API transactions) to actual users (via the user id field).
certificates     Mappings for users, projects, and x509 certificate files.
networks         Information pertaining to networks defined in Nova.

It has two subcommands: sync and version. The sync subcommand will upgrade the database schema for new versions of Nova, and the version subcommand will report the current version. Note Nova uses a database abstraction library called SQLAlchemy to interact with its database. A complementary package called sqlalchemy-migrate is used to manage the database schema. Inspired by Ruby on Rails’ migrations feature, it provides a programmatic way to handle database schema changes. For Nova administrators, this only applies when they are upgrading versions. To upgrade schema versions, use nova-manage db sync. This should rarely be needed unless you are installing from source or upgrading your installation. If there are pending schema migrations, it will apply those to your database. If there are not, it will return nothing.
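For context, a sqlalchemy-migrate versioned script (the kind nova-manage db sync applies from Nova's migrate_repo) looks roughly like the sketch below; the column is invented for illustration, and the create_column()/drop_column() helpers are the monkey-patched additions that importing the migrate package provides:

```python
from sqlalchemy import Boolean, Column, MetaData, Table
import migrate.changeset  # noqa: adds create_column()/drop_column() to Table

def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table("instances", meta, autoload=True)
    # Hypothetical column added when moving the schema forward one version.
    instances.create_column(Column("example_flag", Boolean, default=False))

def downgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table("instances", meta, autoload=True)
    instances.drop_column("example_flag")
```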

First up is adding the network manager related options to /etc/nova.conf:

--network_manager=nova.network.manager.FlatDHCPManager
--fixed_range=10.0.0.0/8
--network_size=8

Now, let’s add our network into the database with the nova-manage command:

# nova-manage network create 10.0.0.0/8 1 8

Finally, we will create the floating addresses using a range that will not overlap with the StackOps installation.

# nova-manage floating create cactus 192.168.1.136/29

With all the /etc/nova.conf changes made, we can create the database schema by running the nova-manage db commands. First, we will sync the database (create the schema), then we will version the database to make sure that the schema creation succeeded.

$ sudo nova-manage db sync
$ sudo nova-manage db version
14

And we are done with the configuration. All that is left to do now is restart the Nova daemons to pick up the new configuration options.

$ sudo restart libvirt-bin; sudo restart nova-network; sudo restart nova-compute; \
  sudo restart nova-api; sudo restart nova-objectstore; sudo restart nova-scheduler; \
  sudo restart nova-volume
libvirt-bin start/running, process 22673
nova-network start/running, process 22729
nova-compute start/running, process 22776
nova-api start/running, process 22838
nova-objectstore start/running, process 22846
nova-scheduler start/running, process 22933


pages: 224 words: 48,804

The Productive Programmer by Neal Ford


anti-pattern, business process, c2.com, continuous integration, database schema, domain-specific language, don't repeat yourself, Firefox, general-purpose programming language, knowledge worker, Larry Wall, Ruby on Rails, side project, type inference, web application, William of Occam

Using an O/R mapper introduces repetition in projects where we have essentially the same information in three places: in the database schema, in an XML mapping document, and in a class file. This represents two violations of the DRY principle. The solution to this problem is to create a single representation and generate the other two. The first step is deciding who is the “official” holder of this knowledge. For example, if the database is the canonical source, generate the XML mapping and the corresponding class files. In this example, I used Groovy (the scripting language dialect of Java) to solve the mismatch. On the project that spawned this example, the developers had no control over the database schema. Thus, I decided that the database is the canonical representation of the data. I was using the open source iBatis‡ SQL mapping tool (which doesn’t generate SQL; it merely handles mapping classes to SQL results).

To add behavior to your generated code, you can inherit (in languages like Java), use open classes (in languages like Ruby, Groovy, and Python), or use partial classes (in languages like C#). Now, I’ve solved all the DRY problems in my O/R mapping code. The Groovy script runs as part of the build process so that any time the database schema changes, it automatically generates the corresponding mapping file and the Java class to which it maps. Migrations Another situation where duplication creeps into projects also arises from the impedance mismatch between code and SQL. Lots of projects treat source code and SQL as completely separate artifacts, sometimes created by entirely separate groups of developers. Yet, for the source code to function correctly, it must rely on a certain version of the database schema and data. There are a couple of ways to solve this problem: one is framework specific, and the other was designed to work across frameworks and languages. NOTE Always keep code and schemas in sync.

because they always live in an up-to-date state. NOTE Never keep two copies of the same thing (like code and a diagram describing it). If you are going to draw your diagrams first, use the tool to generate the code. If you create informal diagrams using the whiteboard approach and need more formal diagrams later, generate them from code. Otherwise, they will always get out of sync. Database Schemas Just like class diagrams, database schemas are a dangerous area for needless repetition. SchemaSpy§ is an open source tool that does for database entity/relationship diagrams what yDoc does for code. It attaches to the database and produces table information (including metadata) and relationship diagrams like the one shown in Figure 5-6. § Download at http://schemaspy.sourceforge.net/.


pages: 360 words: 96,275

PostgreSQL 9 Admin Cookbook: Over 80 Recipes to Help You Run an Efficient PostgreSQL 9.0 Database by Simon Riggs, Hannu Krosing


business intelligence, business process, database schema, Debian, en.wikipedia.org, full text search, Skype

After all of these have executed, the databases are fully replicated with slon daemons running.
- 01_create_init.sh creates the bash scripts for later use. You can edit $SLON_TMP/my_replica.sh and add/remove tables, if needed. So selective replication is a simple matter of editing those files.
- create_struct.sh performs a dump from the Master and a restore to the slave of the database schema. This dumps the whole database schema.
- 02_exec_init.sh executes the scripts. This copies the data, so it may take a while.

Maintaining replication
Slony doesn't replicate the following items:
- Table definition changes (DDL commands)
- Changes to users and roles
- Changes to large objects (BLOBs)
- Changes made by the TRUNCATE command (may change in future)

So, once you've set up Slony, you'll need to make any further changes to the replicated tables on each of the nodes in the configuration.

In order to do so, you may have to change the log_line_prefix parameter to include the %u format string. A recommended minimal log_line_prefix format string for auditing DDL is '%t %u %d', which tells PostgreSQL to log timestamp, database user, and database name at the start of every log line. Can't I find out this information from the database? If you don't have logging enabled, or don't have all the logs, then you can get only very limited information on who changed the database schema and when from system tables, and even that is not reliable. What you can get is the "owner" of the database object (table, sequence, function, and so on), but this may have been changed by "ALTER TABLE … SET OWNER TO yyyy". You may be able to guess the approximate time of object creation or of the latest modification by looking up the transaction id in the xmin system column in the pg_class and pg_attribute system tables.

If you've used a SERIAL data type, and an implicit sequence has been created, then that also moves to the new schema. Schemas are purely an administrative concept, and do not affect the location of the table's data files. Tablespaces don't work this way, as we will see in later recipes. Databases, users/roles, languages, and conversions don't exist in a schema. Schemas exist in a particular database. Schemas don't exist within schemas; they are not arranged in a tree or hierarchy. There's more... Text search objects exist in a specific schema, though there are no commands to move them to a new schema. Similarly, operator(s), operator class(es), and operator family(s) exist in a schema, though there are no commands to move them to new schemas. Also, casts don't exist in a schema, though the data types and functions they reference do.


pages: 408 words: 63,990

Build Awesome Command-Line Applications in Ruby: Control Your Computer, Simplify Your Life by David B. Copeland


Chris Wanstrath, database schema, en.wikipedia.org, full stack developer, Ruby on Rails, web application

First let’s review the current “pretty” format of todo by adding a few tasks, completing one, and getting the list. (Recall that we must use bundle exec since we are running out of our source tree for this example; users of todo will, of course, be able to run todo on its own, as discussed in Chapter 9, ​Be Easy to Maintain​.) ​$ bundle exec bin/todo new​ ​Reading new tasks from stdin...​ ​Design database schema​ ​Get access to production logs​ ​Code Review​ ​Implement model objects for new scheme​ ​^D​ ​$ bundle exec bin/todo done 3​ ​$ bundle exec bin/todo list​ ​1 - Design database schema​ ​ Created: Sun Oct 30 12:53:11 -0400 2011​ ​2 - Get access to production logs​ ​ Created: Sun Oct 30 12:53:11 -0400 2011​ ​3 - Code Review​ ​ Created: Sun Oct 30 12:53:11 -0400 2011​ ​ Completed: Sun Oct 30 13:00:05 -0400 2011​ ​4 - Implement model objects for new schema​ ​ Created: Sun Oct 30 12:53:11 -0400 2011​ Let’s enhance the output of list as follows: Show the task name in a brighter/bolder font (the ANSI escape sequences provide two version of each color: a normal color and a brighter/bolder version).

Since the location of the “home directory” is really just shorthand for “whatever directory is in ENV[’HOME’],” we now have test coverage without worrying that our personal task list will be touched. Let’s run our new test and make sure it passes. To prove that we aren’t touching our home directory, we’ll list our actual task list before and after running our feature.

$ todo list
1 - Design database schema
    Created: Sun Oct 02 08:06:12 -0500 2011
2 - Get access to production logs
    Created: Sun Oct 02 08:06:12 -0500 2011
3 - Code Review
    Created: Sun Oct 02 08:06:12 -0500 2011
$ rake features
Feature: We can add new tasks
  As a busy developer with a lot of things to do
  I want to keep a list of tasks I need to work on

  Scenario: The task list is in our home directory by default
    Given there is no task list in my home directory
    When I successfully run `todo new 'Some new todo item'`
    Then the task list should exist in my home directory
    When I successfully run `todo list`
    Then the stdout should contain "Some new todo item"

1 scenarios (1 passed)
5 steps (5 passed)
0m0.793s
$ todo list
1 - Design database schema
    Created: Sun Oct 02 08:06:12 -0500 2011
2 - Get access to production logs
    Created: Sun Oct 02 08:06:12 -0500 2011
3 - Code Review
    Created: Sun Oct 02 08:06:12 -0500 2011

Our test passed, but our task list wasn’t modified—everything worked!

Manipulating the environment is a great technique for testing behavior like this; the environment works as a “middleman” that allows us to change things (like the location of the user’s home directory) without affecting the code of our tests or our app.


pages: 257 words: 64,973

Intrusion Detection With Snort, Apache, Mysql, Php, and Acid by Rafeeq Ur Rehman


Chuck Templeton: OpenTable, database schema, stealth mode startup, web application, x509 certificate

The following command shows the structure of the iphdr table:

mysql> describe iphdr;
+----------+----------------------+------+-----+---------+-------+
| Field    | Type                 | Null | Key | Default | Extra |
+----------+----------------------+------+-----+---------+-------+
| sid      | int(10) unsigned     |      | PRI | 0       |       |
| cid      | int(10) unsigned     |      | PRI | 0       |       |
| ip_src   | int(10) unsigned     |      | MUL | 0       |       |
| ip_dst   | int(10) unsigned     |      | MUL | 0       |       |
| ip_ver   | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_hlen  | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_tos   | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_len   | smallint(5) unsigned | YES  |     | NULL    |       |
| ip_id    | smallint(5) unsigned | YES  |     | NULL    |       |
| ip_flags | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_off   | smallint(5) unsigned | YES  |     | NULL    |       |
| ip_ttl   | tinyint(3) unsigned  | YES  |     | NULL    |       |
| ip_proto | tinyint(3) unsigned  |      |     | 0       |       |
| ip_csum  | smallint(5) unsigned | YES  |     | NULL    |       |
+----------+----------------------+------+-----+---------+-------+
14 rows in set (0.00 sec)

mysql>

For people who want to go into details of how data is stored, the database schema provides great information. You can view the complete database schema at http://www.incident.org/snortdb/. 5.1.5.1 Creating Extra Tables When you are using some other programs with the database and Snort to map service numbers to service names, additional mapping information is needed. For example, TCP port 23 is used for Telnet. However, the tcphdr table contains only the port number, not the textual description. If you want to display source and destination ports as text "Telnet port" instead of "23", you need this information.

Fortunately you can use the script create_mysql in the contrib directory and it will create all of the necessary tables for you. The contrib directory is present when you download Snort in the source code form from its web site http://www.snort.org and extract its source files. The create_mysql script is present along with other useful stuff in this directory. For example, scripts to create database schema in other types of database servers are also found in this directory. The following command uses this script to create all database tables in the snort database. [root@laptop]# mysql -h localhost -u rr -p snort < contrib/create_mysql Enter password: [root@laptop]# Different command line options are used with this command. The "-h localhost" part of the command is used to tell the mysql client that the database server is running on the same machine as the client.

More information about IP headers can be found in RFC 791 and Appendix C. The opt table contains options. The reference and reference_system tables contain information about reference sites used to get more information about a vulnerability. This is the same information that is used inside Snort rules using the ref keyword as discussed in Chapter 3. The schema table shows the version of the database schema. The sensor table contains information about different sensors that are logging data to the Snort database. If there is only one Snort sensor, the table contains only one row; similarly, the table contains one row for each sensor. The sig_class table contains information about different classes of Snort rules as discussed in Chapter 3. As an example, it contains entries like "attempted-recon", "misc-attack", and so on.


pages: 757 words: 193,541

The Practice of Cloud System Administration: DevOps and SRE Practices for Web Services, Volume 2 by Thomas A. Limoncelli, Strata R. Chalup, Christina J. Hogan

active measures, Amazon Web Services, anti-pattern, barriers to entry, business process, cloud computing, commoditize, continuous integration, correlation coefficient, database schema, Debian, defense in depth, delayed gratification, DevOps, domain-specific language, en.wikipedia.org, fault tolerance, finite state, Firefox, Google Glasses, information asymmetry, Infrastructure as a Service, intermodal, Internet of things, job automation, job satisfaction, load shedding, loose coupling, Malcom McLean invented shipping containers, Marc Andreessen, place-making, platform as a service, premature optimization, recommendation engine, revision control, risk tolerance, side project, Silicon Valley, software as a service, sorting algorithm, statistical model, Steven Levy, supply-chain management, Toyota Production System, web application, Yogi Berra

11.8 Live Schema Changes Sometimes a new software release expects a different database schema from the previous release. If the service could withstand downtime, one would bring down the service, upgrade the software and change the database schema, and then restart the service. However, downtime is almost always unacceptable in a web environment. In a typical web environment there are many web server frontends, or replicas, that all talk to the same database server. The older software release is unable to understand the new database schema and will malfunction or crash. The newer software release is unable to understand the old database schema and will also malfunction. Therefore you cannot change the database schema and then do the software upgrade: the older replicas will fail as soon as the database is modified.

There will be no service until replicas are upgraded. You cannot upgrade the replicas and then change the database schema because any upgraded replica will fail. The schema cannot be upgraded during the rolling upgrade: new replicas will fail, and then at the moment the database schema changes, all the failing replicas will start to work and the working replicas will start to fail. What chaos! Plus, none of these methods has a decent way to roll back changes if there is a problem. One way to deal with this scenario is to use database views. Each view provides a different abstraction to the same database. A new view is coded for each new software version. This decouples software upgrades from schema changes. Now when the schema changes, each view’s code must change to provide the same abstraction to the new schema.

Proportional shedding involves slowly migrating traffic from an old system to a new system, until the old system is receiving no new traffic. Flag flips are a technique whereby new features are included in a release but are disabled. A flag or software toggle enables the feature at a designated time. This makes it easy to revert a change if problems are found: simply turn the toggle off. Database schema changes on live systems can be difficult to coordinate. It is not possible to change the software of all clients at the exact same time as the database schema. Instead, the McHenry Technique is used to decouple the changes. New fields are added, software is upgraded to use the old and new fields, software is upgraded to use only the new fields, and any old fields are then removed from the schema. When a push fails, it is sometimes possible to roll forward to the next release rather than reverting code back to the last known good release.
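To make the sequence concrete, here is a rough SQLite sketch of those phases (the column names are invented; each phase would be a separate deployment in practice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO users (fullname) VALUES ('Ada Lovelace')")

# Phase 1 -- expand: add the new fields; old software simply ignores them.
conn.execute("ALTER TABLE users ADD COLUMN first_name TEXT")
conn.execute("ALTER TABLE users ADD COLUMN last_name TEXT")

# Phase 2 -- new software writes both old and new fields; backfill old rows.
conn.execute("""
    UPDATE users
       SET first_name = substr(fullname, 1, instr(fullname, ' ') - 1),
           last_name  = substr(fullname, instr(fullname, ' ') + 1)
     WHERE fullname LIKE '% %'
""")

# Phase 3 -- the next release reads and writes only the new fields.

# Phase 4 -- contract: drop the old field once nothing references it.
# (DROP COLUMN needs SQLite 3.35+; older releases require a table rebuild.)
conn.execute("ALTER TABLE users DROP COLUMN fullname")
print(conn.execute("SELECT id, first_name, last_name FROM users").fetchall())
```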

Programming Android by Zigurd Mednieks, Laird Dornin, G. Blake Meike, Masumi Nakamura


anti-pattern, business process, conceptual framework, create, read, update, delete, database schema, Debian, domain-specific language, en.wikipedia.org, fault tolerance, Google Earth, interchangeable parts, iterative process, loose coupling, MVC pattern, revision control, RFID, web application

We provide several examples in this chapter that will help you to begin using our job database. Note The tables created by SQL CREATE TABLE statements and the attributes they contain are called a database schema. DROP TABLE This removes a table added with the CREATE TABLE statement. It takes the name of the table to be deleted. On completion, any data that was stored in the table may not be retrieved. Here is some SQL code that will create and then delete a simple table for storing contacts: CREATE TABLE contacts ( first_name TEXT, last_name TEXT, phone_number TEXT, height_in_meters REAL); DROP TABLE contacts; When entering commands through sqlite3, you must terminate each command with a semicolon. You may change the database schema after you create tables (which you may want to do to add a column or change the default value of a column) by entering the ALTER TABLE command.
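The ALTER TABLE step mentioned above is not shown in the excerpt; a minimal illustration, run here through Python's sqlite3 module (on Android the same statements would go through SQLiteDatabase.execSQL()), could be:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE contacts (
    first_name TEXT, last_name TEXT, phone_number TEXT, height_in_meters REAL)""")

# SQLite's ALTER TABLE supports adding a column and renaming the table.
conn.execute("ALTER TABLE contacts ADD COLUMN email TEXT")
conn.execute("ALTER TABLE contacts RENAME TO people")

print([row[1] for row in conn.execute("PRAGMA table_info(people)")])
```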

Detailed information on Monkey can be found at http://developer.android.com/guide/developing/tools/monkey.html. sqlite3 Android uses SQLite as the database system for many system databases and provides APIs for applications to make use of SQLite, which is convenient for data storage and presentation. SQLite also has a command-line interface, and the sqlite3 command enables developers to dump database schemas and perform other operations on Android databases. These databases are, of course, in an Android device, or they are contained in an AVD, and therefore the sqlite3 command is available in the adb shell. Detailed directions for how to access the sqlite3 command line from inside the adb shell are available at http://developer.android.com/guide/developing/tools/adb.html#shellcommands. We introduce sqlite3 in Example Database Manipulation Using sqlite3.

Thus, if you insert a row into the employees table with an employer_id for a row that does not exist in the employers table, many flavors of SQL will raise a constraint violation. This may help you to avoid orphaned references, also known as enforcement of foreign keys. However, the foreign key constraint in SQLite is optional, and is turned off in Android. As of Android 2.2, you cannot rely on a foreign key constraint to catch incorrect foreign key references, so you will need to take care when creating database schemas that use foreign keys. There are several other constraints with less far-reaching effects: UNIQUE Forces the value of the given column to be different from the values in that column in all existing rows, whenever a row is inserted or updated. Any insert or update operation that attempts to insert a duplicate value will result in an SQLite constraint violation. NOT NULL Requires a value in the column; NULL cannot be assigned.
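A small sketch of those constraints in action (plain SQLite via Python for illustration; the tables are invented, and the PRAGMA line shows why enforcement cannot be assumed: it is off unless explicitly enabled, which mirrors the Android 2.2 situation described above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # off by default in SQLite
conn.executescript("""
CREATE TABLE employers (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);
CREATE TABLE employees (
    id          INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    employer_id INTEGER NOT NULL REFERENCES employers(id)
);
""")
conn.execute("INSERT INTO employers (name) VALUES ('Acme')")
conn.execute("INSERT INTO employees (name, employer_id) VALUES ('Zoe', 1)")

try:
    # employer_id 99 has no matching employers row -> constraint violation
    conn.execute("INSERT INTO employees (name, employer_id) VALUES ('Bob', 99)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```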

Refactoring: Improving the Design of Existing Code by Martin Fowler, Kent Beck, John Brant, William Opdyke, Don Roberts


conceptual framework, database schema, index card, MVC pattern, place-making, sorting algorithm

Let us know about these problems. As we learn more about refactoring, we will come up with more solutions to problems and learn about what problems are difficult to solve. Databases One problem area for refactoring is databases. Most business applications are tightly coupled to the database schema that supports them. That's one reason that the database is difficult to change. Another reason is data migration. Even if you have carefully layered your system to minimize the dependencies between the database schema and the object model, changing the database schema forces you to migrate the data, which can be a long and fraught task. With nonobject databases a way to deal with this problem is to place a separate layer of software between your object model and your database model. That way you can isolate the changes to the two different models.

Whereas applying (and possibly backing out) a series of incremental refactorings is fairly natural for Smalltalk and CLOS, the cost per iteration (in terms of recompilation and retesting) is higher for C++ programs; thus programmers tend to be less willing to make these small changes. Many applications use a database. Changes to the structure of objects in a C++ program may require that corresponding schematic changes be made to the database. (Many of the ideas I applied in my refactoring work came from research into object-oriented database schema evolution.) Another limitation, which may interest software researchers more than many software practitioners, is that C++ does not provide support for metalevel program analysis and change. There is nothing analogous to the metaobject protocol available for CLOS. The metaobject protocol of CLOS, for example, supports a sometimes useful refactoring for changing selected instances of a class into instances of a different class while having all references to the old objects automatically point to the new objects.

The user might then decide that the refactoring is a bad idea after all or to change the parts of the program that refer to that variable and apply the refactoring to remove the variable. There are many other checks, most as simple as this, some more complex. In my research, I defined safety in terms of program properties (related to activities such as scoping and typing) that need to continue to hold after refactoring. Many of these program properties are similar to integrity constraints that must be maintained when database schemas change.[17] Each refactoring has associated with it a set of necessary preconditions that if true would ensure that the program properties are preserved. Only if the tool were to determine that everything is safe would the tool perform the refactoring. Fortunately, determining whether a refactoring is safe often is trivial, especially for the low-level refactorings that constitute most of our refactoring.


pages: 313 words: 75,583

Ansible for DevOps: Server and Configuration Management for Humans by Jeff Geerling


Amazon Web Services, Any sufficiently advanced technology is indistinguishable from magic, cloud computing, continuous integration, database schema, Debian, defense in depth, DevOps, fault tolerance, Firefox, full text search, Google Chrome, inventory management, loose coupling, Minecraft, Ruby on Rails, web application

Add a line to the main.yml file to include a new deploy.yml playbook: 1 --- 2 - include: provision.yml 3 - include: deploy.yml Now we’re ready to create a the deploy.yml playbook, which will do the following: Use git to check out the latest production release of the Rails app. Copy over a secrets.yml template that holds some secure app data required for running the app. Make sure all the gems required for the app are installed (via Bundler). Create the database (if it doesn’t already exist). Run rake tasks to make sure the database schema is up-to-date and all assets (like JS and CSS) are compiled. Make sure the app files’ ownership is set correctly so Passenger and Nginx can serve them without error. If any changes or updates were made, restart Passenger and Nginx. Most of these tasks will use Ansible’s modules, but for a few, we’ll just wrap the normal deployment-related commands in shell since there aren’t pre-existing modules to take care of them for us: 1 --- 2 - hosts: all 3 sudo: yes 4 5 vars_files: 6 - vars.yml 7 8 roles: 9 - geerlingguy.passenger 10 11 tasks: 12 - name: Ensure demo application is at correct release. 13 git: 14 repo: https://github.com/geerlingguy/demo-rails-app.git 15 version: "{{ app_version }}" 16 dest: "{{ app_directory }}" 17 accept_hostkey: true 18 register: app_updated 19 notify: restart nginx 20 21 - name: Ensure secrets file is present. 22 template: 23 src: templates/secrets.yml.j2 24 dest: "{{ app_directory }}/config/secrets.yml" 25 owner: "{{ app_user }}" 26 group: "{{ app_user }}" 27 mode: 0664 28 notify: restart nginx 29 30 - name: Install required dependencies with bundler. 31 shell: "bundle install --path vendor/bundle chdir={{ app_directory }}" 32 when: app_updated.changed == true 33 notify: restart nginx 34 35 - name: Check if database exists. 36 stat: "path={{ app_directory }}/db/{{ app_environment.RAILS_ENV }}.sqlite3" 37 register: app_db_exists 38 39 - name: Create database. 40 shell: "bundle exec rake db:create chdir={{ app_directory }}" 41 when: app_db_exists.stat.exists == false 42 notify: restart nginx 43 44 - name: Perform deployment-related rake tasks. 45 shell: "{{ item }} chdir={{ app_directory }}" 46 with_items: 47 - bundle exec rake db:migrate 48 - bundle exec rake assets:precompile 49 environment: app_environment 50 when: app_updated.changed == true 51 notify: restart nginx 52 53 - name: Ensure demo application has correct user for files. 54 file: 55 path: "{{ app_directory }}" 56 state: directory 57 owner: "{{ app_user }}" 58 group: "{{ app_user }}" 59 recurse: yes 60 notify: restart nginx The first thing you’ll notice (besides the fact that we’ve included the vars.yml file again, since we need those variables in this playbook as well) is that we’ve added the geerlingguy.passenger role in this playbook.

(Lines 35-42) Create the application database with rake db:create if it doesn’t already exist. Since this application uses a simple SQLite database, it’s a matter of checking if the .sqlite3 file exists, and if not, running the db:create task. (Lines 44-51) If the app_updated variable shows that a change occurred as part of the first git task, we’ll also run a couple rake tasks to make sure the database schema is up to date, and all assets (like scripts and stylesheets) are compiled. (Lines 53-60) Make sure all app files have the correct permissions for Passenger/Nginx to serve them correctly. Because many of the tasks result in filesystem changes that could change the behavior of the application, they all notify the restart nginx handler provided by the geerlingguy.passenger role, so Passenger reloads the configuration and restarts the app.

This way the entire state of your infrastructure is encapsulated in your playbook files, which ideally should be version controlled and managed similarly to the application they deploy. Plus, who wants to enter any more information on the command line than is absolutely required? Our application is a fairly generic web application that has updates to application code (which require a webserver reload), styles (which need recompiling), and possibly the database schema (which needs rake migrate tasks to be run). Any time app_version is changed inside playbooks/vars.yml, the deploy playbook will automatically run all the required tasks to get our app running with the latest code. Update app_version to 1.3.0, and then run the following command again:

$ ansible-playbook deploy.yml -i inventory-ansible

After a minute or so, the deployment should complete, and once that's done, you'll see the much improved new version of the Demonstration Ruby on Rails Application: Rails app - version 1.3.0 with a responsive UI.


pages: 655 words: 141,257

Programming Android: Java Programming for the New Generation of Mobile Devices by Zigurd Mednieks, Laird Dornin, G. Blake Meike, Masumi Nakamura

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

anti-pattern, business process, conceptual framework, create, read, update, delete, database schema, Debian, domain-specific language, en.wikipedia.org, fault tolerance, Google Earth, interchangeable parts, iterative process, loose coupling, MVC pattern, revision control, RFID, web application, yellow journalism

We provide several examples in this chapter that will help you to begin using our job database. Note The tables created by SQL CREATE TABLE statements and the attributes they contain are called a database schema. DROP TABLE This removes a table added with the CREATE TABLE statement. It takes the name of the table to be deleted. On completion, any data that was stored in the table may not be retrieved. Here is some SQL code that will create and then delete a simple table for storing contacts: CREATE TABLE contacts ( first_name TEXT, last_name TEXT, phone_number TEXT, height_in_meters REAL); DROP TABLE contacts; When entering commands through sqlite3, you must terminate each command with a semicolon. You may change the database schema after you create tables (which you may want to do to add a column or change the default value of a column) by entering the ALTER TABLE command.
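The same statements can be driven from code. A minimal sketch follows using Python's standard sqlite3 module (not something the book does; it uses the sqlite3 shell and Android APIs), mirroring the contacts table above:

import sqlite3

# A minimal sketch of the same schema operations using Python's stdlib sqlite3
# module; the book issues these statements through the sqlite3 shell instead.
conn = sqlite3.connect(":memory:")

# CREATE TABLE: define the contacts schema from the excerpt.
conn.execute("""
    CREATE TABLE contacts (
        first_name TEXT,
        last_name TEXT,
        phone_number TEXT,
        height_in_meters REAL
    )
""")

# ALTER TABLE: change the schema after the fact, e.g. add a column.
conn.execute("ALTER TABLE contacts ADD COLUMN email TEXT")

# DROP TABLE: remove the table; its rows can no longer be retrieved.
conn.execute("DROP TABLE contacts")
conn.close()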

Detailed information on Monkey can be found at http://developer.android.com/guide/developing/tools/monkey.html. sqlite3 Android uses SQLite as the database system for many system databases and provides APIs for applications to make use of SQLite, which is convenient for data storage and presentation. SQLite also has a command-line interface, and the sqlite3 command enables developers to dump database schemas and perform other operations on Android databases. These databases are, of course, in an Android device, or they are contained in an AVD, and therefore the sqlite3 command is available in the adb shell. Detailed directions for how to access the sqlite3 command line from inside the adb shell are available at http://developer.android.com/guide/developing/tools/adb.html#shellcommands. We introduce sqlite3 in Example Database Manipulation Using sqlite3.
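For comparison, the same "dump the schema" idea can be expressed in a few lines of Python against an ordinary SQLite file by reading the built-in sqlite_master catalog (the .schema command in the sqlite3 shell prints essentially this). This is only a sketch, not an Android or adb workflow, and the database path is a placeholder:

import sqlite3

# Print the CREATE statements stored in SQLite's sqlite_master catalog.
conn = sqlite3.connect("jobs.db")  # placeholder path
for (sql,) in conn.execute("SELECT sql FROM sqlite_master WHERE sql IS NOT NULL"):
    print(sql + ";")
conn.close()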

Thus, if you insert a row into the employees table with an employer_id for a row that does not exist in the employers table, many flavors of SQL will raise a constraint violation. This may help you to avoid orphaned references, also known as enforcement of foreign keys. However, the foreign key constraint in SQLite is optional, and is turned off in Android. As of Android 2.2, you cannot rely on a foreign key constraint to catch incorrect foreign key references, so you will need to take care when creating database schemas that use foreign keys. There are several other constraints with less far-reaching effects: UNIQUE Forces the value of the given column to be different from the values in that column in all existing rows, whenever a row is inserted or updated. Any insert or update operation that attempts to insert a duplicate value will result in an SQLite constraint violation. NOT NULL Requires a value in the column; NULL cannot be assigned.
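A small sketch of this constraint behavior using plain SQLite from Python (not Android): foreign-key enforcement is off by default and must be switched on per connection with a PRAGMA, while UNIQUE and NOT NULL are always enforced.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # without this, dangling employer_ids are accepted

conn.execute("CREATE TABLE employers (id INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE)")
conn.execute("""
    CREATE TABLE employees (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        employer_id INTEGER REFERENCES employers(id)
    )
""")

conn.execute("INSERT INTO employers (id, name) VALUES (1, 'Acme')")
try:
    # Orphaned reference: employer 99 does not exist.
    conn.execute("INSERT INTO employees (name, employer_id) VALUES ('Bob', 99)")
except sqlite3.IntegrityError as e:
    print("constraint violation:", e)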

Django Book by Matt Behrens

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Benevolent Dictator For Life (BDFL), create, read, update, delete, database schema, distributed revision control, don't repeat yourself, en.wikipedia.org, Firefox, full text search, loose coupling, MVC pattern, revision control, Ruby on Rails, school choice, slashdot, web application

Middleware modules running outside of it run with commit-on-save – the default Django behavior. Middleware modules running inside it (coming later in the stack) will be under the same transaction control as the view functions. See Appendix B for more information about database transactions. What's Next? Web developers and database-schema designers don't always have the luxury of starting from scratch. In the next chapter, we'll cover how to integrate with legacy systems, such as database schemas you've inherited from the 1980s. Chapter 18: Integrating with Legacy Databases and Applications Django is best suited for so-called green-field development – that is, starting projects from scratch, as if you were constructing a building on a fresh field of green grass.

It looks at all of the models in each app in your INSTALLED_APPS setting, checks the database to see whether the appropriate tables exist yet, and creates the tables if they don’t yet exist. Note that syncdb does not sync changes in models or deletions of models; if you make a change to a model or delete a model, and you want to update the database, syncdb will not handle that. (More on this in the “Making Changes to a Database Schema” section in Chapter 10.) If you run python manage.py syncdb again, nothing happens, because you haven’t added any models to the books app or added any apps to INSTALLED_APPS. Ergo, it’s always safe to run python manage.py syncdb – it won’t clobber things. If you’re interested, take a moment to dive into your database server’s command-line client and see the database tables Django created.

Here’s the revised code: class Book(models.Model): title = models.CharField(max_length=100) authors = models.ManyToManyField(Author) publisher = models.ForeignKey(Publisher) publication_date = models.DateField(blank=True, null=True) Adding null=True is more complicated than adding blank=True, because null=True changes the semantics of the database – that is, it changes the CREATE TABLE statement to remove the NOT NULL from the publication_date field. To complete this change, we’ll need to update the database. For a number of reasons, Django does not attempt to automate changes to database schemas, so it’s your own responsibility to execute the appropriate ALTER TABLE statement whenever you make such a change to a model. Recall that you can use manage.py dbshell to enter your database server’s shell. Here’s how to remove the NOT NULL in this particular case: ALTER TABLE books_book ALTER COLUMN publication_date DROP NOT NULL; (Note that this SQL syntax is specific to PostgreSQL.)


pages: 205 words: 47,169

PostgreSQL: Up and Running by Regina Obe, Leo Hsu

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

cloud computing, database schema, Debian, en.wikipedia.org, full text search, web application

database: Each PostgreSQL server houses many databases.
table: Tables are the workhorses of any database. What is unique about PostgreSQL tables is the inheritance support and the fact that every table automatically begets an accompanying custom data type. Tables can inherit from other tables and querying can bring up child records from child tables.
schema: Schemas are part of the ANSI-SQL standards, so you'll see them in other databases. Schemas are the logical containers of tables and other objects. Each database can have multiple schemas.
tablespace: Tablespace is the physical location where data is stored. PostgreSQL allows tablespaces to be independently managed, which means you can easily move databases to different drives with just a few commands.
view: Most relational databases have views for abstracting queries. In PostgreSQL, you can also have views that can be updated.
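These objects translate directly to SQL. As a minimal sketch (not from the book, and assuming a reachable local PostgreSQL server, a database named mydb, and the psycopg2 driver):

import psycopg2

# Sketch of the object hierarchy described above: a schema as a logical
# container inside a database, a table inside that schema, and a view
# abstracting a query over the table.
conn = psycopg2.connect("dbname=mydb")  # connection details are placeholders
cur = conn.cursor()

cur.execute("CREATE SCHEMA IF NOT EXISTS app")
cur.execute("CREATE TABLE IF NOT EXISTS app.customers (id serial PRIMARY KEY, name text)")
cur.execute("""
    CREATE OR REPLACE VIEW app.customer_names AS
        SELECT id, name FROM app.customers
""")

conn.commit()
cur.close()
conn.close()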

Data export. pgAdmin can easily export query results as CSV or other delimited format. It can even export as HTML, providing you with a turn-key reporting engine, albeit a bit crude. Backup and restore wizard. Can’t remember the myriad of commands and switches to perform a backup or restore using pg_restore and pg_dump? pgAdmin has a nice interface that’ll let you selectively back up and restore databases, schemas, single tables, and globals, and the message tab shows you the command line pg_dump or pg_restore it used to do it. Grant Wizard. This time-saver will allow you to change permissions on many database objects in one fell swoop. pgScript engine. This is a quick and dirty way to run scripts that don’t have to complete as a transaction. With this you can run loops and so forth that commit on each SQL update, unlike stored functions that require all steps completed before the work is committed.

If you want to back up the whole server, doing a pg_dumpall, then use the Tools → Backup Server option. Selective Backup of Database Objects pgAdmin provides a graphical interface to pg_dump that we covered in Selective Backup Using pg_dump for doing selective backup of objects. To back up selective objects, right-mouse click on the object you want to back up and select Backup .... You can back up a whole database, schema, table, or anything else. Figure 4-11. PgAdmin Right-click Backup Schema If all you wanted to back up was that one object, you can forgo the other tabs and just do as we did in Figure 4-10. However, you can selectively pick or uncheck some more items by clicking on the objects tab as shown in Figure 4-12. Figure 4-12. PgAdmin Right-click Backup Selective Note pgAdmin behind the scenes just runs pg_dump to perform the backup.

Data Mining: Concepts and Techniques by Jiawei Han, Micheline Kamber, Jian Pei

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

bioinformatics, business intelligence, business process, Claude Shannon: information theory, cloud computing, computer vision, correlation coefficient, cyber-physical system, database schema, discrete time, distributed generation, finite state, information retrieval, iterative process, knowledge worker, linked data, natural language processing, Netflix Prize, Occam's razor, pattern recognition, performance metric, phenotype, random walk, recommendation engine, RFID, semantic web, sentiment analysis, speech recognition, statistical model, stochastic process, supply-chain management, text mining, thinkpad, Thomas Bayes, web application

Many concept hierarchies are implicit within the database schema. For example, suppose that the dimension location is described by the attributes number, street, city, province_or_state, zip_code, and country. These attributes are related by a total order, forming a concept hierarchy such as “street < city < province_or_state < country.” This hierarchy is shown in Figure 4.10(a). Alternatively, the attributes of a dimension may be organized in a partial order, forming a lattice. An example of a partial order for the time dimension based on the attributes day, week, month, quarter, and year is “day <{month < quarter; week} < year.”1 This lattice structure is shown in Figure 4.10(b). A concept hierarchy that is a total or partial order among attributes in a database schema is called a schema hierarchy. Concept hierarchies that are common to many applications (e.g., for time) may be predefined in the data mining system.
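As a rough illustration (not from the book) of treating such a schema hierarchy as an ordered list of attributes, the following Python sketch rolls a location record up to a chosen level of the hierarchy; the attribute values are made up:

# The total order "street < city < province_or_state < country", from most
# specific to most general.
LOCATION_HIERARCHY = ["street", "city", "province_or_state", "country"]

def roll_up(record, level):
    """Keep only the attributes at the given level and above (coarser)."""
    start = LOCATION_HIERARCHY.index(level)
    return {attr: record[attr] for attr in LOCATION_HIERARCHY[start:]}

sale_location = {"street": "101 Main St", "city": "Vancouver",
                 "province_or_state": "BC", "country": "Canada"}
print(roll_up(sale_location, "city"))
# {'city': 'Vancouver', 'province_or_state': 'BC', 'country': 'Canada'}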

More than one concept hierarchy can be defined for the same attribute to accommodate the needs of various users. Figure 3.12 A concept hierarchy for the attribute price, where an interval ($X … $Y] denotes the range from $X (exclusive) to $Y (inclusive). 6. Concept hierarchy generation for nominal data, where attributes such as street can be generalized to higher-level concepts, like city or country. Many hierarchies for nominal attributes are implicit within the database schema and can be automatically defined at the schema definition level. Recall that there is much overlap between the major data preprocessing tasks. The first three of these strategies were discussed earlier in this chapter. Smoothing is a form of data cleaning and was addressed in Section 3.2.2. Section 3.2.3 on the data cleaning process also discussed ETL tools, where users specify transformations to correct data inconsistencies.

In particular, we study concept hierarchy generation for nominal attributes. Nominal attributes have a finite (but possibly large) number of distinct values, with no ordering among the values. Examples include geographic_location, job_category, and item_type. Manual definition of concept hierarchies can be a tedious and time-consuming task for a user or a domain expert. Fortunately, many hierarchies are implicit within the database schema and can be automatically defined at the schema definition level. The concept hierarchies can be used to transform the data into multiple levels of granularity. For example, data mining patterns regarding sales may be found relating to specific regions or countries, in addition to individual branch locations. We study four methods for the generation of concept hierarchies for nominal data, as follows. 1.


pages: 292 words: 62,575

97 Things Every Programmer Should Know by Kevlin Henney

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

A Pattern Language, active measures, business intelligence, commoditize, continuous integration, crowdsourcing, database schema, deliberate practice, domain-specific language, don't repeat yourself, Donald Knuth, fixed income, general-purpose programming language, Grace Hopper, index card, inventory management, job satisfaction, loose coupling, Silicon Valley, sorting algorithm, The Wisdom of Crowds

If an object typically requires several things to happen before it can be used, this can be accomplished with an Abstract Factory or a Factory Method pattern. If an object has many possible variations in its behavior, these behaviors can be injected using the Strategy pattern rather than large if-then structures. In fact, the formulation of design patterns themselves is an attempt to reduce the duplication of effort required to solve common problems and discuss such solutions. In addition, DRY can be applied to structures, such as database schema, resulting in normalization. A Matter of Principle Other software principles are also related to DRY. The Once and Only Once principle, which applies only to the functional behavior of code, can be thought of as a subset of DRY. The Open/Closed Principle, which states that "software entities should be open for extension, but closed for modification," only works in practice when DRY is followed.

After all, classes are supposed to be collections of functions that operate on common variables. However, the problem is that the three functions change for entirely different reasons. The calculatePay function will change whenever the business rules for calculating pay do. The reportHours function will change whenever someone wants a different format for the report. The save function will change whenever the DBAs change the database schema. These three reasons to change combine to make Employee very volatile. It will change for any of those reasons. More importantly, any classes that depend upon Employee will be affected by those changes. Good system design means that we separate the system into components that can be independently deployed. Independent deployment means that if we change one component, we do not have to redeploy any of the others.
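A minimal Python sketch (not from the essay itself) of the separation being argued for; class and attribute names are illustrative. Each class below changes for exactly one reason, so a database schema change touches only the repository:

class PayCalculator:
    """Changes only when the business rules for calculating pay change."""
    def calculate_pay(self, employee):
        return employee.hours_worked * employee.hourly_rate

class HourReporter:
    """Changes only when someone wants a different report format."""
    def report_hours(self, employee):
        return f"{employee.name}: {employee.hours_worked}h"

class EmployeeRepository:
    """Changes only when the DBAs change the database schema."""
    def save(self, employee, cursor):
        cursor.execute(
            "INSERT INTO employees (name, hours_worked, hourly_rate) VALUES (?, ?, ?)",
            (employee.name, employee.hours_worked, employee.hourly_rate))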


Learning Flask Framework by Matt Copperwaite, Charles Leifer

create, read, update, delete, database schema, Debian, DevOps, don't repeat yourself, full text search, place-making, Skype, web application

Summary By now you should be familiar with using SQLAlchemy to work with a relational database. We covered the benefits of using a relational database and an ORM, configured a Flask application to connect to a relational database, and created SQLAlchemy models. All this allowed us to create relationships between our data and perform queries. To top it off, we also used a migration tool to handle future database schema changes. In Chapter 3, Templates and Views we will set aside the interactive interpreter and start creating views to display blog entries in the web browser. We will put all our SQLAlchemy knowledge to work by creating interesting lists of blog entries, as well as a simple search feature. We will build a set of templates to make the blogging site visually appealing, and learn how to use the Jinja2 templating language to eliminate repetitive HTML coding.

Below the tags relationship, add the following code to the Entry model definition: class Entry(db.Model): # ... tags = db.relationship('Tag', secondary=entry_tags, backref=db.backref('entries', lazy='dynamic')) comments = db.relationship('Comment', backref='entry', lazy='dynamic') [ 148 ] Chapter 7 We've specified the relationship as lazy='dynamic', which, as you will recall from Chapter 5, Authenticating Users, means that the comments attribute on any given Entry instance will be a filterable query. Creating a schema migration In order to start using our new model, we need to update our database schema. Using the manage.py helper, create a schema migration for the Comment model: (blog) $ python manage.py db migrate INFO [alembic.migration] Context impl SQLiteImpl. INFO [alembic.migration] Will assume non-transactional DDL. INFO [alembic.autogenerate.compare] Detected added table 'comment' Generating /home/charles/projects/blog/app/migrations/ versions/490b6bc5f73c_.py ... done Then apply the migration by running upgrade: (blog) $ python manage.py db upgrade INFO [alembic.migration] Context impl SQLiteImpl.
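The Comment model referenced here is defined elsewhere in the book; purely as a hedged sketch of what such a Flask-SQLAlchemy model might look like (the column names and database URI are illustrative, not the book's, and the Entry model is assumed to exist as in the excerpt):

import datetime
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///blog.db'  # placeholder URI
db = SQLAlchemy(app)

class Comment(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64))
    body = db.Column(db.Text)
    entry_id = db.Column(db.Integer, db.ForeignKey('entry.id'))  # ties back to Entry
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)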


pages: 933 words: 205,691

Hadoop: The Definitive Guide by Tom White

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Amazon Web Services, bioinformatics, business intelligence, combinatorial explosion, database schema, Debian, domain-specific language, en.wikipedia.org, fault tolerance, full text search, Grace Hopper, information retrieval, Internet Archive, linked data, loose coupling, openstreetmap, recommendation engine, RFID, SETI@home, social graph, web application

The Hive Web Interface (HWI) As an alternative to the shell, you might want to try Hive’s simple web interface. Start it using the following commands: % export ANT_LIB=/path/to/ant/lib % hive --service hwi (You only need to set the ANT_LIB environment variable if Ant’s library is not found in /opt/ant/lib on your system.) Then navigate to http://localhost:9999/hwi in your browser. From there, you can browse Hive database schemas and create sessions for issuing commands and queries. It’s possible to run the web interface as a shared service to give users within an organization access to Hive without having to install any client software. There are more details on the Hive Web Interface on the Hive wiki at https://cwiki.apache.org/confluence/display/Hive/HiveWebInterface. Hive clients If you run Hive as a server (hive --service hiveserver), then there are a number of different mechanisms for connecting to it from applications.

Tables A Hive table is logically made up of the data being stored and the associated metadata describing the layout of the data in the table. The data typically resides in HDFS, although it may reside in any Hadoop filesystem, including the local filesystem or S3. Hive stores the metadata in a relational database—and not in HDFS, say (see The Metastore). In this section, we shall look in more detail at how to create tables, the different physical storage formats that Hive offers, and how to import data into them. Multiple Database/Schema Support Many relational databases have a facility for multiple namespaces, which allow users and applications to be segregated into different databases or schemas. Hive supports the same facility, and provides commands such as CREATE DATABASE dbname, USE dbname, and DROP DATABASE dbname. You can fully qualify a table by writing dbname.tablename. If no database is specified, tables belong to the default database.

To install and configure MySQL, follow the documentation at http://dev.mysql.com/doc/refman/5.1/en/. Chapter 2 (“Installing and Upgrading MySQL”) in particular should help. Users of Debian-based Linux systems (e.g., Ubuntu) can type sudo apt-get install mysql-client mysql-server. RedHat users can type sudo yum install mysql mysql-server. Now that MySQL is installed, let’s log in and create a database (Example 15-1). Example 15-1. Creating a new MySQL database schema % mysql -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 349 Server version: 5.1.37-1ubuntu5.4 (Ubuntu) Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> CREATE DATABASE hadoopguide; Query OK, 1 row affected (0.02 sec) mysql> GRANT ALL PRIVILEGES ON hadoopguide.* TO '%'@'localhost'; Query OK, 0 rows affected (0.00 sec) mysql> GRANT ALL PRIVILEGES ON hadoopguide.* TO ''@'localhost'; Query OK, 0 rows affected (0.00 sec) mysql> quit; Bye The password prompt above asks for your root user password.


pages: 485 words: 74,211

Developing Web Applications with Haskell and Yesod by Michael Snoyman

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

create, read, update, delete, database schema, Debian, domain-specific language, don't repeat yourself, full text search, MVC pattern, web application

Your table might look something like: CREATE TABLE Person(id SERIAL PRIMARY KEY, name VARCHAR NOT NULL, age INTEGER) And if you are using a database like PostgreSQL, you can be guaranteed that the database will never store some arbitrary text in your age field. (The same cannot be said of SQLite, but let's forget about that for now.) To mirror this database table, you would likely create a Haskell data type that looks something like: data Person = Person { personName :: Text , personAge :: Int } It looks like everything is type safe: the database schema matches our Haskell data types, the database ensures that invalid data can never make it into our data store, and everything is generally awesome. Well, until: You want to pull data from the database, and the database layer gives you the data in an untyped format. You want to find everyone older than 32, and you accidentally write “thirtytwo” in your SQL statement. Guess what: that will compile just fine, and you won’t find out you have a problem until runtime.

Suppose that we now want to add a field for a person’s favorite programming language: {-# LANGUAGE QuasiQuotes, TypeFamilies, GeneralizedNewtypeDeriving, TemplateHaskell, OverloadedStrings, GADTs, FlexibleContexts #-} import Database.Persist import Database.Persist.Sqlite import Database.Persist.TH import Data.Time share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persist| Person name String age Int Maybe created UTCTime default=now() language String default='Haskell' |] main = withSqliteConn ":memory:" $ runSqlConn $ do runMigration migrateAll Note The default attribute has absolutely no impact on the Haskell code itself; you still need to fill in all values. This will only affect the database schema and automatic migrations. We need to surround the string with single quotes so that the database can properly interpret it. Finally, Persistent can use double quotes for containing white space, so if we want to set someone’s default home country to be El Salvador: {-# LANGUAGE QuasiQuotes, TypeFamilies, GeneralizedNewtypeDeriving, TemplateHaskell, OverloadedStrings, GADTs, FlexibleContexts #-} import Database.Persist import Database.Persist.Sqlite import Database.Persist.TH import Data.Time share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persist| Person name String age Int Maybe created UTCTime default=now() language String default='Haskell' country String "default='El Salvador'" |] main = withSqliteConn ":memory:" $ runSqlConn $ do runMigration migrateAll One last trick you can do with attributes is to specify the names to be used for the SQL tables and columns.


pages: 470 words: 109,589

Apache Solr 3 Enterprise Search Server by Unknown

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

bioinformatics, continuous integration, database schema, en.wikipedia.org, fault tolerance, Firefox, full text search, information retrieval, Internet Archive, natural language processing, performance metric, platform as a service, Ruby on Rails, web application

In order to make it easier for you to play with this data, the online code supplement to this book includes the data in formats that can readily be imported into Solr. Alternatively, if you already have your own data then I recommend starting with that, using this book as a guide. The MusicBrainz.org database is highly relational. Therefore, it will serve as an excellent instructional data set to discuss Solr schema choices. The MusicBrainz database schema is quite complex, and it would be a distraction to go over even half of it. I'm going to use a subset of it and express it in a way that has a straightforward mapping to the user interface seen on the MusicBrainz website. Each of these tables depicted in the following diagram can be easily constructed through SQL sub-queries or views from the actual MusicBrainz tables. To describe the major tables above, I'll use some examples here from my favorite band, the Smashing Pumpkins.

Even if you have one type of data to search for in an application, you might still use multiple cores (with the same configuration) and shard the data for scaling. Managing Solr Cores is discussed further in the deployment chapter. One combined index A combined index might also be called an aggregate index. As mentioned in the first chapter, an index is conceptually like a single-table relational database schema, thus sharing similarities with some NoSQL (non-relational) databases. In spite of this limitation, there is nothing to stop you from putting different types of data (say, artists and releases from MusicBrainz) into a single index. All you have to do is use different fields for the different document types, and use a field to discriminate between the types. An identifier field would need to be unique across all documents in this index, no matter what the type is, so you could easily do this by concatenating the field type and the entity's identifier.
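A small Python sketch (not from the book) of the combined-index idea: a shared "type" field discriminates between document types, and the unique id concatenates the type with the entity's own identifier. The entity ids below are made up:

def to_solr_doc(entity_type, entity):
    """Flatten an entity into a document for one combined index."""
    return {
        "id": f"{entity_type}-{entity['id']}",  # unique across all document types
        "type": entity_type,                    # discriminator field
        **{f"{entity_type}_{k}": v for k, v in entity.items() if k != "id"},
    }

artist = {"id": 11650, "name": "The Smashing Pumpkins"}      # hypothetical id
release = {"id": 24675, "title": "Siamese Dream"}            # hypothetical id
print(to_solr_doc("artist", artist))
print(to_solr_doc("release", release))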

The name attribute is the Solr schema field name that the column is going into. If it is not specified, then it defaults to the column name. When a column in the result can be placed directly into Solr without further processing, there is no need to specify the field declaration, because it is implied. When importing from a database, use the SQL AS keyword to use the same names as the Solr schema instead of the database schema. This reduces the number of <field/> elements and shortens existing ones. An attribute of the entity declaration that we didn't mention yet is transformer. This declares a comma-separated list of transformers that create, modify, and delete fields and even entire documents. The transformers are evaluated in-order, which can be significant. Usually the transformers use attributes specific to them on a given field to trigger that it should take some action, whether it be splitting the field into multiple values or formatting it or whatever.

Python Web Development With Django by Jeff Forcier

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

create, read, update, delete, database schema, Debian, don't repeat yourself, en.wikipedia.org, Firefox, full text search, Guido van Rossum, loose coupling, MVC pattern, revision control, Ruby on Rails, Silicon Valley, slashdot, web application

He holds degrees in Computer Science, Mathematics, and Music from the University of California. Introduction If you're a Web developer, a programmer who creates Web sites, then Django just might change your life. It has certainly changed ours. Anyone with even a little experience building dynamic Web sites knows the pain of reinventing certain standard features over and over. You need to create database schemas. You need to get data into and out of the database. You need to parse URLs. You need to sanitize input. You need to provide content-editing tools. You need to attend to security and usability. And so on. Where Web Frameworks Come From At some point you realize the wastefulness of reimplementing all these features on every new project; you decide to code your own libraries from scratch to provide them—or, more likely, you extract those libraries from your latest and greatest creation. Thereafter, when you start a new project, the first thing you do is install your library code.

Modeling a real-world problem in an object-oriented system is often a comparatively simple task, but for high-traffic Web sites the most realistic model isn't always the most efficient. The model encompasses a wide range of potential pitfalls, one of which is changing the model code after your application has been deployed. Although you are "just changing code," you have actually altered your database schema under the covers, and this often causes side effects to the preexisting data stored in the database. We go over many of these real-life concerns in the chapters ahead when exploring the design of some example applications. Views Views form much (sometimes most or all) of the logic in Django applications. Their definition is deceptively simple: Python functions are linked to one or more defined URLs, which return HTTP response objects. What happens in-between those two endpoints of Django's HTTP mechanisms is entirely up to you.

Instead, it makes sure all model classes are represented as database tables, creating new tables as necessary—but not altering existing ones. Database "Synchronization" The reasoning behind syncdb's behavior is that Django's core development team strongly believes one's production data should never be at the mercy of an automated process. Additionally, it is a commonly held belief that changes to database schemas should only be performed when a developer understands SQL well enough to execute those changes by hand. The authors tend to agree with this approach; a better understanding of underlying technology is always preferable when developing with higher-layer tools. At the same time, an automatic or semi-automatic schema change-set mechanism (such as Rails' migrations) can often speed up the development process.


Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable and Maintainable Systems by Martin Kleppmann (O'Reilly, 2017)

active measures, Amazon Web Services, bitcoin, blockchain, business intelligence, business process, c2.com, cloud computing, collaborative editing, commoditize, conceptual framework, cryptocurrency, database schema, DevOps, distributed ledger, Donald Knuth, Edward Snowden, ethereum blockchain, fault tolerance, finite state, Flash crash, full text search, general-purpose programming language, informal economy, information retrieval, Internet of things, iterative process, John von Neumann, loose coupling, Marc Andreessen, natural language processing, Network effects, packet switching, peer-to-peer, performance metric, place-making, premature optimization, recommendation engine, Richard Feynman, self-driving car, semantic web, Shoshana Zuboff, social graph, social web, software as a service, software is eating the world, sorting algorithm, source of truth, SPARQL, speech recognition, statistical model, web application, WebSocket, wikimedia commons

The column name in the database maps to the field name in Avro. Now, if the database schema changes (for example, a table has one column added and one column removed), you can just generate a new Avro schema from the updated database schema and export data in the new Avro schema. The data export process does not need to pay any attention to the schema change—it can simply do the schema conversion every time it runs. Anyone who reads the new data files will see that the fields of the record have changed, but since the fields are identified by name, the updated writer's schema can still be matched up with the old reader's schema. By contrast, if you were using Thrift or Protocol Buffers for this purpose, the field tags would likely have to be assigned by hand: every time the database schema changes, an administrator would have to manually update the mapping from database column names to field tags.
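As a rough sketch (not from the book) of deriving a writer's schema this way, the following Python generates an Avro record schema from a list of column name/type pairs; the SQL-to-Avro type mapping is a simplified stand-in for whatever a real export job would use:

import json

# Simplified stand-in for a real SQL-to-Avro type mapping.
SQL_TO_AVRO = {"integer": "long", "varchar": "string", "text": "string",
               "boolean": "boolean", "double": "double"}

def avro_schema_for(table, columns):
    """Build an Avro record schema from (column_name, sql_type) pairs."""
    return {
        "type": "record",
        "name": table,
        "fields": [
            # Making every field nullable with a default keeps added/removed
            # columns compatible between writer's and reader's schemas.
            {"name": col, "type": ["null", SQL_TO_AVRO[sql_type]], "default": None}
            for col, sql_type in columns
        ],
    }

print(json.dumps(avro_schema_for("users", [("id", "integer"), ("name", "varchar")]), indent=2))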

For example, say you are currently storing each user's full name in one field, and you instead want to store the first name and last name separately [23]. In a document database, you would just start writing new documents with the new fields and have code in the application that handles the case when old documents are read. For example: if (user && user.name && !user.first_name) { // Documents written before Dec 8, 2013 don't have first_name user.first_name = user.name.split(" ")[0]; } On the other hand, in a "statically typed" database schema, you would typically perform a migration along the lines of: ALTER TABLE users ADD COLUMN first_name text; UPDATE users SET first_name = split_part(name, ' ', 1); -- PostgreSQL UPDATE users SET first_name = substring_index(name, ' ', 1); -- MySQL Schema changes have a bad reputation of being slow and requiring downtime. This reputation is not entirely deserved: most relational database systems execute the ALTER TABLE statement in a few milliseconds.


PostGIS in Action, 2nd Edition by Regina O. Obe, Leo S. Hsu

call centre, crowdsourcing, database schema, Debian, domain-specific language, en.wikipedia.org, Firefox, Google Earth, job automation, McMansion, megacity, Mercator projection, Network effects, openstreetmap, planetary scale, profit maximization, Ruby on Rails, Skype, South of Market, San Francisco, traveling salesman, web application

It logically segments objects (tables, views, functions, and so on) for easier management. CREATE SCHEMA ch01; In PostgreSQL it's very easy to back up selected schemas and also to set up permissions based on schemas. You could, for example, have a big schema of fairly static data that you exclude from your daily backups, and also divide schemas along user groups so that you can allow each group to manage their own schema set of data. The postgis_in_action database schemas are chapter-themed so that it's easy to download just the set of data you need for a specific chapter. Refer to appendix D for more details about schemas and security management. RESTAURANTS TABLE Next, you need to create a lookup table to map franchise codes to meaningful names. You can then add all the franchises you'll be dealing with. Listing 1.2 Create franchise lookup table CREATE TABLE ch01.lu_franchises ( id char(3) PRIMARY KEY, franchise varchar(30) ); Create table INSERT INTO ch01.lu_franchises(id, franchise) Populate table VALUES ('BKG', 'Burger King'), ('CJR', 'Carl''s Jr'), ('HDE', 'Hardee'), ('INO', 'In-N-Out'), ('JIB', 'Jack in the Box'), ('KFC', 'Kentucky Fried Chicken'), ('MCD', 'McDonald'), ('PZH', 'Pizza Hut'), ('TCB', 'Taco Bell'), ('WDY', 'Wendy''s'); Finally, you need to create a table to hold the data you'll be loading.

The main downside of Imposm is that it only runs on Linux and OS X, and not on Windows, whereas osm2pgsql has support for all the platforms PostGIS runs on. Imposm also currently doesn’t support OSM differential loads, so it’s not a good solution if you have a large OSM data set and want to perform differential updates as things get updated on OSM. One key benefit of Imposm is that it supports custom database schemas, so you can better control the table structures of your data as part of the load process. You can find out more about Imposm at http://imposm.org. Visit www.openstreetmap .org/export to download both utilities. Now that we’ve covered the common free options available for loading data, let’s test these tools. 4.4.1 Getting OSM data You can choose to download and load the whole OSM database, which is about 16 GB in size, or you can download extracts of key areas such as those available at http:// download.geofabrik.de/ or http://metro.teczno.com.

Listing 8.2 Generate nationscript (for Linux/Unix/Mac) \t \a \o /gisdata/nationscript.sh SELECT loader_generate_nation_script('sh'); \o Listing 8.3 Generate nationscript (for Windows) \t \a \o /gisdata/nationscript.bat SELECT loader_generate_nation_script('windows'); \o Psql will place the script file in the gisdata directory. Open up the file in an editor and make changes to the path and database connection as appropriate. Once you’ve made the changes and double-checked your typing, go ahead and execute the script file from your OS command line. After you’ve executed the script, you should see a couple of new tables in your tiger_data database schema. The two of most importance are states_all and county_all. Now that you’ve had some practice at loading data, let’s generate a script to load individual state data. If you’re on Linux, Unix, or Mac, run listing 8.4. If you’re on Windows, run listing 8.5. If you created a custom profile in the tiger.loader_platform table, use that name instead of sh or windows. Also, be sure to substitute in your favorite states.


pages: 398 words: 86,855

Bad Data Handbook by Q. Ethan McCallum

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Amazon Mechanical Turk, asset allocation, barriers to entry, Benoit Mandelbrot, business intelligence, cellular automata, chief data officer, Chuck Templeton: OpenTable, cloud computing, cognitive dissonance, combinatorial explosion, commoditize, conceptual framework, database schema, en.wikipedia.org, Firefox, Flash crash, Gini coefficient, illegal immigration, iterative process, labor-force participation, loose coupling, natural language processing, Netflix Prize, quantitative trading / quantitative finance, recommendation engine, selection bias, sentiment analysis, statistical model, supply-chain management, survivorship bias, text mining, too big to fail, web application

Delimiters Delimiters separate fields and records within a file. For example, in a comma-separated file, the newline character separates records, and commas separate fields. Picking a delimiter lives in a class of problems that sucks more energy than is possible to imagine. I know of a production system that uses four bars (||||) to delimit strings, to work around not being able to modify the database schema and a fear of modifying the underlying data to escape text. Paying homage to http://xkcd.com/927/, the ASCII standard includes delimiter characters that are almost never used. ASCII 30 is the record separator and ASCII 31 is designated for delimiting fields. In principle, user-facing systems reading in this data could neatly align things together in a tabular form. In practice, there are no user-facing systems that I’m aware of that even support these delimiters, let alone do something nice for the user.
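A short Python sketch of the ASCII 30/31 convention mentioned above (record separator 0x1E between records, unit/field separator 0x1F between fields); nothing here is specific to any particular system:

RS, FS = "\x1e", "\x1f"  # ASCII 30 record separator, ASCII 31 field separator

def encode(rows):
    """Join fields with the field separator and records with the record separator."""
    return RS.join(FS.join(fields) for fields in rows)

def decode(blob):
    """Split a blob back into a list of records, each a list of fields."""
    return [record.split(FS) for record in blob.split(RS)]

data = [["1", "test message", "no commas, bars, or tabs to escape"],
        ["2", "another message", "free-form text"]]
blob = encode(data)
assert decode(blob) == data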

Whatever our schema, using SQL and RDBMS technology to solve graph problems courts an impedance mismatch often seen in object-oriented systems backed by relational databases. Impedance matching originates in the field of electrical engineering and is the process of matching source and load impedance to maximize power transfer and minimize signal reflection in the transmission line. The object-relational impedance mismatch is a metaphor applying when a relational database schema (source) is at odds with the target object model in object-oriented languages such as Java or Ruby (load), necessitating a mapping between the two (signal reflection/power loss). Object-relational mapping (ORM) frameworks such as Hibernate, MyBatis, and ActiveRecord attempt to abstract away this complexity. The downside is that each tool introduces a potentially long learning curve, possible performance concerns, and of course, bugs.


pages: 448 words: 84,462

Testing Extreme Programming by Lisa Crispin, Tip House

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

c2.com, continuous integration, data acquisition, database schema, Donner party, Drosophila, hypertext link, index card, job automation, web application

In a way, these tests defy the whole idea of design, because if you can spec them out in advance, they aren't really unexpected, are they? This is definitely an area where you may need to wait for the system to be available and then play with it to get an idea of how to abuse it. These kinds of tests require a lot of creativity; have fun with them. Sometimes it's tough to even think how to test a particular item. For example, what if a story implements a data model within the database schema but no story uses it in a user interface? It may be that this type of test is better handled by the unit tests—it may not even be appropriate to have a separate acceptance test. Consider whether it impinges on the customer in some indirect way, and design tests for that. After all, if the customer really can't tell whether the story has been implemented or not, that should be a pretty quick story to implement, eh?

The same operations on different items in the database needed completely different classes and thus completely different tests. Each test was unique code from start to finish. A SubmitRequest for content item A would not work for submitting a request for content item B. Very little code could be shared between test scripts. The team was next asked to develop a similar application for a different Web site but was allowed to design the database schema. They were able to make this application much more generic, using only four tables. The object model needed only nine classes. Now the SubmitRequest test script worked for every type of content. This made test script development go much faster, and the scripts were vastly easier to maintain. Rethinking the design for all the content management, the team came up with a design needing just two tables, where a single definition of content could be used to represent all types of information.


pages: 713 words: 93,944

Seven Databases in Seven Weeks: A Guide to Modern Databases and the NoSQL Movement by Eric Redmond, Jim Wilson, Jim R. Wilson

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Amazon Web Services, create, read, update, delete, data is the new oil, database schema, Debian, domain-specific language, en.wikipedia.org, fault tolerance, full text search, general-purpose programming language, linked data, MVC pattern, natural language processing, node package manager, random walk, recommendation engine, Ruby on Rails, Skype, social graph, web application

Each day should contain a count of the number of events for that date or should remain blank if no event occurs. 2.4 Day 3: Full-Text and Multidimensions We’ll spend Day 3 investigating the many tools at our disposal to build a movie query system. We’ll begin with the many ways that PostgreSQL can search actor/movie names using fuzzy string matching. Then we’ll discover the cube package by creating a movie suggestion system based on similar genres of movies we already like. Since these are all contributed packages, the implementations are special to PostgreSQL and not part of the SQL standard. Commonly, when designing a relational database schema, you’ll start with an entity diagram. We’ll be writing a personal movie suggestion system that keeps track of movies, their genres, and their actors, as modeled in Figure 6, ​Our movie suggestion system​. * * * Figure 6. Our movie suggestion system As a reminder, on Day 1 we installed several contributed packages. Today we’ll need them all. Again, the list we’ll need installed is as follows: tablefunc, dict_xsyn, fuzzystrmatch, pg_trgm, and cube.

When you don’t know in advance what exactly your data will look like, document databases are a good bet. Also, because of the nature of documents, they often map well to object-oriented programming models. This means less impedance mismatch when moving data between the database model and application model. Not-So-Good For: If you’re used to performing elaborate join queries on highly normalized relational database schemas, you’ll find the capabilities of document databases lacking. A document should generally contain most or all of the relevant information required for normal use. So while in a relational database you’d naturally normalize your data to reduce or eliminate copies that can get out of sync, with document databases, denormalized data is the norm. Graph Graph databases are an emerging class of database that focuses more on the free interrelation of data than the actual values.


pages: 696 words: 111,976

SQL Hacks by Andrew Cumming, Gordon Russell

bioinformatics, business intelligence, business process, database schema, en.wikipedia.org, Erdős number, Firefox, full text search, Hacker Ethic, Paul Erdős, Stewart Brand, web application

The marshalling of columns for transport, sending them to your code, and then splitting them up again all require CPU time and effort. You could perform the addition in the database system and save all that code. If lots of code needs those rows added, you could put that processing into a view. This might be more reliable. It would certainly promote code reuse, and would offer a single point of control so that when the database schema is altered, you've just got to modify the view. Put another way, the choice comes down to this: mysql> CREATE TABLE widetable (a INT,b INT, c INT, d INT, e INT); Query OK, 0 rows affected (0.00 sec) mysql> INSERT INTO widetable VALUES (5,10,19,11,3); Query OK, 1 row affected (0.00 sec) mysql> SELECT a,b,c,d,e FROM widetable; +------+------+------+------+------+ | a | b | c | d | e | +------+------+------+------+------+ | 5 | 10 | 19 | 11 | 3 | +------+------+------+------+------+ 1 row in set (0.00 sec) Then, in your programming language, do (5+10+19+11+3), throw away a,b,c,d,e, and just use the value 48 or this: mysql> SELECT a+b+c+d+e AS v FROM widetable; +------+ | v | +------+ | 48 | +------+ 1 row in set (0.00 sec) In addition to arithmetic, other calculations may best be done in the database.
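A minimal Python sketch (using the standard library's sqlite3 module, which the book does not require) contrasting the two choices and adding the view the text suggests as a single point of control:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widetable (a INT, b INT, c INT, d INT, e INT)")
conn.execute("INSERT INTO widetable VALUES (5, 10, 19, 11, 3)")

# Option 1: fetch all five columns and add them up in application code.
a, b, c, d, e = conn.execute("SELECT a, b, c, d, e FROM widetable").fetchone()
total_in_app = a + b + c + d + e

# Option 2: let the database do the addition, behind a view, so callers only
# change the view when the schema is altered.
conn.execute("CREATE VIEW widetotal AS SELECT a+b+c+d+e AS v FROM widetable")
(total_in_db,) = conn.execute("SELECT v FROM widetotal").fetchone()

assert total_in_app == total_in_db == 48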

You simply have to ensure that your original string is unusual enough so as not to appear accidentally in another context. Another approach is to have the table names combined with a program variable, and then to set the variable to the new prefix. 11.3.2. Keep Your CREATE Script When I am working on a new application one of the first things I do is to create a file called mktable.sql that has the CREATE script for every table being used. Of course, I never get the database schema right the first time, so I have to edit and run the mktable.sql script many times before I have it the way I want it: DROP TABLE casting; DROP TABLE movie; DROP TABLE actor; CREATE TABLE actor (id INTEGER NOT NULL ,name VARCHAR(35) ,PRIMARY KEY (id) ); CREATE INDEX actor_name ON actor(name); CREATE TABLE movie (id INTEGER NOT NULL ,title VARCHAR(70) ,yr DECIMAL(4) ,score FLOAT ,votes INTEGER ,director INTEGER ,PRIMARY KEY (id) ,FOREIGN KEY (director) REFERENCES actor(id) ); CREATE INDEX movie_title ON movie(title); CREATE TABLE casting (movieid INTEGER NOT NULL ,actorid INTEGER NOT NULL ,ord INTEGER ,PRIMARY KEY (movieid, actorid) ,FOREIGN KEY (movieid) REFERENCES movie(id) ,FOREIGN KEY (actorid) REFERENCES actor(id) ); Notice that the DROP commands come before the CREATE commands.


pages: 153 words: 27,424

REST API Design Rulebook by Mark Masse

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

anti-pattern, conceptual framework, create, read, update, delete, data acquisition, database schema, hypertext link, information retrieval, web application

For example, this URI design: http://api.soccer.restapi.org/leagues/seattle/teams/trebuchet indicates that each of these URIs should also identify an addressable resource: http://api.soccer.restapi.org/leagues/seattle/teams http://api.soccer.restapi.org/leagues/seattle http://api.soccer.restapi.org/leagues http://api.soccer.restapi.org Resource modeling is an exercise that establishes your API’s key concepts. This process is similar to the data modeling for a relational database schema or the classical modeling of an object-oriented system. Before diving directly into the design of URI paths, it may be helpful to first think about the REST API’s resource model. Resource Archetypes When modeling an API’s resources, we can start with some basic resource archetypes. Like design patterns, the resource archetypes help us consistently communicate the structures and behaviors that are commonly found in REST API designs.


pages: 999 words: 194,942

Clojure Programming by Chas Emerick, Brian Carper, Christophe Grand

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Amazon Web Services, Benoit Mandelbrot, cloud computing, continuous integration, database schema, domain-specific language, don't repeat yourself, en.wikipedia.org, failed state, finite state, Firefox, game design, general-purpose programming language, Guido van Rossum, Larry Wall, mandelbrot fractal, Paul Graham, platform as a service, premature optimization, random walk, Ruby on Rails, Schrödinger's Cat, semantic web, software as a service, sorting algorithm, Turing complete, type inference, web application

Without those middlemen, there is no need to maintain a canonical mapping of things to their IDs: either the “things” are different and, as values, are enough to identify themselves, or they are not. As a bonus, eliminating synthetic identifiers means that different threads or even processes no longer need to communicate to assign unique identifiers to chunks of data, because you can derive natural identifiers from the data at hand. The debate of the respective virtues of synthetic keys versus natural keys is not new, but is still very much alive when it comes to database schema design. The arguments for synthetic identifiers revolve around two preoccupations: that data is going to survive the process that created it (and the rules it embodies), and that data is hard to refactor. The implication is that synthetic identifiers are of little use for in-process or otherwise application-level computation. A typical example of unnecessary synthetic identifiers is the distinct states generated by parsers or regular expression engines.
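One way to read this in code: derive the identifier from the value itself, so no central authority has to hand out synthetic IDs. A minimal Python sketch (not from the book; content hashing is just one possible choice of natural key):

import hashlib
import json

def natural_key(value):
    """Derive a key from the value itself: equal values get equal keys,
    with no coordination between threads or processes."""
    canonical = json.dumps(value, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

state_a = {"seen": ["a", "b"], "accepting": False}
state_b = {"accepting": False, "seen": ["a", "b"]}  # same value, built differently
assert natural_key(state_a) == natural_key(state_b)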

If all we want is to run a query and fetch the results in their entirety, we can easily set up a utility function to do this for us: (defn fetch-results [db-spec query] (jdbc/with-connection db-spec (jdbc/with-query-results res query (doall res)))) ;= #'user/fetch-results (fetch-results db-spec ["SELECT * FROM authors"]) ;= ({:id 1, :first_name "Chas", :last_name "Emerick"} ;= {:id 2, :first_name "Christophe", :last_name "Grand"} ;= {:id 3, :first_name "Brian", :last_name "Carper"}) Transactions Performing database operations as a single transaction is a simple matter of wrapping your code in a transaction form. transaction accepts any number of forms and executes each in turn as part of a single transaction. If an exception is thrown, or if the operation(s) performed will violate a constraint established in your database schema, the transaction will be rolled back. If the body of transaction finishes executing without error, the transaction is committed. (jdbc/with-connection db-spec (jdbc/transaction (jdbc/delete-rows :authors ["id = ?" 1]) (throw (Exception. "Abort transaction!")))) ;= ; Exception Abort transaction! (fetch-results db-spec ["SELECT * FROM authors where id = ?" 1]) ;= ({:id 1, :first_name "Chas", :last_name "Emerick"}) Here we throw an exception to forcibly abort our transaction.

DOCTYPE hibernate-configuration SYSTEM "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <property name="hibernate.connection.driver_class">org.sqlite.JDBC</property> <property name="hibernate.connection.url">jdbc:sqlite::memory:</property> <property name="hibernate.dialect">org.hibernate.dialect.HSQLDialect</property> <!-- Drop and re-create the database schema on startup --> <property name="hbm2ddl.auto">create</property> <mapping class="com.clojurebook.hibernate.Author"/> </session-factory> </hibernate-configuration> Finally, if we’re using Leiningen, we just need to add a couple of keys to our project.clj; one to indicate the source root for Java code we’d like it to compile (i.e., Author.java), and another to indicate the resources root, where we placed our hibernate.cfg.xml file: :java-source-path "java" :resources-path "rsrc" This leaves our project structure looking like so: |-- project.clj |-- rsrc | `-- hibernate.cfg.xml |-- java | `-- com | `-- clojurebook | `-- hibernate | |-- Author.java We can now compile our Java class and start a REPL to start seeing how we can use Hibernate from Clojure: % lein javac ... % lein repl We first need to import the required Java classes from Hibernate, and our new Author class: (import 'org.hibernate.SessionFactory 'org.hibernate.cfg.Configuration 'com.clojurebook.hibernate.Author) Hibernate requires setting up a session factory object to allow us to open and close database sessions and run queries.


pages: 648 words: 108,814

Solr 1.4 Enterprise Search Server by David Smiley, Eric Pugh

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Amazon Web Services, bioinformatics, cloud computing, continuous integration, database schema, domain-specific language, en.wikipedia.org, fault tolerance, Firefox, information retrieval, Internet Archive, Ruby on Rails, web application, Y Combinator

Another interesting bit of data MusicBrainz stores is the PUID "lookup count", which is how often it has been requested by their servers—a decent measure of popularity. MusicBrainz uses the proprietary "MusicDNS" audio fingerprinting technology, which was donated to the project by MusicIP. It is not open source. One combined index or multiple indices As mentioned in the first chapter, technically, an index is basically like a single-table database schema. Imagine a massive spreadsheet, if you will. In spite of this limitation, there is nothing to stop you from putting different types of data (say, artists and tracks from MusicBrainz) into a single index, thereby, in effect, mitigating this limitation. All you have to do is use different fields for the different document types, and use a field to discriminate between the types. An identifier field would need to be unique across all documents in this index, no matter the type, so you could easily do this by concatenating the field type and the entity's identifier.

DIH fields and transformers Within the <entity/> are some <field/> elements that declare how the columns in the query map to Solr. The field element must have a column attribute that matches the corresponding named column in the SQL query. The name attribute is the Solr schema field name that the column is going into. If it is not specified (and it never is for our example), then it defaults to the column name. Use the SQL AS keyword as we've done to use the same names as the Solr schema instead of the database schema. This reduces the number of explicit mappings needed in <field/> elements and shortens existing ones. When a column in the result can be placed directly into Solr without further processing, there is no need to specify the field declaration, because it is implied. An attribute of the entity declaration that we didn't mention yet is transformer. This declares a comma-separated list of transformers that manipulate the transfer of data from the JDBC resultset into a Solr field.


pages: 420 words: 79,867

Developing Backbone.js Applications by Addy Osmani

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Airbnb, anti-pattern, create, read, update, delete, database schema, don't repeat yourself, Firefox, full text search, Google Chrome, Khan Academy, loose coupling, MVC pattern, node package manager, pull request, Ruby on Rails, side project, single page application, web application

Let us go back to server.js and define a simple route: // Routes app.get( '/api', function( request, response ) { response.send( 'Library API is running' ); }); The get function takes a URL as the first parameter and a function as the second. The function will be called with request and response objects. Now you can restart node and go to our specified URL: Connect to the database Fantastic. Now, since we want to store our data in MongoDB, we need to define a schema. Add this to server.js: //Connect to database mongoose.connect( 'mongodb://localhost/library_database' ); //Schemas var Book = new mongoose.Schema({ title: String, author: String, releaseDate: Date }); //Models var BookModel = mongoose.model( 'Book', Book ); As you can see, schema definitions are quite straight forward. They can be more advanced, but this will do for us. I also extracted a model (BookModel) from Mongo. This is what we will be working with. Next up, we define a GET operation for the REST API that will return all books: //Get a list of all books app.get( '/api/books', function( request, response ) { return BookModel.find( function( err, books ) { if( !


pages: 420 words: 61,808

Flask Web Development: Developing Web Applications With Python by Miguel Grinberg

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

database schema, Firefox, full text search, Minecraft, platform as a service, web application

Flask-SQLAlchemy creates database tables from models only when they do not exist already, so the only way to make it update tables is by destroying the old tables first, but of course this causes all the data in the database to be lost. A better solution is to use a database migration framework. In the same way source code version control tools keep track of changes to source code files, a database migration framework keeps track of changes to a database schema, and then incremental changes can be applied to the database. The lead developer of SQLAlchemy has written a migration framework called Alembic, but instead of using Alembic directly, Flask applications can use the Flask-Migrate extension, a lightweight Alembic wrapper that integrates with Flask-Script to provide all operations through Flask-Script commands. Creating a Migration Repository To begin, Flask-Migrate must be installed in the virtual environment: (venv) $ pip install flask-migrate Example 5-8 shows how the extension is initialized.
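As a rough sketch of what that initialization looks like (this is not the book's Example 5-8, and it assumes an older Flask-Migrate release that still ships the Flask-Script MigrateCommand), the wiring is only a few lines:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_script import Manager
from flask_migrate import Migrate, MigrateCommand

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///data.sqlite'  # illustrative URI

db = SQLAlchemy(app)
manager = Manager(app)

migrate = Migrate(app, db)                 # hooks Alembic into the app and db
manager.add_command('db', MigrateCommand)  # exposes "python manage.py db ..."

if __name__ == '__main__':
    manager.run()

With this in place, the db subcommand drives the migration workflow: db init creates the migration repository, db migrate generates a migration script from model changes, and db upgrade applies it incrementally.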

Learning Node.js: A Hands-On Guide to Building Web Applications in JavaScript by Marc Wandschneider

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

database schema, en.wikipedia.org, Firefox, Google Chrome, node package manager, telemarketer, web application

Note that the 2.x series of the mysql module is a significant improvement and departure from the 0.x series. Again, one of the trickier parts of Node.js is that it’s constantly under development and improvement; things are still changing quite a bit. You use the 2.x series in this book because it’s much more robust than what was there before. Creating a Schema for the Database When working with MySQL, you need to create a database schema for the application, which you place in schema.sql. The first part of this is to create a database with UTF-8 as the default character set and sorting order, as follows: DROP DATABASE IF EXISTS PhotoAlbums; CREATE DATABASE PhotoAlbums DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci; USE PhotoAlbums; You then have to create a schema for your Users table, where you place information for registered users of your photo-sharing app.
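Purely as an illustration of the kind of DDL that ends up in schema.sql (the column names here are invented, not the book's, and Python's MySQLdb driver is used only to keep this listing's sketches in one language rather than Node.js):

import MySQLdb

ddl = """
CREATE TABLE IF NOT EXISTS Users (
    user_uuid      VARCHAR(36)  NOT NULL PRIMARY KEY,
    email_address  VARCHAR(255) NOT NULL UNIQUE,
    display_name   VARCHAR(100) NOT NULL,
    password_hash  VARCHAR(255) NOT NULL,
    last_seen_date DATETIME
) DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
"""

# Credentials and database name below are placeholders.
conn = MySQLdb.connect(user="root", passwd="", db="PhotoAlbums")
try:
    conn.cursor().execute(ddl)
    conn.commit()
finally:
    conn.close()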


pages: 315 words: 70,044

Learning SPARQL by Bob DuCharme

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

database schema, Donald Knuth, en.wikipedia.org, G4S, linked data, semantic web, SPARQL, web application

Many developers are finding that the easiest way to make relational data available is to install a middleware layer that accepts SPARQL queries, translates them to SQL, queries the relational data, and returns the data using the SPARQL Query Results XML Format, as shown in Figure 7-5. If you’ve followed along with the examples in this chapter, then you’ve already used one of these combinations of an RDBMS and SPARQL middleware without knowing it: the Linked Movie Database stores its data using the MySQL database and uses the D2RQ server as a middleware layer. D2RQ is free, and once you give it read access to a relational database, it can read the database schema and generate the files it needs to map SPARQL queries to SQL for that database. (These include the resource and property names that you’ll use in your queries, and can be customized.) If the same D2RQ server has this kind of access to multiple databases, a single SPARQL query can ask for data from multiple databases at once, which is not possible with a standard SQL query. The Free University of Berlin team that developed D2RQ came up with their own ontology for mapping between relational schemas and RDF vocabularies.
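A small sketch of what querying such a middleware layer from a program might look like, using Python's SPARQLWrapper against a D2R Server endpoint (the endpoint URL, prefix, and property names below are assumptions for illustration; an actual D2RQ mapping generates its own vocabulary, and JSON results are requested here simply for convenience):

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:2020/sparql")  # typical D2R Server port; adjust as needed
sparql.setQuery("""
    PREFIX vocab: <http://localhost:2020/resource/vocab/>
    SELECT ?lastname WHERE {
        ?person vocab:people_lastname ?lastname .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["lastname"]["value"])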


pages: 1,266 words: 278,632

Backup & Recovery by W. Curtis Preston

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Berlin Wall, business intelligence, business process, database schema, Debian, dumpster diving, failed state, fault tolerance, full text search, job automation, side project, Silicon Valley, web application

Fortunately, even if Oracle is down, there is an easy way to determine which files belong to the required tablespaces. Finding out if the datafile contains a rollback segment is a little more difficult, but it is still possible. If you use rman with a recovery catalog, you can connect to the catalog, set the DBID of the database, and issue the report schema command. This command displays the files, locations, and whether they contain rollback segments.

RMAN> report schema;
Report of database schema
File Size(MB)   Tablespace  RB segs  Datafile Name
---- ---------- ----------- -------- -------------------
1    307200     SYSTEM      NO       /oracle/oradata/trgt/system01.dbf
2    20480      UNDOTBS     YES      /oracle/oradata/trgt/undotbs01.dbf
3    10240      CWMLITE     NO       /oracle/oradata/trgt/cwmlite01.dbf
4    10240      DRSYS       NO       /oracle/oradata/trgt/drsys01.dbf
5    10240      EXAMPLE     NO       /oracle/oradata/trgt/example01.dbf
6    10240      INDX        NO       /oracle/oradata/trgt/indx01.dbf
7    10240      TOOLS       NO       /oracle/oradata/trgt/tools01.dbf
8    10240      USERS       NO       /oracle/oradata/trgt/users01.dbf

To find out which datafiles are in the system tablespace, you’ll need to query sys.dba_data_files:

SQL> select file_name, tablespace_name from sys.dba_data_files;
/oracle/oradata/trgt/system01.dbf   SYSTEM
/oracle/oradata/trgt/undotbs01.dbf  UNDOTBS
/oracle/oradata/trgt/cwmlite01.dbf  CWMLITE
/oracle/oradata/trgt/drsys01.dbf    DRSYS
/oracle/oradata/trgt/example01.dbf  EXAMPLE
/oracle/oradata/trgt/indx01.dbf     INDX
/oracle/oradata/trgt/tools01.dbf    TOOLS
/oracle/oradata/trgt/users01.dbf    USERS

This example report shows three datafiles that are members of the system, sysaux, and undo tablespaces.

restoring, Restoring an RDBMS, Loss of Any Nondata Disk, Loss of a Data Disk, Loss of a Data Disk, Online Partial Restores, Documentation and Testing documentation and testing, Documentation and Testing loss of a data disk, Loss of a Data Disk loss of a nondata disk, Loss of Any Nondata Disk loss of the master database, Loss of a Data Disk online partial restores, Online Partial Restores rollback log, Rollback log row (tuple), Row segments, Tablespace set up information, recording, Take an Inventory tables, Table tablespace, Extents transaction, Transaction transaction log, Transaction log transaction logs, importance of, Don’t Go Overboard view, Table what can go wrong, What Can Happen to an RDBMS? DB2, DB2 Architecture, Managing archive logs, Instance, Databases, Schemas, Tables, Views, Indexes, DB2 engine dispatch units, System catalog tables, Database partition, Database partition, Containers, Tablespaces, Tablespaces, Tablespaces, Large objects (LOBs), Large objects (LOBs), Transaction logs, Transaction logs, Managing archive logs, Managing archive logs, Managing archive logs, The backup Command, Backup levels, Backup levels, Backup levels, Backup path and filenaming convention, Backup path and filenaming convention, Discovering the history of your backup operations, Automatic maintenance, Automatic maintenance, Crash recovery, Version recovery, Rollforward recovery, The restore Command, The rollforward Command, The recover Command, Performing an In-Place Version Recovery, Performing a Redirected Version Recovery, Performing a Rollforward Recovery, Performing a Rollforward Recovery, Reorganizing Data and Collecting Statistics architecture, DB2 Architecture, Managing archive logs, Instance, Databases, Schemas, Tables, Views, Indexes, DB2 engine dispatch units, System catalog tables, Database partition, Database partition, Containers, Tablespaces, Tablespaces, Tablespaces, Large objects (LOBs), Large objects (LOBs), Transaction logs, Transaction logs, Managing archive logs, Managing archive logs archive logging, Managing archive logs circular logging, Managing archive logs container, Containers database, Databases database managed spaces (DMS), Tablespaces engine dispatch units (EDUs), DB2 engine dispatch units index, Indexes instance, Instance large (or long) tablespace, Large objects (LOBs) large objects (LOBs), Large objects (LOBs) partition, Database partition partition group, Database partition schema, Schemas system catalog tables, System catalog tables system managed spaces (SMS), Tablespaces tables, Tables tablespaces, Tablespaces transaction logs, Transaction logs view, Views write ahead logging, Transaction logs automated backup utilities, Automatic maintenance backup command, The backup Command, Backup levels, Backup levels, Backup levels, Backup path and filenaming convention, Discovering the history of your backup operations delta backup, Backup levels full backup, Backup levels incremental backup, Backup levels pathname and filename, Backup path and filenaming convention recovery history file, Discovering the history of your backup operations crash recovery, Crash recovery health monitor, Automatic maintenance in-place version recovery, Performing an In-Place Version Recovery offline backup, Managing archive logs partial backups, Backup path and filenaming convention recover command, The recover Command redirected version recovery, Performing a Redirected Version Recovery restore command, The restore Command rollforward command, The rollforward Command rollforward 
recovery, Rollforward recovery, Performing a Rollforward Recovery, Performing a Rollforward Recovery point in time (PIT), Performing a Rollforward Recovery statistics, gathering, Reorganizing Data and Collecting Statistics version recovery, Version recovery DB2 UDB, IBM DB2 Backup and Recovery, Reorganizing Data and Collecting Statistics dd utility, Backups change atime, Backing Up and Restoring with the dd Utility, Basic dd Options, Using dd to Copy a File or Raw Device, Using dd to Convert Data, Using dd to Determine the Block Size of a Tape, Using dd to Figure out the Backup Format atime, changing, Backups change atime backup format, Using dd to Figure out the Backup Format convert data with, Using dd to Convert Data copy a file or raw device with, Using dd to Copy a File or Raw Device determine block size of a tape with, Using dd to Determine the Block Size of a Tape options, Basic dd Options de-duplication, De-Duplication Backup Systems, De-duplication backup systems, De-Duplication Backup Systems density versus compression, Density Versus Compression development procedures (backups), following proper, Following Proper Development Procedures differential backups, Backup Levels digital audio tape (DAT), DDS drive Digital Data Storage (DDS), DDS drive Digital Linear Tape (DLT), DLT drives (end-of-lifed) directories, cpio’s Restore Options, Torture-Testing Backup Programs, Pass I, Pass III cpio restore, making needed, cpio’s Restore Options data, writing (dump utility), Pass III dump utility, evaluating for backup, Pass I files, becoming (volatile filesystem backups), Torture-Testing Backup Programs disasters, types of, Be Ready for Anything: 10 Types of Disasters, Be Ready for Anything: 10 Types of Disasters disk drives, Be Ready for Anything: 10 Types of Disasters, Reliability failure, losing data from, Be Ready for Anything: 10 Types of Disasters versus tape media, Reliability disk mirroring, System Recovery and Disk Mirroring disk staging, Disk Targets disk-as-disk targets, Disk-As-Disk Targets disk-as-tape units (virtual tape libraries), Disk-As-Tape: Virtual Tape Libraries disk-based backup, You Want to Learn About Disk-Based Backup disk-to-disk-to-tape (D2D2T) backup, You Want to Learn About Disk-Based Backup disks left powered off, Re-presentation ditto utility, Backing Up and Restoring with the ditto Utility, Syntax of ditto When Backing Up, The Options to the ditto Command, Syntax of ditto when Restoring options, The Options to the ditto Command restoring with, Syntax of ditto when Restoring syntax, Syntax of ditto When Backing Up documentation, Take an Inventory, 12,000 gold pieces, Self-preservation: Document, document, document backups, 12,000 gold pieces importance in backup plan, Take an Inventory restore program, importance of, Self-preservation: Document, document, document downtime, cost of, What Will Downtime Cost You?



pages: 602 words: 207,965

Practical Ext JS Projects With Gears by Frank Zammetti

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

a long time ago in a galaxy far, far away, Albert Einstein, corporate raider, create, read, update, delete, database schema, en.wikipedia.org, Firefox, full text search, Gordon Gekko, Larry Wall, loose coupling, Ronald Reagan, web application

If you’re ready to move ahead, though, let’s start by taking a look at the UML class diagram for the DAO class housed in the DAO.js file, seen in Figure 4-4. Figure 4-4. UML class diagram of the DAO class The class starts out with the databaseName field that names the Gears database we’ll be using. The value in this case is TimekeeperExt. Following that we find 15 fields, the value of each of which is an SQL statement. There are three types of entities this application deals with: projects, tasks, and resources. The database schema is simple, as you can see in Figure 4-5, Figure 4-6, and Figure 4-7—one for each of the three tables corresponding to the three entities involved. Figure 4-5. Table structure of the projects table Figure 4-6. Table structure of the tasks table Figure 4-7.

To do that we’ll have to play with Gears and SQLite a bit: var db = google.gears.factory.create("beta.database"); db.open(inDatabaseName); var rs = db.execute( "SELECT name, sql FROM sqlite_master where type='table';" ); Figure 7-10. The Tables Window Every SQLite database has a special read-only table named sqlite_master in it. The data in this table describes the database schema. The structure of this table is always what is shown in Table 7-1.

Table 7-1. The Structure of the sqlite_master Table
type: Tells the type of entity the record describes. For our purposes, the only value we care about is table.
name: Gives you the name of the entity the record describes. For our purposes, that means the name of the table.
tbl_name: When the value of type is index, the record is describing a table index.
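The same introspection works against any SQLite database, not just one opened through Gears; as a quick illustration outside the book's JavaScript, Python's standard sqlite3 module can run the identical query (the database filename here is a placeholder):

import sqlite3

conn = sqlite3.connect("workbench.db")  # placeholder path
try:
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE type='table';"
    )
    for name, create_sql in rows:
        # name is the table's name; sql is the CREATE statement that built it
        print(name)
        print(create_sql)
finally:
    conn.close()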


The Art of Scalability: Scalable Web Architecture, Processes, and Organizations for the Modern Enterprise by Martin L. Abbott, Michael T. Fisher

always be closing, anti-pattern, barriers to entry, Bernie Madoff, business climate, business continuity plan, business intelligence, business process, call centre, cloud computing, combinatorial explosion, commoditize, Computer Numeric Control, conceptual framework, database schema, discounted cash flows, en.wikipedia.org, fault tolerance, finite state, friendly fire, hiring and firing, Infrastructure as a Service, inventory management, new economy, packet switching, performance metric, platform as a service, Ponzi scheme, RFC: Request For Comment, risk tolerance, Rubik’s Cube, Search for Extraterrestrial Intelligence, SETI@home, shareholder value, Silicon Valley, six sigma, software as a service, the scientific method, transaction costs, Vilfredo Pareto, web application, Y2K

ROLLBACK CAPABILITIES Rollback Technology Considerations We often hear during our discussions around the rollback insurance policy that clients in general agree that being able to roll back would be great but that it is technically not feasible for them. Our answer to this is that it is almost always possible; it just may not be possible with your current team, processes, or architecture. The most commonly cited reason for an inability to roll back in Web-enabled platforms and back-office IT systems is database schema incompatibility. The argument usually goes that for any major development effort, there may be significant changes to the schema resulting in an incompatibility with the way old and new data is stored. This modification may result in table relationships changing, candidate keys changing, table columns changing, tables added, tables merged, tables disaggregated, and tables removed. The key to fixing these database issues is to grow your schema over time and keep old database relationships and entities for at least as long as it would require you to roll back to them should you run into significant performance issues.
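One way to picture the "grow your schema" advice is an additive migration that never renames or drops anything the previously deployed release depends on; the sketch below expresses that with Alembic (one tool among many, and not one the chapter prescribes), using hypothetical table and column names:

import sqlalchemy as sa
from alembic import op

def upgrade():
    # Add the new column as nullable and leave existing columns untouched, so
    # the previously deployed release can still read and write this table.
    op.add_column("customers",
                  sa.Column("preferred_name", sa.String(100), nullable=True))

def downgrade():
    # Rolling back is cheap because nothing was renamed, dropped, or repurposed.
    op.drop_column("customers", "preferred_name")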

They do not tend to significantly increase the complexity of your operations or production environment. X-axis implementations are limited by the aforementioned replication technology limitations and the size of data that is being replicated. In general, x-axis implementations do not scale well with data size and growth. The Y-Axis of the AKF Database Scale Cube The y-axis of the AKF Database Scale Cube represents a separation of data meaning within your database schema or data storage system. When discussing database scale, we are usually aligning data with a predetermined application y-axis split. The y-axis split addresses the monolithic nature of the data architecture by separating the data into schemas that have meaning relative to the applications performing work on that data or reading from that data.
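A toy sketch of a y-axis split, with invented service names and connection strings: each slice of the schema is owned by the part of the application that works on it, and code reaches only the database that holds its own kind of data.

DATABASES = {
    "customers": "postgresql://db-customers.internal/customers",
    "catalog":   "postgresql://db-catalog.internal/catalog",
    "orders":    "postgresql://db-orders.internal/orders",
}

def database_for(entity_kind):
    # Route to the schema that owns this kind of data.
    return DATABASES[entity_kind]

orders_dsn = database_for("orders")    # checkout code talks only to orders
catalog_dsn = database_for("catalog")  # browse/search reads only the catalog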


pages: 314 words: 94,600

Business Metadata: Capturing Enterprise Knowledge by William H. Inmon, Bonnie K. O'Neil, Lowell Fryman

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

affirmative action, bioinformatics, business intelligence, business process, call centre, carbon-based life, continuous integration, corporate governance, create, read, update, delete, database schema, en.wikipedia.org, informal economy, knowledge economy, knowledge worker, semantic web, The Wisdom of Crowds, web application

Noted author and data modeler extraordinaire David Hay is fond of emphasizing the importance of ER models (as are the authors of the present volume) and credits the Barker Notation for much of this communication facilitation. For more detail in this regard, see his book Data Model Patterns: Conventions of Thought1. It is therefore the consensus of the authors that an ER model constitutes business metadata because it communicates relationships between concepts in both a graphical and linguistic way. Parenthetically, ER models are also used to generate technical metadata (database schema), and to communicate relationships between data elements to technical people: database designers, database administrators (DBAs), and developers. 11.4.5 Conceptual Model, RDF and OWL, Topic Map, UML 11.4.5.1 Conceptual Model The techniques we have just discussed (glossaries, thesauri, and data models) are used to capture business metadata in ways that can be understood by human beings.


Python Geospatial Development - Second Edition by Erik Westra

capital controls, database schema, Firefox, Golden Gate Park, Google Earth, Mercator projection, natural language processing, openstreetmap, Silicon Valley, web application

Shoreline and lake boundaries (as well as other land-water boundaries such as islands within lakes) are readily available using the GSHHS shoreline database. City and town data can be found in two places: The GNIS Database (http://geonames.usgs.gov/domestic) provides official place-name data for the United States, while the GEOnet Names Server (http://earth-info.nga.mil/gns/html) provides similar data for the rest of the world. Looking at these data sources, we can start to design the database schema for the DISTAL system: Tip The level field in the shorelines table corresponds to the level value in the GSHHS database: a value of 1 represents a coastline, 2 represents a lake, 3 represents an island within a lake, and 4 represents a pond on an island in a lake. While this is very simple, it's enough to get us started. Let's use this schema to create our database, firstly in MySQL: import MySQLdb connection = MySQLdb.connect(user="


pages: 1,758 words: 342,766

Code Complete (Developer Best Practices) by Steve McConnell

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Ada Lovelace, Albert Einstein, Buckminster Fuller, call centre, choice architecture, continuous integration, data acquisition, database schema, don't repeat yourself, Donald Knuth, fault tolerance, Grace Hopper, haute cuisine, if you see hoof prints, think horses—not zebras, index card, inventory management, iterative process, Larry Wall, late fees, loose coupling, Menlo Park, Perl 6, place-making, premature optimization, revision control, Sapir-Whorf hypothesis, slashdot, sorting algorithm, statistical model, Tacoma Narrows Bridge, the scientific method, Thomas Kuhn: the structure of scientific revolutions, Turing machine, web application

A telecommunications organization went from 86 percent correct before reviewing code changes to 99.6 percent afterward (Perrott 2004). Adjust your approach depending on the risk level of the refactoring Some refactorings are riskier than others. A refactoring like "Replace a magic number with a named constant" is relatively risk-free. Refactorings that involve class or routine interface changes, database schema changes, or changes to boolean tests, among others, tend to be more risky. For easier refactorings, you might streamline your refactoring process to do more than one refactoring at a time and to simply retest, without going through an official review. For riskier refactorings, err on the side of caution. Do the refactorings one at a time. Have someone else review the refactoring or use pair programming for that refactoring, in addition to the normal compiler checking and unit tests.

You can deal with proliferating software versions by using version-control tools for Source-code control Dependency control like that offered by the make utility associated with UNIX Project documentation versioning Relating project artifacts like requirements, code, and test cases so that when a requirement changes, you can find the code and tests that are affected Data Dictionaries A data dictionary is a database that describes all the significant data in a project. In many cases, the data dictionary focuses primarily on database schemas. On large projects, a data dictionary is also useful for keeping track of the hundreds or thousands of class definitions. On large team projects, it's useful for avoiding naming clashes. A clash might be a direct, syntactic clash, in which the same name is used twice, or it might be a more subtle clash (or gap) in which different names are used to mean the same thing or the same name is used to mean subtly different things.


pages: 603 words: 141,814

Python for Unix and Linux System Administration by Noah Gift, Jeremy M. Jones

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

Amazon Web Services, bash_history, Bram Moolenaar, cloud computing, create, read, update, delete, database schema, Debian, distributed revision control, Firefox, Guido van Rossum, industrial robot, inventory management, job automation, Mark Shuttleworth, MVC pattern, skunkworks, web application

Here is the relevant line from the URL config file: # Uncomment this for admin: (r'^admin/', include('django.contrib.admin.urls')), The tool that created the urls.py created it with a line to include the admin interface, but the line needs to be uncommented. You can see that we have simply removed the # character from the beginning of the line to include the admin URLs config file. Now that we have configured a database, added the admin and inventory applications, and added the admin interface to the URLs config file, we are ready to start defining the database schema. In Django, each application has its own schema definition. In each application directory, “inventory” in this case, there is a file named models.py that contains definitions for the tables and columns that your application will use. With Django, as well as many other web frameworks that rely on ORMs, it is possible to create and use a database without having to write a single SQL expression.
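As a hypothetical sketch of what such a models.py might contain (these are not the book's actual inventory models, the field names are invented, and the code is written against current Django conventions, where ForeignKey requires on_delete):

from django.db import models

class OperatingSystem(models.Model):
    name = models.CharField(max_length=50)      # becomes a VARCHAR column
    version = models.CharField(max_length=50)

class Server(models.Model):
    hostname = models.CharField(max_length=255)
    ip_address = models.GenericIPAddressField()
    purchase_date = models.DateField(null=True, blank=True)
    operating_system = models.ForeignKey(OperatingSystem, on_delete=models.CASCADE)

Running the framework's table-creation command (syncdb in the Django of the book's era, migrate today) turns these classes into the corresponding tables without any hand-written SQL.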


pages: 462 words: 172,671

Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

continuous integration, database schema, domain-specific language, don't repeat yourself, Donald Knuth, en.wikipedia.org, Eratosthenes, finite state, Ignaz Semmelweis: hand washing, iterative process, place-making, Rubik’s Cube, web application, WebSocket

Still more subtle are the modules that have similar algorithms, but that don’t share similar lines of code. This is still duplication and should be addressed by using the TEMPLATE METHOD or STRATEGY pattern [GOF]. Indeed, most of the design patterns that have appeared in the last fifteen years are simply well-known ways to eliminate duplication. So too the Codd Normal Forms are a strategy for eliminating duplication in database schemae. OO itself is a strategy for organizing modules and eliminating duplication. Not surprisingly, so is structured programming. I think the point has been made. Find and eliminate duplication wherever you can. G6: Code at Wrong Level of Abstraction It is important to create abstractions that separate higher level general concepts from lower level detailed concepts. Sometimes we do this by creating abstract classes to hold the higher level concepts and derivatives to hold the lower level concepts.
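A tiny sketch of the TEMPLATE METHOD idea (class and method names invented): the shared skeleton lives once in a base class, and only the step that genuinely differs is overridden, so two similar-but-not-identical routines stop duplicating the skeleton.

from abc import ABC, abstractmethod

class ReportExporter(ABC):
    def export(self, records):
        # The skeleton of the algorithm, written exactly once.
        rows = self.prepare(records)
        body = self.render(rows)       # the only step that varies per format
        return body.encode("utf-8")

    def prepare(self, records):
        # Shared filtering step used by every exporter.
        return [r for r in records if r is not None]

    @abstractmethod
    def render(self, rows):
        ...

class CsvExporter(ReportExporter):
    def render(self, rows):
        return "\n".join(",".join(str(v) for v in row) for row in rows)

class HtmlExporter(ReportExporter):
    def render(self, rows):
        cells = "".join("<tr>%s</tr>" % "".join("<td>%s</td>" % v for v in row)
                        for row in rows)
        return "<table>%s</table>" % cells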

Scala in Action by Nilanjan Raychaudhuri

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

continuous integration, create, read, update, delete, database schema, domain-specific language, don't repeat yourself, en.wikipedia.org, failed state, fault tolerance, general-purpose programming language, index card, MVC pattern, type inference, web application

The following is how you represent a story class in Scala: class Story(val number: String, val title: String, val phase: String) To make this class work with Squeryl, you have to do a couple of simple setups. First you have to tell Squeryl you need a table that will store all the stories for you in the database. The way to do that in Squeryl is to create a subclass of org.squeryl.Schema. Think of this class as equivalent to a database schema where you’ll keep all the data definitions of an application. The following code defines the schema with a table called “STORIES” for your Story class: package com.kanban.models import org.squeryl._ object KanbanSchema extends Schema { val stories = table[Story]("STORIES") } Save this as the file KanbanSchema.scala under src/main/scala/com/kanban/models. One thing to note here is I’m defining the table in a type-safe manner.


pages: 1,065 words: 229,099

Real World Haskell by Bryan O'Sullivan, John Goerzen, Donald Stewart, Donald Bruce Stewart

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

bash_history, database schema, Debian, digital map, distributed revision control, domain-specific language, en.wikipedia.org, Firefox, general-purpose programming language, Guido van Rossum, job automation, Larry Wall, p-value, Plutocrats, plutocrats, revision control, sorting algorithm, transfer pricing, type inference, web application, Y Combinator

By reading the data lazily, we can print out extremely large result sets using a constant amount of memory. With the lazy version, results will be evaluated in chunks; with the strict version, all results are read up front, stored in RAM, and then printed. Database Metadata Sometimes it can be useful for a program to learn information about the database itself. For instance, a program may want to see what tables exist so that it can automatically create missing tables or upgrade the database schema. In some cases, a program may need to alter its behavior depending on the database backend in use. First, there is a getTables function that will obtain a list of defined tables in a database. You can also use the describeTable function, which will provide information about the defined columns in a given table. You can learn about the database server in use by calling dbServerVer and proxiedClientName, for instance.


pages: 719 words: 181,090

Site Reliability Engineering by Betsy Beyer, Chris Jones, Jennifer Petoff, Niall Richard Murphy

Air France Flight 447, anti-pattern, barriers to entry, business intelligence, business process, Checklist Manifesto, cloud computing, combinatorial explosion, continuous integration, correlation does not imply causation, crowdsourcing, database schema, defense in depth, DevOps, en.wikipedia.org, fault tolerance, Flash crash, George Santayana, Google Chrome, Google Earth, job automation, job satisfaction, linear programming, load shedding, loose coupling, meta analysis, meta-analysis, minimum viable product, MVC pattern, performance metric, platform as a service, revision control, risk tolerance, side project, six sigma, the scientific method, Toyota Production System, trickle-down economics, web application, zero day

In particular, some of this chapter’s authors were in charge of a portion of the serving system (shown in Figure 31-1) that continually extracts and processes data from the database, in order to generate a set of indexed files that are then loaded and served around the world. This system was distributed over several datacenters and used about 1,000 CPUs and 8 TB of RAM to index 100 TB of data every day. Figure 31-1. A generic ads serving system The migration was nontrivial: in addition to migrating to a new technology, the database schema was significantly refactored and simplified thanks to the ability of F1 to store and index protocol buffer data in table columns. The goal was to migrate the processing system so that it could produce an output perfectly identical to the existing system. This allowed us to leave the serving system untouched and to perform, from the user’s perspective, a seamless migration. As an added restriction, the product required that we complete a live migration without any disruption of the service to our users at any time.


pages: 923 words: 516,602

The C++ Programming Language by Bjarne Stroustrup

Amazon: amazon.comamazon.co.ukamazon.deamazon.fr

combinatorial explosion, conceptual framework, database schema, distributed generation, Donald Knuth, fault tolerance, general-purpose programming language, index card, iterative process, job-hopping, locality of reference, Menlo Park, Parkinson's law, premature optimization, sorting algorithm

State machines that are smaller, faster, and more correct than most programmers could produce can be generated from specifications or by a direct manipulation interface. These techniques work well in specific areas where there is either a sound theoretical foundation (e.g., math, state machines, and relational databases) or where a general framework exists into which small application fragments can be embedded (e.g., graphical user interfaces, network simulations, and database schema). The obvious usefulness of these techniques in limited – and typically crucial – areas can tempt people to think that the elimination of traditional programming by these techniques is ‘‘just around the corner.’’ It is not. The reason is that expanding specification techniques outside areas with sound theoretical frameworks implies that the complexity of a general-purpose programming language would be needed in the specification language.