Database configuration objects

QDB databases are managed with a set of configuration objects, each of which configures one database. The configuration objects are text files that provide the paths to the database schema and storage files and specify policy settings such as backups.

Because each database is described by its own configuration object and stored in its own file, QDB allows you to dynamically load and unload individual databases, so you can keep in memory only the data needed by the active client application.

For example, suppose you have two mediastores named mp3 and cdrom. You would then create two configuration objects whose file names match those of the mediastores. For an overview of all database-related files, see "Summary of database files".

Note: QDB doesn't support "on-the-fly" configuration changes after a database is loaded. To modify the configuration, you must unload the database, update its configuration object, and reload the database. You must also ensure the changes are compatible with the previous configuration.

A database configuration object is a text file that QDB parses to set up the database. Blank lines are ignored, as is any leading or trailing white space. Lines specifying parameters take the form key::value. Unknown parameter types are ignored, so you can use them as comments; you must still include the :: delimiter on a comment line for it to be parsed correctly. For example, if you enter MyComment:: on a line, QDB treats it as an unsupported parameter and ignores that line when parsing the file.
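
For example, a minimal configuration object for the mp3 mediastore mentioned above might contain lines such as the following (the paths are hypothetical):

MyComment::configuration for the mp3 mediastore
Filename::/fs/tmpfs/mp3
SchemaFile::/etc/qdb/mp3.sql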

The database is configured with the following parameters:

Filename::
This entry sets the name of the actual database file, which is the raw SQLite file. The path must be absolute, but it can point to any file location. At database loading time, either this file or the directory in which it will be created must exist; otherwise, the loading attempt fails and QDB sets the database status to Error. If the database file doesn't exist, it is restored from the newest valid backup if possible; otherwise, a blank database file is created.
SchemaFile::
This option names the file (with an absolute path) containing the SQL commands that create the initial schema of tables, indexes, and views for a new database. The schema file is used only to set up a database that doesn't already exist.

An initial schema is optional; without an initial schema, a new database will just be empty.
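
For example, a schema file might contain SQL commands such as the following (the table, index, and view shown here are hypothetical):

CREATE TABLE library (fid INTEGER PRIMARY KEY, title TEXT, artist TEXT);
CREATE INDEX library_artist_idx ON library(artist);
CREATE VIEW titles AS SELECT fid, title FROM library;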

DataSchemaFile::
This option names the file (with an absolute path) that creates the initial data in the database. This text file contains the SQL commands to populate a database when it is created.

Note that this option is processed only if the SchemaFile option is set.
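
For example, a data schema file might contain INSERT statements such as the following (using the hypothetical library table above):

INSERT INTO library (title, artist) VALUES ('Sample Title', 'Sample Artist');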

ClientSchemaFile::
This entry names the client schema file (with an absolute path) that contains the SQL commands to execute every time a client calls qdb_connect().

Use this feature to change database settings that can't be permanently modified; an example is the PRAGMA commands, which modify non-table data such as the journaling mode or case-sensitive LIKE matching. Don't use client schema files to do regular database work, because doing so slows down new connections.

You can also use this mechanism to implement cross-database triggers.
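
For example, a client schema file might contain only PRAGMA commands (the particular settings shown here are illustrative):

PRAGMA case_sensitive_like = ON;
PRAGMA journal_mode = PERSIST;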

AutoAttach::
This entry specifies other databases to attach to the current one (using the SQL ATTACH DATABASE statement) whenever a database connection is established. You can specify multiple databases in a comma-separated list.

Attached databases are a convenience that provides access to tables physically stored in a different database file. This is useful for breaking up a database into separate pieces for performance reasons (each piece gets its own lock, which makes multi-user access more responsive). It's also useful for moving parts of a database to different storage media (such as a RAM filesystem).

QDB allows you to include attached databases in other maintenance operations, such as backup or vacuum.

Note: If any attached database is unavailable at loading time, QDB sets the current database's status to AttachWait and makes the database inaccessible.
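
For example, with AutoAttach::cdrom in the configuration object, a client connection can refer to a table in the cdrom database by qualifying it with the database name (the table name below is hypothetical, and this assumes that, as with SQLite's ATTACH DATABASE ... AS, the attached database is addressed by its name):

SELECT title FROM cdrom.tracks WHERE artist = 'Sample Artist';
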
BackupDir::
This entry specifies the directories in which to store database backups. You can specify multiple directories in a comma-separated list; these directories are used in rotation to store the backup files. This rotation ensures that if a backup is interrupted or aborted by a power failure, another, older backup is still available.

These directories must exist at loading time (though they don't need to contain valid backups); otherwise the loading attempt fails and QDB sets the database status to Error. If any existing backup files are located in these directories, they are sorted by date and overwritten oldest-to-newest when performing backup operations, and used in newest-to-oldest order when restoring a missing or corrupt database.
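
For example, to rotate backups between two directories (the paths are hypothetical):

BackupDir::/fs/backups0,/fs/backups1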

Compression::none | lzo | bzip | diocopy
This entry specifies a compression algorithm to apply to backups. The supported options are none (for no compression), lzo (for LZO compression), bzip (for BZIP2 compression), or diocopy (for direct I/O copy).

The lzo compression algorithm is the fastest, but the bzip algorithm offers the highest compression. Direct I/O doesn't perform any compression; instead, QDB uses an external utility to copy the database using direct memory access (DMA). Direct I/O is a fast way to back up data if the persistent storage supports DMA.

The compressed files are created with appropriate extensions added to the original database filename. By default, backup files aren't compressed.
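
For example, to trade backup speed for the smallest backup files, you could specify:

Compression::bzip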

Collation::, Function::
These entries install user-provided collation (sorting) routines and user scalar/aggregate functions respectively. The argument format is a comma-separated list of library symbols in the form tag@library.so, where tag is the symbol name of the function description structure and library.so is the name of the shared library containing the code.

Unlike the paths to other key files, the library file path can be relative or absolute. Relative paths are searched for in the library search directories (see dlopen() in the QNX Neutrino C Library Reference for more detail). You can specify symbols from as many shared libraries as you want. For example, you could write:

Collation::UTF-8_Sort@libsort.so,UTF-16_Sort@libsort.so
Function::sampleData@libstats.so,implToMtrc@libunits.so

For more information, see the Writing User-Defined Functions chapter.

QDB checks for the existence of the libraries and the specified symbols at loading time, and if any are not found, the loading attempt fails and QDB sets the database status to Error.

VacuumAttached::, BackupAttached::, SizeAttached::
These entries control which attached databases the corresponding maintenance operation applies to by default when the command is issued to the main database. Each parameter is set to a comma-separated list of databases to which the operation is applied.

Here's a sample configuration for a database named mp3_tunes_0:

Filename::/fs/tmpfs/mp3_tunes_0
AutoAttach::mp3_tunes_1,mp3_tunes_2
VacuumAttached::mp3_tunes_1

In this example, a qdb_vacuum() operation on mp3_tunes_0 will also vacuum mp3_tunes_1 but not mp3_tunes_2.

Note: Any database named in an operation-specific attachment list such as VacuumAttached must also be named in the AutoAttach list, or that database won't be processed during the operation.

For more details on the scope of maintenance operations for attached databases, refer to qdb_vacuum(), qdb_backup(), and qdb_getdbsize().

BackupVia::
This entry specifies an interim directory into which the database is copied as part of a backup. To make sure the backup is consistent, QDB places a read lock on the database while copying and compressing it, so the database may be locked for a long time if the destination is slow (for example, flash).

For example, you could specify BackupVia::/dev/shmem. When backing up, QDB locks the database, copies it to /dev/shmem, and then releases the lock. Then, in a second step, QDB performs the copy and compress operation into the location specified by BackupDir::, without needing to lock the database.

CompressionVia::TRUE | FALSE
This entry is used in conjunction with the BackupVia:: entry and any Compression:: setting specified for the backup. By default, the BackupVia:: feature first makes a raw (uncompressed) copy of the database in the interim directory and then performs the compression as a second step. This approach works if you have the space and read-locks the database for the least amount of time, but you can use less space (at the expense of more time) by compressing during the copy. FALSE is the default; if you set this entry to TRUE, compression is done in the first step.
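
Putting the backup-related entries together, a configuration object might contain lines such as the following (the backup directories are hypothetical). With these settings, the database is read-locked only while it's copied to /dev/shmem; the copy is then compressed with LZO into one of the rotating backup directories:

BackupDir::/fs/backups0,/fs/backups1
BackupVia::/dev/shmem
Compression::lzo
CompressionVia::FALSE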