qdb [-A] [-c config_file] [-C policy] [-D]
[-I test] [-n mountpoint] [-N control] [-o option[,option2...]]
[-P permissions] [-R mode] [-r mode] [-s [data@]routine]
[-t timeout] [-T timeout] [-vV] [-W time] [-X path]
- -A
- Turn off exclusive mode: allow other applications to use the database files.
- -c config_file
- Specify a configuration file of databases and policies. See the “Configuration file” section below for more information.
- -C policy
- Specify a database connection sharing policy. The policy can be one of unique, private, reuse, or share. See the “Sharing connections between clients” section below for more information.
- -D
- Disable shared cache. You should only use this option if you need to debug shared caching.
- -I test
- Perform a database integrity test at startup. The test can be one of none, basic, partial, or full. See the “Database integrity testing” section below for more information.
- -n mountpoint
- The QDB resource manager mountpoint. By default this is /dev/qdb.
- -N control
- Name of the database control entry. By default this is .control.
- -o option
- Configure miscellaneous options. The options are:
- unblock=0|1 — set whether or not to install an unblock handler (that is, to allow a signal to interrupt an SQL operation).
- threadmax — the maximum number of threads to allocate to qdb; default is 64.
- threadhi — the maximum number of threads that can be kept in a blocked state ready to work.
- threadlo — the minimum number of threads to be kept in a blocked state ready for work.
See also thread_pool_create() in the Neutrino Library reference.
- tempstore=directory — set the directory name where qdb places certain temporary files. You can set this to a tmpfs RAM disk location to prevent excessive disk access.
- bkcopy=buffer_size — set the size of the buffer to use when making a backup or compressing. The default value is 64 KB, and is probably acceptable for most cases.
- trace — log SQL statements before QDB executes them. You must set verbosity (-v) to six for this feature to work.
- profile — log SQL statements after QDB executes them, as well as the time it took to execute them. You can additionally specify the -Wtime option to log only SQL statements that take more than the specified time, in milliseconds. You must set verbosity (-v) to six for this feature to work.
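For example, a hypothetical invocation that profiles only SQL statements taking longer than 100 milliseconds (the configuration file path is an assumption):

```
# qdb -c /etc/qdb.cfg -vvvvvv -o profile -W 100
```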
- -P permissions
- Access permissions for the database and backup files. By default this is 0664.
- -R mode
- Set the database creation and recovery mode. The mode can be one of auto, manual, or set. See the “Database recovery” section below for more information.
- -r mode
- Set the connection recovery mode. The mode specifies what happens when a database problem is discovered and corrected. It can be one of:
- manual — clients receive ESTALE errors until they disconnect and reconnect.
- auto — clients are automatically reconnected, and receive no notification that a problem was detected and repaired.
- -s [data@]routine
- Change the configuration of a user-defined collation sequence. All collation setup functions expect data in the same format as you would specify it through this option. For example, -s en_US@cldr names the cldr collation routine and invokes its setup function at startup, passing in the en_US string.
- The data portion of the argument is optional; specifying an -s option without data resets the configuration of that collation to its default setting.
- To configure multiple collations, use multiple -s options.
All registered collations, which are listed in the Collation option
in the database configuration object, have their setup functions
invoked at startup so they may initialize themselves, even if they're
not named in -s options.
However, to pass nondefault data to a collation setup function,
you must use this option and provide the data in the argument value.
- -t timeout | block | nonblock
- Set the busy-wait timeout on database access, in milliseconds. By default, this is 5000 milliseconds. See the “Busy timeout” section below for more information.
- -T timeout | block | nonblock
- Set the busy-wait timeout on database connection, in milliseconds. By default, this is 5000 milliseconds. See the “Busy timeout” section below for more information.
- -v
- Increase output verbosity. Messages are written to sloginfo.
- -V
- Replicate output messages to the console, as well as to sloginfo.
- -W time
- Used in conjunction with the -o profile option: log only SQL statements that take longer than time (specified in milliseconds). The default for time is 5000 milliseconds.
- -X path
- Set a script to run when the QDB encounters a corrupt database. See “Handling corrupt databases” below.
The QDB database directory can be on any QNX or POSIX filesystem with read/write access (including memory-based filesystems, such as tmpfs). QDB can run from QNX filesystems visible via Qnet, but cannot run from a CIFS or NFS filesystem.
A schema file contains all the SQL commands to create the database schema
the way you want. Here's an example:
CREATE TABLE customers(
    customerid INTEGER PRIMARY KEY AUTOINCREMENT,
    firstname TEXT,   -- example columns
    lastname TEXT
);
Save that content in /db/customerdb.sql.
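Since QDB stores each database as a raw SQLite file, you can prototype and test a schema with any SQLite tool before configuring it in QDB. A minimal sketch in Python (the firstname and lastname columns are assumptions used for illustration, not part of QDB itself):

```python
import sqlite3

# The same kind of schema you would save in /db/customerdb.sql,
# prototyped against an in-memory SQLite database.
schema = """
CREATE TABLE customers(
    customerid INTEGER PRIMARY KEY AUTOINCREMENT,
    firstname TEXT,
    lastname TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)

# AUTOINCREMENT hands out monotonically increasing row ids.
conn.execute("INSERT INTO customers(firstname, lastname) VALUES ('Ann', 'Smith')")
conn.execute("INSERT INTO customers(firstname, lastname) VALUES ('Bob', 'Jones')")
rows = conn.execute(
    "SELECT customerid, firstname FROM customers ORDER BY customerid"
).fetchall()
print(rows)  # [(1, 'Ann'), (2, 'Bob')]
```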
If you have any database schema files (for example, /db/customerdb.sql), you need to add them to the QDB configuration file before starting the QDB server. For information on specifying schema files and other database configuration settings,
see “Configuration file” below.
The QDB server must be run as root.
For debugging purposes, you should start qdb with -v options to get verbose output. The -v option is cumulative, with each additional v adding a level of verbosity, up to 7 levels.
The -V option sends output to the console and to sloginfo.
Once QDB is running, you can check that it sees your databases by
running ls /dev/qdb/. Using the example of a database named customerdb, you should see a file called /dev/qdb/customerdb.
The filesystem that QDB uses for temporary storage must support POSIX file locking. File locking is required for database vacuuming.
The QDB checks its temporary storage as follows:
- If the tempstore option (-o tempstore) is specified on the command line, the QDB checks to see if the specified location:
- is writable
- is not /dev/shmem
- is not a link to /dev/shmem
If all the above conditions are met, the QDB sets the internal temporary storage to the location specified by the tempstore option. If any of the above conditions are not met, the QDB logs errors to the slog and fails to start up.
- If no tempstore option (-o tempstore) is specified on the command line, the QDB uses the environment variable TMPDIR to obtain the location it should use for temporary storage. The QDB checks that TMPDIR exists and that the location specified by this variable:
- is writable
- is not /dev/shmem
- is not a link to /dev/shmem
If all the above conditions are met, the QDB sets the internal temporary storage to the value of TMPDIR. If any of the above conditions are not met, the QDB logs errors to the slog and fails to start up.
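The checks above can be illustrated with a hypothetical invocation that stages temporary files on a tmpfs RAM disk (both paths are assumptions):

```
# qdb -c /etc/qdb.cfg -o tempstore=/fs/tmpfs
```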
You can create a list of databases that you'd like to be combined as if they formed a single database. This is called auto-attaching a database.
This is useful for breaking up a database into separate pieces for
performance reasons (each piece gets its own lock, which makes multi-user access more responsive). It's also useful for moving parts of a database to different storage mediums (such as a RAM filesystem).
The list of databases is read from a configuration file, specified by the Auto Attach= option. For more information, see “Configuration file” below.
When using the Auto Attach parameter to attach more than one database to another database (attaching multiple sections to one section), you must make sure that the order in which the sections are defined in the configuration file is the same as the order in which they are listed in the Auto Attach parameters. The examples below show an incorrect list and two correct lists. To simplify the examples, only the Auto Attach parameters of the attaching section are shown.
Incorrect: the section definition order does not match the attach order.
Auto Attach = mme_library
Auto Attach = mme_custom
Auto Attach = mme_temp
Correct: the section definition order matches the attach order.
Auto Attach = mme_library
Auto Attach = mme_custom
Auto Attach = mme_temp
Correct: the attach order has been changed to match the section definition order.
Auto Attach = mme_library
Auto Attach = mme_temp
Auto Attach = mme_custom
At startup, QDB tests the integrity of databases according to the -I option specified. It executes statements based on this option, as follows:
- none — don't perform a database integrity check.
- basic — verify only that QDB can parse a string.
- partial — validate the PRAGMA database list. This is equivalent to running the command PRAGMA database_list;.
- full — validate the database integrity. This is equivalent to running the command PRAGMA integrity_check;.
Note: The more verification the QDB performs at startup, the greater the time needed for startup. For production environments, you will need to find the optimal balance between the amount of verification required and the time needed to start the QDB.
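For example, a hypothetical invocation that runs the fullest integrity check at startup (the configuration file path is an assumption):

```
# qdb -c /etc/qdb.cfg -I full
```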
You can execute SQL statements on your QDB databases from the command-line using the qdbc utility. See qdbc for more information.
QDB is configured with a single file, which is specified with the -c command-line option. If this file is in the same location as the database SQL files (by default, /db/), you can use relative paths in the configuration file to point to schema files and database locations. Otherwise, you need to use absolute paths.
The configuration file is composed of lines of text. Blank lines are ignored, as is any leading or trailing white space. Lines beginning with a number sign (#) are comments.
The file contains named sections, each of which configures the database named in the enclosing square brackets ([ ]) that begin the section.
Suppose you have a database named customerdb. You would then write a section beginning with [customerdb] to identify the settings that QDB should apply to that database.
Following each section header are parameter lines in the form key=value. Parameters apply to the current section.
The Filename parameter, which names the database storage file, is mandatory; all other parameters are optional.
A basic configuration for the customerdb database would look like:
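A minimal sketch (the Schema File parameter name and the schema filename are assumptions, based on the parameter descriptions below):

```
[customerdb]
Filename = customerdb
Schema File = customerdb.sql
```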
Each section (and database) name is presented in /dev/qdb. Clients use that same name to establish a connection. The database is then configured using the following parameters:
- Filename=
- Set the name of the actual database file. This is the raw SQLite file. It can be an absolute path to any file location, or a relative name (in which case it is relative to the directory that holds the configuration file). At startup, either this file or the directory in which it will be created must exist (otherwise qdb exits with an appropriate error). If the database file does not exist, it is restored from the newest valid backup if possible, or a blank database file is created.
- These settings describe the initial schema of the database (as SQL commands used to create the initial set of tables, indexes, views, and rows) of a new database (if it did not already exist).
In the first form of this option, the SQL commands are in the configuration file. The second form names a file (with either an absolute or relative path) containing the SQL commands.
- An initial schema is optional; without an initial schema, a new database will just be empty.
- Client Schema=
- This entry specifies a client schema, which is executed every time a client calls qdb_connect(). You can use this mechanism to implement cross-database triggers.
- Auto Attach=
- This entry specifies another database to be attached to the current one (using the SQL ATTACH DATABASE statement whenever a database connection is established). The name is the section name of the other database, not a filename. You can specify multiple databases, each on its own Auto Attach= line.
- Attached databases are a convenience to provide access to tables that are physically stored in a different database file. Facilities exist in QDB to also include attached databases in other maintenance operations, such as backup or vacuum.
- See also “Auto-attaching databases” above.
- Backup Dir=
- This entry specifies a directory used to store a backup of the database. You can specify multiple directories, each on its own Backup Dir= line, and they will be used in rotation to store backup files. This feature ensures that, should a backup be interrupted or aborted by a power failure, another, older backup is still available.
- This directory must exist at startup (though it does not need to contain a valid backup); otherwise qdb exits with an appropriate error. If any existing backup files are located in these directories, they are sorted by date and overwritten oldest-to-newest when performing backup operations, and used in newest-to-oldest order to restore a missing or corrupt database.
- Compression=
- This entry specifies a compression algorithm to apply to backups. The supported options are none (for no compression), lzo (for LZO compression), or bzip (for BZIP2 compression). The lzo compression algorithm is fastest, but the bzip algorithm offers the highest compression. The compressed files are created with appropriate extensions added to the original database filename. By default, backup files are not compressed.
- Collation= / Function=
- These entries install user-provided collation (sorting) routines and user-defined function (scalar or aggregate) routines, respectively. The argument format is tag@library.so, where tag is the symbol name of the function description structure and library.so is the name of the shared library containing the code. For more information, see the Writing User-Defined Functions chapter.
- QDB checks for the existence of the library and the specified symbol at startup, and exits with an appropriate error if they're not found.
- Vacuum Attached= / Backup Attached=
- These entries control what maintenance operations should apply by default to attached databases when a command is issued to the main database. These options can have a value of TRUE|FALSE, YES|NO or ON|OFF. The default setting for each is FALSE. You can change the option multiple times within the database section to apply differently to attached databases.
- Here's a sample configuration:
[db]
Vacuum Attached = TRUE
Auto Attach = db1
Vacuum Attached = FALSE
Auto Attach = db2
- In this example, a qdb_vacuum() operation on db will also vacuum db1 but not db2.
- You can use the Backup Attached=TRUE setting to provide a facility similar to the old *.bks files. For more details on the scope of maintenance operations with respect to attached databases, refer to qdb_vacuum(), qdb_backup(), and qdb_getdbsize().
- To create RAM-based databases, point the Filename= option to the RAM-disk file.
- You can also create temporary databases by defining a database with a Filename=:memory: entry. This action creates a private, temporary, in-memory database, visible only in the scope of the database connection. Each connection to such a database has its own temporary file, which is removed when the connection is closed.
- Backup Via=
- This entry specifies an interim directory into which the database is copied as part of the backup. To make sure the database backup is consistent, qdb places a read lock on the database while it is copying and compressing it, so the database may be locked for a long time if the destination is slow (for example, flash).
- For example, you could specify Backup Via=/dev/shmem. When backing up, QDB locks the database, copies it to /dev/shmem, and then releases the lock. Then, in a second step, qdb performs the copy and compress operation into the location specified by Backup Dir=, without needing to lock the database.
- Compress Via=TRUE|FALSE
- This entry is used in conjunction with the Backup Via= entry and any Compression= setting specified for the backup. By default, Backup Via= makes a raw (uncompressed) copy of the database into the temporary directory and then performs the compression in the second step. This works if you have the space, and it read-locks the database for the least amount of time; however, you can use less space (at the expense of more time) by compressing during the first copy. FALSE is the default; if you set this to TRUE, compression is done in the first step.
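Putting several of these parameters together, a sketch of a configuration with rotated, compressed backups staged through RAM (all paths are assumptions):

```
[customerdb]
Filename = customerdb
Backup Dir = /mnt/backup1
Backup Dir = /mnt/backup2
Compression = lzo
Backup Via = /dev/shmem
Compress Via = FALSE
```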
You can allow multiple clients to share a database connection.
This is controlled by the -C option. The policy modes are:
- unique — each individual client request gets a new connection. This mode exists for pre-3.3.1 SQLite libraries, which were not thread-safe in any way.
- private — each client has a private persistent connection for its session; this connection is created when the client attaches and destroyed when it detaches. This is the backwards-compatible mode; it is also the mode forced when not using the QDB_CONN_DFLT_SHARE flag to qdb_connect().
- reuse — like private, except that connections are returned to a pool rather than being destroyed, and can be assigned from there to a new client for use in its duration.
- share — like unique, except a connection pool is also used. This mode multiplexes all clients over a small number of active database connections.
Connection sharing exists because a non-negligible amount of work must be done to establish a database connection (QDB must allocate memory, access files, attach databases and callback functions, configure connection parameters, and so on), and if clients do not assume any state, then this processing work can be avoided. The QDB server detects if connection parameters have been changed by a client, and restores them when the connection moves in or out of the pool in unique, reuse or share modes.
This connection sharing should be safe (unless the client destructively modifies the environment via SQL, such as by executing a DETACH DATABASE statement). However, for full backwards compatibility, connection sharing can be overridden on each qdb_connect() call, and the default libqdb access mode is private.
If a client leaves open transactions across multiple calls to qdb_statement(),
it needs a dedicated connection: use private or reuse mode, or don't set the QDB_CONN_DFLT_SHARE flag.
The default startup mode for QDB is with both shared caching and exclusive modes enabled:
- If you want to disable shared caching, you must use the new -D command-line option.
- For shared caching, the QDB reserves exclusive write privileges to the database. To allow other applications to use the database files, use the new -A option.
Note: QDB will exit immediately if it is started with shared cache disabled and exclusive mode enabled. For example:
# qdb -c /db/qdb.cfg -v -D -otempstore=/fs/tmpfs -Rset
qdb: Exclusive locking mode requires that shared cache be enabled
Shared caching can both improve performance and reduce the total amount of memory required for multiple database connections, because multiple connections can share the same memory cache.
For example, without shared caching, if 1 MB of memory is required for each database connection, 40 connections require 40 MB of memory. However, with shared cache enabled, these 40 connections can share the same memory cache, allowing you to reduce the memory cache to 25 MB (or another size determined by your environment and performance requirements). Further, with shared cache, there is no duplication in memory, so in the 25 MB of memory you may have almost the entire database, virtually eliminating the need for disk I/O.
The -R option controls the recovery actions QDB performs when it encounters a missing or corrupt database file. The modes are:
- auto — in this mode, file manipulation is fully automatic, and a best effort is always made to establish a valid database connection at startup. Files are backed up individually and restored individually. A corrupt or missing database file is restored from the most recent valid backup that can be located. If there is no such backup, a blank database is re-created from the original schema definition.
- manual — in this mode, the only action performed is to create a blank database from the original schema definition if the database file is missing at startup. Databases are not restored from backups. If the file is corrupt, the server will not start. If the file is detected missing or corrupt at runtime, no access to that database is permitted, and it will not be restored or re-created. This mode is intended to allow the creation of a new system, or to give manual control over error recovery (for example, to preserve the corrupt database for later analysis).
- set — in this mode, backups of attached databases are treated as a coherent set, so an error with any one of the component databases causes qdb to restore a complete and matching set of all database files. This is useful if attached databases refer to each other. The set master is the database that attaches other databases (by using the Auto Attach option in the configuration file). The backup set contains the set master and all attached databases that have Backup Attached enabled. The set master can be backed up incrementally and still belong to the set.
QNX recommends the following in order to back up and restore your databases as a coherent set:
- For the master database (the database to which the other databases are attached), in the QDB configuration file:
- Use the -R set option when starting QDB.
- When doing backups, call qdb_backup() on the master database with the scope argument set to QDB_ATTACH_DEFAULT.
The two timeout settings are differentiated as follows:
- The -t option sets the default user-level timeout which applies to each qdb_connect() connection, and can be privately modified with qdb_setbusytimeout().
- The -T option sets the global internal timeout which applies to database connections made without a client context. Examples include verifying existing databases or constructing new databases at startup, and auto-attaching databases.
A value of block is equivalent to an infinite timeout period,
and nonblock is equivalent to a timeout period of 0.
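For example, a hypothetical invocation that blocks indefinitely on busy database access at the user level but fails immediately on internal connections (the configuration file path is an assumption):

```
# qdb -c /etc/qdb.cfg -t block -T nonblock
```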
The -X option lets you provide qdb with a program or script to run when it encounters a corrupt database. If the program or script appears to run correctly, qdb continues. The program or script is responsible for stopping and restarting qdb if a stop or start is necessary.
Below is a sample qdb startup command with the -X option:
# qdb -c /etc/qdb.cfg -X /usr/bin/recover_db.sh
Below is a sample script that can be launched by qdb when it encounters a corrupt database:
#!/bin/sh
# This script will kill qdb and mme,
# remove the database files
# on disk, and restart qdb and mme.
slay qdb mme-generic
rm -f /fs/tmpfs/*
rm -f /mnt/qdb_backup/*
# Call an external program
# to launch qdb and the MME.
- To kill qdb without killing the script, send SIGTERM (the default for slay). With this method qdb keeps the thread used by popen() to start the script available and logs output until the script quits.
- If you send SIGKILL, qdb is killed immediately. The script continues to run but its output is lost.
You can write maintenance commands to the /dev/qdb/.control entry (and read back the result). The commands currently supported are as follows (where DBNAME is the name of the database):
- backup DBNAME — make a backup of the database (qdb_backup())
- vacuum DBNAME — vacuum the database (qdb_vacuum())
- verify DBNAME — verify database integrity (like the -I full command-line option)
- cancel DBNAME — cancel any in-progress backups (qdb_bkcancel())
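For example, you could request a backup of customerdb and then read back the result (illustrative; assumes the default .control entry name):

```
# echo "backup customerdb" > /dev/qdb/.control
# cat /dev/qdb/.control
```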