Chapter 15. Deployment Guide

Fred Toussi

The HSQL Development Group

$Revision: 5035 $

Copyright 2002-2012 Fred Toussi. Permission is granted to distribute this document without any alteration under the terms of the HSQLDB license. Additional permission is granted to the HSQL Development Group to distribute this document with or without alterations under the terms of the HSQLDB license.

2012-08-06 00:10:58+0100

Table of Contents

Memory and Disk Use
Table Memory Allocation
Result Set Memory Allocation
Temporary Memory Use During Operations
Data Cache Memory Allocation
Object Pool Memory Allocation
Lob Memory Usage
Disk Space
Managing Database Connections
Tweaking the Mode of Operation
Application Development and Testing
Embedded Databases in Desktop Applications
Embedded Databases in Server Applications
Mixed Mode : Embedding a HyperSQL Server (Listener)
Using HyperSQL Without Logging Data Change
Bulk Inserts, Updates and Deletes
Using NIO File Access
Server Databases
Upgrading Databases
Upgrading From Older Versions
Manual Changes to the *.script File
Backward Compatibility Issues
HyperSQL Dependency Settings for Applications
What version to Pull
Using the HyperSQL Snapshot Repository
Range Versioning

Memory and Disk Use

Memory used by the program can be thought of as two distinct pools: memory used for table data, which is not released unless the data is deleted, and memory that can be released or is released automatically, including memory used for caching, building result sets, and other internal operations such as storing the information needed to roll back a transaction.

Most JVM implementations allocate up to a maximum amount of memory (usually 64 MB by default). This amount is generally not adequate when large memory tables are used, or when the average size of rows in cached tables is larger than a few hundred bytes. The maximum amount of allocated memory can be set on the Java command line that is used for running HyperSQL. For example, with the Sun JVM, the parameter -Xmx256m increases the amount to 256 MB.

Table Memory Allocation

The memory used for a MEMORY table is the sum of memory used by each row. Each MEMORY table row is a Java object that has 2 int or reference variables. It contains an array of objects for the fields in the row. Each field is an object such as Integer, Long, String, etc. In addition each index on the table adds a node object to the row. Each node object has 6 int or reference variables. As a result, a table with just one column of type INTEGER will have four objects per row, with a total of 10 variables of 4 bytes each - currently taking up 80 bytes per row. Beyond this, each extra column in the table adds at least a few bytes to the size of each row.

Result Set Memory Allocation

By default, all the rows in the result set are built in memory, so very large result sets may not be possible to build. A server mode database releases the result set from the server memory once the database server has returned the result set. An in-process database releases the memory when the application program closes the java.sql.ResultSet object. A server mode database requires additional memory for returning result sets, as it converts the full result set into an array of bytes which is then transmitted to the client.

HyperSQL 2.0 supports disk-based result sets. The commands SET SESSION RESULT MEMORY ROWS <integer> and SET DATABASE DEFAULT RESULT MEMORY ROWS <integer> specify a threshold for the number of rows. Results with row counts above the threshold are stored on disk. These settings also apply to temporary tables, views and subquery tables.

Disk-based result sets slow down the database operations and should be used only when absolutely necessary, perhaps with result sets that are larger than tens of thousands of rows.

In a server mode database, when the setFetchSize() method of the Statement interface is used to limit the number of rows fetched, the whole result is held by the engine and is returned to the JDBC ResultSet in blocks of rows of the specified fetch size.
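
As an illustration, the following JDBC sketch (assuming a HyperSQL server on localhost serving a database named testdb, and a hypothetical table big_table) sets the disk-based result threshold for the session and a fetch size for a large query:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FetchSketch {
        public static void main(String[] args) throws Exception {
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:hsql://localhost/testdb", "SA", "");
            Statement st = c.createStatement();
            // results larger than 10,000 rows are built on disk, not in memory
            st.execute("SET SESSION RESULT MEMORY ROWS 10000");
            // the server holds the whole result and returns it in blocks of 1,000 rows
            st.setFetchSize(1000);
            ResultSet rs = st.executeQuery("SELECT * FROM big_table");
            while (rs.next()) {
                // process each row here
            }
            c.close();
        }
    }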

Temporary Memory Use During Operations

When UPDATE and DELETE queries are performed on CACHED tables, the full set of rows that are affected, including those affected due to ON UPDATE actions, is held in memory for the duration of the operation. This means it may not be possible to perform deletes or updates involving very large numbers of rows of CACHED tables. Such operations should be performed in smaller sets. This memory is released as soon as the DELETE or UPDATE is performed.

When transactions support is enabled with SET AUTOCOMMIT FALSE, lists of all insert, delete or update operations are stored in memory so that they can be undone when ROLLBACK is issued. For CACHED tables, only the transaction information is held in memory, not the actual rows that have changed. Transactions that span thousands of modifications to data will take up a lot of memory until the next COMMIT or ROLLBACK clears the list. Each row modification uses less than 100 bytes until COMMIT.

When subqueries or views are used in SELECT and other statements, transient tables are created and populated by the engine. If the SET SESSION RESULT MEMORY ROWS <integer> statement has been used, these transient tables are stored on disk when they are larger than the threshold.

Data Cache Memory Allocation

With CACHED tables, the data is stored on disk and only up to a maximum number of rows are held in memory at any time. The default is up to 50,000 rows. The SET FILES CACHE ROWS command or the hsqldb.cache_rows connection property can be used to alter this amount. As any random subset of the rows in any of the CACHED tables can be held in the cache, the amount of memory needed by cached rows can reach the sum of the rows containing the largest field data. For example, if a table with 100,000 rows contains 40,000 rows with 1,000 bytes of data in each row and 60,000 rows with 100 bytes in each, the cache can grow to contain 50,000 of the smaller rows, but, as explained further, only 10,000 of the larger rows.

An additional property, hsqldb.cache_size, is used in conjunction with the hsqldb.cache_rows property. This puts a limit in bytes on the total size of rows that are cached. The default value is 10,000 KB. (This is the size of binary images of the rows and indexes. It translates to more actual memory, typically 2-4 times, used for the cache because the data is represented by Java objects.)

If memory is limited, the hsqldb.cache_rows or hsqldb.cache_size database properties can be reduced. In the example above, if the hsqldb.cache_size is reduced from 10,000 to 5,000, it will allow the number of cached rows to reach 50,000 small rows, but only 5,000 of the larger rows.
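
The same limits can be set as connection properties when the database is first opened. A minimal sketch, with an illustrative database path and the values from the example above:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CachePropsSketch {
        public static void main(String[] args) throws Exception {
            // hsqldb.cache_rows limits the number of cached rows;
            // hsqldb.cache_size limits their total size in KB
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:file:testdb;hsqldb.cache_rows=50000;hsqldb.cache_size=5000",
                    "SA", "");
            c.close();
        }
    }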

Data for CLOB and BLOB columns is not cached and does not affect the CACHED table memory cache.

Use of the Java NIO file access method also increases memory usage. Access with NIO improves database update speed and is used by default for data files up to 256 MB. For minimal memory use, NIO access should be disabled.

The operating system usually allocates a large amount of buffer memory to speed up file read operations. Therefore when a lot of memory is available to the operating system, all database operations perform faster.

Object Pool Memory Allocation

HyperSQL uses a set of fast pools for immutable objects such as Integer, Long and short String objects that are stored in the database. In most circumstances, this reduces the memory footprint still further as fewer copies of the most frequently-used objects are kept in memory. The object pools are shared among all databases in the JVM. The size of each pool can be modified only by altering and recompiling the org.hsqldb.store.ValuePool class.

Lob Memory Usage

Access to lobs is always performed in chunks, so it is perfectly possible to store and access a CLOB or BLOB that is larger than the JVM memory allocation. The actual total size of lobs is almost unlimited. We have tested with over 100 GB of lobs without any loss of performance.

By default, HyperSQL 2.0 uses memory-based tables for the lob schema (not the actual lob data). Therefore it is practical to store about 100,000 individual lobs in the database with the default JVM memory allocation. More lobs can be stored with larger JVM memory allocations. In order to store more than a few hundred thousand lobs, you can change the lob schema storage to CACHED tables with the following statements:

Example 15.1. Using CACHED tables for the LOB schema

  SET TABLE SYSTEM_LOBS.BLOCKS TYPE CACHED
  SET TABLE SYSTEM_LOBS.LOBS TYPE CACHED
  SET TABLE SYSTEM_LOBS.LOB_IDS TYPE CACHED

Disk Space

With a file: database, the engine uses the disk for storage of data and any changes. For safety, the engine backs up the data internally during operation. Spare space, at least equal to the size of the .data and .script files, is needed. The .lobs file is not backed up during operation as it is not necessary for safety.

Managing Database Connections

In all running modes (server or in-process) multiple connections to the database engine are supported. In-process (standalone) mode supports connections from the client in the same Java Virtual Machine, while server modes support connections over the network from several different clients.

Connection pooling software can be used to connect to the database but it is not generally necessary. Connection pools may be used for the following reasons.

  • To allow new queries to be performed while a time-consuming query is being performed in the background. In HyperSQL, blocking depends on the transaction control model, the isolation level, and the current activity by other sessions.

  • To limit the maximum number of simultaneous connections to the database for performance reasons. With HSQLDB this can be useful if your application is designed in a way that opens and closes connections for each small task. Also, the overall performance may be higher when fewer simultaneous connections are used. If you want to reduce the number of simultaneous sessions, you can use a connection pool with fewer pooled connections.

An application that is not both multi-threaded and transactional, such as an application for recording user login and logout actions, does not need more than one connection. The connection can stay open indefinitely and be reopened only when it is dropped due to network problems.

When using an in-process database, when the last connection to the database is closed, the database still remains open, waiting for the next connection to be made. From version 2.2.9, each time the last connection is closed, all the data changes are logged and synched to disk.

An explicit SHUTDOWN command, with or without an argument, is required to close the database. A connection property, shutdown=true, can be used on the connection URL or in a properties object to shutdown the database when the last connection is closed.
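
A minimal sketch of the connection property, assuming a file: database named testdb:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ShutdownOnClose {
        public static void main(String[] args) throws Exception {
            // shutdown=true shuts the database down when the last connection closes
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:file:testdb;shutdown=true", "SA", "");
            // ... work with the database ...
            c.close(); // this is the last connection, so the database shuts down here
        }
    }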

When using a server database (and to some extent, an in-process database), care must be taken to avoid creating and dropping JDBC Connections too frequently. Failure to observe this will result in poor performance when the application is under heavy load.

A common error made by users in load-test simulations is to use a single client machine to open and close thousands of connections to a HyperSQL server instance. The connection attempts will fail after a few thousand because of OS restrictions on opening sockets and the delay that is built into the OS in closing them.

Tweaking the Mode of Operation

Different modes of operation and settings are used for different purposes. Some scenarios are discussed below:

Application Development and Testing

The first thing to be aware of is the SQL conformance settings of HyperSQL. By default, version 2.0 applies stricter conformance rules than version 1.8 and catches long strings or decimal values that do not fit within the specified length or precision settings. However, there are several conformance settings that are turned off by default. This is to enable easier migration from earlier versions, and also greater compatibility with databases such as MySQL that are sometimes very liberal with type conversions. The conformance settings are listed in the System Management chapter and their connection property equivalents are listed in the Database Properties chapter. Ideally, all the settings should have a true value for best error checking.

For application unit testing you can use an all-in-memory, in-process database.

If the tests are all run in one process, then the contents of a mem: database survive between tests. To release the contents you can use the SHUTDOWN command (an SQL command). You can even use multiple mem: databases in your tests and SHUTDOWN each one separately.
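
A minimal sketch of this pattern, with an illustrative database name and table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MemDbTestSketch {
        public static void main(String[] args) throws Exception {
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:mem:test1", "SA", "");
            Statement st = c.createStatement();
            st.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name VARCHAR(20))");
            st.execute("INSERT INTO t VALUES (1, 'first')");
            // ... run test assertions against the data ...
            st.execute("SHUTDOWN"); // releases the contents of this mem: database
        }
    }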

If the tests are in different processes and you want to keep the data between the tests, the best solution is to use a Server instance that has a mem: database. After the tests are done, you can SHUTDOWN this database, which will shutdown the server.

The Server has an option that allows databases to be created as needed by making a connection (see the Listeners Chapter). This option is useful for testing, as your server is never shut down when a database is shutdown. Each time you connect to the mem: database that is served by the Server, the database is created if it does not exist (i.e. has been previously shut down).

If you do not want to run a Server instance, and you need persistence between tests in different processes, then you should use a file: database. From version 2.2.9, when the last existing connection to the database is closed, the latest changes to the database are persisted fully. The database is still in an open state until it is shut down. You can use the shutdown=true connection property to close the database automatically after the connections are closed. An alternative option is to use the hsqldb.write_delay=false connection property, but this is slightly slower than the other option and should be used in situations where the test application does not close the connections.

It has been reported that some data access frameworks do not close all their connections to the database after the tests. In such situations, you need to use zero WRITE DELAY if you want the data to persist at the end of the tests.

You may actually want to use a file: database, or a server instance that serves a file: database in preference to a mem: database. As HyperSQL logs the DDL and DML statements in the .log file, this file can be used to check what is being sent to the database. Note that UPDATE statements are represented by a DELETE followed by an INSERT statement. Statements are written out when the connection commits. The write delay also has an effect on how soon the statements are written out. By default, the write delay is 0.5 second.

The new SQL logging feature in version 2.2.x and later records all executed statements and can be used for debugging your application.

Some types of tests start with a database that already contains the tables and data, and perform various operations on it during the tests. You can create and populate the initial database then set the property "files_readonly=true" in the .properties file of the database. The tests can then modify the database, but these modifications are not persisted after the tests have completed.

Databases with "files_readonly=true" can be placed within the classpath and in a jar file. In this case, the connection URL must use the res: protocol, which treats the database as a resource.
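
For example, if the database files testdb.script and testdb.properties are packaged in a jar under a dbs directory (an illustrative layout), the connection is made as follows:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ResDbSketch {
        public static void main(String[] args) throws Exception {
            // a res: database is loaded from the classpath;
            // changes made during the session are not persisted
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:res:/dbs/testdb", "SA", "");
            c.close();
        }
    }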

Embedded Databases in Desktop Applications

In this usage, the amount of data change is often limited and there is often a requirement to persist the data immediately. The default write delay of 0.5 second is fine for many applications. You can also use the property hsqldb.write_delay_millis=100 to reduce it to 0.1 second, or the property hsqldb.write_delay=false to force a disk sync after each commit. Before the application is closed, you should perform the SHUTDOWN command so that the database opens instantly the next time. Note you don't need to use SHUTDOWN COMPACT routinely.

Embedded Databases in Server Applications

This usage involves a server application, such as a web application, connecting to an embedded HyperSQL instance. In this usage, the database is often accessed heavily, therefore performance and latency are considerations. If the database is updated heavily, the default value of the WRITE DELAY property (0.5 sec) is often enough, as it is assumed the server or the application does not go down frequently. If necessary, you can reduce the WRITE DELAY to a small value (20 ms) without impacting the update speed. If you reduce WRITE DELAY to zero, performance drops to the speed of disk file sync operations.
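
The WRITE DELAY can be changed at runtime with an SQL statement; a minimal sketch, assuming an embedded file: database named appdb:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class WriteDelaySketch {
        public static void main(String[] args) throws Exception {
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:file:appdb", "SA", "");
            Statement st = c.createStatement();
            // reduce the write delay for the whole database to 20 milliseconds
            st.execute("SET FILES WRITE DELAY 20 MILLIS");
            c.close();
        }
    }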

Alternatively, a server application can use an all-in-mem database instance for fast access, while sending the data changes to a persistent, disk based instance either periodically or in real time.

Mixed Mode : Embedding a HyperSQL Server (Listener)

Since you won't be able to access in-process database instances from other processes, you will often want to run a Listener in your applications that use embedded databases. You can do this by starting up a Server or WebServer instance programmatically, but you could also use the class org.hsqldb.util.MainInvoker to start up your application and a HyperSQL Server or WebServer without any programming.

Example 15.2. MainInvoker Example

  java -cp path/to/your/app.jar:path/to/hsqldb.jar org.hsqldb.util.MainInvoker your.App "" org.hsqldb.server.Server


(Use ; instead of : to delimit classpath elements on Windows). Specify the same in-process JDBC URL to your app and in the server.properties file. You can then connect to the database from outside using a JDBC URL like jdbc:hsqldb:hsql://hostname, while connecting from inside the application using something like jdbc:hsqldb:file:<filepath of database>.
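
For the programmatic alternative mentioned above, a Server instance can be configured and started from your application code. A minimal sketch (the database path, name and port are illustrative):

    import org.hsqldb.server.Server;

    public class EmbeddedListenerSketch {
        public static void main(String[] args) {
            Server server = new Server();
            server.setDatabaseName(0, "mydb");           // name used in client URLs
            server.setDatabasePath(0, "file:data/mydb"); // same database the app opens in-process
            server.setPort(9001);                        // the default hsql port
            server.start();
            // external clients can now connect with jdbc:hsqldb:hsql://hostname/mydb
        }
    }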

This tactic can be used to run off-the-shelf server applications with an embedded HyperSQL Server, without doing any coding.

MainInvoker can be used to run any number of Java class main method invocations in a single JVM. See the API spec for MainInvoker for details on its usage.

Using HyperSQL Without Logging Data Change

All file: databases that are not readonly write changes to the .log file. There are scenarios where writing to the .log file can be turned off to improve performance, especially with larger databases. For these applications you can set the property hsqldb.log_data=false to disable the recovery log and speed up data change performance. The equivalent SQL command is SET FILES LOG FALSE.

With this setting, no data is logged, but all the changes to cached tables are written to the .data file. To persist all the data changes made so far, you can use the CHECKPOINT command. If you perform SHUTDOWN, the data is also persisted correctly. If you do not use CHECKPOINT or SHUTDOWN when you terminate the application, all the changes are lost and the database reverts to its original state when it is opened, without losing any of the original data.

Your server applications can use a database as a temporary disk data cache which is not persisted past the lifetime of the application. For this usage, delete the database files when the application ends.

On some platforms, such as embedded devices which have a reliable storage device, this is also a useful option. Your application issues CHECKPOINT to save the changes made so far. This method of use reduces write operations on SSD devices. For this usage, the lock file should also be disabled with the connection property hsqldb.lock_file=false.

Bulk Inserts, Updates and Deletes

Bulk inserts, deletes and updates achieve the best performance with the following method. The database remains safe and consistent using this method. In the event of a machine crash during the operation, the database can be recovered to the point just before the bulk operation. A JDBC sketch of the whole sequence follows the list.

  1. Before the operation, execute the SET FILES LOG FALSE statement.

  2. Execute the CHECKPOINT statement.

  3. Perform all the bulk operations, using batched prepared statements. A batch size of 1000 to 10000 is adequate.

  4. After all the bulk operations are complete, execute the SET FILES LOG TRUE statement.

  5. Finally execute the CHECKPOINT statement.

  6. If you have performed many thousands of updates or deletes (not just inserts), it is a good idea to execute CHECKPOINT DEFRAG, instead of CHECKPOINT at the end.

  7. If things go wrong during the bulk operation, for example when a unique constraint violation aborts the operation, and you want to redo the operation, just use SHUTDOWN IMMEDIATELY instead of CHECKPOINT. When you restart the database it will revert to the state at the first CHECKPOINT and the bulk operation can be redone.
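
A JDBC sketch of steps 1 to 5, using a hypothetical invoice table, an illustrative database path and batches of 10,000 rows:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class BulkInsertSketch {
        public static void main(String[] args) throws Exception {
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:file:bulkdb", "SA", "");
            Statement st = c.createStatement();
            st.execute("SET FILES LOG FALSE"); // step 1: stop writing to the .log file
            st.execute("CHECKPOINT");          // step 2: persist the state before the bulk work
            PreparedStatement ps = c.prepareStatement(
                    "INSERT INTO invoice (id, amount) VALUES (?, ?)");
            for (int i = 0; i < 1000000; i++) { // step 3: batched prepared statements
                ps.setInt(1, i);
                ps.setInt(2, i % 100);
                ps.addBatch();
                if ((i + 1) % 10000 == 0) {
                    ps.executeBatch();
                }
            }
            ps.executeBatch();
            st.execute("SET FILES LOG TRUE");  // step 4: re-enable the log
            st.execute("CHECKPOINT");          // step 5: persist the bulk changes
            c.close();
        }
    }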

Using NIO File Access

This method of file access uses the operating system's memory-mapped file buffer for the .data file. For larger databases with CACHED tables, use of NIO improves database access speed significantly. Performance improvements can be tenfold or even higher. By default, NIO is used for .data files from 16 MB up to 256 MB. You can increase the limit with the SET FILES NIO SIZE <value> statement. There should be enough RAM available to accommodate the memory mapped buffers. For very large NIO usage, a 64 bit JVM must be used. The memory is not taken from the JVM memory allocation, therefore there is no need to increase the -Xmx parameter of the JVM. If not enough memory is available for the specified value, NIO is not used.
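
For example, to raise the NIO limit to 1 GB, execute the statement on any connection; a minimal sketch, assuming a file: database named bigdb:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class NioSizeSketch {
        public static void main(String[] args) throws Exception {
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:file:bigdb", "SA", "");
            Statement st = c.createStatement();
            // allow memory-mapped access for .data files up to 1024 MB
            st.execute("SET FILES NIO SIZE 1024");
            c.close();
        }
    }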

Server Databases

Running databases in a HyperSQL server is the best overall method of access. As the JVM process is separate from the application, this method is the most reliable as well as the most accessible method of running databases.

Upgrading Databases

Any database that is not produced with the release version of HyperSQL 2.0 must be upgraded to this version.

Procedure 15.1. Upgrading Databases Created with Version 1.8.x

  1. Open the database with the jar that created it and perform the SHUTDOWN statement as an SQL statement.

  2. Open the database with the HyperSQL 2.0 jar.

  3. Perform the SHUTDOWN COMPACT statement.

The first step is to guarantee there is no .log file for the database. When upgrading an application that has been deployed on a large scale, it is sometimes not practical to perform the first step of this procedure (with the old jar). You can ignore the first step but you may lose part of the database statements that are stored in the .log file. Therefore you need to test with databases created with your application to make sure typical statements that are logged in the .log file are compatible with the new version. Examples of known incompatible statements are some DDL statements used for changing the data type or default values of columns.

A note about SHUTDOWN modes. SHUTDOWN COMPACT is equivalent to SHUTDOWN SCRIPT plus opening the database and then performing a simple SHUTDOWN.

After upgrading a database, you may want to change some of its settings. For example, the new SET FILES BACKUP INCREMENT TRUE statement can improve the shutdown and checkpoint times of larger databases.

Once a database is upgraded to 2.0, it can no longer be used with previous versions of HyperSQL.

Upgrading From Older Versions

To upgrade from version 1.8.x with the default TEXT format script files, follow the instructions above. If the 1.8.x files have database script format set to BINARY or COMPRESSED (ZIPPED) you must issue the SET SCRIPTFORMAT TEXT and SHUTDOWN SCRIPT commands with the old version, then open with the new version of the engine. In most cases the upgrade is successful and complete.

It is strongly recommended to execute SHUTDOWN COMPACT after an automatic upgrade from previous versions.

If your database has been created with version 1.7.2 or 1.7.3, first upgrade to version 1.8.1 and perform a SHUTDOWN SCRIPT with this version. You can then upgrade the database to version 2.0.

To upgrade from older version database files (1.7.1 and older) that contain CACHED tables, use the SCRIPT procedure below. In all versions of HyperSQL, the SCRIPT 'filename' command (used as an SQL statement) allows you to save a full record of your database, including database object definitions and data, to a file of your choice. You can export a script file using the old version of the database engine and open the script as a database with 2.0.

Procedure 15.2. Upgrade Using the SCRIPT Procedure for Very Old Versions

  1. Open the original database in the old version of DatabaseManager.

  2. Issue the SCRIPT command, for example SCRIPT 'newversion.script' to create a script file containing a copy of the database.

  3. SHUTDOWN this database.

  4. Copy the original *.properties file into newversion.properties in the same directory as newversion.script.

  5. Try to open the new database newversion using DatabaseManager of version 1.8.1.

  6. If there is any inconsistency in the data, the script line number is reported on the console and the opening process is aborted. Edit and correct any problems in the newversion.script before attempting to open again. Use the guidelines in the next section (Manual Changes to the .script File). Use a programming editor that is capable of handling very large files and does not wrap long lines of text.

Manual Changes to the *.script File

In HyperSQL 2.0 the full range of ALTER TABLE commands is available to change the data structures and their names. However, if an old database cannot be opened due to data inconsistencies, or it uses index or column names that are not compatible with 2.0, manual editing of the *.script file can be performed and is often faster.

  • Version 2.0 does not accept duplicate names for indexes that were allowed before 1.7.2.

  • Version 2.0 does not accept some table or column names that are SQL reserved keywords without double quoting.

  • Version 2.0 does not accept unquoted table or column names which begin with an underscore, unless the connection property sql.regular_names is set to false.

  • Version 2.0 is more strict with check conditions and default values.

Other manual changes are also possible. Note that the *.script file must be the result of a SHUTDOWN SCRIPT and must contain the full data for the database. The following changes can be applied so long as they do not affect the integrity of existing data.

  • Names

    Names of tables, columns and indexes can be changed. These changes must be consistent regarding foreign key constraint references.

  • CHECK

    A check constraint can always be removed.

  • NOT NULL

    A not-null constraint can always be removed.

  • PRIMARY KEY

    A primary key constraint can be removed. It cannot be removed if there is a foreign key referencing the column(s).

  • UNIQUE

    A UNIQUE constraint can be removed if there is no foreign key referencing the column(s).

  • FOREIGN KEY

    A FOREIGN KEY constraint can always be removed.

  • COLUMN TYPES

    Some changes to column types are possible. For example an INTEGER column can be changed to BIGINT.

After completing the changes and saving the modified .script file, you can open the database as normal.

Backward Compatibility Issues

HyperSQL 2.0 conforms to the SQL Standard better than previous versions and has many more features. For these reasons, there may be some compatibility issues when converting old databases, or when using applications that were written for version 1.8.x or earlier. Some of the potential issues (and enhancements) are listed here. See the full list of connection properties for alternatives.

  • By default, when comparing strings, the shorter string is padded with spaces. This affects comparisons such as 'test' and 'test ', which are now considered equal, despite the length difference. This behaviour is controlled by the default PAD SPACE property of collations, which can be changed to NO PAD. See the statement SET DATABASE COLLATION <name> [ PAD SPACE | NO PAD ].

  • User names and passwords are case-sensitive. Check the .script file of a database for the correct case of user name and password and use this form in the connection properties or on connection URL.

  • It is now possible to specify the admin username and password for a new database (instead of SA and the empty password).

  • HyperSQL 2.0 has several settings that relax its conformance to the SQL Standard in the areas of type conversion and object names. These settings can be turned on for maximum conformance.

  • Check constraints must conform to the SQL Standard. A check constraint is rejected if it is not deterministic or retrospectively deterministic. When opening an old database, HyperSQL silently drops check constraints that no longer compile. See under check constraints for more detail about what is not allowed.

  • Type declarations in column definition and in cast expressions must have the necessary size parameters.

  • In connection with the above, an old database that did not have the enforce_strict_size property is now converted to version 2.0 with the engine supplying the missing size parameters. For example, a VARCHAR column declaration that has no size is given a 32K size, and a LONGVARCHAR column is given a 16MB size. Check that these sizes are adequate for your use, and change the column definition as necessary.

  • Column names in a GROUP BY clause were previously resolved to the column label. They are now resolved to column name first, and if the name does not match, to the column label.

  • If two or more tables in a join contain columns with the same name, the columns cannot be referenced in join and where conditions. Use table names before column names to qualify the references to such columns. The SET DATABASE SQL REFERENCES { TRUE | FALSE } statement enables or disables this check.

  • If the unqualified wild card is used, as in the statement SELECT * FROM ... no additional column references are allowed. A table-qualified wild card allows additional column references in the SELECT list.

  • Table definitions containing GENERATED BY DEFAULT AS IDENTITY but with no PRIMARY KEY do not automatically create a primary key. Database .script files made with 1.8 are fine, as the PRIMARY KEY clause is always included. But the CREATE TABLE statements in your application program may assume an automatic primary key is created. The old shortcut, IDENTITY, is retained with the same meaning. So CREATE TABLE T (ID IDENTITY, DAT VARCHAR(20)) is translated into CREATE TABLE T(ID INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, DAT VARCHAR(20)). This last form is the correct way of defining both autoincrement and primary key in versions 1.8 and 2.0.

  • CREATE ALIAS is now obsolete. Use the new function definition syntax. The org.hsqldb.Library class no longer exists. You should use the SQL form of the old library functions. For example, use LOG(x) rather than the direct form, "org.hsqldb.Library.log"(x).

  • The names of some commands for changing database and session properties have changed. See the list of statements in this chapter.

  • Computed columns in SELECT statements which did not have an alias: these columns had no ResultSetMetaData label in version 1.8, but in version 2.0, the engine generates labels such as C1, C2.

  • The issue with the JDBC ResultSetMetaData methods, getColumnName(int column) and getColumnLabel(int column), has been clarified by the JDBC 4 specification. getColumnName() returns the underlying column name, while getColumnLabel() returns any specified or generated alias. HyperSQL 1.8 and 2.0 have a connection property, get_column_name, which defaults to true in version 2.0, but defaulted to false in some releases of version 1.8.x. You have to explicitly specify this property as false if you want the non-standard behaviour, where getColumnName() returns the same value as getColumnLabel().
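
The difference can be demonstrated with a query that aliases a column; a minimal sketch using an in-memory database:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class ColumnLabelSketch {
        public static void main(String[] args) throws Exception {
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:mem:demo", "SA", "");
            Statement st = c.createStatement();
            st.execute("CREATE TABLE t (id INTEGER)");
            ResultSet rs = st.executeQuery("SELECT id AS code FROM t");
            ResultSetMetaData md = rs.getMetaData();
            System.out.println(md.getColumnName(1));  // ID   - the underlying column name
            System.out.println(md.getColumnLabel(1)); // CODE - the alias
            c.close();
        }
    }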

HyperSQL Dependency Settings for Applications

Dependency settings using Gradle, Ivy, Maven, Groovy

This section is about building applications that have build-time dependencies upon HyperSQL, and for executions that use a dependency library system. Examples of the second type are unit test runs, job runs triggered by a build system, or systems like Grape that pull libraries from the network at end-user run time.

What version to Pull

The best option for most developers is to use our snapshot repository, or at least to depend upon the latest public version of HyperSQL with a range pattern like [2,). Here are exceptional cases where you should depend on a static version.

  • Your application has code dependencies upon version-specific details of the HyperSQL distribution. In this case, the specific dependency specification should be checked in to your source code control system alongside the code that manifests the version-dependency. If your code is enhanced to use a newer version of HyperSQL, you should update the version specification so that whenever code + configs are checked out, the dependency will always match the code.
  • Your organization only allows the use of vetted libraries. In this case, you vigorously maintain your configurations, updating your dependencies and regression testing as soon as new versions of HyperSQL are vetted. To get the best performance and reliability from HyperSQL, you should urge the appropriate parties to vet new versions as soon as they are publicly released.
  • You need precisely reproducible builds.

If none of these situations apply to you, then follow the suggestions in the appropriate sections below. If you need to specify a specific version, follow the instructions in the range-versioning section but change the version range specifications to literal versions like 2.2.9.

Using the HyperSQL Snapshot Repository

Use the Latest & Greatest with Snapshots

The HyperSQL Snapshot repository resides at http://hsqldb.org/repos/

[Note]Limitation of Classifiers

Classifiers are incompatible with real repository snapshots. Builders can only publish one jar variant per product, and at this time our snapshot jars are always built debug-enabled with Java 6.

Where you insert the <repository> element depends on whether you want the definition to be personal, shared, or project-specific, so see the Maven documentation about that. But you can paste this element verbatim:

Example 15.3. HyperSQL Snapshot Repository Definition

    <repository>
      <releases>
        <enabled>false</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
        <updatePolicy>always</updatePolicy>
        <checksumPolicy>fail</checksumPolicy>
      </snapshots>
      <id>hsqldb_snapshots</id>
      <name>HyperSQL Snapshots</name>
      <url>http://hsqldb.org/repos</url>
      <layout>default</layout>
    </repository>

Snapshot Dependency Specification Examples

Example 15.4. Sample Snapshot Ivy Dependency

       <dependency org="org.hsqldb" name="hsqldb" rev="SNAPSHOT" conf="buildOnly"/>

Example 15.5. Sample Snapshot Maven Dependency

        <dependency>
          <groupId>org.hsqldb</groupId>
          <artifactId>hsqldb</artifactId>
          <version>SNAPSHOT</version>
          <!-- Scope defaults to "compile":
          <scope>test</scope>
          -->
        </dependency>

Example 15.6. Sample Snapshot Gradle Dependency

    dependencies.compile (group: 'org.hsqldb', name: 'hsqldb', version:'SNAPSHOT')
    dependencies {
        runtime 'org.hsqldb:hsqldb:SNAPSHOT',
                'org.hsqldb:sqltool:SNAPSHOT'
    }

If you want to use an ivy.xml file with a Gradle build, you will need to use the Ivyxml Gradle Plugin. It just takes a few lines of code in your build.gradle file to hook in ivyxml. See the Ivyxml documentation to see exactly how.

Example 15.7. Sample Snapshot ivy.xml loaded by Ivyxml plugin

    <ivy-module version="2.0">
    ...
    <dependency org="org.hsqldb" name="hsqldb" rev="SNAPSHOT"/>


Example 15.8. Sample Snapshot Groovy Dependency, using Grape

@Grab(group='org.hsqldb', module='hsqldb', version='SNAPSHOT')

Range Versioning

Keeping up-to-date with Range Dependencies

[Note]Limitation of Maven Version Range Specifiers

Note that Ivy (and the many systems that use Ivy underneath, like Grape and Gradle) supports the opening exclusive ] in addition to [, whereas Maven supports only the opening inclusive [ specifier. See the relevant Ivy or Maven documentation for details. There are special cases where you should depend on a specific version instead.

Range Dependency Specification Examples

[Important]Important

For all examples below, when a range pattern is given, it means the latest version equal to or greater than version 2. If a classifier is shown, it is optional and you can skip it to get the default (no-classifier) jar.

Example 15.9. Sample Range Ivy Dependency

       <dependency org="org.hsqldb" name="hsqldb" rev="[2,)" conf="j6->default"/>

I give no example here of specifying a classifier in ivy.xml because I have so far failed to get that to succeed. Classifiers in ivy.xml are supported if using Gradle, as covered below.


Example 15.10. Sample Range Maven Dependency

See note above about Maven range specifications.

        <dependency>
          <groupId>org.hsqldb</groupId>
          <artifactId>hsqldb</artifactId>
          <version>[2,)</version>
          <!-- Scope defaults to "compile":
          <scope>test</scope>
            Use a classifier to pull one of our alternative jars:
          <classifier>jdk5</classifier>
          -->
        </dependency>

Example 15.11. Sample Range Gradle Dependency

    dependencies.compile (group: 'org.hsqldb', name: 'hsqldb', version:'[2,)', classifier: 'jdk5')
    dependencies {
        runtime 'org.hsqldb:hsqldb:[2,):jdk6debug@jar',
                'org.hsqldb:sqltool:[2,):jdk6debug@jar'
    }

If you want to use an ivy.xml file with a Gradle build, you will need to use the Ivyxml Gradle Plugin. It just takes a few lines of code in your build.gradle file to hook in ivyxml. See the Ivyxml documentation to see exactly how.

Example 15.12. Sample Range ivy.xml loaded by Ivyxml plugin

    <ivy-module version="2.0" xmlns:m="http://ant.apache.org/ivy/maven">
    ...
    <dependency org="org.hsqldb" name="hsqldb" rev="[2,)" m:classifier="jdk5"/>


Example 15.13. Sample Range Groovy Dependency, using Grape

@Grab(group='org.hsqldb', module='hsqldb', version='[2,)', classifier='jdk6debug')


$Revision: 4864 $