Chapter 2. SQL Language

Fred Toussi

The HSQL Development Group

$Revision: 5039 $

Copyright 2002-2012 Fred Toussi. Permission is granted to distribute this document without any alteration under the terms of the HSQLDB license. Additional permission is granted to the HSQL Development Group to distribute this document with or without alterations under the terms of the HSQLDB license.

2012-08-06 00:10:58+0100

Table of Contents

Standards Support
SQL Data and Tables
Temporary Tables
Persistent Tables
Lob Data
Short Guide to Data Types
Data Types and Operations
Numeric Types
Boolean Type
Character String Types
Binary String Types
Bit String Types
Storage and Handling of Java Objects
Type Length, Precision and Scale
Datetime types
Interval Types
Arrays
Array Definition
Array Reference
Array Operations
Indexes and Query Speed
Query Processing and Optimisation
Indexes and Conditions
Indexes and Operations
Indexes and ORDER BY, OFFSET and LIMIT

Standards Support

HyperSQL 2.0 supports the dialect of SQL defined by SQL standards 92, 1999, 2003 and 2008. This means where a feature of the standard is supported, e.g. left outer join, the syntax is that specified by the standard text. Almost all syntactic features of SQL-92 up to Advanced Level are supported, as well as SQL:2008 core and many optional features of this standard. Work is in progress for a formal declaration of conformance.

At the time of this release, HyperSQL supports the widest range of SQL standard features among all open source RDBMS.

Various chapters of this guide list the supported syntax. When writing or converting existing SQL DDL (Data Definition Language), DML (Data Manipulation Language) or DQL (Data Query Language) statements for HSQLDB, you should consult the supported syntax and modify the statements accordingly. Some statements written for older versions may have to be modified.

Over 300 words are reserved by the standard and should not be used as table or column names. For example, the word POSITION is reserved because it is a function defined by the Standard with a role similar to String.indexOf() in Java. HyperSQL does not currently prevent you from using a reserved word if it does not support its use or can distinguish it. For example, CUBE is a reserved word that is not currently supported by HyperSQL and is allowed as a table or column name. You should avoid using such names, as future versions of HyperSQL are likely to support the reserved words and may reject your table definitions or queries. The full list of SQL reserved words is in the appendix Lists of Keywords.

If you have to use a reserved keyword as the name of a database object, you can enclose it in double quotes.
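
For example, a minimal sketch using the reserved word POSITION as a quoted column name (the table name and the other column are arbitrary):

    CREATE TABLE jobs (id INTEGER, "POSITION" VARCHAR(20));
    SELECT "POSITION" FROM jobs;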

HyperSQL also supports enhancements with keywords and expressions that are not part of the SQL standard. Expressions such as SELECT TOP 5 FROM .., SELECT LIMIT 0 10 FROM ... or DROP TABLE mytable IF EXISTS are among such constructs.

Many print books cover SQL Standard syntax and can be consulted. For a well-written basic guide to SQL with examples, you can also consult PostgreSQL: Introduction and Concepts by Bruce Momjian, which is available on the web. Most of the core SQL coverage in the book applies also to HyperSQL. There are some differences in keywords supported by one and not the other engine (OUTER, OID's, etc.) or used differently (IDENTITY/SERIAL, TRIGGER, SEQUENCE, etc.).

In HyperSQL version 2.0, all features of JDBC4 that apply to the capabilities of HSQLDB are fully supported. The relevant JDBC classes are thoroughly documented with additional clarifications and HyperSQL specific comments. See the JavaDoc for the org.hsqldb.jdbc.* classes.

SQL Data and Tables

In an SQL system, all significant data is stored in tables and sequence generators. Therefore, the first step in creating a database is defining the tables and their columns. The SQL standard supports temporary tables, which are for temporary data, and permanent base tables, which are for persistent data.

Temporary Tables

Data in TEMPORARY tables is not saved and lasts only for the lifetime of the session. The contents of each TEMP table are visible only from the session that is used to populate it.

HyperSQL supports two types of temporary tables.

The GLOBAL TEMPORARY type is a schema object. It is created with the CREATE GLOBAL TEMPORARY TABLE statement. The definition of the table persists, and each session has access to the table. But each session sees its own copy of the table, which is empty at the beginning of the session.

The LOCAL TEMPORARY type is not a schema object. It is created with the DECLARE LOCAL TEMPORARY TABLE statement. The table definition lasts only for the duration of the session and is not persisted in the database. The table can be declared in the middle of a transaction without committing the transaction.

When the session commits, the contents of all temporary tables are cleared by default. If the table definition statement includes ON COMMIT PRESERVE ROWS, then the contents are kept when a commit takes place.
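
For example, the two kinds of temporary table described above could be created as follows (the table and column names are illustrative only):

    CREATE GLOBAL TEMPORARY TABLE session_totals (id INT, total DECIMAL(10,2)) ON COMMIT PRESERVE ROWS;
    DECLARE LOCAL TEMPORARY TABLE temp_ids (id INT);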

The rows in temporary tables are stored in memory by default. If the hsqldb.result_max_memory_rows setting ( SET SESSION RESULT MEMORY ROWS <row count> ) has been specified, tables with a row count above the setting are stored on disk.

Persistent Tables

HyperSQL supports the Standard definition of persistent base tables, but defines three types according to the way the data is stored. These are MEMORY tables, CACHED tables and TEXT tables.

Memory tables are the default type when the CREATE TABLE command is used. Their data is held entirely in memory but any change to their structure or contents is written to the *.log and *.script files. The *.script file and the *.log file are read the next time the database is opened, and the MEMORY tables are recreated with all their contents. So unlike TEMPORARY tables, MEMORY tables are persistent. When the database is opened, all the data for the memory tables is read and inserted. This process may take a long time if the database is larger than tens of megabytes. When the database is shut down, all the data is saved. This can also take a long time.

CACHED tables are created with the CREATE CACHED TABLE command. Only part of their data or indexes is held in memory, allowing large tables that would otherwise take up to several hundred megabytes of memory. Another advantage of cached tables is that the database engine takes less time to start up when a cached table is used for large amounts of data. The disadvantage of cached tables is a reduction in speed. Do not use cached tables if your data set is relatively small. In an application with some small tables and some large ones, it is better to use the default, MEMORY mode for the small tables.
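
For example, a table expected to hold a large amount of data could be declared as CACHED (the names and types below are illustrative only):

    CREATE CACHED TABLE invoices (id BIGINT PRIMARY KEY, customer_id INT, total DECIMAL(12,2));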

TEXT tables use a CSV (Comma Separated Value) or other delimited text file as the source of their data. You can specify an existing CSV file, such as a dump from another database or program, as the source of a TEXT table. Alternatively, you can specify an empty file to be filled with data by the database engine. TEXT tables are efficient in memory usage as they cache only part of the text data and all of the indexes. The Text table data source can always be reassigned to a different file if necessary. The commands required to set up a TEXT table are detailed in the Text Tables chapter.

With all-in-memory databases, both MEMORY table and CACHED table declarations are treated as declarations for non-persistent memory tables. In the latest versions of HyperSQL, TEXT table declarations are allowed in all-in-memory databases.

The default type of tables resulting from future CREATE TABLE statements can be specified with the SQL command:

    SET DATABASE DEFAULT TABLE TYPE { CACHED | MEMORY };

The type of an existing table can be changed with the SQL command:

    SET TABLE <table name> TYPE { CACHED | MEMORY };

SQL statements access different types of tables uniformly. No change to statements is needed to access different types of table.

Lob Data

Lobs are logically stored in columns of tables. Their physical storage is a separate *.lobs file. This file is created as soon as a BLOB or CLOB is inserted into the database. The file will grow as new lobs are inserted into the database. In version 2.x, the *.lobs file is never deleted even if all lobs are deleted from the database (In this case you can delete the .lobs file after a SHUTDOWN).

Short Guide to Data Types

Most other RDBMS do not conform to the SQL Standard in all areas, but they are gradually moving towards Standard conformance. When switching from another SQL dialect, the following should be considered:

  • Numeric types TINYINT, SMALLINT, INTEGER and BIGINT are types with fixed binary precision. These types are more efficient to store and retrieve. NUMERIC and DECIMAL are types with user-defined decimal precision. They can be used with zero scale to store very large integers, or with a non-zero scale to store decimal fractions. The DOUBLE type is a 64 bit, approximate floating point type. HyperSQL even allows you to store infinity in this type.

  • The BOOLEAN type is for logical values and can hold TRUE, FALSE or UNKNOWN. Although HyperSQL allows you to use one and zero in assignment or comparison, you should use the standard values for this type.

  • Character string types are CHAR(L), VARCHAR(L) and CLOB. CHAR is for fixed width strings and any string that is assigned to this type is padded with spaces at the end. Do not use this type for general storage of strings. If you use CHAR without the length L, then it is interpreted as a single character string. Use VARCHAR(L) for general strings. There are only memory limits and performance implications for the maximum length of VARCHAR(L). If the strings are larger than a few kilobytes, consider using CLOB. The CLOB type is for very large strings; do not use it for short strings as there are performance implications. By default LONGVARCHAR is a synonym for a long VARCHAR and can be used without specifying the size. You can set LONGVARCHAR to map to CLOB, with the sql.longvar_is_lob connection property or the SET DATABASE SQL LONGVAR IS LOB TRUE statement.

  • Binary string types are BINARY(L), VARBINARY(L) and BLOB. Do not use BINARY(L) unless you are storing keys such as UUID. This type pads short binary strings with zero bytes. BINARY without the length L means a single byte. Use VARBINARY(L) for general binary strings, and BLOB for large binary objects. You should apply the same considerations as with the character string types. By default LONGVARBINARY is a synonym for a long VARBINARY and can be used without specifying the size. You can set LONGVARBINARY to map to BLOB, with the sql.longvar_is_lob connection property or the SET DATABASE SQL LONGVAR IS LOB TRUE statement.

  • The BIT(L) and BITVARYING(L) types are for bit maps. Do not use them for other types of data. BIT without the length L argument means a single bit and is sometimes used as a logical type. Use BOOLEAN instead of this type.

  • The datetime types DATE, TIME and TIMESTAMP, together with their WITH TIME ZONE variations are available. Read the details in this chapter on how to use these types.

  • The INTERVAL type is very powerful when used together with the datetime types. This is very easy to use, but is supported mainly by "big iron" database systems. Note that functions that add days or months to datetime values are not really a substitute for the INTERVAL type. Expressions such as (datecol - 7 DAY) > CURRENT_DATE are optimised to use indexes when it is possible, while the equivalent function calls are not optimised.

  • The OTHER type is for storage of Java objects. If your objects are large, serialize them in your application and store them as BLOB in the database.

  • The ARRAY type supports all base types except LOB and OTHER types. ARRAY data objects are held in memory while being processed. It is therefore not recommended to store more than about a thousand objects in an ARRAY in normal operations with disk based databases. For specialised applications, use ARRAY with as many elements as your memory allocation can support.

HyperSQL 2.2.x has several compatibility modes which allow the type names that are used by other RDBMS to be accepted and translated into the closest SQL Standard type. For example, the type TEXT, supported by MySQL and PostgreSQL, is translated in these compatibility modes.
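
As a rough illustration of the guidelines above, a table definition might combine these types as follows (all names and sizes are arbitrary):

    CREATE TABLE customer (
        id BIGINT PRIMARY KEY,
        name VARCHAR(100),
        is_active BOOLEAN,
        balance DECIMAL(12,2),
        photo BLOB(1M),
        created TIMESTAMP
    );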

Data Types and Operations

HyperSQL supports all the types defined by SQL-92, plus BOOLEAN, BINARY and LOB types that were added later to the SQL Standard. It also supports the non-standard OTHER type to store serializable Java objects.

SQL is a strongly typed language. All data stored in specific columns of tables and other objects (such as sequence generators) have specific types. Each data item conforms to the type limits such as precision and scale for the column. It also conforms to any additional integrity constraints that are defined as CHECK constraints in domains or tables. Types can be explicitly converted using the CAST expression, but in most expressions they are converted automatically.

Data is returned to the user (or the application program) as a result of executing SQL statements such as query expressions or function calls. All statements are compiled prior to execution and the return type of the data is known after compilation and before execution. Therefore, once a statement is prepared, the data type of each column of the returned result is known, including any precision or scale property. The type does not change when the same query that returned one row, returns many rows as a result of adding more data to the tables.

Some SQL functions used within SQL statements are polymorphic, but the exact type of the argument and the return value is determined at compile time.

When a statement is prepared, using a JDBC PreparedStatement object, it is compiled by the engine and the type of the columns of its ResultSet and / or its parameters are accessible through the methods of PreparedStatement.

Numeric Types

TINYINT, SMALLINT, INTEGER, BIGINT, NUMERIC and DECIMAL (without a decimal point) are the supported integral types. They correspond respectively to byte, short, int, long, BigDecimal and BigDecimal Java types in the range of values that they can represent (NUMERIC and DECIMAL are equivalent). The type TINYINT is an HSQLDB extension to the SQL Standard, while the others conform to the Standard definition. The SQL type dictates the maximum and minimum values that can be held in a field of each type. For example the value range for TINYINT is -128 to +127. The bit precision of TINYINT, SMALLINT, INTEGER and BIGINT is respectively 8, 16, 32 and 64. For NUMERIC and DECIMAL, decimal precision is used.

DECIMAL and NUMERIC with decimal fractions are mapped to java.math.BigDecimal and can have very large numbers of digits. In HyperSQL the two types are equivalent. These types, together with integral types, are called exact numeric types.

In HyperSQL, REAL, FLOAT, DOUBLE are equivalent and all mapped to double in Java. These types are defined by the SQL Standard as approximate numeric types. The bit-precision of all these types is 64 bits.

The decimal precision and scale of NUMERIC and DECIMAL types can be optionally defined. For example, DECIMAL(10,2) means maximum total number of digits is 10 and there are always 2 digits after the decimal point, while DECIMAL(10) means 10 digits without a decimal point. The bit-precision of FLOAT can also be defined, but in this case, it is ignored and the default bit-precision of 64 is used. The default precision of NUMERIC and DECIMAL (when not defined) is 100.
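
For illustration, a column for monetary amounts might be declared with explicit precision and scale (the table and column names are arbitrary):

    CREATE TABLE prices (item_id INT, price DECIMAL(10,2));
    INSERT INTO prices VALUES (1, 12345678.90); /* at most 10 digits in total, always 2 after the decimal point */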

Note: If a database has been set to ignore type precision limits with the SET DATABASE SQL SIZE FALSE command, then a type definition of DECIMAL with no precision and scale is treated as DECIMAL(100,10). In normal operation, it is treated as DECIMAL(100).

Integral Types

In expressions, TINYINT, SMALLINT, INTEGER, BIGINT, NUMERIC and DECIMAL (without a decimal point) are fully interchangeable, and no data narrowing takes place.

If the SELECT statement refers to a simple column or function, then the return type is the type corresponding to the column or the return type of the function. For example:

    CREATE TABLE t(a INTEGER, b BIGINT);
    SELECT MAX(a), MAX(b) FROM t;

will return a ResultSet where the type of the first column is java.lang.Integer and the second column is java.lang.Long. However,

    SELECT MAX(a) + 1, MAX(b) + 1 FROM t;

will return java.lang.Long and BigDecimal values, generated as a result of uniform type promotion for all the return values. Note that type promotion to BigDecimal ensures the correct value is returned if MAX(b) evaluates to Long.MAX_VALUE.

There is no built-in limit on the size of intermediate integral values in expressions. As a result, you should check for the type of the ResultSet column and choose an appropriate getXXXX() method to retrieve it. Alternatively, you can use the getObject() method, then cast the result to java.lang.Number and use the intValue() or longValue() methods on the result.

When the result of an expression is stored in a column of a database table, it has to fit in the target column, otherwise an error is returned. For example when 1234567890123456789012 / 12345687901234567890 is evaluated, the result can be stored in any integral type column, even a TINYINT column, as it is a small value.

In SQL Statements, an integer literal is treated as INTEGER, unless its value does not fit. In this case it is treated as BIGINT or DECIMAL, depending on the value.

Depending on the types of the operands, the result of the operations is returned in a JDBC ResultSet in any of related Java types: Integer, Long or BigDecimal. The ResultSet.getXXXX() methods can be used to retrieve the values so long as the returned value can be represented by the resulting type. This type is deterministically based on the query, not on the actual rows returned.

Other Numeric Types

In SQL statements, number literals with a decimal point are treated as DECIMAL unless they are written with an exponent. Thus 0.2 is considered a DECIMAL value but 0.2E0 is considered a DOUBLE value.

When an approximate numeric type, REAL, FLOAT or DOUBLE (all synonymous) is part of an expression involving different numeric types, the type of the result is DOUBLE. DECIMAL values can be converted to DOUBLE unless they are beyond the Double.MIN_VALUE - Double.MAX_VALUE range. For example, A * B, A / B, A + B, etc. will return a DOUBLE value if either A or B is a DOUBLE.

Otherwise, when no DOUBLE value exists, if a DECIMAL or NUMERIC value is part of an expression, the type of the result is DECIMAL or NUMERIC. Similar to integral values, when the result of an expression is assigned to a table column, the value has to fit in the target column, otherwise an error is returned. This means a small, 4 digit value of DECIMAL type can be assigned to a column of SMALLINT or INTEGER, but a value with 15 digits cannot.

When a DECIMAL value is multiplied by a DECIMAL or integral type, the resulting scale is the sum of the scales of the two terms. When they are divided, the result is a value with a scale (number of digits to the right of the decimal point) equal to the larger of the scales of the two terms. The precision for both operations is calculated (usually increased) to allow all possible results.

The distinction between DOUBLE and DECIMAL is important when a division takes place. For example, 10.0/8.0 (DECIMAL) equals 1.2 but 10.0E0/8.0E0 (DOUBLE) equals 1.25. Without division operations, DECIMAL values represent exact arithmetic.

REAL, FLOAT and DOUBLE values are all stored in the database as java.lang.Double objects. Special values such as NaN and +-Infinity are also stored and supported. These values can be submitted to the database via JDBC PreparedStatement methods and are returned in ResultSet objects. In order to allow division by zero of DOUBLE values in SQL statements (which returns NaN or +-Infinity) you should set the property hsqldb.double_nan as false (SET DATABASE SQL DOUBLE NAN FALSE). The double values can be retrieved from a ResultSet in the required type so long as they can be represented. For setting the values, when PreparedStatement.setDouble() or setFloat() is used, the value is treated as a DOUBLE automatically.

In short,

<numeric type> ::= <exact numeric type> | <approximate numeric type>

<exact numeric type> ::= NUMERIC [ <left paren> <precision> [ <comma> <scale> ] <right paren> ] | { DECIMAL | DEC } [ <left paren> <precision> [ <comma> <scale> ] <right paren> ] | SMALLINT | INTEGER | INT | BIGINT

<approximate numeric type> ::= FLOAT [ <left paren> <precision> <right paren> ] | REAL | DOUBLE PRECISION

<precision> ::= <unsigned integer>

<scale> ::= <unsigned integer>

Boolean Type

The BOOLEAN type conforms to the SQL Standard and represents the values TRUE, FALSE and UNKNOWN. This type of column can be initialised with Java boolean values, or with NULL for the UNKNOWN value.

The three-value logic is sometimes misunderstood. For example, x IN (1, 2, NULL) does not return true if x is NULL.

In previous versions of HyperSQL, BIT was simply an alias for BOOLEAN. In version 2.0, BIT is a single-bit bit map.

<boolean type> ::= BOOLEAN

The SQL Standard does not support type conversion to BOOLEAN apart from character strings that consist of boolean literals. Because the BOOLEAN type is relatively new to the Standard, several database products used other types to represent boolean values. For improved compatibility, HyperSQL allows some type conversions to boolean.

Values of BIT and BIT VARYING types with length 1 can be converted to BOOLEAN. If the bit is set, the result of conversion is the TRUE value, otherwise it is FALSE.

Values of TINYINT, SMALLINT, INTEGER and BIGINT types can be converted to BOOLEAN. If the value is zero, the result is the FALSE value, otherwise it is TRUE.
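
The following conversions illustrate these rules (a minimal sketch):

CAST(1 AS BOOLEAN)     /* TRUE */
CAST(0 AS BOOLEAN)     /* FALSE */
CAST(B'1' AS BOOLEAN)  /* TRUE */
CAST(B'0' AS BOOLEAN)  /* FALSE */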

Character String Types

The CHARACTER, CHARACTER VARYING and CLOB types are the SQL Standard character string types. CHAR, VARCHAR and CHARACTER LARGE OBJECT are synonyms for these types. HyperSQL also supports LONGVARCHAR as a synonym for VARCHAR. If LONGVARCHAR is used without a length, then a length of 16M is assigned. You can set LONGVARCHAR to map to CLOB, with the sql.longvar_is_lob connection property or the SET DATABASE SQL LONGVAR IS LOB TRUE statement.

HyperSQL's default character set is Unicode, therefore all possible character strings can be represented by these types.

The SQL Standard behaviour of the CHARACTER type is a remnant of legacy systems in which character strings are padded with spaces to fill a fixed width. These spaces are sometimes significant while in other cases they are silently discarded. It would be best to avoid the CHARACTER type altogether. With the rest of the types, the strings are not padded when assigned to columns or variables of the given type. The trailing spaces are still considered discardable for all character types. Therefore if a string with trailing spaces is too long to assign to a column or variable of a given length, the spaces beyond the type length are discarded and the assignment succeeds (provided all the characters beyond the type length are spaces).

The VARCHAR and CLOB types have length limits, but the strings are not padded by the system. Note that if you use a large length for a VARCHAR or CLOB type, no extra space is used in the database. The space used for each stored item is proportional to its actual length.

If CHARACTER is used without specifying the length, the length defaults to 1. For the CLOB type, the length limit can be defined in units of kilobyte (K, 1024), megabyte (M, 1024 * 1024) or gigabyte (G, 1024 * 1024 * 1024), using the <multiplier>. If CLOB is used without specifying the length, the length defaults to 16M.

<character string type> ::= { CHARACTER | CHAR } [ <left paren> <character length> <right paren> ] | { CHARACTER VARYING | CHAR VARYING | VARCHAR } <left paren> <character length> <right paren> | LONGVARCHAR [ <left paren> <character length> <right paren> ] | <character large object type>

<character large object type> ::= { CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB } [ <left paren> <character large object length> <right paren> ]

<character length> ::= <unsigned integer> [ <char length units> ]

<large object length> ::= <length> [ <multiplier> ] | <large object length token>

<character large object length> ::= <large object length> [ <char length units> ]

<large object length token> ::= <digit>... <multiplier>

<multiplier> ::= K | M | G

<char length units> ::= CHARACTERS | OCTETS

CHAR(10)
CHARACTER(10)
VARCHAR(2)
CHAR VARYING(2)
CLOB(1000)
CLOB(30K)
CHARACTER LARGE OBJECT(1M)
LONGVARCHAR

Binary String Types

The BINARY, BINARY VARYING and BLOB types are the SQL Standard binary string types. VARBINARY and BINARY LARGE OBJECT are synonyms for BINARY VARYING and BLOB types. HyperSQL also supports LONGVARBINARY as a synonym for VARBINARY. You can set LONGVARBINARY to map to BLOB, with the sql.longvar_is_lob connection property or the SET DATABASE SQL LONGVAR IS LOB TRUE statement.

Binary string types are used in a similar way to character string types. There are several built-in functions that are overloaded to support character, binary and bit strings.

The BINARY type represents a fixed-width string. Each shorter string is padded with zeros to fill the fixed width. Similar to the CHARACTER type, the trailing zeros in the BINARY string are simply discarded in some operations. For the same reason, it is best to avoid this particular type and use VARBINARY instead.

When two binary values are compared, if one is of BINARY type, then zero padding is performed to extend the length of the shorter string to the longer one before comparison. No padding is performed with other binary types. If the bytes compare equal to the end of the shorter value, then the longer string is considered larger than the shorter string.

If BINARY is used without specifying the length, the length defaults to 1. For the BLOB type, the length limit can be defined in units of kilobyte (K, 1024), megabyte (M, 1024 * 1024) or gigabyte (G, 1024 * 1024 * 1024), using the <multiplier>. If BLOB is used without specifying the length, the length defaults to 16M.

<binary string type> ::= BINARY [ <left paren> <length> <right paren> ] | { BINARY VARYING | VARBINARY } <left paren> <length> <right paren> | LONGVARBINARY [ <left paren> <length> <right paren> ] | <binary large object string type>

<binary large object string type> ::= { BINARY LARGE OBJECT | BLOB } [ <left paren> <large object length> <right paren> ]

<length> ::= <unsigned integer>

BINARY(10)
VARBINARY(2)
BINARY VARYING(2)
BLOB(1000)
BLOB(30K)
BINARY LARGE OBJECT(1M)
LONGVARBINARY

Bit String Types

The BIT and BIT VARYING types are the supported bit string types. These types were defined by SQL:1999 but were later removed from the Standard. Bit types represent bit maps of given lengths. Each bit is 0 or 1. The BIT type represents a fixed-width string. Each shorter string is padded with zeros to fill the fixed width. If BIT is used without specifying the length, the length defaults to 1. The BIT VARYING type has a maximum width and shorter strings are not padded.

Before the introduction of the BOOLEAN type to the SQL Standard, a single-bit string of the type BIT(1) was commonly used. For compatibility with other products that do not conform to, or extend, the SQL Standard, HyperSQL allows values of BIT and BIT VARYING types with length 1 to be converted to and from the BOOLEAN type. BOOLEAN TRUE is considered equal to B'1', BOOLEAN FALSE is considered equal to B'0'.

For the same reason, numeric values can be assigned to columns and variables of the type BIT(1). For assignment, the numeric value zero is converted to B'0', while all other values are converted to B'1'. For comparison, numeric values 1 is considered equal to B'1' and numeric value zero is considered equal to B'0'.
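
As an illustration, numeric values can be assigned to and compared with a BIT(1) column (the table and column names below are hypothetical):

CREATE TABLE flags (id INT, flag BIT(1))
INSERT INTO flags VALUES (1, 1)       /* the numeric 1 is converted to B'1' */
SELECT id FROM flags WHERE flag = 0   /* the numeric 0 is compared as B'0' */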

Other arithmetic or boolean operations involving BIT(1) and BIT VARYING(1) are not allowed. The operations allowed on bit strings are analogous to those allowed on BINARY and CHARACTER strings. Several built-in functions support all three types of string.

<bit string type> ::= BIT [ <left paren> <length> <right paren> ] | BIT VARYING <left paren> <length> <right paren>

BIT
BIT(10)
BIT VARYING(2)

Storage and Handling of Java Objects

Any serializable JAVA Object can be inserted directly into a column of type OTHER using any variation of PreparedStatement.setObject() methods.

For comparison purposes and in indexes, any two Java Objects are considered equal unless one of them is NULL. You cannot search for a specific object or perform a join on a column of type OTHER.

Please note that HSQLDB is not an object-relational database. Java Objects can simply be stored internally and no operations should be performed on them other than assignment between columns of type OTHER or tests for NULL. Tests such as WHERE object1 = object2 do not mean what you might expect, as any non-null object would satisfy such a test. But WHERE object1 IS NOT NULL is perfectly acceptable.

The engine does not allow normal column values to be assigned to Java Object columns (for example, assigning an INTEGER or STRING to such a column with an SQL statement such as UPDATE mytable SET objectcol = intcol WHERE ...).
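
A minimal sketch of declaring and querying an OTHER column (the table and column names are arbitrary; the object values themselves have to be supplied from JDBC with setObject()):

CREATE TABLE objstore (id INT PRIMARY KEY, obj OTHER)
SELECT id FROM objstore WHERE obj IS NOT NULL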

<java object type> ::= OTHER

Type Length, Precision and Scale

In older versions of HyperSQL, all table column type definitions with a column length, precision or scale qualifier were accepted and ignored. HSQLDB 1.8 enforced correctness but included an option to enforce the length, precision or scale.

In HyperSQL 2.0, length, precision and scale qualifiers are always enforced. For backward compatibility, when older databases which had the property hsqldb.enforce_strict_size=false are converted to version 2.0, this property is retained. However, this is a temporary measure. You should test your application to ensure the length, precision and scale that is used for column definitions is appropriate for the application data. You can test with the default database setting, which enforces the sizes.

String types, including all BIT, BINARY and CHAR string types plus CLOB and BLOB, are generally defined with a length. If no length is specified for BIT, BINARY and CHAR, the default length is 1. For CLOB and BLOB an implementation defined length of 1M is used.

TIME and TIMESTAMP types can be defined with a fractional second precision between 0 and 9. INTERVAL type definition may have precision and, in some cases, fraction second precision. DECIMAL and NUMERIC types may be defined with precision and scale. For all of these types a default precision or scale value is used if one is not specified. The default scale is 0. The default fractional precision for TIME is 0, while it is 6 for TIMESTAMP.

Values can be converted from one type to another in two different ways: by using explicit CAST expression or by implicit conversion used in assignment, comparison and aggregation.

String values cannot be assigned to VARCHAR columns if they are longer than the defined type length. For CHARACTER columns, a long string can be assigned (with truncation) only if all the characters after the length are spaces. Shorter strings are padded with the space character when inserted into a CHARACTER column. Similar rules are applied to VARBINARY and BINARY columns. For BINARY columns, the padding and truncation rules are applied with zero bytes, instead of spaces.
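
The following hypothetical example illustrates these padding and truncation rules for CHARACTER and VARCHAR columns:

CREATE TABLE names (c CHAR(4), v VARCHAR(4))
INSERT INTO names VALUES ('ab', 'ab')         /* 'ab' is padded to 'ab  ' in the CHAR(4) column and stored as 'ab' in the VARCHAR(4) column */
INSERT INTO names VALUES ('abcd  ', 'abcd  ') /* trailing spaces beyond length 4 are discarded and 'abcd' is stored in both columns */
INSERT INTO names VALUES ('abcde', 'abcde')   /* fails with a data exception, as non-space characters would be lost */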

Explicit CAST of a value to a CHARACTER or VARCHAR type will result in forced truncation or padding. So a test such as CAST (mycol AS VARCHAR(2)) = 'xy' will find the values beginning with 'xy'. This is the equivalent of SUBSTRING(mycol FROM 1 FOR 2) = 'xy'.

For all numeric types, the rules of explicit cast and implicit conversion are the same. If a cast or conversion causes any digits to be lost from the fractional part, it can still take place. If the non-fractional part of the value cannot be represented in the new type, the cast or conversion cannot take place and results in a data exception.

There are special rules for DATE, TIME, TIMESTAMP and INTERVAL casts and conversions.

Datetime types

HSQLDB fully supports datetime and interval types and operations, including all relevant optional features, as specified by the SQL Standard since SQL-92. The two groups of types are complementary.

The DATE type represents a calendar date with YEAR, MONTH and DAY fields.

The TIME type represents time of day with HOUR, MINUTE and SECOND fields, plus an optional SECOND FRACTION field.

The TIMESTAMP type represents the combination of DATE and TIME types.

TIME and TIMESTAMP types can include WITH TIME ZONE or WITHOUT TIME ZONE (the default) qualifiers. They can have fractional second parts. For example, TIME(6) has six fractional digits for the second field.

If fractional second precision is not specified, it defaults to 0 for TIME and to 6 for TIMESTAMP.

<datetime type> ::= DATE | TIME [ <left paren> <time precision> <right paren> ] [ <with or without time zone> ] | TIMESTAMP [ <left paren> <timestamp precision> <right paren> ] [ <with or without time zone> ]

<with or without time zone> ::= WITH TIME ZONE | WITHOUT TIME ZONE

<time precision> ::= <time fractional seconds precision>

<timestamp precision> ::= <time fractional seconds precision>

<time fractional seconds precision> ::= <unsigned integer>

DATE
TIME(6)
TIMESTAMP(2) WITH TIME ZONE

Examples of the string literals used to represent date time values, some with time zone, some without, are below:

DATE '2008-08-22'
TIMESTAMP '2008-08-08 20:08:08'
TIMESTAMP '2008-08-08 20:08:08+8:00' /* Beijing */
TIME '20:08:08.034900'
TIME '20:08:08.034900-8:00' /* US Pacific */

Time Zone

DATE values do not take time zones. For example, the United Nations designates 5 June as World Environment Day, which was observed on DATE '2008-06-05' in different time zones.

TIME and TIMESTAMP values without time zone usually have a context that indicates some local time zone. For example, a database for college course timetables usually stores class dates and times without time zones. This works because the location of the college is fixed and the time zone displacement is the same for all the values. Even when the events take place in different time zones, for example international flight times, it is possible to store all the datetime information as references to a single time zone, usually GMT. For some databases it may be useful to store the time zone displacement together with each datetime value. SQL’s TIME WITH TIME ZONE and TIMESTAMP WITH TIME ZONE values include a time zone displacement value.

The time zone displacement is of the type INTERVAL HOUR TO MINUTE. This data type is described in the next section. The legal values are between '-14:00' and '+14:00'.

Operations on Datetime Types

The expression <datetime expression> AT TIME ZONE <time displacement> evaluates to a datetime value representing exactly the same point of time in the specified <time displacement>. The expression, AT LOCAL is equivalent to AT TIME ZONE <local time displacement>. If AT TIME ZONE is used with a datetime operand of type WITHOUT TIME ZONE, the operand is first converted to a value of type WITH TIME ZONE at the session’s time displacement, then the specified time zone displacement is set for the value. Therefore, in these cases, the final value depends on the time zone of the session in which the statement was used.

AT TIME ZONE modifies the field values of the datetime operand. This is done by the following procedure:

  1. determine the corresponding datetime at UTC.

  2. find the datetime value at the given time zone that corresponds with the UTC value from step 1.

Example a:

TIME '12:00:00' AT TIME ZONE INTERVAL '1:00' HOUR TO MINUTE

If the session’s time zone displacement is '-8:00', then in step 1, TIME '12:00:00' is converted to UTC, which is TIME '20:00:00+0:00'. In step 2, this value is expressed as TIME '21:00:00+1:00'.

Example b:

TIME '12:00:00-5:00' AT TIME ZONE INTERVAL '1:00' HOUR TO MINUTE

Because the operand has a time zone, the result is independent of the session’s time zone displacement. Step 1 results in TIME '17:00:00+0:00', and step 2 results in TIME '18:00:00+1:00'.

Note that the operand is not limited to datetime literals used in these examples. Any valid expression that evaluates to a datetime value can be the operand.

Type Conversion

CAST is used for all other conversions. Examples:

CAST (<value> AS TIME WITHOUT TIME ZONE)
CAST (<value> AS TIME WITH TIME ZONE)

In the first example, if <value> has a time zone component, it is simply dropped. For example, TIME '12:00:00-5:00' is converted to TIME '12:00:00'.

In the second example, if <value> has no time zone component, the current time zone displacement of the session is added. For example TIME '12:00:00' is converted to TIME '12:00:00-8:00' when the session time zone displacement is '-8:00'.

Conversion between DATE and TIMESTAMP is performed by removing the TIME component of a TIMESTAMP value or by setting the hour, minute and second fields to zero. TIMESTAMP '2008-08-08 20:08:08+8:00' becomes DATE '2008-08-08', while DATE '2008-08-22' becomes TIMESTAMP '2008-08-22 00:00:00'.

Conversion between TIME and TIMESTAMP is performed by removing the DATE field values of a TIMESTAMP value or by appending the fields of the TIME value to the fields of the current session date value.
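
For illustration, the conversions described above can be written as CAST expressions:

CAST(TIMESTAMP '2008-08-08 20:08:08' AS DATE)  /* DATE '2008-08-08' */
CAST(DATE '2008-08-22' AS TIMESTAMP)           /* TIMESTAMP '2008-08-22 00:00:00' */
CAST(TIMESTAMP '2008-08-08 20:08:08' AS TIME)  /* TIME '20:08:08' */
CAST(TIME '20:08:08' AS TIMESTAMP)             /* the current session date combined with the given time */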

Assignment

When a value is assigned to a datetime target, e.g., a value is used to update a row of a table, the type of the value must be the same as the target, but the WITH TIME ZONE or WITHOUT TIME ZONE characteristics can be different. If the types are not the same, an explicit CAST must be used to convert the value into the target type.

Comparison

When values WITH TIME ZONE are compared, they are converted to UTC values before comparison. If a value WITH TIME ZONE is compared to another WITHOUT TIME ZONE, then the WITH TIME ZONE value is converted to AT LOCAL, then converted to WITHOUT TIME ZONE before comparison.

It is not recommended to design applications that rely on comparisons and conversions between TIME values WITH TIME ZONE. The conversions may involve normalisation of the time value, resulting in unexpected results. For example, the expression: BETWEEN(TIME '12:00:00-8:00', TIME '22:00:00-8:00') is converted to BETWEEN(TIME '20:00:00+0:00', TIME '06:00:00+0:00') when it is evaluated in the UTC zone, which is always FALSE.

Functions

Several functions return the current session timestamp in different datetime types:

CURRENT_DATE         DATE
CURRENT_TIME         TIME WITH TIME ZONE
CURRENT_TIMESTAMP    TIMESTAMP WITH TIME ZONE
LOCALTIME            TIME WITHOUT TIME ZONE
LOCALTIMESTAMP       TIMESTAMP WITHOUT TIME ZONE

Session Time Zone Displacement

When an SQL session is started (with a JDBC connection) the local time zone of the client JVM (including any seasonal time adjustments such as daylight saving time) is used as the session time zone displacement. Note that the SQL session time displacement is not changed when a seasonal time adjustment takes place while the session is open. To change the SQL session time zone displacement use the following commands:

SET TIME ZONE <time displacement>

SET TIME ZONE LOCAL

The first command sets the displacement to the given value. The second command restores the original, real time zone displacement of the session.
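
For example, the following statements set an explicit displacement and then restore the session's real time zone displacement (the displacement value shown is arbitrary):

SET TIME ZONE INTERVAL '-8:00' HOUR TO MINUTE
SET TIME ZONE LOCAL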

Datetime Values and Java

When datetime values are sent to the database using the PreparedStatement or CallableStatement interfaces, the Java object is converted to the type of the prepared or callable statement parameter. This type may be DATE, TIME, or TIMESTAMP (with or without time zone). The time zone displacement is the time zone of the JDBC session.

When datetime values are retrieved from the database using the ResultSet interface, there are two representations. The getString(…) methods of the ResultSet interface return an exact representation of the value in the SQL type as it is stored in the database. This includes the correct number of digits for the fractional second field, and for values with time zone displacement, the time zone displacement. Therefore if TIME '12:00:00' is stored in the database, all users in different time zones will get '12:00:00' when they retrieve the value as a string. The getTime(…) and getTimestamp(…) methods of the ResultSet interface return Java objects that are corrected for the session time zone. The UTC millisecond value contained in the java.sql.Time or java.sql.Timestamp objects will be adjusted to the time zone of the session, therefore the toString() method of these objects returns the same values in different time zones.

If you want to store and retrieve UTC values that are independent of any session's time zone, you can use a TIMESTAMP WITH TIME ZONE column. The setTime(...) and setTimestamp(...) methods of the PreparedStatement interface which have a Calendar parameter can be used to assign the values. The time zone of the given Calendar argument is used as the time zone. Conversely, the getTime(...) and getTimestamp(...) methods of the ResultSet interface which have a Calendar parameter can be used with a Calendar argument to retrieve the values.

JDBC has an unfortunate limitation and does not include type codes for SQL datetime types that have a TIME ZONE property. Therefore, for compatibility with database tools that are limited to the JDBC type codes, HyperSQL reports these types by default as datetime types without TIME ZONE. You can use the URL property hsqldb.translate_dti_types=false to override the default behaviour.

Interval Types

Interval types are used to represent differences between date time values. The difference between two date time values can be measured in seconds or in months. For measurements in months, the units YEAR and MONTH are available, while for measurements in seconds, the units DAY, HOUR, MINUTE, SECOND are available. The units can be used individually, or as a range. An interval type can specify the precision of the most significant field and the second fraction digits of the SECOND field (if it has a SECOND field). The default precision is 2. The default second precision is 0.

<interval type> ::= INTERVAL <interval qualifier>

<interval qualifier> ::= <start field> TO <end field> | <single datetime field>

<start field> ::= <non-second primary datetime field> [ <left paren> <interval leading field precision> <right paren> ]

<end field> ::= <non-second primary datetime field> | SECOND [ <left paren> <interval fractional seconds precision> <right paren> ]

<single datetime field> ::= <non-second primary datetime field> [ <left paren> <interval leading field precision> <right paren> ] | SECOND [ <left paren> <interval leading field precision> [ <comma> <interval fractional seconds precision> ] <right paren> ]

<primary datetime field> ::= <non-second primary datetime field> | SECOND

<non-second primary datetime field> ::= YEAR | MONTH | DAY | HOUR | MINUTE

<interval fractional seconds precision> ::= <unsigned integer>

<interval leading field precision> ::= <unsigned integer>

Examples of INTERVAL type definition:

INTERVAL YEAR TO MONTH
INTERVAL YEAR(3)
INTERVAL DAY(4) TO HOUR
INTERVAL MINUTE(4) TO SECOND(6)
INTERVAL SECOND(4,6)

The word INTERVAL indicates the general type name. The rest of the definition is called an <interval qualifier>. This designation is important, as in most expressions <interval qualifier> is used without the word INTERVAL.

Interval Values

An interval value can be negative, positive or zero. An interval type has all the datetime fields in the specified range. These fields are similar to those in the TIMESTAMP type. The differences are as follows:

The first field of an interval value can hold any numeric value up to the specified precision. For example, the hour field in HOUR(2) TO SECOND can hold values above 23 (up to 99). The year and month fields can hold zero (unlike a TIMESTAMP value) and the maximum value of a month field that is not the most significant field, is 11.

The standard function ABS(<interval value expression>) can be used to convert a negative interval value to a positive one.

The literal representation of interval values consists of the type definition, with a string representing the interval value inserted after the word INTERVAL. Some examples of interval literals are given below:

INTERVAL '145 23:12:19.345' DAY(3) TO SECOND(3)
INTERVAL '3503:12:19.345' HOUR TO SECOND(3) /* equal to the first value */
INTERVAL '19.345' SECOND(4,3) /* maximum number of digits for the second value is 4, and each value is expressed with three fraction digits. */
INTERVAL '-23-10' YEAR(2) TO MONTH

Interval values of the types that are based on seconds can be cast into one another. Similarly those that are based on months can be cast into one another. It is not possible to cast or convert a value based on seconds to one based on months, or vice versa.

When a cast is performed to a type with a smaller least-significant field, nothing is lost from the interval value. Otherwise, the values for the missing least-significant fields are discarded. Examples:

CAST ( INTERVAL '145 23:12:19' DAY TO SECOND AS INTERVAL DAY TO HOUR ) = INTERVAL '145 23' DAY TO HOUR
CAST(INTERVAL '145 23' DAY TO HOUR AS INTERVAL DAY TO SECOND) = INTERVAL '145 23:00:00' DAY TO SECOND

A numeric value can be cast to an interval type. In this case the numeric value is first converted to a single-field INTERVAL type with the same field as the least significant field of the target interval type. This value is then converted to the target interval type. For example, CAST( 22 AS INTERVAL YEAR TO MONTH) evaluates to INTERVAL '22' MONTH and then INTERVAL '1-10' YEAR TO MONTH. Note that the SQL Standard only supports casts to single-field INTERVAL types, while HyperSQL allows casting to multi-field types as well.

An interval value can be cast to a numeric type. In this case the interval value is first converted to a single-field INTERVAL type with the same field as the least significant field of the interval value. The value is then converted to the target type. For example, CAST (INTERVAL '1-11' YEAR TO MONTH AS INT) evaluates to INTERVAL '23' MONTH, and then 23.

An interval value can be cast into a character type, which results in an INTERVAL literal. A character value can be cast into an INTERVAL type so long as it is a string with a format compatible with an INTERVAL literal.

Two interval values can be added or subtracted so long as the types of both are based on the same field, i.e., both are based on MONTH or SECOND. The values are both converted to a single-field interval type with same field as the least-significant field between the two types. After addition or subtraction, the result is converted to an interval type that contains all the fields of the two original types.

An interval value can be multiplied or divided by a numeric value. Again, the value is converted to a numeric, which is then multiplied or divided, before converting back to the original interval type.

An interval value is negated by simply prefixing with the minus sign.
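
A few illustrative expressions following the rules above:

INTERVAL '2' HOUR * 3            /* an interval of six hours */
INTERVAL '12' MONTH / 4          /* an interval of three months */
- INTERVAL '1-6' YEAR TO MONTH   /* negation of one year and six months */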

Interval values used in expressions are either typed values, including interval literals, or are interval casts. The expression: <expression> <interval qualifier> is a cast of the result of the <expression> into the INTERVAL type specified by the <interval qualifier>. The cast can be formed by adding the keywords and parentheses as follows: CAST ( <expression> AS INTERVAL <interval qualifier> ).

The examples below feature different forms of expression that represent an interval value, which is then added to the given date literal.

DATE '2000-01-01' + INTERVAL '1-10' YEAR TO MONTH /* interval literal */
DATE '2000-01-01' + '1-10' YEAR TO MONTH /* the string '1-10' is cast into INTERVAL YEAR TO MONTH */
DATE '2000-01-01' + 22 MONTH /* the integer 22 is cast into INTERVAL MONTH, same value as above */
DATE '2000-01-01' - 22 DAY /* the integer 22 is cast into INTERVAL DAY */
DATE '2000-01-01' + COL2 /* the type of COL2 must be an INTERVAL type */
DATE '2000-01-01' + COL2 MONTH /* COL2 may be a number, it is cast into a MONTH interval */

Datetime and Interval Operations

An interval can be added to or subtracted from a datetime value so long as they have some fields in common. For example, an INTERVAL MONTH cannot be added to a TIME value, while an INTERVAL HOUR TO SECOND can. The interval is first converted to a numeric value, then the value is added to, or subtracted from, the corresponding field of the datetime value.

If the result of addition or subtraction is beyond the permissible range for the field, the field value is normalised and carried over to the next significant field until all the fields are normalised. For example, adding 20 minutes to TIME '23:50:10' will result successively in '23:70:10', '24:10:10' and finally TIME '00:10:10'. Subtracting 20 minutes from the result is performed as follows: '00:-10:10', '-1:50:10', finally TIME '23:50:10'. Note that if DATE or TIMESTAMP normalisation results in the YEAR field value out of the range (1,1000), then an exception condition is raised.

If an interval value based on MONTH is added to, or subtracted from a DATE or TIMESTAMP value, the result may have an invalid day (30 or 31) for the given result month. In this case an exception condition is raised.

The result of subtraction of two datetime expressions is an interval value. The two datetime expressions must be of the same type. The type of the interval value must be specified in the expression, using only the interval field names. The two datetime expressions are enclosed in parentheses, followed by the <interval qualifier> fields. In the first example below, COL1 and COL2 are of the same datetime type, and the result is evaluated in INTERVAL YEAR TO MONTH type.

(COL1 - COL2) YEAR TO MONTH /* the difference between two DATE or two TIMESTAMP values in years and months */
(CURRENT_DATE - COL3) DAY /* the number of days between the value of COL3 and the current date */
(CURRENT_DATE - DATE '2000-01-01') YEAR TO MONTH /* the number of years and months since the beginning of this century */
CURRENT_DATE - 2 DAY /* the date of the day before yesterday */
(CURRENT_TIMESTAMP - TIMESTAMP '2009-01-01 00:00:00') DAY(4) TO SECOND(2) /* days to seconds since the given date */

The individual fields of both datetime and interval values can be extracted using the EXTRACT function. The same function can also be used to extract the time zone displacement fields of a datetime value.

EXTRACT ({YEAR | MONTH | DAY | HOUR | MINUTE | SECOND | TIMEZONE_HOUR | TIMEZONE_MINUTE | DAY_OF_WEEK | WEEK_OF_YEAR } FROM {<datetime value> | <interval value>})
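
Some illustrative uses of EXTRACT:

EXTRACT(YEAR FROM CURRENT_DATE)                          /* the year field of the current date */
EXTRACT(DAY FROM INTERVAL '145 23:12:19' DAY TO SECOND)  /* 145 */
EXTRACT(TIMEZONE_HOUR FROM CURRENT_TIMESTAMP)            /* the hour field of the session time zone displacement */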

The dichotomy between interval types based on seconds, and those based on months, stems from the fact that the different calendar months have different numbers of days. For example, the expression, “nine months and nine days since an event” is not exact when the date of the event is unknown. It can represent a period of around 284 days give or take one. SQL interval values are independent of any start or end dates or times. However, when they are added to or subtracted from certain date or timestamp values, the result may be invalid and cause an exception (e.g. adding one month to January 30 results in February 30, which is invalid).

JDBC has an unfortunate limitation and does not include type codes for SQL INTERVAL types. Therefore, for compatibility with database tools that are limited to the JDBC type codes, HyperSQL reports these types by default as VARCHAR. You can use the URL property hsqldb.translate_dti_types=false to override the default behaviour.

Arrays

Arrays are a powerful feature of SQL:2008 and can help solve many common problems. Arrays should not be used as a substitute for tables.

HyperSQL supports arrays of values according to the SQL:2008 Standard.

Elements of the array are either NULL, or of the same data type. It is possible to define arrays of all supported types, including the types covered in this chapter and user-defined types, except LOB types. An SQL array is one dimensional and is addressed from position 1. An empty array, which has no elements, can also be used.

Arrays can be stored in the database, as well as being used as temporary containers of values for simplifying SQL statements. They facilitate data exchange between the SQL engine and the user's application.

The full range of supported syntax allows arrays to be created, used in SELECT or other statements, combined with rows of tables, and used in routine calls.

Array Definition

The type of a table column, a routine parameter, a variable, or the return value of a function can be defined as an array.

<array type> ::= <data type> ARRAY [ <left bracket or trigraph> <maximum cardinality> <right bracket or trigraph> ]

The word ARRAY is added to any valid type definition except BLOB and CLOB type definitions. If the optional <maximum cardinality> is not used, the default value is 1024. The size of the array cannot be extended beyond maximum cardinality.

In the example below, the table contains a column of integer arrays and a column of varchar arrays. The VARCHAR array has an explicit maximum size of 10, which means each array can have between 0 and 10 elements. The INTEGER array has the default maximum size of 1024. The scores column has a default clause with an empty array. The default clause can be defined only as DEFAULT NULL or DEFAULT ARRAY[] and does not allow arrays containing elements.

CREATE TABLE t (id INT PRIMARY KEY, scores INT ARRAY DEFAULT ARRAY[], names VARCHAR(20) ARRAY[10])

An array can be constructed from value expressions or a query expression.

<array value constructor by enumeration> ::= ARRAY <left bracket or trigraph> <array element list> <right bracket or trigraph>

<array element list> ::= <value expression> [ { <comma> <value expression> }... ]

<array value constructor by query> ::= ARRAY <left paren> <query expression> [ <order by clause> ] <right paren>

In the examples below, arrays are constructed from values, column references or variables, function calls, or query expressions.

ARRAY [ 1, 2, 3 ]
ARRAY [ 'HOT', 'COLD' ]
ARRAY [ var1, var2, CURRENT_DATE ]
ARRAY (SELECT lastname FROM namestable ORDER BY id)

Inserting and updating a table with an ARRAY column can use array constructors, not only for updated column values, but also in equality search conditions:

INSERT INTO t VALUES 10, ARRAY[1,2,3], ARRAY['HOT', 'COLD']
UPDATE t SET names = ARRAY['LARGE', 'SMALL'] WHERE id = 12
UPDATE t SET names = ARRAY['LARGE', 'SMALL'] WHERE id < 12 AND scores = ARRAY[3,4]

When using a PreparedStatement with an ARRAY parameter, an object of the type java.sql.Array must be used to set the parameter. The org.hsqldb.jdbc.JDBCArrayBasic class can be used for constructing a java.sql.Array object in the user's application. Code fragment below:

    String sql = "UPDATE t SET names = ? WHERE id = ?";
    PreparedStatement ps = connection.prepareStatement(sql);
    Object[] data = new Object[]{"one", "two"};
    // default types defined in org.hsqldb.types.Type can be used
    org.hsqldb.types.Type type = org.hsqldb.types.Type.SQL_VARCHAR_DEFAULT;
    JDBCArrayBasic array = new JDBCArrayBasic(data, type);
    ps.setArray(1, array);
    ps.setInt(2, 1000);
    ps.executeUpdate();

Array Reference

The most common operations on an array are element reference and assignment, which are used when reading or writing an element of the array. Unlike Java and many other languages, arrays are extended if an element is assigned to an index beyond the current length. This can result in gaps containing NULL elements. Array length cannot exceed the maximum cardinality.

Elements of all arrays, including those that are the result of function calls or other operations can be referenced for reading.

<array element reference> ::= <array value expression> <left bracket> <numeric value expression> <right bracket>

Elements of arrays that are table columns or routine variables can be referenced for writing. This is done in a SET statement, either inside an UPDATE statement, or as a separate statement in the case of routine variables, OUT and INOUT parameters.

<target array element specification> ::= <target array reference> <left bracket or trigraph> <simple value specification> <right bracket or trigraph>

<target array reference> ::= <SQL parameter reference> | <column reference>

Note that only simple values or variables are allowed for the array index when an assignment is performed. The examples below demonstrate how elements of the array are referenced in a SELECT and an UPDATE statement.

SELECT scores[ranking], names[ranking] FROM t JOIN t1 on (t.id = t1.tid)
UPDATE t SET scores[2] = 123, names[2] = 'Reds' WHERE id = 10
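
As noted above, assigning to an index beyond the current length extends the array and fills any gap with NULL elements. The sketch below assumes the table t defined earlier and a row with id 10 whose scores column currently holds ARRAY[1,2,3]:

UPDATE t SET scores[6] = 60 WHERE id = 10
-- scores now holds ARRAY[1,2,3,NULL,NULL,60]; positions 4 and 5 contain NULL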

Array Operations

Several SQL operations and functions can be used with arrays.

CONCATENATION

Array concatenation is performed similarly to string concatenation. All elements of the array on the right are appended to the array on the left.

<array concatenation> ::= <array value expression 1> <concatenation operator> <array value expression 2>

<concatenation operator> ::= ||
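
As a small illustration (the literal values are arbitrary and the table t is the one defined earlier), concatenation can be used both in expressions and to append elements to a stored array:

ARRAY[1, 2] || ARRAY[3, 4]    -- evaluates to ARRAY[1,2,3,4]
UPDATE t SET scores = scores || ARRAY[10] WHERE id = 10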

FUNCTIONS

Four functions operate on arrays. Details are described in the Built In Functions chapter.

CARDINALITY <left paren> <array value expression> <right paren>

MAX_CARDINALITY <left paren> <array value expression> <right paren>

Array cardinality and max cardinality are functions that return an integer. CARDINALITY returns the element count, while MAX_CARDINALITY returns the maximum declared cardinality of an array.
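
For example, assuming the table t defined earlier contains a row with id 10 whose scores column holds ARRAY[1,2,3], the query below returns 3 for CARDINALITY and the declared maximum of 1024 for MAX_CARDINALITY:

SELECT CARDINALITY(scores), MAX_CARDINALITY(scores) FROM t WHERE id = 10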

TRIM_ARRAY <left paren> <array value expression> <comma> <numeric value expression> <right paren>

The TRIM_ARRAY function returns a copy of an array with the specified number of elements removed from the end of the array. The <array value expression> can be any expression that evaluates to an array.

ARRAY_SORT <left paren> <array value expression> <right paren>

The ARRAY_SORT function returns a sorted copy of an array. NULL elements appear at the beginning of the new array. This function is a HyperSQL extension and not part of the SQL Standard.
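
The illustrative expressions below (with arbitrary literal values) show the effect of the two functions: TRIM_ARRAY removes the given number of elements from the end, while ARRAY_SORT places NULL elements at the beginning:

TRIM_ARRAY(ARRAY[1, 2, 3, 4], 2)    -- evaluates to ARRAY[1,2]
ARRAY_SORT(ARRAY[3, NULL, 1])       -- evaluates to ARRAY[NULL,1,3]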

CAST

An array can be cast into an array of a different type. Each element of the array is cast into the element type of the target array type.
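
For example, an integer array can be cast to a character array; the expression below is a minimal sketch with arbitrary element values:

CAST(ARRAY[1, 2, 3] AS VARCHAR(10) ARRAY)    -- each element is cast to VARCHAR(10)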

UNNEST

Arrays can be converted into table references with the UNNEST keyword.

UNNEST(<array value expression>) [ WITH ORDINALITY ]

The <array value expression> can be any expression that evaluates to an array. A table is returned that contains one column when WITH ORDINALITY is not used, or two columns when WITH ORDINALITY is used. The first column contains the elements of the array (including all the nulls). When the table has two columns, the second column contains the ordinal position of the element in the array. When UNNEST is used in the FROM clause of a query, it implies the LATERAL keyword, which means the array that is converted to a table can belong to any table that precedes the UNNEST in the FROM clause. This is explained in the Data Access and Change chapter.
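
For example, the query below (a sketch with arbitrary values; the correlation name T and the column names C and IDX are chosen here only for illustration) turns a three-element array into a three-row table with an ordinal position column:

SELECT * FROM UNNEST(ARRAY[10, 20, 30]) WITH ORDINALITY AS T(C, IDX)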

COMPARISON

Arrays can be compared for equality, but they cannot be compared for ordering or ranges. Array expressions are therefore not allowed in an ORDER BY clause, or in a comparison expression such as GREATER THAN. Two arrays are equal if they have the same length and the values at each index position are either equal or both NULL.

USER DEFINED FUNCTIONS and PROCEDURES

Array parameters, variables and return values can be specified in user defined functions and procedures, including aggregate functions. An aggregate function can return an array that contains all the scalar values that have been aggregated. These capabilities allow a wider range of applications to be covered by user defined functions and easier data exchange between the engine and the user's application.
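
As a hedged sketch of what this enables, the routine below (the name sum_elements and its body are purely illustrative, written in the SQL/PSM syntax supported by HyperSQL) takes an array parameter and loops over its elements:

CREATE FUNCTION sum_elements(arr INT ARRAY) RETURNS INT
  BEGIN ATOMIC
    DECLARE total INT DEFAULT 0;
    DECLARE i INT DEFAULT 1;
    -- add up the non-NULL elements of the argument array
    WHILE i <= CARDINALITY(arr) DO
      SET total = total + COALESCE(arr[i], 0);
      SET i = i + 1;
    END WHILE;
    RETURN total;
  END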

Indexes and Query Speed

HyperSQL supports PRIMARY KEY, UNIQUE and FOREIGN KEY constraints, which can span multiple columns.

The engine creates indexes internally to support PRIMARY KEY, UNIQUE and FOREIGN KEY constraints: a unique index is created for each PRIMARY KEY or UNIQUE constraint; an ordinary index is created for each FOREIGN KEY constraint.

HyperSQL allows defining indexes on single or multiple columns. You should not create duplicate user-defined indexes on the same column sets covered by constraints. This would result in unnecessary memory and speed overheads. See the discussion in the Deployment Guide chapter for more information.
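
For instance, a single-column or a multi-column index can be declared with CREATE INDEX; the index, table and column names below are arbitrary:

    CREATE INDEX idx_lastname ON tc (lastname);
    CREATE INDEX idx_state_lastname ON tc (state, lastname);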

Indexes are crucial for adequate query speed. When range or equality conditions are used, e.g. SELECT ... WHERE acol > 10 AND bcol = 0, an index should exist on one of the columns that has a condition. In this example, the bcol column is the best candidate. HyperSQL always uses the best condition and index. If there are two indexes, one on acol, and another on bcol, it will choose the index on bcol.

Queries always return results whether indexes exist or not, but they return much faster when an index exists. As a rule of thumb, HSQLDB is capable of internal processing of queries at over 100,000 rows per second. Any query that takes several seconds to run is clearly accessing thousands of rows. The query should be checked and indexes should be added to the relevant columns of the tables if necessary. The EXPLAIN PLAN FOR <query> statement can be used to see which indexes are used to process the query.
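
For example, the statement below (with an arbitrary table and condition) returns a description of the plan, which shows whether the condition is evaluated with a full scan or with an index:

    EXPLAIN PLAN FOR SELECT * FROM tc WHERE tc.lastname = 'Smith';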

When executing a DELETE or UPDATE statement, the engine needs to find the rows that are to be deleted or updated. If there is an index on one of the columns in the WHERE clause, it is often possible to start directly from the first candidate row. Otherwise all the rows of the table have to be examined.

Indexes are even more important in joins between multiple tables. SELECT ... FROM t1 JOIN t2 ON t1.c1 = t2.c2 is performed by taking rows of t1 one by one and finding a matching row in t2. If there is no index on t2.c2, then for each row of t1, all the rows of t2 must be checked, whereas with an index, a matching row can be found in a fraction of the time. If the query also has a condition on t1, e.g., SELECT ... FROM t1 JOIN t2 ON t1.c1 = t2.c2 WHERE t1.c3 = 4, then an index on t1.c3 would eliminate the need for checking all the rows of t1 one by one, and will reduce query time to less than a millisecond per returned row. So if t1 and t2 each contain 10,000 rows, the query without indexes involves checking 100,000,000 row combinations. With an index on t2.c2, this is reduced to 10,000 row checks and index lookups. With the additional index on t1.c3, only about 4 rows are checked to get the first result row.
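
A minimal sketch of the two indexes discussed above (the index names are arbitrary):

    CREATE INDEX idx_t2_c2 ON t2 (c2);
    CREATE INDEX idx_t1_c3 ON t1 (c3);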

Note that in HSQLDB an index on multiple columns can be used internally as a non-unique index on the first column in the list. For example: CONSTRAINT name1 UNIQUE (c1, c2, c3); means there is the equivalent of CREATE INDEX name2 ON atable(c1);. So you do not need to specify an extra index if you require one on the first column of the list.

In HyperSQL 2.0, a multi-column index will speed up queries that contain joins or values on the first n columns of the index. You need NOT declare additional individual indexes on those columns unless you use queries that search only on a subset of the columns. For example, rows of a table that has a PRIMARY KEY or UNIQUE constraint on three columns or simply an ordinary index on those columns can be found efficiently when values for all three columns, or the first two columns, or the first column, are specified in the WHERE clause. For example, SELECT ... FROM t1 WHERE t1.c1 = 4 AND t1.c2 = 6 AND t1.c3 = 8 will use an index on t1(c1,c2,c3) if it exists.

A multi-column index will not speed up queries on the second or third column only. The first column must be specified in the JOIN .. ON or WHERE conditions.

Sometimes query speed depends on the order of the tables in the JOIN .. ON or FROM clauses. For example, the second query below should be faster with large tables, provided there is an index on TB.COL3. The reason is that the condition on TB.COL3 can be evaluated very quickly, using its index, when it applies to the first table in the join:

    (TB is a very large table with only a few rows where TB.COL3 = 4)

    SELECT * FROM TA JOIN TB ON TA.COL1 = TB.COL2 AND TB.COL3 = 4;

    SELECT * FROM TB JOIN TA ON TA.COL1 = TB.COL2 AND TB.COL3 = 4;

The general rule is to put first the table that has a narrowing condition on one of its columns. In certain cases, HyperSQL 2.2.x reorders the joined tables if it is obvious that this will introduce a narrowing condition.

HyperSQL features automatic, on-the-fly indexes for views and subselects that are used in a query.

Indexes are used when a LIKE condition searches from the start of the string.

Indexes are used for ORDER BY clauses if the same index is used for selection and ordering of rows. It is possible to force the use of index for ORDER BY.

Query Processing and Optimisation

HyperSQL 2.2.x changes the order of tables in a query in order to optimise processing. This happens only when one of the tables has a narrowing condition and reordering does not change the result of the query.

Indexes and Conditions

HyperSQL optimises queries to use indexes, for all types of range and equality conditions, including IS NULL and IS NOT NULL conditions. Conditions can be in join or WHERE clauses, including all types of joins.

In addition, HyperSQL will use an index (if one exists) for IN conditions, whether constants, variables, or subqueries are used on the right hand side of the IN predicate. Multicolumn IN conditions can also use an index.
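
For example, both of the queries below can use indexes on the referenced columns (the tables TC and TD and the value lists are illustrative):

    SELECT * FROM TC WHERE TC.LASTNAME IN ('Smith', 'Williams', 'Jones')

    SELECT * FROM TC WHERE (TC.FIRSTNAME, TC.LASTNAME) IN (SELECT FIRSTNAME, LASTNAME FROM TD)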

HyperSQL can always use indexes when several conditions are combined with the AND operator, choosing a condition which can use an index. This is now extended to all equality conditions on multiple columns that are part of an index.

HyperSQL will also use indexes when several conditions are combined with the OR operator and each condition can use an index (each condition may use a different index). For example, if a huge table has two separate columns for first name and last name, and both columns are indexed, a query such as the following example will use the indexes and complete in a short time:

    -- TC is a very large table

    SELECT * FROM TC WHERE TC.FIRSTNAME = 'John' OR TC.LASTNAME = 'Smith' OR TC.LASTNAME = 'Williams'

Each subquery is considered a separate SELECT statement and uses indexes when they are available.

In each SELECT statement, at least one index per table can be used if there is a query condition that can use the index. When conditions on a table are combined with the OR operator, and each condition can use an index, multiple indexes per table are used.

Indexes and Operations

HyperSQL optimises simple row count queries in the form of SELECT COUNT(*) FROM <table> and returns the result immediately (this optimisation does not take place in MVCC mode).

HyperSQL can use an index on a column for SELECT MAX(<column>) FROM <table> and SELECT MIN(<column>) FROM <table> queries. There should be an index on the <column>, and the query can have a WHERE condition on the same column. In the example below, the maximum value of TB.COL3 among the rows where TB.COL3 is below 1000000 is returned.

    SELECT MAX(TB.COL3) FROM TB WHERE TB.COL3 < 1000000

HyperSQL can use an index for simple queries containing DISTINCT or GROUP BY to avoid checking all the rows of the table. Note that indexes are always used if the query has a condition, regardless of the use of DISTINCT or GROUP BY. This particular optimisation applies to cases in which all the columns in the SELECT list are from the same table and are covered by a single index, and any join or query condition uses this index.

For example, with the large table below, a DISTINCT or GROUP BY query to return all the last names can use the index on the TC.LASTNAME column. Similarly, a GROUP BY query on two columns can use an index that covers the two columns.

    -- TC is a very large table

    SELECT DISTINCT LASTNAME FROM TC WHERE TC.LASTNAME > 'F'

    SELECT STATE, LASTNAME FROM TC GROUP BY STATE, LASTNAME

Indexes and ORDER BY, OFFSET and LIMIT

HyperSQL can use an index on an ORDER BY clause if all the columns in ORDER BY are in a single-column or multi-column index (in the exact order). This is important if there is a LIMIT n (or FETCH n ROWS ONLY) clause. In this situation, the use of index allows the query processor to access only the number of rows specified in the LIMIT clause, instead of building the whole result set, which can be huge. This also works for joined tables when the ORDER BY clause is on the columns of the first table in a join. Indexes are used in the same way when ORDER BY ... DESC is specified in the query. Note that unlike some other RDBMS, HyperSQL does not need or create DESC indexes. It can use any ordinary, ascending index for ORDER BY ... DESC.

If there is an equality or range condition (e.g. EQUALS, GREATER THAN) on the columns specified in the ORDER BY clause, the index is still used.

In the two examples below, the index on TA.COL3 is used and only up to 1000 rows are processed and returned.

    (TA is a very large table with an index on TA.COL3)

    SELECT * FROM TA JOIN TB ON TA.COL2 = TB.COL1 WHERE TA.COL3 > 40000 ORDER BY TA.COL3 LIMIT 1000;
    SELECT * FROM TA JOIN TB ON TA.COL2 = TB.COL1 WHERE TA.COL3 > 40000 AND TA.COL3 < 100000 ORDER BY TA.COL3 DESC LIMIT 1000;

But if the query contains an equality condition on another indexed column in the table, this may take precedence and no index may be used for ORDER BY. In this case, USING INDEX can be added to the end of the query to force the use of the index for the LIMIT operation. In the example below, there is an index on TA.COL1 as well as the index on TA.COL3. Normally the index on TA.COL1 is used, but the USING INDEX hint results in the index on TA.COL3 being used for selecting the first 1000 rows.

    (TA is a very large table with an index on TA.COL3 and a separate index on TA.COL1)

    SELECT * FROM TA JOIN TB ON TA.COL2 = TB.COL1 WHERE TA.COL1 = 'SENT' AND TA.COL3 > 40000 ORDER BY TA.COL3 LIMIT 1000 USING INDEX;


$Revision: 4864 $