Chapter 8. SqlTool

SqlTool Manual

Blaine Simpson

HSQLDB Development Group

$Date: 2007/08/09 23:53:01 $

Table of Contents

Purpose, Coverage, Changes in Behavior
Platforms and SqlTool versions covered
Functional Changes
The Bare Minimum
Non-displayable Types
Desktop shortcuts
Loading sample data
RC File Authentication Setup
Using Inline RC Authentication
Using the current version of SqlTool with an older HSQLDB distribution.
Interactive Usage
Command Types
Special Commands
Edit Buffer / History Commands
PL Commands
? Variable
Storing and retrieving binary files
Command History
Shell scripting and command-line piping
Emulating Non-Interactive mode
Non-Interactive
Giving SQL on the Command Line
SQL Files
Piping and shell scripting
Optimally Compatible SQL Files
Comments
Special Commands and Edit Buffer Commands in SQL Files
Automation
Getting Interactive Functionality with SQL Files
Character Encoding
Generating Text or HTML Reports
SqlTool Procedural Language
Variables
PL Aliases
Logical Expressions
Flow Control
Chunking
Why?
How?
Raw Mode
PL/SQL
Using hsqltool.jar and hsqldbutil.jar
Delimiter-Separated-Value Imports and Exports
Simple DSV exports and imports using default settings
Specifying queries and options
Unit Testing SqlTool

Purpose, Coverage, Changes in Behavior

This document explains how to use SqlTool, the main purpose of which is to read your SQL text file or stdin, and execute the SQL commands therein against a JDBC database. There are also a great number of features to facilitate both interactive use and automation. The following paragraphs explain in a general way why SqlTool is better than any existing tool for text-mode interactive SQL work, and for automated SQL tasks. Two important benefits which SqlTool shares with other pure Java JDBC tools are that users can use a consistent interface and syntax to interact with a huge variety of databases-- any database which supports JDBC; plus the tool itself runs on any Java platform. Instead of using isql for Sybase, psql for Postgresql, Sql*plus for Oracle, etc., you can use SqlTool for all of them. As far as I know, SqlTool is the only production-ready, pure Java, command-line, generic JDBC client. Several databases come with a command-line client with limited JDBC abilities (usually designed for use with their specific database).

SqlTool is purposefully not a Gui tool like Toad or DatabaseManager. There are many use cases where a Gui SQL tool would be better. Where automation is involved in any way, you really need a text client to at least test things properly and usually to prototype and try things out. A command-line tool is really better for executing SQL scripts, any form of automation, direct-to-file fetching, and remote client usage. To clarify this last point: if you have to do your SQL client work on a work server on the other side of a VPN connection, you will quickly appreciate the speed difference between text data transmission and graphical data transmission, even if using VNC or Remote Console. Another case would be where you are doing some repetitive or very structured work where variables or language features would be useful. Gui proponents may disagree with me, but scripting (of any sort) is more efficient than repetitive copy & pasting with a Gui editor. SqlTool starts up very quickly, and it takes up a tiny fraction of the RAM required to run a comparably complex Gui like Toad.

SqlTool is superior for interactive use because over many years it has evolved lots of features proven to be efficient for day-to-day use. Three concise help commands (\?, :?, and *?) list all available commands of the corresponding type. SqlTool doesn't support up-arrow or other OOB escapes (due to basic Java I/O limitations), but it more than makes up for this limitation with aliases, user variables, command-line history and recall, and command-line editing with extended Perl/Java regular expressions. The \d commands deliver JDBC metadata information as consistently as possible (in several cases, database-specific work-arounds are used to obtain the underlying data even though the database doesn't provide metadata according to the JDBC specs). Unlike server-side language features, the same feature set works for any database server. Database access details may be supplied on the command line, but day-to-day users will want to centralize JDBC connection details into a single, protected RC file. You can put connection details (username, password, URL, and other optional settings) for scores of target databases into your RC file, then connect to any of them whenever you want by just giving SqlTool the ID ("urlid") for that database. When you run SqlTool interactively, it behaves by default exactly as you would want it to. If errors occur, you are given specific error messages and you can decide whether to roll back your session. You can easily change this behavior to auto-commit, exit-upon-error, etc., for the current session or for all interactive invocations. You can import or export delimiter-separated-value files.

When you run SqlTool with an SQL script, it behaves by default exactly as you would want it to. If any error is encountered, the connection will be rolled back, then SqlTool will exit with an error exit value. If you wish, you can detect and handle error (or other) conditions yourself. For scripts expected to produce errors (like many scripts provided by database vendors), you can have SqlTool continue-upon-error. For SQL script-writers, you will have access to portable scripting features which you've had to live without until now. You can use variables set on the command line or in your script. You can handle specific errors based on the output of SQL commands or of your variables. You can chain SQL scripts, invoke external programs, dump data to files, and use prepared statements. Finally, you have a procedural language with if, foreach, while, continue, and break statements.

Platforms and SqlTool versions covered

SqlTool runs on any Java 1.4 or later platform. I haven't run it with a non-Sun JVM in years (like Blackdown, IBM, JRockit, etc.), but I've had no reports of problems with them, and SqlTool uses none of the Sun-proprietary classes directly. Some of the examples below use quoting which works exactly as-is for any Bourne-compatible UNIX shell. (Only line-continuation would need to be changed for C-compatible UNIX shells). I have not yet tested these commands on Windows, and I doubt whether the quoting will work just like this (though it is possible). SqlTool is still a very useful tool even if you have no quoting capability at all.

If you are using SqlTool from a HSQLDB distribution before version 1.8.0.8 final, you should use the documentation with that distribution, because this manual documents many new features, several significant changes to interactive-only commands, and a few changes affecting backwards-compatibility (see next section about that). This document is now updated for the current versions of SqlTool and SqlFile at the time I am writing this (versions 333 and 354 respectively; SqlFile is the class which does most of the work for SqlTool). Therefore, if you are using a version of SqlTool or SqlFile that is more than a couple revisions greater, you should find a newer version of this document. (The imprecision is due to content-independent revision increments at build time, and the likelihood of one or two behavior-independent bug fixes after public releases). The startup banner will report both versions when you run SqlTool interactively. (Dotted version numbers of SqlTool and SqlFile are older than 333 and 354).

This guide covers SqlTool bundled with series 1.8 and 1.9 of HSQLDB. [1]

Functional Changes

This section lists changes to SqlTool since the last major release of HSQLDB which may affect the portability of SQL scripts. For this revision of this document, this list consists of script-impacting changes made to SqlTool after the final 1.8.0.0 HSQLDB release. I'm specifically not listing changes to interactive-only commands (":" commands, with one legacy exception which is listed below), since these commands can't be used in SQL scripts; and I'm specifically not listing backwards-compatible feature additions and enhancements. The reason for limiting the change list to only portability-impacting changes is that a list of all enhancements since just 1.8.0.0 would be pages long.

  • SqlTool now consistently outputs \r\n line breaks when on \r\n-linebreak platforms, like Windows. This includes output written to stdout, \w files, and \o files.
  • Time type values are always output with the date as well as the time. This was required in order to produce consistent output for the wildly varying formats provided by different database vendors.
  • DSV input now takes JDBC Timestamp format with date and optionally time of day.
  • The command ":;" is now strictly an interactive command. If you want to repeat a command in an SQL scripts, just repeat the exact text of the command. Non-interactive use now has no dependency on command history.
  • The command ":w" has replace the command \w. Unlike writing "output" to a file with \w, :w is used to write SQL "commands", and this is an interactive feature.
  • Shell scripts using raw mode (e.g. PL/SQL scripts) must terminate the raw code with a line containing ".;", which will also send the code to the database for execution. (The old "." command has been changed to ":." to make it very clear that the command is now an interactive command).
  • The --sql argument will never automatically append a semicolon to the text you provide. If you want to execute a command ending with a semicolon, then type the semicolon.

Although it doesn't affect scripts, I will mention a significant recent change to interactive commands. Special and PL commands are now stored to the edit buffer and to command history, so they can be recalled and edited just like SQL commands. Now, only edit/history : commands are not stored to the buffer and history.

The Bare Minimum You Need to Know to Run SqlTool

Warning

If you are using an Oracle database server, it will commit your current transaction if you cleanly disconnect, regardless of whether you have set auto-commit or not. This will occur if you exit SqlTool (or any other client) in the normal way (as opposed to killing the process or using Ctrl-C, etc.). This is mentioned in this section only for brevity, so I don't need to mention it in the main text in the many places where auto-commit is discussed. This behavior has nothing to do with SqlTool. It is a quirk of Oracle.

If you want to use SqlTool, then you either have an SQL text file, or you want to interactively type in SQL commands. If neither case applies to you, then you are looking at the wrong program.

Procedure 8.1. To run SqlTool...

  1. Copy the file sqltool.rc from the directory sample [1] of your HSQLDB distribution to your home directory and secure access to it if your computer is accessible to anybody else (most likely from the network). This file will work as-is for a Memory Only database instance; or if your target is a HSQLDB Server running on your local computer with default settings and the password for the "sa" account is blank (the sa password is blank when new HSQLDB database instances are created). Edit the file if you need to change the target Server URL, username, password, character set, JDBC driver, or TLS trust store as documented in the RC File Authentication Setup section. (You could, alternatively, use the --inlineRc command-line switch to specify your connection parameters as documented in the Using Inline RC Authentication section).

  2. Find out where your hsqldb.jar file resides. It typically resides at HSQLDB_HOME/lib/hsqldb.jar where HSQLDB_HOME is the base directory of your HSQLDB software installation. For this reason, I'm going to use "$HSQLDB_HOME/lib/hsqldb.jar" as the path to hsqldb.jar for my examples, but understand that you need to use the actual path to your own hsqldb.jar file.

  3. Run

        java -jar $HSQLDB_HOME/lib/hsqldb.jar --help
    to see what command-line arguments are available. Note that you don't need to worry about setting the CLASSPATH when you use the -jar switch to java. Assuming that you set up your SqlTool RC file at the default location and you want to use the HSQLDB JDBC driver, you will want to run something like
        java -jar $HSQLDB_HOME/lib/hsqldb.jar mem
    for interactive use, or
        java -jar $HSQLDB_HOME/lib/hsqldb.jar --sql 'SQL statement;' mem
    or
        java -jar $HSQLDB_HOME/lib/hsqldb.jar mem filepath1.sql...
    where mem is an urlid, and the following arguments are paths to text SQL files. For the filepaths, you can use whatever wildcards your operating system shell supports.

    The urlid mem in these commands is a key into your RC file, as explained in the RC File Authentication Setup section. Since this is a Memory Only database, you can use SqlTool with this urlid immediately with no database setup whatsoever (however, you can't persist any changes that you make to this database). The sample sqltool.rc file also defines the urlid "localhost-sa" for a local HSQLDB Server. At the end of this section, I explain how you can load some sample data to play with, if you want to.

Important

SqlTool does not commit SQL changes by default. This leaves it to the user's discretion whether to commit or roll back their modifications. Remember to either run \= to commit before quitting SqlTool (most databases also support the SQL command commit;), or use the --autoCommit command-line switch.
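
For example, a minimal way to end an interactive session is to commit and then quit, like this:

    \=
    \q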

If you put a file named auto.sql into your home directory, this file will be executed automatically every time that you run SqlTool interactively and without the --noAutoFile switch.

To use a JDBC Driver other than the HSQLDB driver, you can't use the -jar switch because you need to modify the classpath. You must add the hsqldb.jar file and your JDBC driver classes to your classpath, and you must tell SqlTool what the JDBC driver class name is. The latter can be accomplished by either using the "--driver" switch, or setting "driver" in your config file. The RC File Authentication Setup section explains the second method. Here's an example of the first method (after you have set the classpath appropriately).

java org.hsqldb.util.SqlTool --driver oracle.jdbc.OracleDriver urlid
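
For the classpath setup itself, a Bourne-shell sketch for the Oracle case might look like the following (the path to the driver archive is hypothetical; substitute the location of your own driver file).

CLASSPATH=$HSQLDB_HOME/lib/hsqldb.jar:/path/to/oracle/classes12.zip
export CLASSPATH
java org.hsqldb.util.SqlTool --driver oracle.jdbc.OracleDriver urlid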

Tip

If the tables of query output on your screen are all messy because of lines wrapping, the best and easiest solution is usually to resize your terminal emulator window to make it wider. (With some terms you click & drag the frame edges to resize, with others you use a menu system where you can enter the number of columns).

If you are using SqlTool to connect to a HSQLDB network server or any non-HSQLDB database, you may prefer to use the jar file hsqltool.jar or hsqldbutil.jar instead of hsqldb.jar. These alternative jar files contain all of SqlTool without stuff you don't need, but you will have to follow a simple procedure to generate these jars. See the Using hsqltool.jar and hsqldbutil.jar section.

Non-displayable Types

There are many SQL types which SqlTool (being a text-based program) can't display properly. This includes the SQL types BLOB, JAVA_OBJECT, STRUCT, and OTHER. When you run a query that returns any of these, SqlTool will save the very first such value obtained to the binary buffer and will not display any output from this query. You can then save the binary value to a file, as explained in the Storing and retrieving binary files section.

There are other types, such as BINARY, which JDBC can make displayable (by using ResultSet.getString()), but which you may very well want to retrieve in raw binary format. You can use the \b command to retrieve any-column-type-at-all in raw binary format (so you can later store the value to a binary file).

Another restriction which all text-based database clients have is the practical inability for the user to type in binary data such as photos, audio streams, and serialized Java objects. You can use SqlTool to load any binary object into a database by telling SqlTool to get the insert/update datum from a file. This is also explained in the Storing and retrieving binary files section.

Desktop shortcuts

Desktop shortcuts and quick launch icons are useful, especially if you often run SqlTool with the same set of arguments. It's really easy to set up several of them-- one for each way that you invoke SqlTool (i.e., each one would start SqlTool with all the arguments for one of your typical startup needs). One typical setup is to have one shortcut for each database account which you normally use (use a different urlid argument in each shortcut's Target specification).

Desktop icon setup varies depending on your Desktop manager, of course. I'll explain how to set up a SqlTool startup icon in Windows XP. Linux and Mac users should be able to take it from there, since it's easier with the common Linux and Mac desktops.

Procedure 8.2. Creating a Desktop Shortcut for SqlTool

  1. Right click in the main Windows background.

  2. New

  3. Shortcut

  4. Browse

  5. Navigate to where your good JRE lives. For recent Sun JRE's, it installs to C:\Program Files\Java\*\bin by default (the * will be a JDK or JRE name and version number).

  6. Select java.exe.

  7. OK

  8. Next

  9. Enter any name

  10. Finish

  11. Right click the new icon.

  12. Properties

  13. Edit the Target field.

  14. Leave the path to java.exe exactly as it is, including the quotes, but append to what is there. Beginning with a space, enter the command-line that you want run.

  15. Change Icon... to a pretty icon.

  16. If you want a quick-launch icon instead of (or in addition to) a desktop shortcut icon, click and drag it to your quick launch bar. (You may or may not need to edit the Windows Toolbar properties to let you add new items).

Loading sample data

If you want some sample database objects and data to play with, execute the sampledata.sql SQL file. sampledata.sql resides in the sample directory of your HSQLDB distribution [1]. To separate the sample data from your regular data, you can put it into its own schema by running this before you import:

    CREATE SCHEMA sampledata AUTHORIZATION dba;
    SET SCHEMA sampledata;
Run it like this from an SqlTool session:
    \i HSQLDB_HOME/sample/sampledata.sql
where HSQLDB_HOME is the base directory of your HSQLDB software installation [1].

For memory-only databases, you'll need to run this every time that you run SqlTool. For other (persistent) databases, the data will reside in your database until you drop the tables.

RC File Authentication Setup

RC file authentication setup is accomplished by creating a text RC configuration file. In this section, when I say configuration or config file, I mean an RC configuration file. RC files can be used by any JDBC client program that uses the org.hsqldb.util.RCData class-- this includes SqlTool, DatabaseManager, and DatabaseManagerSwing. You can use it for your own JDBC client programs too.

The following sample RC file resides at sample/sqltool.rc in your HSQLDB distribution [1].

Example 8.1. Sample RC File

# $Id: sqltool.rc,v 1.22 2007/08/09 03:22:21 unsaved Exp $

# This is a sample RC configuration file used by SqlTool, DatabaseManager,
# and any other program that uses the org.hsqldb.util.RCData class.

# You can run SqlTool right now by copying this file to your home directory
# and running
#    java -jar /path/to/hsqldb.jar mem
# This will access the first urlid definition below in order to use a 
# personal Memory-Only database.
# "url" values may, of course, contain JDBC connection properties, delimited
# with semicolons.

# If you have the least concerns about security, then secure access to
# your RC file.
# See the documentation for SqlTool for various ways to use this file.

# A personal Memory-Only (non-persistent) database.
urlid mem
url jdbc:hsqldb:mem:memdbid
username sa
password

# A personal, local, persistent database.
urlid personal
url jdbc:hsqldb:file:${user.home}/db/personal;shutdown=true
username sa
password
# When connecting directly to a file database like this, you should 
# use the shutdown connection property like this to shut down the DB
# properly when you exit the JVM.

# This is for a hsqldb Server running with default settings on your local
# computer (and for which you have not changed the password for "sa").
urlid localhost-sa
url jdbc:hsqldb:hsql://localhost
username sa
password



# Template for a urlid for an Oracle database.
# You will need to put the oracle.jdbc.OracleDriver class into your 
# classpath.
# In the great majority of cases, you want to use the file classes12.zip
# (which you can get from the directory $ORACLE_HOME/jdbc/lib of any
# Oracle installation compatible with your server).
# Since you need to add to the classpath, you can't invoke SqlTool with
# the jar switch, like "java -jar .../hsqldb.jar..." or 
# "java -jar .../hsqlsqltool.jar...".
# Put both the HSQLDB jar and classes12.zip in your classpath (and export!)
# and run something like "java org.hsqldb.util.SqlTool...".

#urlid cardiff2
#url jdbc:oracle:thin:@aegir.admc.com:1522:TRAFFIC_SID
#username blaine
#password secretpassword
#driver oracle.jdbc.OracleDriver



# Template for a TLS-encrypted HSQLDB Server.
# Remember that the hostname in hsqls (and https) JDBC URLs must match the
# CN of the server certificate (the port and instance alias that follows 
# are not part of the certificate at all).
# You only need to set "truststore" if the server cert is not approved by
# your system default truststore (which a commercial certificate probably
# would be).

#urlid tls
#url jdbc:hsqldb:hsqls://db.admc.com:9001/lm2
#username blaine
#password asecret
#truststore /home/blaine/ca/db/db-trust.store


# Template for a Postgresql database
#urlid blainedb
#url jdbc:postgresql://idun.africawork.org/blainedb
#username blaine
#password losung1
#driver org.postgresql.Driver

# Template for a MySQL database.  MySQL has poor JDBC support.
#urlid mysql-testdb
#url jdbc:mysql://hostname:3306/dbname
#username root
#username blaine
#password hiddenpwd
#driver com.mysql.jdbc.Driver

# Note that "databases" in SQL Server and Sybase are traditionally used for
# the same purpose as "schemas" with more SQL-compliant databases.

# Template for a Microsoft SQL Server database
#urlid msprojsvr
#url jdbc:microsoft:sqlserver://hostname;DatabaseName=DbName;SelectMethod=Cursor
# The SelectMethod setting is required to do more than one thing on a JDBC
# session (I guess Microsoft thought nobody would really use Java for 
# anything other than a "hello world" program).
# This is for Microsoft's SQL Server 2000 driver (requires mssqlserver.jar
# and msutil.jar).
#driver com.microsoft.jdbc.sqlserver.SQLServerDriver
#username myuser
#password hiddenpwd

# Template for a Sybase database
#urlid sybase
#url jdbc:sybase:Tds:hostname:4100/dbname
#username blaine
#password hiddenpwd
# This is for the jConnect driver (requires jconn3.jar).
#driver com.sybase.jdbc3.jdbc.SybDriver

# Template for Embedded Derby / Java DB.
#urlid derby1
#url jdbc:derby:path/to/derby/directory;create=true
#username ${user.name}
#password any_noauthbydefault
#driver org.apache.derby.jdbc.EmbeddedDriver
# The embedded Derby driver requires derby.jar.
# There's also the org.apache.derby.jdbc.ClientDriver driver with URL
# like jdbc:derby://<server>[:<port>]/databaseName, which requires
# derbyclient.jar.
# You can use \= to commit, since the Derby team decided (why???)
# not to implement the SQL standard statement "commit"!!
# Note that SqlTool can not shut down an embedded Derby database properly,
# since that requires an additional SQL connection just for that purpose.
# However, I've never lost data by not shutting it down properly.
# Other than not supporting this quirk of Derby, SqlTool is miles ahead of ij.

You can put this file anywhere you want to, and specify the location to SqlTool/DatabaseManager/DatabaseManagerSwing by using the --rcfile argument. If there is no reason to not use the default location (and there are situations where you would not want to), then use the default location and you won't have to give --rcfile arguments to SqlTool/DatabaseManager/DatabaseManagerSwing. The default location is sqltool.rc or dbmanager.rc in your home directory (corresponding to the program using it). If you have any doubt about where your home directory is, just run SqlTool with a phony urlid and it will tell you where it expects the configuration file to be.

    java -jar $HSQLDB_HOME/lib/hsqldb.jar x

The config file consists of stanza(s) like this:

    urlid web
    url jdbc:hsqldb:hsql://localhost
    username web
    password webspassword

These four settings are required for every urlid. (There are optional settings also, which are described a couple paragraphs down). The URL may contain JDBC connection properties. You can have as many blank lines and comments like

    # This comment

in the file as you like. The whole point is that the urlid that you give in your SqlTool/DatabaseManager command must match a urlid in your configuration file.

Important

Use whatever facilities are at your disposal to protect your configuration file.

It should be readable, both locally and remotely, only to users who run programs that need it. On UNIX, this is easily accomplished by using chmod/chown commands and making sure that it is protected from anonymous remote access (like via NFS, FTP or Samba).

You can also put the following optional settings into a urlid stanza. The setting will, of course, only apply to that urlid.

charset
This is used by the SqlTool program, but not by the DatabaseManager programs. See the Character Encoding section of the Non-Interactive section. You can, alternatively, set this for one SqlTool invocation by setting the system property sqlfile.charset. Defaults to US-ASCII.
driver
Sets the JDBC driver class name. You can, alternatively, set this for one SqlTool/DatabaseManager invocation by using the command line switch --driver. Defaults to org.hsqldb.jdbcDriver.
truststore
TLS trust keystore file path as documented in the TLS chapter. You usually only need to set this if the server is using a non-publicly-certified certificate (like a self-signed self-ca'd cert).

Property and SqlTool command-line switches override settings made in the configuration file.
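
Putting it all together, a urlid stanza using the optional settings might look like this (the urlid, host name, and file paths here are hypothetical):

    urlid remotedb
    url jdbc:hsqldb:hsqls://db.example.com:9001/instalias
    username blaine
    password asecret
    charset UTF-8
    driver org.hsqldb.jdbcDriver
    truststore /home/blaine/ca/db/db-trust.store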

Using Inline RC Authentication

Inline RC authentication setup is accomplished by using the --inlineRc command-line switch on SqlTool. The --inlineRc command-line switch takes a comma-separated list of key/value elements. The url and user elements are required. The rest are optional.

url
The JDBC URL of the database you wish to connect to.
user
The username to connect to the database as.
charset
Sets the character encoding. Defaults to US-ASCII.
trust
The TLS trust keystore file path as documented in the TLS chapter.
password
You may only use this element to set an empty password, like password=. For any other password value, omit the password element and you will be prompted for the value.

(Use the --driver switch instead of --inlineRc to specify a JDBC driver class). Here is an example of invoking SqlTool to connect to a standalone database.

    java -jar $HSQLDB_HOME/lib/hsqldb.jar
         --inlineRc URL=jdbc:hsqldb:file:/home/dan/dandb,USER=dan

For security reasons, you cannot specify a non-empty password as an argument. You will be prompted for a password as part of the login process.
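
For example, to connect to a local HSQLDB Server as the sa account with an empty password (so that no password prompt is needed), you could try something along these lines; I'm assuming here that the element names may be given in the same upper-case style as the example above.

    java -jar $HSQLDB_HOME/lib/hsqldb.jar
         --inlineRc URL=jdbc:hsqldb:hsql://localhost,USER=sa,PASSWORD=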

Using the current version of SqlTool with an older HSQLDB distribution.

This procedure will allow users of a legacy version of HSQLDB to use all of the new features of SqlTool. You will also get the new versions of the DatabaseManagers! This procedure works for distros going back to 1.7.3.3 at least, probably much farther.

Follow the instructions in the Using hsqltool.jar and hsqldbutil.jar section to build the jar file hsqldbutil.jar.

From now on, whenever you are going to run SqlTool, make sure that you have this hsqldbutil.jar as the first item in your CLASSPATH. You can't run SqlTool with the "-jar" switch (because the -jar switch doesn't permit setting your own class path).

Here's a UNIX example where somebody wants to use the new SqlTool with their older HSQLDB database, as well as with Postgresql and a local application.

CLASSPATH=/path/to/hsqldbutil.jar:/home/bob/classes:/usr/local/lib/pg.jdbc3.jar
export CLASSPATH
java org.hsqldb.util.SqlTool urlid

Interactive Usage

Do read the The Bare Minimum section before you read this section.

You run SqlTool interactively by specifying no SQL filepaths on the SqlTool command line. Like this.

    java -jar $HSQLDB_HOME/lib/hsqldb.jar urlid

Procedure 8.3. What happens when SqlTool is run interactively (using all default settings)

  1. SqlTool starts up and connects to the specified database, using your SqlTool configuration file (as explained in the RC File Authentication Setup section).

  2. SQL file auto.sql in your home directory is executed (if there is one).

  3. SqlTool displays a banner showing the SqlTool and SqlFile version numbers and describes the different command types that you can give, as well as commands to list all of the specific commands available to you.

You exit your session by using the "\q" special command or ending input (like with Ctrl-D or Ctrl-Z).

Important

Every command (regardless of type) and comment must begin at the beginning of a line (or immediately after a comment ends with "*/").

You can't nest commands or comments. You can only start new commands (and comments) after the preceding statement has been terminated. (Remember that if you're running SqlTool interactively, you can terminate an SQL statement without executing it by entering a blank line).

(Special Commands, Edit Buffer Commands and PL Commands always consist of just one line. Any of these commands or comments may be preceded by space characters.)

These rules do not apply at all to Raw Mode. Raw mode is for use by advanced users when they want to completely bypass SqlTool processing in order to enter a chunk of text for direct transmission to the database engine.

Command Types

When you are typing into SqlTool, you are always typing part of the immediate command. You execute the immediate command by hitting ENTER after a semi-colon (for SQL commands) or by just hitting ENTER (after any other non-empty command-- see next section about this distinction). The interactive : commands can perform actions with or on the edit buffer. The edit buffer usually contains a copy of the last command executed, and you can always view it with the :b command. If you never use any : commands, you can entirely ignore the edit buffer. If you want to repeat commands or edit previous commands, you will need to work with the edit buffer. The immediate command contains whatever (and exactly what) you type. The command history and edit buffer may contain any type of command other than comments and : commands (i.e., : commands and comments are just not copied to the history or to the edit buffer).

Hopefully an example will clarify the difference between the immediate command and the edit buffer. If you type in the edit buffer Substitution command ":s/tbl/table/", the :s command that you typed is the immediate command (and it will never be stored to the edit buffer or history, since it is a : command), but the purpose of the substitution command is to modify the contents of the edit buffer (perform a substitution on it)-- the goal being that after your substitutions you would execute the buffer with the ":;" command. The ":a" command is special in that when you hit ENTER to execute it, it copies the contents of the edit buffer to a new immediate command and leaves you in a state where you are appending to that immediate command (nearly) exactly as if you had just typed it in.
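
To make this concrete, here is a made-up cycle (the table name is fictitious): the first command fails because of a typo, and the substitution command then fixes the buffer and, because of the trailing ";", executes the corrected statement right away.

    SELECT * FROM custmer;
    :s/custmer/customer/;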

Command types

Note

Above, we said that one SqlTool command normally contains exactly one SQL statement. This is the most typical usage; however, you can actually put multiple SQL statements into one SQL command. One example would be

    INSERT INTO t1 VALUES(0); SELECT * FROM t1;
This is one SqlTool command containing two SQL statements. See the Chunking section to see why you may want to chunk SQL commands, how, and the implications.

SQL Statement

Any command that you enter which does not begin with "\", ":", or "* " is an SQL Statement. Unlike with most OS shells, the command is not terminated when you hit ENTER. You terminate SQL Statements with either ";" at the end of a line, or with a blank line. In the former case, the SQL Statement will be executed against the SQL database and the command will go into the edit buffer and SQL command history for editing or viewing later on. ("Executed against the SQL database" means that the SQL text is transmitted to the database engine for execution.) In the latter case (you end an SQL Statement with a blank line), the command will go to the edit buffer and SQL history, but will not be executed (but you can execute it later from the edit buffer). (See the note immediately above about multiple SQL statements in one SqlTool command).

(Blank lines are only interpreted this way when SqlTool is run interactively. In SQL files, blank lines inside of SQL statements remain part of the SQL statement).

As a result of these termination rules, whenever you are entering text that is not a Special Command, Edit Buffer / History Command, or PL Command, you are always appending lines to an SQL Statement or comment. (In the case of the first line, you will be appending to an empty SQL statement. I.e. you will be starting a new SQL Statement or comment).

Special Command
Run the command "\?" to list the Special Commands. All of the Special Commands begin with "\". I'll describe some of the most useful Special Commands below.
Edit Buffer / History Command
Run the command ":?" to list the Edit-Buffer/History Commands. All of these commands begin with ":". These commands use commands from the command history, or operate upon the edit "buffer", so that you can edit and/or (re-)execute previously entered commands.
PL Command

Procedural Language commands. Run the command "*?" to list the PL Commands. All of the PL Commands begin with "*". PL commands are for setting and using scripting variables and conditional and flow control statements like * if and * while. A few PL features (such as PL aliases and updating and selecting data directly from/to files) can be a real convenience for nearly all users, so these features will be discussed briefly in this section. More detailed explanation of PL variables and the other PL features, with examples, is covered in the SqlTool Procedural Language section.

Raw Mode
The descriptions of command-types above do not apply to Raw Mode. In raw mode, SqlTool doesn't interpret what you type at all. It all just goes into the edit buffer which you can send to the database engine. Beginners can safely ignore raw mode. You will never encounter it unless you run the "\." special command, or enter a PL/SQL command. See the Raw Mode section for the details.

Special Commands

Essential Special Commands

\?
help
\q
quit
\i path/to/script.sql
execute the specified SQL script, then continue again interactively.
\=
commit the current SQL transaction. Most users are used to typing the SQL statement commit;, but this command is crucial for those databases which don't support the statement. It's obviously unnecessary if you have auto-commit mode on.
\x?
List a summary of DSV eXporting, and all available DSV options.
\m?
List a summary of DSV iMporting, and all available DSV options.
\d?
List a summary of the \d commands below.
\dt [filter_substring]
\dv [filter_substring]
\ds [filter_substring]
\di [table_name]
\dS [filter_substring]
\da [filter_substring]
\dn [filter_substring]
\du [filter_substring]
\dr [filter_substring]
\d* [filter_substring]

Lists available objects of the given type.

  • t: non-system Tables
  • v: Views
  • s: Sequences
  • i: Indexes
  • S: System tables
  • a: Aliases
  • n: schema Names
  • u: database Users
  • r: Roles
  • *: all table-like objects
If your database supports schemas, then the schema name will also be listed.

If you supply an optional filter substring, then only items which contain the given substring (in the object name or schema name) will be listed.

Important

The substring test is case-sensitive! Even though in SQL queries and for the "\d objectname" command object names are usually case-insensitive, for the \dX commands, you must capitalize the filter substring exactly as it will appear in the special command output. This is an inconvenience, since the database engine will change names in SQL to default case unless you double-quote the name, but that is server-side functionality which cannot (portably) be reproduced by SqlTool. You can use spaces and other special characters in the string.

Tip

Filter substrings ending with "." are special. If a substring ends with ".", then this means to narrow the search by the exact, case-sensitive schema name given. For example, if I run "\d* BLAINE.", this will list all table-like database objects in the "BLAINE" schema. The capitalization of the schema must be exactly the same as how the schema name is listed by the "\dn" command. You can use spaces and other special characters in the string. (I.e., enter the name exactly how you would enter it inside of double-quotes in an SQL command). This is an inconvenience, since the database engine will change names in SQL to default case unless you double-quote the name, but that is server-side functionality which cannot (portably) be reproduced by SqlTool.

Important

Indexes may not be searched for by substring, only by exact target table name. So if I1 is an index on table T1, then you list this index by running "\di T1". In addition, many database vendors will report on indexes only if a target table is identified. Therefore, "\di" with no argument will fail if your database vendor does not support it.

\d objectname [filter]

Lists names of columns in the specified table or view. objectname may be a base table name or a schema.object name.

If you supply a filter string, then only columns with a name containing the given filter will be listed. The objectname is nearly always case-insensitive (depends on your database), but the filter is always case-sensitive. You'll find this filter is a great convenience compared to other database utilities, where you have to list all columns of large tables when you are only interested in one of them.
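
For example (with a hypothetical table named customer), the first command below lists every column of customer, while the second lists only the columns whose names contain the case-sensitive string "date".

    \d customer
    \d customer date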

Tip

When working with real data (as opposed to learning or playing), I often find it useful to run two SqlTool sessions in two side-by-side terminal emulator windows. I do all of my real work in one window, and use the other mostly for \d commands. This way I can refer to the data dictionary while writing SQL commands, without having to scroll.

This list here includes only the essential Special Commands, but n.b. that there are other useful Special Commands which you can list by running \?. (You can, for example, execute SQL from external SQL files, and save your interactive SQL commands to files). Some details about these other commands are given immediately below, and the Generating Text or HTML Reports section explains how to use the "\o" and "\H" special commands to generate reports.

Be aware that the \! Special Command does not work for external programs that read from standard input. You can invoke non-interactive and graphical interactive programs, but not command-line interactive programs.

SqlTool executes \! programs directly, it does not run an operating system shell (this is to avoid OS-specific code in SqlTool). Because of this, you can give as many command-line arguments as you wish, but you can't use shell wildcards or redirection.

The :w command can be used to store any command from your SQL history to a file. Just restore the command to the edit buffer with a command like ":-4" before you give the :w command.
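
For example, to save the command from four commands back to an SQL script (the file path here is arbitrary):

    :-4
    :w /tmp/recalled.sql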

Edit Buffer / History Commands

Edit Buffer / History Commands

:?
help
:b
List the current contents of the edit buffer.
:h
Shows the Command History. For each command which has been executed (up to the max history length), the SQL command history will show the command; its command number (#); and also how many commands back it is (as a negative number). : commands are never added to the history list. You can then use either form of the command identifier to recall a command to the edit buffer (the command described next) or as the target of any of the following : commands. This last is accomplished in a manner very similar to the vi editor. You specify the target command number between the colon and the command. As an example, if you gave the command :s/X/Y/, that would perform the substitution on the contents of the edit buffer; but if you gave the command :-3 s/X/Y/, that would perform the substitution on the command 3 back in the command history (and copy the output to the edit buffer). Also, just like vi, you can identify the command to recall by using a regular expression inside of slashes, like :/blue/ s/X/Y/ to operate on the last command you ran which contains "blue".
:13 OR :-2 OR :/blue/

Recalls a command from Command history to the edit buffer. Enter ":" followed by the positive command number from Command history, like ":13"... or ":" followed by a negative number like ":-2" for two commands back in the Command history... or ":" followed by a regular expression inside slashes, like ":/blue/" to recall the last command which contains "blue". The specified command will be written to the edit buffer so that you can execute it or edit it using the commands below.

As described under the :h command immediately above, you can follow the command number here with any of the commands below to perform the given operation on the specified command from history instead of on the edit buffer contents. So, for example, ":4;" would load command 4 from history then execute it (see the ":;" command below).

:;
Executes the SQL, Special or PL statement in the edit buffer (by default). This is an extremely useful command. It's easy to remember because it consists of ":", meaning Edit Buffer Command, plus a line-terminating ";", (which generally means to execute an SQL statement, though in this case it will also execute a special or PL command).
:a

Enter append mode with the contents of the edit buffer (by default) as the current command. When you hit ENTER, things will be exactly as if you physically re-typed the command that is in the edit buffer. Whatever lines you type next will be appended to the immediate command. As always, you then have the choice of hitting ENTER to execute a Special or PL command, entering a blank line to store back to the edit buffer, or end a SQL statement with semi-colon and ENTER to execute it.

You can, optionally, put a string after the :a, in which case things will be exactly as just described except the additional text will also be appended to the new immediate command. If you put a string after the :a which ends with ;, then the resultant new immediate command will just be executed right away, as if you typed in and entered the entire thing.

If your edit buffer contains SELECT x FROM mytab and you run :ale, the resultant command will be SELECT x FROM mytable. If your edit buffer contains SELECT x FROM mytab and you run :a ORDER BY y, the resultant command will be SELECT x FROM mytab ORDER BY y. Notice that in the latter case the append text begins with a space character.

:s/from regex/to string/switches

The Substitution Command is the primary method for SqlTool command editing-- it operates upon the current edit buffer by default. The "to string" and the "switches" are both optional (though the final "/" is not). To start with, I'll discuss the use and behavior if you don't supply any substitution mode switches.

Don't use "/" if it occurs in either "from string" or "to string". You can use any character that you want in place of "/", but it must not occur in the from or to strings. Example

    :s@from string@to string@

The to string is substituted for the first occurrence of the (case-specific) from string. The replacement will consider the entire SQL statement, even if it is a multi-line statement.

In the example above, the from regex was a plain string, but it is interpreted as a regular expression so you can do all kinds of powerful substitutions. See the perlre man page, or the java.util.regex.Pattern API Spec for everything you need to know about extended regular expressions.

Don't end a to string with ";" in an attempt to make a command execute. There is a substitution mode switch to use for that purpose.

You can use any combination of the substitution mode switches.

  • Use "i" to make the searches for from regex case insensitive.

  • Use "g" to substitute Globally, i.e., to subsitute all occurrences of the from regex instead of only the first occurrence found.

  • Use ";" to execute the command immediately after the substitution is performed.

  • Use "m" for ^ and $ to match each line-break in a multi-line edit buffer, instead of just at the very beginning and every end of the entire buffer.

If you specify a command number (from the command history), you end up with a feature very reminiscent of vi, but even more powerful, since the Perl/Java regular expressions are a superset of the vi regular expressions. As an example,

                    :24 s/pin/needle/g;
                    
would start with command number 24 from command history, substitute "needle" for all occurrences of "pin", then execute the result of that substitution (and this final statement will of course be copied to the edit buffer and to command history).
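
As another made-up illustration, the following would replace every occurrence of "customer" with "client" in the edit buffer, regardless of capitalization, and execute the result immediately.

    :s/customer/client/ig;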

:w /path/to/file.sql
This appends the contents of the current buffer (by default) to the specified file. Since what is being written are Special, PL, or SQL commands, you are effectively creating an SQL script.

I find the ":/regex/" and ":/regex/;" constructs particularly handy for every-day usage.

                    :/\\d/;
re-executes the last \d command that you gave (The extra "\" is needed to escape the special meaning of "\" in regular expressions). It's great to be able to recall and execute the last "insert" command, for example, without needing to check the history or keep track of how many commands back it was. To re-execute the last insert command, just run ":/insert/;". If you want to be safe about it, do it in two steps to verify that you didn't accidentally recall some other command which happened to contain the string "insert", like
    :/insert/
    :;
(Executing the last only if you are satisfied when SqlTool reports what command it restored). Often, of course, you will want to change the command before re-executing, and that's when you combine the :s and :a commands.

We'll finish up with a couple fine points about Edit/Buffer commands. You generally can't use PL variables in Edit/Buffer commands, to eliminate possible ambiguities and complexities when modifying commands. The :w command is an exception to this rule, since it can be useful to use variables to determine the output file, and this command does not do any "editing".

The :? help explains how you can change the default regular expression matching behavior (case sensitivity, etc.), but you can always use syntax like "(?i)" inside of your regular expression, as described in the Java API spec for class java.util.regex.Pattern. History-command-matching with the /regex/ construct is purposefully liberal, matching any portion of the command, case-insensitively, etc., but you can still use the method just described to modify this behavior. In this case, you could use "(?-i)" at the beginning of your regular expression to be case-sensitive.

PL Commands

Essential PL Command

* VARNAME = value

Set the value of a variable. If the variable doesn't exist yet, it will be created. The most common use for this is so that you can later use it in SQL statements, print statements, and PL conditionals, by using the *{VARNAME} or *{:VARNAME} construct. The only difference between *{VARNAME} and *{:VARNAME} is that the former produces an error if VARNAME is not set, whereas the latter will expand to a zero-length string if VARNAME is not set.

If you set a variable to an SQL statement (without the terminating ";") you can then use it as a PL alias like /VARNAME, as shown in this example.

Example 8.2. Defining and using a PL alias (PL variable)

    * qry = SELECT COUNT(*) FROM mytable
    \p The stored query is '*{qry}'
    /qry;
    /qry WHERE mass > 200;

If you put variable definitions into the SQL file auto.sql in your home directory, those aliases/variables will always be available for interactive use.

PL variables can be expanded within all commands other than : edit/history commands.
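
For example, a variable can parameterize an otherwise fixed statement (the table name here is made up):

    * mytable = customer
    SELECT COUNT(*) FROM *{mytable};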

* load VARNAME /file/path.txt
Sets VARNAME to the content of the specified ASCII file.
* prepare VARNAME
Indicates that the next command should be an SQL INSERT or UPDATE command containing one question mark. The value of VARNAME will be substituted for the ? variable (see the sketch after this list). This does work for CLOB columns.
* VARNAME _
When next SQL command is run, instead of displaying the rows, just store the very first column value to variable VARNAME. This works for CLOB columns too. It also works with Oracle XML type columns if you use column labels and the getclobval function.
* VARNAME ~
Exactly the same as * VARNAME _, except that the fetched results will be displayed in addition to setting the variable.
* dump VARNAME /file/path.txt
Store the value of VARNAME to the specified ASCII file.
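
Here is a sketch combining * load and * prepare to insert the contents of an ASCII file into a column (the file path, table, and column names are hypothetical):

    * load blurb /tmp/blurb.txt
    * prepare blurb
    INSERT INTO notes (id, body) VALUES (1, ?);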

? Variable

You don't set the ? variable. It is just like the Bourne shell variable ? in that it is always automatically set to the first value of a result set (or the return value of other SQL commands). It works just like the * VARNAME ~ command described above, but it all happens automatically. You can, of course, dereference ? like any PL variable, but it does not list with the list and listvalues commands. You can see the value whenever you want by running

    \p *{?}

Note that PL commands are used to upload and download column values to/from local ASCII files, but the corresponding actions for binary files use the special \b commands. This is because PL variables are used for ASCII values and you can store any number of column values in PL variables. This is not true for binary column values. The \b commands work with a single binary byte buffer.

See the SqlTool Procedural Language section below for information on using variables in other ways, and information on the other PL commands and features.

Storing and retrieving binary files

You can upload binary files such as photographs, audio files, or serialized Java objects into database columns. SqlTool keeps one binary buffer which you can load from files with the \bl command, or from a database query by doing a one-row query for any non-displayable type (including BLOB, OBJECT, and OTHER). In the latter case, the data returned for the first non-displayable column of the first result row will be stored into the binary buffer.

Once you have data in the binary buffer, you can upload it to a database column (including BLOB, OBJECT, and OTHER type columns), or save it to a file. The former is accomplished by the special command \bp followed by a prepared SQL query containing one question mark place-holder to indicate where the data gets inserted. The latter is accomplished with the \bd command.

You can also store the output from normal, displayable column into the binary buffer by using the special command \b. The very first column value from the first result row of the next SQL command will be stored to the binary byte buffer.

Example 8.3. Inserting binary data into database from a file

    \bl /tmp/favoritesong.mp3
    \bp
    INSERT INTO musictbl (id, stream) VALUES(3112, ?);

Example 8.4. Downloading binary data from database to a file

    SELECT stream FROM musictbl WHERE id = 3112;
    \bd /tmp/favoritesong.mp3

You can also store and retrieve text column values to/from ASCII files, as documented in the Essential PL Command section.

Command History

The SQL history shown by the :h command, and used by other commands, is truncated to 100 entries, since its utility comes from being able to quickly view the history list. You can change the history length by setting the system property sqltool.historyLength to an integer like

java -Dsqltool.historyLength=100 -jar $HSQLDB_HOME/lib/hsqldb.jar urlid
If there is any demand, I'll make the setting of this value more convenient.

The SQL history list contains all executed commands other than Edit Buffer commands and comments, even if the command has a syntax error or fails upon execution. The reason for including bad commands is so that you can recall and fix them if you wish to. The same applies to the edit buffer. If you copy a command to the edit buffer by entering a blank line, or if you edit the edit buffer, that edit buffer value will never make it into the command history unless and until you execute it.

Shell scripting and command-line piping

You normally use non-interactive mode for input piping. You specify "-" as the SQL file name. See the Piping and shell scripting subsection of the Non-Interactive chapter.

Emulating Non-Interactive mode

You can run SqlTool interactively, but have SqlTool behave exactly as if it were processing an SQL file (i.e., no command-line prompts, error-handling that defaults to fail-upon-error, etc.). Just specify "-" as the SQL file name in the command line. This is a good way to test what SqlTool will do when it encounters any specific command in an SQL file. See the Piping and shell scripting subsection of the Non-Interactive chapter for an example.
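
For example, using the mem urlid from the sample RC file:

    java -jar $HSQLDB_HOME/lib/hsqldb.jar mem -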

Non-Interactive

Read the Interactive Usage section if you have not already, because much of what is in this section builds upon that. You can skip all discussion about Command History and the edit buffer if you will not use those interactive features.

Important

If you're doing data updates, remember to issue a commit command or use the --autoCommit switch.

As you'll see, SqlTool has many features that are very convenient for scripting. But what really makes it superior for automation tasks (as compared to SQL tools from other vendors) is the ability to reliably detect errors and to control JDBC transactions. SqlTool is designed so that you can reliably determine if errors occurred within SQL scripts themselves, and from the invoking environment (for example, from a perl, Bash, or Python script, or a simple cron tab invocation).

Giving SQL on the Command Line

If you just have a couple of commands to run, you can run them directly from the command-line or from a shell script without an SQL file, like this.

    java -jar $HSQLDB_HOME/lib/hsqldb.jar --sql 'SQL statement;' urlid

Note

The --sql switch automatically implies --noinput, so if you want to execute the specified SQL before and in addition to an interactive session (or stdin piping), then you must also give the --stdinput switch.

Since SqlTool transmits SQL statements to the database engine only when a line is terminated with ";", if you want feedback from multiple SQL statements in an --sql expression, you will need to use functionality of your OS shell to include linebreaks after the semicolons in the expression. With any Bourne-compatible shell, you can include linebreaks in the SQL statements like this.

    java -jar $HSQLDB_HOME/lib/hsqldb.jar --sql '
        SQL statement number one;
        SQL statement
            number two;
        SQL statement three;
    ' urlid
If you don't need feedback, just separate the SQL commands with semicolons and the entire expression will be chunked.

The --sql switch is very useful for setting shell variables to the output of SQL Statements, like this.

    # A shell script
    USERCOUNT=`java -jar $HSQLDB_HOME/lib/hsqldb.jar --sql '
        select count(*) from usertbl;
    ' urlid` || {
        # Handle the SqlTool error
    }
    echo "There are $USERCOUNT users registered in the database."
    [ "$USECOUNT" -gt 3 ] && {   # If there are more than 3 users registered
        # Some conditional shell scripting

SQL Files

Just give paths to sql text file(s) on the command line after the urlid.

Often, you will want to redirect output to a file, like

java -jar $HSQLDB_HOME/lib/hsqldb.jar sql... > /tmp/log.sql 2>&1

(Skip the "2>&1" if you're on Windows).

You can also execute SQL files from an interactive session with the "\i" Special Command, but be aware that the default behavior in an interactive session is to continue upon errors. If the SQL file was written without any concern for error handling, then the file will continue to execute after errors occur. You could run \c false before \i filename, but then your SqlTool session will exit if an error is encountered in the SQL file. If you have an SQL file without error handling, and you want to abort that file when an error occurs, but not exit SqlTool, the easiest way to accomplish this is usually to add \c false to the top of the script.

If you specify multiple SQL files on the command-line, the default behavior is to exit SqlTool immediately if any of the SQL files encounters an error.

SQL files themselves have ultimate control over error handling. Regardless of what command-line options are set, or what commands you give interactively, if an SQL file gives error handling statements, they will take precedence.

You can also use \i in SQL files. This results in nested SQL files.
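For example, a master installation script might look something like the following sketch (the file names are hypothetical):

    /* master.sql -- runs the individual installation scripts in order */
    \c false
    \i schema.sql
    \i constraints.sql
    \i testdata.sql
    commit;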

You can use the following SQL file, sample.sql, which resides in the sample directory of your HSQLDB distribution [1]. It contains SQL as well as good examples of most of the Special Commands documented below.

/*
    $Id: sample.sql,v 1.5 2005/05/02 15:07:27 unsaved Exp $
    Examplifies use of SqlTool.
    PCTASK Table creation
*/

/* Ignore error for these two statements */
\c true
DROP TABLE pctasklist;
DROP TABLE pctask;
\c false

\p Creating table pctask
CREATE TABLE pctask (
    id integer identity,
    name varchar(40),
    description varchar,
    url varchar,
    UNIQUE (name)
);

\p Creating table pctasklist
CREATE TABLE pctasklist (
    id integer identity,
    host varchar(20) not null,
    tasksequence int not null,
    pctask integer,
    assigndate timestamp default current_timestamp,
    completedate timestamp,
    show bit default true,
    FOREIGN KEY (pctask) REFERENCES pctask,
    UNIQUE (host, tasksequence)
);

\p Granting privileges
GRANT select ON pctask TO public;
GRANT all ON pctask TO tomcat;
GRANT select ON pctasklist TO public;
GRANT all ON pctasklist TO tomcat;

\p Inserting test records
INSERT INTO pctask (name, description, url) VALUES (
    'task one', 'Description for task 1', 'http://cnn.com');
INSERT INTO pctasklist (host, tasksequence, pctask) VALUES (
    'admc-masq', 101, SELECT id FROM pctask WHERE name = 'task one');

commit;

You can execute this SQL file with a Memory Only database with a command like

    java -jar $HSQLDB_HOME/lib/hsqldb.jar  --sql '
        create user tomcat password "x";
    ' mem path/to/hsqldb/sample/sample.sql

(The --sql "create...;" arguments create an account which the script uses). You should see error messages betwen the Continue-on-error...true and Continue-on-error...false. The script purposefully runs commands that might fail there. The reason the script does this is to perform database-independent conditional table removals. (The SQL clause IF EXISTS is more graceful and succinct, and should be used if you don't need to support databases which don't support IF EXISTS). If an error occurs when continue-on-error is false, the script would abort immedately.

Piping and shell scripting

You can, of course, redirect output from SqlTool to a file or another program.

    java -jar $HSQLDB_HOME/lib/hsqldb.jar urlid file.sql > file.txt 2>&1

    java -jar $HSQLDB_HOME/lib/hsqldb.jar urlid file.sql 2>&1 | someprogram...

You can type commands into SqlTool while it is in non-interactive mode by supplying "-" as the file name. This is a good way to test how SqlTool will behave when processing your SQL files.

        java -jar $HSQLDB_HOME/lib/hsqldb.jar urlid -

This is how you have SqlTool read its input from another program:

Example 8.5. Piping input into SqlTool

        echo "Some SQL commands with '$VARIABLES';" |
        java -jar $HSQLDB_HOME/lib/hsqldb.jar urlid -

Make sure that you also read the Giving SQL on the Command Line section. The --sql switch is a great facility to use with shell scripts.

Optimally Compatible SQL Files

If you want your SQL scripts to be optimally compatible with other SQL tools, then don't use any Special or PL Commands. SqlTool has default behavior which I think is far superior to the other SQL tools, but you will have to disable these defaults in order to get optimally compatible behavior.

These switches provide compatibility at the cost of poor control and error detection.

  • --continueOnErr

    The output will still contain error messages about everything that SqlTool doesn't like (malformatted commands, SQL command failures, empty SQL commands), but SqlTool will continue to run. Errors will not cause rollbacks (but that won't matter because of the following setting).

  • --autoCommit

You don't have to worry about accidental expansion of PL variables, since SqlTool will never expand PL variables if you don't set any variables on the command line, or give any "* " PL commands. (And you could not have "* " commands in a compatible SQL file).
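As a minimal sketch, such a maximally-compatible script could be run like this (the urlid and file name are hypothetical):

    java -jar $HSQLDB_HOME/lib/hsqldb.jar --continueOnErr --autoCommit urlid portable.sql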

Comments

SQL comments of the form /*...*/ must begin where a (SQL/Special/Edit-Buffer/PL) Command could begin, and they end with the very first "*/" (regardless of quotes, nesting, etc.). You may have as many blank lines as you want inside of a comment.

Example 8.6. Valid comment example

    SELECT count(*) FROM atable;
    /* Lots of
     comments interspersed among
     several lines */   SELECT count(*)
    FROM btable;

Notice that a command can start immediately after the comment ends.

Example 8.7. Invalid comment example

    SELECT count(*) FROM
    /* atable */
    btable;

This comment is invalid because you could not start another command at the comment location (because it is within an SQL Statement).

You can try using /*...*/ in other locations, and -- style SQL comments, but SqlTool will not treat them as comments. If they occur within an SQL Statement, SqlTool will pass them to the database engine, and the DB engine will determine whether to parse them as comments.

Special Commands and Edit Buffer Commands in SQL Files

Don't use Edit Buffer / History Commands in your sql files, because they won't work. Edit Buffer / History Commands are for interactive use only. (But, see the Raw Mode section for an exception). You can, of course, use any SqlTool command at all interactively. I just wanted to group together the commands most useful to script-writers.

\q [abort message]

Be aware that the \q command will cause SqlTool to completely exit. If a script x.sql has a \q command in it, then it doesn't matter if the script is executed like

    java -jar .../hsqldb.jar urlid a.sql x.sql z.sql
or if you use \i to read it in interactively, or if another SQL file uses \i to nest it. If \q is encountered, SqlTool will quit. See the SqlTool Procedural Language section for commands to abort an SQL file (or even parts of an SQL file) without causing SqlTool to exit.

\q takes an optional argument, which is an abort message. If you give an abort message, the message is displayed to the user and SqlTool will exit with a failure status. If you give no abort message, then SqlTool will exit quietly with successful status. As a result,

\q
means to make an immediate but graceful exit, whereas
\q Message
means to abort immediately.

\p [text to print]
Print the given string to stdout. Just give "\p" alone to print a blank line.
\i /path/to/file.sql
Include another SQL file at this location. You can use this to nest SQL files. For database installation scripts I often have a master SQL file which includes all of the other SQL files in the correct sequence. Be aware that the current continue-upon-error behavior will apply to included files until such point as the SQL file runs its own error handling commands.
\o [file/path.txt]

Tee output to the specified file (or stop doing so). See the Generating Text or HTML Reports section.

\=
A database-independent way to commit your SQL session.
\a [true|false]
This turns on and off SQL transaction autocommits. Auto-commit defaults to false, but you can change that behavior by using the --autoCommit command-line switch.
\c [true|false]

A "true" setting tells SqlTool to Continue when errors are encountered. The current transaction will not be rolled back upon SQL errors, so if \c is true, then run the ROLLCACK; command yourself if that's what you want to happen. The default for interactive use is to continue upon error, but the default for non-interactive use is to abort upon error. You can override this behavior by using the --continueOnErr or the --abortOnErr command-line switch.

With database setup scripts, I usually find it convenient to set "true" before dropping tables (so that things will continue if the tables aren't there), then set it back to false so that real errors are caught. DROP TABLE tablename IF EXISTS; is a more elegant, but less portable, way to accomplish the same thing.

Tip

It depends on what you want your SQL files to do, of course, but I usually want my SQL files to abort when an error is encountered, without necessarily killing the SqlTool session. If this is the behavior that you want, then put an explicit \c false at the top of your SQL file and turn on continue-upon-error only for sections where you really want to permit errors, or where you are using PL commands to handle errors manually. This will give the desired behavior whether your script is called by somebody interactively, from the SqlTool command-line, or included in another SQL file (i.e. nested).

Important

The default settings are usually best for people who don't want to put in any explicit \c or error handling code at all. If you run SQL files from the SqlTool command line, then any errors will cause SqlTool to roll back and abort immediately. If you run SqlTool interactively and invoke SQL files with \i commands, the scripts will continue to run upon errors (and will not roll back). This behavior was chosen because there are lots of SQL files out there that produce errors which can be ignored; but we don't want to ignore errors that a user won't see. I reiterate that any and all of this behavior can (and often should) be changed by Special Commands run in your interactive shell or in the SQL files. Only you know whether errors in your SQL files can safely be ignored.

Automation

SqlTool is ideal for mission-critical automation because, unlike other SQL tools, SqlTool returns a dependable exit status and gives you control over error handling and SQL transactions. Autocommit is off by default, so you can build a completely dependable solution by intelligently using \c commands (Continue upon Errors) and commit statements, and by verifying exit statuses.
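For example, a wrapper run by cron can rely on the exit status, as in this minimal Bourne-shell sketch (the urlid, SQL file, and log path are hypothetical):

    # A shell script run by cron
    java -jar $HSQLDB_HOME/lib/hsqldb.jar mydb nightly.sql > /tmp/nightly.log 2>&1 || {
        # Handle the failure, relying on SqlTool's exit status
        echo "Nightly SQL job failed.  See /tmp/nightly.log." 1>&2
        exit 1
    }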

Using the SqlTool Procedural Language, you have ultimate control over program flow, and you can use variables for database input and output as well as for many other purposes. See the SqlTool Procedural Language section.

Getting Interactive Functionality with SQL Files

Some script developers may run into cases where they want to run SqlTool with SQL files, but they also want SqlTool's interactive behavior. For example, they may want to use command recall within the SQL file, or they may want to log SqlTool's command-line prompts (which are not printed in non-interactive mode). In this case, do not give the SQL file(s) as an argument to SqlTool, but pipe them in instead, like

java -jar $HSQLDB_HOME/lib/hsqldb.jar urlid < filepath1.sql > /tmp/log.html 2>&1
or
cat filepath1.sql... |
java -jar $HSQLDB_HOME/lib/hsqldb.jar urlid > /tmp/log.html 2>&1

Character Encoding

SqlTool defaults to the US-ASCII character set (for reading). You can use another character set by setting the system property sqlfile.charset, like

java -Dsqlfile.charset=UTF-8 -jar $HSQLDB_HOME/lib/hsqldb.jar urlid file.sql...

You can also set this per urlid in the SqlTool configuration file. See the RC File Authentication Setup section about that.
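As a sketch, a urlid stanza with a character-set setting might look like the following (the urlid, URL, and account values are just placeholders; see the RC File Authentication Setup section for the authoritative list of supported fields):

    urlid mem
    url jdbc:hsqldb:mem:memdbid
    username sa
    password
    charset UTF-8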

Generating Text or HTML Reports

This section is about making a file containing the output of database queries. You can generate reports by using operating system facilities such as redirection, tee, and cutting and pasting. But it is much easier to use the "\o" and "\H" special commands.

Procedure 8.4. Writing query output to an external file

  1. By default, everything will be done in plain text. If you want your report to be in HTML format, then give the special command \H. If you do so, you will probably want to use filenames with a suffix of ".html" or ".htm" instead of ".txt" in the next step.

  2. Run the command \o path/to/reportfile.txt. From this point on, output from your queries will be appended to the specified file. (I.e. another copy of the output is generated.) This way you can continue to monitor or use output as usual as the report is generated.

  3. When you want SqlTool to stop writing to the file, run \o (or just quit SqlTool if you have no other work to do).

  4. If you turned on HTML mode with \H before, you can run \H again to turn it back off, if you wish.
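Putting these steps together, an interactive session to produce an HTML report might look like this sketch (the table and file names are hypothetical, and query output is omitted):

    sql> \H
    sql> \o /tmp/userreport.html
    sql> \p Registered users as of today:
    sql> SELECT * FROM usertbl;
    sql> \o
    sql> \H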

It is not just the output of "SELECT" statements that will make it into the report file.

Kinds of output that get teed to \o files

  • Output of SELECT statements.
  • Output of all "\d" Special Commands. (I.e., "\dt", "\dv", etc., and "\d OBJECTNAME").
  • Output of "\p" Special Commands. You will want to use this to add titles, and perhaps spacing, for the output of individual queries.
Other output will go to your screen or stdout, but will not make it into the report file. Be aware that no error messages will go into the report file. If SqlTool is run non-interactively (including if you give any SQL file(s) on the command line), SqlTool will abort with an error status if errors are encountered. The right way to handle errors is to check the SqlTool exit status. (The described error-handling behavior can be modified with SqlTool command-line switches and Special Commands).

Warning

Remember that \o appends to the named file. If you want a new file, then use a new file name or remove the pre-existing target file ahead of time.

Tip

So that I don't end up with a bunch of junk in my report file, I usually leave \o off while I perfect my SQL. With \o off, I perfect the SQL query until it produces on my screen exactly what I want saved to file. At this point I turn on \o and run ":;" to repeat the last SQL command. If I have several complex queries to run, I turn \o off and repeat until I'm finished. (Every time you turn \o on, it will append to the file, just like we need).

Usually it doesn't come to mind that I need a wider screen until a query produces lines that are too long. In this case, stretch your window and repeat the last command with the ":;" Edit Buffer Command.

SqlTool Procedural Language

Aka PL

Most importantly, run SqlTool interactively and give the "*?" command to see what PL commands are available to you. I've tried to design the language features to be intuitive. Readers experienced with significant shell scripting in any language can probably learn everything they need to know by looking at (and running!) the sample script sample/pl.sql in your HSQLDB distribution [1] and using the *? command from within an interactive SqlTool session as a reference. (By significant shell scripting, I mean to the extent of using variables, for loops, etc.).

PL variables will only be expanded after you run a PL command (or set variable(s) from the command-line). We only want to turn on variable expansion if the user wants variable expansion. People who don't use PL don't have to worry about strings getting accidentally expanded.

All other PL commands imply the "*" command, so you only need to use the "*" statement if your script uses PL variables and it is possible that no variables have been set before-hand (and no PL commands have been run previously). In this case, without "*", your script would silently use a literal value like "*{x}" instead of trying to expand it. With a preceding "*" command, PL will notice that the variable x has not been set and will generate an error. (If x had been set, there would be no issue, because setting a variable automatically turns on PL variable expansion).

PL is also used to upload and download column values to/from local ASCII files, analogously to the special \b commands for binary files. This is explained above in the Interactive Essential PL Command section.

Variables

  • Use the * list command to list some or all variables; or * listvalues to also see the values.
  • You can set variables using the * VARNAME = value command. This document explains elsewhere how you can set a value to the contents of a file, and to the return value of SQL statements and fetches.
  • You can also set variables using the --setvar command-line switch. I give a very brief but useful example of this below.
  • Variables are always expanded in SQL, Special, and PL commands if they are written like *{VARNAME} (assuming that a PL command has been run previously). See the short sketch after this list. Your SQL scripts can give good feedback by echoing the value of variables with the "\p" special command. Use the construct *{:VARNAME} to expand the variable, but to expand to a zero-length string instead of failing if VARNAME is not set.
  • A variable written like /VARNAME is expanded if it begins an SQL Statement. This usage is called PL Aliasing. See the PL Aliases section below.

  • Variables are normally written like *VARNAME in logical expressions to prevent them from being evaluated too early. See below about logical expressions.
  • You can't do math with expression variables, but you can get functionality like the traditional for (i = 0; i < x; i++) by appending to a variable and testing the string length, like

        * while (*i < *{x})
            * i = *{i}.
    i will be a growing line of dots.

  • Variable names must not contain white space, or the characters "}" or "=".
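Here is a short interactive sketch of setting and expanding a variable (the table name is hypothetical):

    sql> * projname = Alpha
    sql> \p Working on project *{projname}
    Working on project Alpha
    sql> SELECT * FROM projects WHERE name = '*{projname}';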

PL Aliases

PL Aliasing just means the use of a PL variable as the first thing in an SQL statement, with the shortcut notation /VARNAME.

/VARNAME must be followed by whitespace or terminate the Statement, in order for SqlFile to tell where the variable name ends.

Note

Note that PL aliases are a very different thing from SQL aliases or HSQLDB aliases, which are features of databases, not SqlFile.

If the value of a variable is an entire SQL command, you generally do not want to include the terminating ";" in the value. There is an example of this above.
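For instance, here is a sketch (the variable name, column list, and WHERE clause are made up; the pctask table is the one created by the sample script above):

    sql> * q = SELECT id, name FROM pctask
    sql> /q WHERE url IS NOT NULL;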

PL aliasing may only be used for SQL statements. However, you can use variables to supply everything in a Special or PL Command except for the very first character ("\" or "*"). Therefore, you can use variables other than alias variables in Special and PL Commands. Here is a hyperbolically impractical example to show the extent to which PL variables can be used in Special commands even though you can not use them as PL aliases.

        sql> * qq = p Hello Butch
        sql> \*{qq} done now
        Hello Butch done now
(Note that the \* here is not the special command "\*", but is the special command "\p" because "*{qq}" resolves to "p").

Here is a short SQL file that gives the specified user write permissions on some application tables.

Example 8.8. Simple SQL file using PL

    /*
       grantwrite.sql

       Run SqlTool like this:
           java -jar path/to/hsqldb.jar --setvar USER=debbie urlid grantwrite.sql
     */

    /* Explicitly turn on PL variable expansion, in case no variables have
       been set yet.  (Only the case if user did not set USER).
    */
    *

    GRANT all ON book TO *{USER};
    GRANT all ON category TO *{USER};

Note that this script will work for any (existing) user just by supplying a different user name on the command line. I.e., there is no need to modify the tested and proven script. There is no need for a commit statement in this SQL file since no DML is done. If the script is accidentally run without setting the USER variable, SqlTool will give a very clear notification of that.

The purpose of the plain "*" command is just so that the *{USER} variables will be expanded. (This would not be necessary if the USER variable, or any other variable, were set, but we don't want to depend upon that).

Logical Expressions

Logical expressions occur only inside of logical expression parentheses in PL statements. For example, if (*var1 > astring) and while (*checkvar). (The parentheses after "foreach" do not enclose a logical expression; they just enclose a list).

There is a critical difference between *{VARNAME} and *VARNAME inside logical expressions. *{VARNAME} is expanded one time when the parser first encounters the logical expression. *VARNAME is re-expanded every time that the expression is evaluated. So, you would never want to code * while (*{X} < 5) because the statement will always be true or always be false. (I.e. the following block will loop infinitely or will never run).

Don't use quotes or whitespace of any kind in *{VARNAME} variables in expressions. (They would expand and then the expression would most likely no longer be a valid expression as listed in the table below). Quotes and whitespace are fine in *VARNAME variables, but it is the entire value that will be used in evaluations, regardless of whether quotes match up, etc. I.e. quotes and whitespace are not special to the token evaluator.

Logical Operators

TOKEN
The token may be a literal, a *{VARNAME} which is expanded early, or a *VARNAME which is expanded late. (You usually do not want to use *{VARNAME} in logical expressions). False if the token is not set, empty, or "0". True otherwise.
TOKEN1 == TOKEN2
True if the two tokens are equivalent "strings".
TOKEN1 <> TOKEN2
True if the two tokens are not equivalent "strings".
TOKEN1 >< TOKEN2
Ditto.
TOKEN1 > TOKEN2
True if the TOKEN1 string is longer than TOKEN2 or is the same length but is greater according to a string sort.
TOKEN1 < TOKEN2
Similarly to TOKEN1 > TOKEN2.
! LOGICAL_EXPRESSION
Logical negation of any of the expressions listed above.

*VARNAMEs in logical expressions, where the VARNAME variable is not set, evaluate to an empty string. Therefore (*UNSETVAR == 0) would be false, even though (*UNSETVAR) by itself is false and (0) by itself is false. Another way of saying this is that *VARNAME in a logical expression is equivalent to *{:VARNAME} outside of a logical expression.

When developing scripts, you should definitely use SqlTool interactively to verify that SqlTool evaluates logical expressions as you expect. Just run * if commands that print something (e.g. with \p) if the test expression is true.
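For example, the following sketch confirms how an unset variable evaluates; if UNSETVAR really has not been set, the \p line should print:

    * if (! *UNSETVAR)
        \p UNSETVAR is unset, empty, or "0"
    * end if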

Flow Control

Flow control works by conditionally executing blocks of Commands according to conditions specified by logical expressions.

The conditionally executed blocks are called PL Blocks. These PL Blocks always occur between a PL flow control statement (like * foreach, * while, * if) and a corresponding * end PL Command (like * end foreach).

Caution

Be aware that the PL block reader is ignorant about SQL statements and comments when looking for the end of the block. It just looks for lines beginning with some specific PL commands. Therefore, if you put a comment line before a PL statement, or if a line of a multi-line SQL statement has a line beginning with a PL command, things may break.

I am not saying that you shouldn't use PL commands or SQL commands inside of PL blocks-- you definitely should! I'm saying that in PL blocks you should not have lines inside of SQL statements or comments which could be mistaken for PL commands. (Especially, "commenting out" PL end statements will not work if you leave * end at the beginning of the line).

(This limitation will very likely be removed in a future version of SqlTool).

The values of control variables for foreach and while PL blocks will change as expected.

There are also * break and * continue commands, which work as any shell scripter would expect them to. The * break command can also be used to quit the current SQL file without triggering any error processing. (I.e. processing will continue with the next line in the including SQL file or interactive session, or with the next SQL file if you supplied multiple SQL files on the command line).

Below is an example SQL File that shows how to use most PL features. If you have a question about how to use a particular PL feature, check this example before asking for help. This file resides in the sample directory with the name pl.sql [1]. Definitely give it a run, like

java -jar $HSQLDB_HOME/lib/hsqldb.jar --setvar MYTABLE=mytable mem $HSQLDB_HOME/sample/pl.sql

Example 8.9. SQL File showing use of most PL features

/*
    $Id: pl.sql,v 1.4 2005/05/02 15:07:26 unsaved Exp $
    SQL File to illustrate the use of SqlTool PL features.
    Invoke like
        java -jar .../hsqldb.jar .../pl.sql mem
                                                         -- blaine
*/

* if (! *MYTABLE)
    \p MYTABLE variable not set!
    /* You could use \q to Quit SqlTool, but it's often better to just
       break out of the current SQL file.
       If people invoke your script from SqlTool interactively (with
       \i yourscriptname.sql) any \q will kill their SqlTool session. */
    \p Use arguments "--setvar MYTABLE=mytablename" for SqlTool
    * break
* end if

/* Turning on Continue-upon-errors so that we can check for errors ourselves.*/
\c true

\p
\p Loading up a table named '*{MYTABLE}'...

/* This sets the PL variable 'retval' to the return status of the following
   SQL command */
* retval ~
CREATE TABLE *{MYTABLE} (
    i int,
    s varchar
);
\p CREATE status is *{retval}
\p

/* Validate our return status.  In logical expressions, unset variables like
   *unsetvar are equivalent to empty string, which is not equal to 0
   (though both do evaluate to false on their own, i.e. (*retval) is false
   and (0) is false */
* if (*retval != 0)
    \p Our CREATE TABLE command failed.
    * break
* end if

/* Default Continue-on-error behavior is what you usually want */
\c false
\p

/* Insert data with a foreach loop.
   These values could be from a read of another table or from variables
   set on the command line like
*/
\p Inserting some data int our new table (you should see 3 row update messages)
* foreach VALUE (12 22 24 15)
    * if (*VALUE > 23)
        \p Skipping *{VALUE} because it is greater than 23
        * continue
        \p YOU WILL NEVER SEE THIS LINE, because we just 'continued'.
    * end if
    INSERT INTO *{MYTABLE} VALUES (*{VALUE}, 'String of *{VALUE}');
* end foreach
\p

* themax ~
/* Can put Special Commands and comments between "* VARNAME ~" and the target 
   SQL statement. */
\p We're saving the max value for later.  You'll still see query output here:
SELECT MAX(i) FROM *{MYTABLE};

/* This is usually unnecessary because if the SELECT failed, retval would
   be undefined and the following print statement would make SqlTool exit with
   a failure status */
* if (! *themax)
    \p Failed to get the max value.
    /* It's possible that the query succeeded but themax is "0".
       You can check for that if you need to. */
    * break
    \p YOU WILL NEVER SEE THIS LINE, because we just 'broke'.
* end if

\p
\p ##############################################################
\p The results of our work:
SELECT * FROM *{MYTABLE};
\p MAX value is *{themax}

\p
\p Everything worked.

Chunking

We use the term chunking for the ability to transmit multiple SQL commands to the database in one transmission. Unless you are in Raw mode, SqlTool only transmits commands to the database engine when it reads in a ";" at the end of a line of an SQL command. Therefore, you normally want to end each and every SQL command with ";" at the end of a line. This is because the database can only send one status reply to each JDBC transmission. So, while you could run

    SELECT * FROM t1; SELECT * FROM t2;
SqlTool can only display the results from the last query. This is a limitation of the client/server nature of JDBC, and applies to any JDBC client. There are, however, situations where you don't need immediate feedback from every SQL command. For example,

Example 8.10. Single-line chunking example

    INSERT INTO t1 VALUES(0); SELECT * FROM t1;
It's useful because the output of the second SQL command will tell you whether the first SQL command succeeded, so you won't miss much by losing the status output from the first command.

Why?

The first general reason to chunk SQL commands is performance. For databases accessed over a network, the most common performance bottleneck is network latency. Chunking SQL commands can dramatically reduce network traffic.

The second general reason to chunk SQL commands is if your database requires you to send multiple commands in one transmission. This is often the case when you need to tell the database the SQL or PL/SQL commands that comprise a stored procedure, function, trigger, etc.

How?

The simplest way is to enter as many SQL commands as you want, but not to end a line with ";" until you want the chunk to be transmitted.

Example 8.11. Multi-line chunking example

    INSERT INTO t1 VALUES (1)
    ; INSERT INTO t1 VALUES (2)
    ; SELECT * FROM t1;
If you list your command history with \s, you will see that all 3 SQL commands in 3 lines are in one SqlTool command. You can recall this SqlTool command from history to re-execute all three SQL commands.

The other method is by using Raw Mode. Go to the Raw Mode section to see how. You can enter any text at all, exactly how you want it to be sent to the database engine. Therefore, in addition to chunking SQL commands, you can give commands for non-SQL extensions to the database. For example, you could enter JavaScript code to be used in a stored procedure.

Raw Mode

You begin raw mode by issuing the Special Command "\.". You can then enter as much text in any format you want. When you are finished, enter a line consisting of only ".;" to store the input to the edit buffer and send it to the database server for execution.

This paragraph applies only to interactive usage. Interactive users may end the raw input with ":." instead of ".;". This will just save the input to the edit buffer so that you can edit it and send it to the database manually. You can look at the edit buffer with the ":b" Buffer Command. You would normally use the command ":;" to send the buffer to the database after you are satisfied with it. You'll notice that your prompt will be the continuation prompt between entering "\." and terminating the raw input with ".;" or ":.".

Example 8.12. Interactive Raw Mode example

    sql> \.
    Enter RAW SQL.  No \, :, * commands.
    End with a line containing only ".;" to send to database,
    or ":." to store to edit buffer for editing or saving.
    -----------------------------------------------------------
    raw> line one;
      +> line two;
      +> line three;
      +> :.
    Raw SQL chunk moved into buffer.  Run ":;" to execute the chunk.
    sql> :;
    Executing command from buffer:
    line one;
    line two;
    line three;

    SQL Error at 'stdin' line 13:
    "line one;
    line two;
    line three;"
    Unexpected token: LINE in statement [line]
    sql>
The error message "Unexpected token: LINE in statement [line]" comes from the database engine, not SqlTool. All three lines were transmitted to the database engine.

Edit Buffer Commands are not available when running SqlTool non-interactively.

PL/SQL

Note

PL/SQL is not the same as PL. PL is the procedural language of SqlFile and is independent of your back-end database. PL commands always begin with *. PL/SQL is processed on the server side and you can only use it if your database supports it. You can not intermix PL and PL/SQL (except for setting a PL variable to the output of PL/SQL execution), because when you enter PL/SQL to SqlTool that input is not processed by SqlFile.

Use Raw Mode to send PL/SQL code blocks to the database engine. You do not need to enter the "\." command to enter raw mode. Just begin a new SqlTool command line with "DECLARE" or "BEGIN", and SqlTool will automatically put you into raw mode. See the Raw Mode section for details.

The following sample SQL file resides at sample/plsql.sql in your HSQLDB distribution [1]. This script will only work if your database engine supports standard PL/SQL, if you have permission to create the table "T1" in the default schema, and if that object does not already exist.

Example 8.13. PL/SQL Example

/*
 * $Id: plsql.sql,v 1.4 2007/08/09 03:22:21 unsaved Exp $
 *
 * This example is copied from the "Simple Programs in PL/SQL"
 * example by Yu-May Chang, Jeff Ullman, Prof. Jennifer Widom at
 * the Standord University Database Group's page
 * http://www-db.stanford.edu/~ullman/fcdb/oracle/or-plsql.html .
 * I have only removed some blank lines (in case somebody wants to
 * copy this code interactively-- because you can't use blank
 * lines inside of SQL commands in non-raw mode SqlTool when running
 * it interactively); and, at the bottom I have  replaced the
 * client-specific, non-standard command "run;" with SqlTool's
 * corresponding command ".;" and added a plain SQL SELECT command
 * to show whether the PL/SQL code worked.  - Blaine
 */

CREATE TABLE T1(
    e INTEGER,
    f INTEGER
);

DELETE FROM T1;

INSERT INTO T1 VALUES(1, 3);

INSERT INTO T1 VALUES(2, 4);

/* Above is plain SQL; below is the PL/SQL program. */
DECLARE

    a NUMBER;

    b NUMBER;

BEGIN

    SELECT e,f INTO a,b FROM T1 WHERE e>1;

    INSERT INTO T1 VALUES(b,a);

END;

.;
/** The statement on the previous line, ".;" is SqlTool specific.
 *  This command says to save the input up to this point to the
 *  edit buffer and send it to the database server for execution.
 *  I added the SELECT statement below to give immediate feedback.
 */

/* This should show 3 rows, one containing values 4 and 2 (in this order)...*/
SELECT * FROM t1;
Note that, inside of raw mode, you can use any kind of formatting you want: Whatever you enter-- blank lines, comments, everything-- will be transmitted to the database engine.

Using hsqltool.jar and hsqldbutil.jar

This section is for those users who want to use SqlTool but without the overhead of hsqldb.jar (or who want to use a new SqlTool build with an older HSQLDB distribution).

If you do not need to directly use JDBC URLs like jdbc:hsqldb:mem: + something, jdbc:hsqldb:file: + something, or jdbc:hsqldb:res: + something, then you can use hsqltool.jar in place of the much larger hsqldb.jar file. hsqltool.jar will work for all JDBC databases other than HSQLDB Memory-only and In-process databases (the latter are fine if you access them via an HSQLDB Server or WebServer). You will have to supply the JDBC driver for non-HSQLDB URLs, of course.

hsqltool.jar includes the HSQLDB JDBC driver. If you do not need to connect to HSQLDB databases at all, then hsqldbutil.jar is what you want. hsqldbutil.jar contains everything you need to run SqlTool and DatabaseManagerSwing against non-HSQLDB databases... well, besides the JDBC drivers for the target databases.

The HSQLDB distribution doesn't "come with" pre-built hsqltool.jar and hsqldbutil.jar files. You need to "build" them, but that is very easy to do.

These instructions assume that you are capable of running an Ant build. See the Building HSQLDB chapter if you need more details than what you see here.

  1. Download and extract a current HSQLDB distribution. If you don't want to use the source code, documentation, etc., you can use a temporary directory and remove it afterwards.

  2. Cd to the build directory under the root directory where you extracted the distribution to.

  3. Run ant hsqldbutil or ant hsqltool according to the criteria above. (If your goal is to use this jar with an older HSQLDB distribution, then you definitely need to build hsqldbutil.jar).

  4. If you're going to clean up afterwards, copy the jar that you built out of lib to a safe location first.

If you are using the HSQLDB JDBC driver (i.e., you're connecting up to a URL like jdbc:hsqldb:hsql + something or jdbc:hsqldb:http + something), you invoke SqlTool exactly as with hsqldb.jar except you use the file path to your new jar file instead of the path to hsqldb.jar.

If you are using a non-HSQLDB JDBC driver, you must set your CLASSPATH to include this new jar file and your JDBC driver, then run SqlTool like

        java org.hsqldb.util.SqlTool ...
You can specify your JDBC driver class either with the --driver switch to SqlTool, or in your RC file stanza (the latter method is usually more convenient).
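For example, with a Bourne-compatible shell you could do something like the following sketch (the jar locations are placeholders, and the Postgresql driver is used purely as an illustration):

    CLASSPATH=/path/to/hsqldbutil.jar:/path/to/postgresql.jar
    export CLASSPATH
    java org.hsqldb.util.SqlTool --driver org.postgresql.Driver urlid file.sql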

Delimiter-Separated-Value Imports and Exports

Note

This feature is independent of HSQLDB Text Tables, a server-side feature of HSQLDB. It makes no difference to SqlTool whether the source or target table of your export/import is a memory, cache, or text table. Indeed, like all features of SqlTool, it works fine with other JDBC databases. It works great, for example to migrate data from a table of one type to a table of another type, or to another schema, or to another database instance, or to another database system.

This feature is what most business people call "CSV", but these files are more accurately called Delimiter Separated Value files because the delimiter is usually not a comma, and, more importantly, we purposefully choose an effective delimiter instead of following the CSV approach of using a delimiter which works only in some cases and then escaping occurrences of the delimiter in the actual data with quotes and backslashes. Just by choosing a delimiter which never needs escaping, we eliminate the whole mess, and the data in our files always looks just like the corresponding data in the database. To make this CSV / Delimiter-separated-value distinction clear, I use the suffix ".dsv" for my data files. This leads me to stipulate the abbreviation DSV for the Delimiter Separated Value feature of HSQLDB.

Use the \x command to eXport a table to a DSV file, and the \m command to iMport a DSV file into a pre-existing table.

The row and column delimiters may be any String, not just a single character. And just as the delimiter capability is more general than traditional CSV delimiters, the export function is also more general than just a table data exporter. Besides the trivial generalization that you may specify a view or other virtual table name in place of a table name, you can alternatively export the output of any query which produces normal text output. A benefit to this approach is that it allows you to export only some columns of a table, and to specify a WHERE clause to narrow down the rows to be exported (or perform any other SQL transformation, mapping, join, etc.). One specific use for this would be to exclude columns of binary data (which can be exported by other means, such as a PL loop to store binary values to files with the \bd command).

Note that the import command will not create a new table. This is because of the impossibility of guessing appropriate types and constraints based only on column names and a data sampling (which is all that a DSV-importer has access to). Therefore, if you wish to populate a new table, create the table before running the import. The import file does not need to have data for all columns of a table. The only required columns are those required by database constraints (non-null, indexes, keys, etc.) One specific reason to omit columns is if you want values of some columns to be created automatically by column DEFAULT settings, triggers, HSQLDB identity sequences, etc. Another reason would be to skip binary columns.

Simple DSV exports and imports using default settings

Even if you need to change delimiters, table names, or file names from the defaults, I suggest that you run one export and import with default settings as a practice run. A memory-only HSQLDB instance is ideal for test runs like this.

This command exports the table icf.projects to the file projects.dsv in the current directory (where you invoked SqlTool from). By default, the output file name will be the specified source table name plus the extension .dsv.

Example 8.14. DSV Export Example

    SET SCHEMA icf;
    \x projects
We could also have run \x icf.projects (which would have created a file named icf.projects.dsv) instead of changing the session schema. In this example we have chosen to make the export file name independent of the schema to facilitate importing it into a different schema.

Take a look at the output file. Notice that the first line consists of column names, not data. This line is present because it will be needed if the file is to be used for a DSV import. Notice the following characteristics about the export data. The column delimiter is the pipe character "|". The record delimiter is the default line delimiter character(s) for your operating system. The string used to represent database NULLs is [null]. See the next section for how to change these from their default values.
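For instance, the first few lines of such an export might look like this sketch (the column names and values are made up):

    id|name|completedate
    1|task one|[null]
    2|task two|2007-05-02 10:30:00.0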

This command imports the data from the file projects.dsv in the current directory (where you invoked SqlTool from) into the table newschema.projects. By default, the output table name will be the input filename after removing optional leading directory and trailing final extension.

Example 8.15. DSV Import Example

    SET SCHEMA newschema;
    \m projects.dsv
If the DSV file was named with the target schema, you would have skipped the SET SCHEMA command, like \m newschema.projects.dsv.

Specifying queries and options

For a hands-on example of a DSV import which generates an import report and uses some other options, change to the directory HSQLDB/sample and play with the working script dsv-sample.sql [1]. You can execute it like

    java -jar ../lib/hsqldb.jar mem dsv-sample.sql
(assuming that you are using the supplied sqltool.rc file or have the urlid mem set up).

The header line in the DSV file is required at this time. (If there is user demand, it can be made optional for exporting, but it will remain required for importing).

Your export will fail if the column or record delimiter, or the null representation value, occurs in the data being exported. You can change these values by setting the PL variables *DSV_COL_DELIM, *DSV_ROW_DELIM, and *DSV_NULL_REP. Notice that the asterisk is part of the variable names, to indicate that these variables are used by SqlTool internally. When specifying delimiters, you can use the escape sequences \n, \r, \f, \t, \\, and decimal, octal or hex specifications like \20, \020, \0x20. For example, to change the column delimiter to the tab character, you would give the command

    * *DSV_COL_DELIM = \t

For imports, you must always specify the source DSV file path. If you want to export to a different file than one in the current directory named according to the source table, set the PL variable *DSV_TARGET_FILE, like

    * *DSV_TARGET_FILE = /tmp/dtbl.dsv

For exports, you must always specify the source table name or query. If you want to import to a table other than that derived from the input DSV file name, set the PL variable *DSV_TARGET_TABLE. The table name may contain a schema name prefix.

You don't need to import all of the columns in a data file. To designate the fields to be skipped, either set the PL variable *DSV_SKIP_COLUMNS, or replace the column names in the header line with "-" (hyphen). The value of *DSV_SKIP_COLUMNS is case-insensitive, and multiple column names are separated with white space and/or commas.
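For example, this sketch (the file, table, and column names are hypothetical) imports a DSV file into a differently-named table while skipping two of the file's columns:

    * *DSV_TARGET_TABLE = newschema.projects
    * *DSV_SKIP_COLUMNS = description, url
    \m exported.dsv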

You can specify a query instead of a tablename with the \x command in order to filter or transform data from a table or view, or to export the output of a join, etc. You must set the PL variable *DSV_TARGET_FILE, as explained above (since there is no table name from which to automatically map a file name).

Example 8.16. DSV Export of an Arbitrary SELECT Statement

    * *DSV_TARGET_FILE = outfile.txt
    \x SELECT entrydate, 2 * aval "Double aval", modtime FROM bs.dtbl
Note that I specified the column label alias "Double aval" so that the label for that column in the DSV file header will not be blank.

By default, imports will abort as soon as an error is encountered while parsing the file or inserting data. If you invoke SqlTool with an SQL script on the command line, the failure will cause SqlTool to roll back and exit. If run interactively, you can decide whether to commit or roll back the rows that were inserted before the failure. You can modify this behavior with the \a and \c settings.

If you set either a reject dsv file or a reject report file, then failures during imports will be reported but will not cause the import to abort. When run in this way, SqlTool will give you a report at the end about how many records were skipped, rejected, and successfully inserted. The reject dsv file is just a dsv file with exact copies of the dsv records that failed to insert. The reject report file is an HTML report which lists, for every rejected record, why that record was rejected.
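For instance, a sketch along these lines will collect the bad records instead of aborting the import (run \m? to verify the exact variable names for your SqlTool version; the file paths are hypothetical):

    * *DSV_REJECT_FILE = /tmp/rejects.dsv
    * *DSV_REJECT_REPORT = /tmp/rejects.html
    \m projects.dsv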

To allow for user-friendly entry of headers, we require that tables for DSV import/exports use standard column names. I.e., no column names that require quoting. The DSV import and export parsers are very smart and user-friendly. The data types of columns are checked so that the parser can make safe assumptions about white space and blank entries in the data. If a column is a JDBC Boolean type, for example, then we know that a field value of " True " obviously means "True", and that a field value of "" obviously means null. Since we require vanilla style column names, we allow white space anywhere in the header column. We allow blank lines anywhere (where "lines" are delimited by *DSV_ROW_DELIM). By default, commented lines are ignored, and the comment character can be changed from its default value.

Run the command "\x?" or "\m?" to see the several system PL variables which you can set to adjust reject file behavior, commenting behavior, and other DSV features.

You can also define some settings right in the DSV file, and you can even specify multiple header lines in a single DSV file. I use this last feature to import data from one data set into multiple tables that are joined. Since I don't have any more time to dedicate to explaining all of these features, I'll give you some examples from working DSV files and let you take it from there.

Example 8.17. Sample DSV headerswitch settings

    # RCS keyword was here.

    headerswitch{
    itemdef:name|-|-|hardness|breakdc|-
    simpleitemdef:itemdef_name|maxvalue|weight|-|-|maxhp
    }
I'll just note that the prefixes for the header rows must be of format target-table-name + :. You can use * for target-table-name here, for the obvious purpose.

Example 8.18. DSV targettable setting

    targettable=t
This last example is from the SqlTool unit test file dsv-trimming.dsv. These special commands must be at the top of the file (before any normal data or header lines).

There is also the *DSV_CONST_COLS setting, which you can use to automatically write static, constant values to the specified columns of all inserted rows.

Unit Testing SqlTool

The SqlTool unit tests reside at testrun/sqltool in the HSQLDB source code repository. Just run the runtests.bash script from that directory to execute all of the tests. Read the file README.txt to find out all about it, including everything you'd need to know to test your own scripts or to add more unit test scripts for SqlTool.



[1] To reduce the time I will need to spend maintaining this document, in this chapter I am giving the path to the sample directory as it is in HSQLDB 1.9.x distributions, namely, HSQLDB_HOME/sample. HSQLDB 1.8.x users should translate these sample directory paths to use HSQLDB_HOME/src/org/hsqldb/sample/....