TD Toolbelt Reference
You can run Treasure Data from the command line using these commands.
Command | Example |
---|---|
Basic Commands | td |
Database Commands | td db:create <db> |
Table Commands | td table:list [db] |
Query Commands | td query [sql] |
Import Commands | td import:list |
Bulk Import Commands | td bulk_import:list |
Result Commands | td result:list |
Schedule Commands | td sched:list |
Schema Commands | td schema:show <db> <table> |
Connector Commands | td connector:guess [config] |
User Commands | td user:list |
Workflow Commands | td workflow:version |
Job Commands | td job:show <job_id> |
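For orientation, a minimal end-to-end session using commands documented below looks like this (illustrative names):
td db:create example_db
td table:create example_db table1
td query -d example_db -w "select count(*) from table1"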
Basic Commands
You can use the following commands to enable basic functions in Treasure Data.
td
Show list of options in Treasure Data.
Usage
td
Options | Description |
---|---|
-c, --config PATH | path to the configuration file (default: ~/.td/td.conf) |
-k, --apikey KEY | use this API key instead of reading the config file |
-e, --endpoint API_SERVER | specify the URL for API server to use (default: https://api.treasuredata.com). The URL must contain a scheme (http:// or https:// prefix) to be valid. |
--insecure | insecure access: disable SSL (enabled by default) |
-v, --verbose | verbose mode |
-r, --retry-post-requests | retry on failed post requests. Warning: can cause resource duplication, such as duplicated job submissions. |
--version | show version |
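For example, to run a single command against a specific account and endpoint (placeholder API key):
td -k YOUR_API_KEY -e https://api.treasuredata.com db:list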
Additional Commands
Usage
td <command>
Command | Description |
---|---|
db | create/delete/list databases |
table | create/delete/list/import/export/tail tables |
query | issue a query |
job | show/kill/list jobs |
import | manage bulk import sessions (Java-based fast processing) |
bulk_import | manage bulk import sessions (old Ruby-based implementation) |
result | create/delete/list result URLs |
sched | create/delete/list schedules that run a query periodically |
schema | create/delete/modify schemas of tables |
connector | manage connectors |
workflow | manage workflows |
status | show scheds, jobs, tables and results |
apikey | show/set API key |
server | show status of the Treasure Data server |
sample | create a sample log file |
help | show help messages |
Database Commands
You can create, delete, and view lists of databases from the command line.
td db create
Create a database.
Usage
td db:create <db>
Example
td db:create example_db
td db delete
Delete a database.
Usage
td db:delete <db>
Options | Description |
---|---|
-f, --force | clear tables and delete the database |
Example
td db:delete example_db
td db list
Show list of databases.
Usage
td db:list
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td db:list
td dbs
Table Commands
You can create, list, show, and organize table structure using the command line.
- td table:list
- td table:show
- td table:create
- td table:delete
- td table:import
- td table:export
- td table:swap
- td table:rename
- td table:tail
- td table:partial_delete
- td table:expire
td table list
Show list of tables.
Usage
td table:list [db]
Options | Description |
---|---|
-n, --num_threads VAL | number of threads to get list in parallel |
--show-bytes | show estimated table size in bytes |
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td table:list
td table:list example_db
td tables
td table show
Describe information in a table.
Usage
td table:show <db> <table>
Options | Description |
---|---|
-v | show more attributes |
Example
td table:show example_db table1
td table create
Create a table.
Usage
td table:create <db> <table>
Options | Description |
---|---|
-T, --type TYPE | set table type (log) |
--expire-days DAYS | set table expire days |
--include-v BOOLEAN | set include_v flag |
--detect-schema BOOLEAN | set detect schema flag |
Example
td table:create example_db table1
td table delete
Delete a table.
Usage
td table:delete <db> <table>
Options | Description |
---|---|
-f, --force | never prompt |
Example
td table:delete example_db table1
td table import
Parse and import files to a table
Usage
td table:import <db> <table> <files...>
Options | Description |
---|---|
-f, --format FORMAT | file format (default: apache) |
--apache | same as --format apache; apache common log format |
--syslog | same as --format syslog; syslog |
--msgpack | same as --format msgpack; msgpack stream format |
--json | same as --format json; LF-separated json format |
-t, --time-key COL_NAME | time key name for json and msgpack format (e.g. 'created_at') |
--auto-create-table | create table and database if they do not exist |
Example
td table:import example_db table1 --apache access.log
td table:import example_db table1 --json -t time - < test.json
How is the import command's time format set in a Windows batch file?
In batch files, '%' introduces an environment variable, so you must escape each '%' as '%%':
td import:prepare --format csv --column-header --time-column 'date' --time-format '%%Y-%%m-%%d' test.csv
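For comparison, the same command in a Unix shell uses single '%' characters:
td import:prepare --format csv --column-header --time-column 'date' --time-format '%Y-%m-%d' test.csv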
td table export
Dump logs in a table to the specified storage
Usage
td table:export <db> <table>
Options | Description |
---|---|
-w, --wait | wait until the job is completed |
-f, --from TIME | export data which is newer than or the same as TIME |
-t, --to TIME | export data which is older than TIME |
-b, --s3-bucket NAME | name of the destination S3 bucket (required) |
-p, --prefix PATH | path prefix of the file on S3 |
-k, --aws-key-id KEY_ID | AWS access key ID to export data (required) |
-s, --aws-secret-key SECRET_KEY | AWS secret access key to export data (required) |
-F, --file-format FILE_FORMAT | file format for exported data. Available formats are tsv.gz (tab-separated values per line) and jsonl.gz (JSON record per line). The json.gz and line-json.gz formats are default and still available, but only for backward compatibility; their use is discouraged because they have far lower performance. |
-O, --pool-name NAME | specify resource pool by name |
-e, --encryption ENCRYPT_METHOD | export with server-side encryption with the ENCRYPT_METHOD |
-a, --assume-role ASSUME_ROLE_ARN | export with assume role with ASSUME_ROLE_ARN as role ARN |
Example
td table:export example_db table1 --s3-bucket mybucket -k KEY_ID -s SECRET_KEY
td table swap
Swap the names of two tables.
Usage
td table:swap <db> <table1> <table2>
Example
td table:swap example_db table1 table2
td table rename
Rename the existing table.
Usage
td table:rename <db> <from_table> <dest_table>
Options | Description |
---|---|
--overwrite | replace existing dest table |
Example
td table:rename example_db table1 table2
td table tail
Get recently imported logs.
Usage
td table:tail <db> <table>
Options | Description |
---|---|
-n, --count N | number of logs to get |
-P, --pretty | pretty print |
Example
td table:tail example_db table1
td table:tail example_db table1 -n 30
td table partial delete
Info
In February and March 2025, the Partial Delete Job feature, including the td table:partial_delete command, will be deprecated. Treasure Data recommends transitioning to the Trino DELETE statement, which provides greater flexibility by allowing custom deletion conditions. This change also involves the removal of the partial_delete operator from Treasure Workflow.
Delete logs from the table within the specified time range.
Usage
td table:partial_delete <db> <table>
Options | Description |
---|---|
-t, --to TIME | end time of logs to delete, in Unix time, >0 and a multiple of 3600 (1 hour) |
-f, --from TIME | start time of logs to delete, in Unix time, >0 and a multiple of 3600 (1 hour) |
-w, --wait | wait for the job to finish |
-O, --pool-name NAME | specify resource pool by name |
Example
td table:partial_delete example_db table1 --from 1341000000 --to 1341003600
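Per the deprecation note above, the same time-range deletion can be expressed as a Trino DELETE issued through td query; a minimal sketch, assuming the presto/Trino query type documented under Query Commands:
td query -d example_db -T presto -w "DELETE FROM table1 WHERE time >= 1341000000 AND time < 1341003600"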
td table expire
Expire data in table after specified number of days. Set to 0
to disable the expiration.
Usage
td table:expire <db> <table> <expire_days>
Example
td table:expire example_db table1 30
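To turn expiration back off for the same table, set the number of days to 0:
td table:expire example_db table1 0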
Query Commands
You can issue queries from the command line.
td query
Issue a query
Usage
td query [sql]
Options | Description |
---|---|
-d, --database DB_NAME | use the database (required) |
-w, --wait[=SECONDS] | wait for finishing the job (for seconds) |
-G, --vertical | use vertical table to show results |
-o, --output PATH | write result to the file |
-f, --format FORMAT | format of the result to write to the file (tsv, csv, json, msgpack, and msgpack.gz) |
-r, --result RESULT_URL | write result to the URL (see also the result:create subcommand). It is suggested that this option be used with -x / --exclude to suppress printing of the query result to stdout, or with -o / --output to dump the query result into a file. |
-u, --user NAME | set user name for the result URL |
-p, --password | ask password for the result URL |
-P, --priority PRIORITY | set priority |
-R, --retry COUNT | automatic retrying count |
-q, --query PATH | use file instead of inline query |
-T, --type TYPE | set query type (hive, presto) |
--sampling DENOMINATOR | OBSOLETE - enable random sampling to reduce records 1/DENOMINATOR |
-l, --limit ROWS | limit the number of result rows shown when not outputting to file |
-c, --column-header | output the columns' header when the schema is available for the table (only applies to json, tsv and csv formats) |
-x, --exclude | do not automatically retrieve the job result |
-O, --pool-name NAME | specify resource pool by name |
--domain-key DOMAIN_KEY | optional user-provided unique ID. You can include this ID with your `create` request to ensure idempotence. |
--engine-version ENGINE_VERSION | specify query engine version by name |
Example
td query -d example_db -w -r rset1 "select count(*) from table1"
td query -d example_db -w -r rset1 -q query.txt
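For example, to save the result locally instead of sending it to a result URL, combine the -o and -f options described above (illustrative file name):
td query -d example_db -w -o result.csv -f csv "select count(*) from table1"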
Import Commands
You can import and organize data from the command line using these commands.
- td import:list
- td import:show
- td import:create
- td import:jar_version
- td import:jar_update
- td import:prepare
- td import:upload
- td import:auto
- td import:perform
- td import:error_records
- td import:commit
- td import:delete
- td import:freeze
- td import:unfreeze
- td import:config
td import list
List bulk import sessions
Usage
td import:list
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td import:list
td import show
Show list of uploaded parts.
Usage
td import:show <name>
Example
td import:show logs_201201
td import create
Create a new bulk import session to the table
Usage
td import:create <name> <db> <table>
Example
td import:create logs_201201 example_db event_logs
td import jar version
Show import jar version
Usage
td import:jar_version
Example
td import:jar_version
td import jar update
Update import jar to the latest version
Usage
td import:jar_update
Example
td import:jar_update
td import prepare
Convert files into part file format
Usage
td import:prepare <files...>
Options | Description |
---|---|
-f, --format FORMAT | source file format [csv, tsv, json, msgpack, apache, regex, mysql]; default=csv |
-C, --compress TYPE | compressed type [gzip, none, auto]; default=auto detect |
-T, --time-format FORMAT | specifies the strftime format of the time column. The format slightly differs from Ruby's Time#strftime format in that the '%:z' and '%::z' timezone options are not supported. |
-e, --encoding TYPE | encoding type [UTF-8, etc.] |
-o, --output DIR | output directory. default directory is 'out'. |
-s, --split-size SIZE_IN_KB | size of each part (default: 16384) |
-t, --time-column NAME | name of the time column |
--time-value TIME[,HOURS] | time column's value. If the data doesn't have a time column, users can auto-generate the time column's value in 2 ways. TIME: where TIME is a Unix time in seconds since Epoch; the time column value is constant and equal to TIME (e.g. '--time-value 1394409600' assigns the equivalent of timestamp 2014-03-10T00:00:00 to all records imported). TIME,HOURS: where TIME is the Unix time in seconds since Epoch and HOURS is the maximum range of the timestamps in hours; this mode assigns incremental timestamps to subsequent records, incremented by 1 second per record. If the number of records causes the timestamp to overflow the range (timestamp >= TIME + HOURS * 3600), the next timestamp restarts at TIME and continues from there. E.g. '--time-value 1394409600,10' assigns timestamp 1394409600 to the first record, 1394409601 to the second, 1394409602 to the third, and so on until the 36000th record, which gets timestamp 1394445600 (1394409600 + 10 * 3600); the 36001st record restarts at 1394409600. |
--primary-key NAME:TYPE | pair of name and type of primary key declared in your item table |
--prepare-parallel NUM | prepare in parallel (default: 2; max 96) |
--only-columns NAME,NAME,... | only columns |
--exclude-columns NAME,NAME,... | exclude columns |
--error-records-handling MODE | error records handling mode [skip, abort]; default=skip |
--invalid-columns-handling MODE | invalid columns handling mode [autofix, warn]; default=warn |
--error-records-output DIR | write error records; default directory is 'error-records'. |
--columns NAME,NAME,... | column names (use --column-header instead if the first line has column names) |
--column-types TYPE,TYPE,... | column types [string, int, long, double] |
--column-type NAME:TYPE | column type [string, int, long, double]. A pair of column name and type can be specified like 'age:int' |
-S, --all-string | disable automatic type conversion |
--empty-as-null-if-numeric | the empty string values are interpreted as null values if columns are numerical types |
CSV/TSV Specific Options
Options | Description |
---|---|
--column-header | first line includes column names |
--delimiter CHAR | delimiter CHAR; default="," at csv, "\t" at tsv |
--escape CHAR | escape CHAR; default=\ |
--newline TYPE | newline [CRLF, LF, CR]; default=CRLF |
--quote CHAR | quote [DOUBLE, SINGLE, NONE]; if csv format, default=DOUBLE; if tsv format, default=NONE |
MySQL Specific Options
Options | Description |
---|---|
--db-url URL | JDBC connection URL |
--db-user NAME | user name for MySQL account |
--db-password PASSWORD | password for MySQL account |
REGEX Specific Options
Options | Description |
---|---|
--regex-pattern PATTERN | pattern to parse line. When 'regex' is used as source file format, this option is required |
Example
td import:prepare logs/*.csv --format csv --columns time,uid,price,count --time-column time -o parts/
td import:prepare logs/*.csv --format csv --columns date_code,uid,price,count --time-value 1394409600,10 -o parts/
td import:prepare mytable --format mysql --db-url jdbc:mysql://localhost/mydb --db-user myuser --db-password mypass
td import:prepare "s3://<s3_access_key>:<s3_secret_key>@/my_bucket/path/to/*.csv" --format csv --column-header --time-column date_time -o parts/
td import upload
Upload or re-upload files into a bulk import session
Usage
td import:upload <session name> <files...>
Options | Description |
---|---|
--retry-count NUM | upload process will automatically retry at specified time; default: 10 |
--auto-create DATABASE.TABLE | automatically create the bulk import session using the specified database and table names. If you use the 'auto-create' option, you MUST NOT specify a session name as the first argument. |
--auto-perform | perform bulk import job automatically |
--auto-commit | commit bulk import job automatically |
--auto-delete | delete bulk import session automatically |
--parallel NUM | upload in parallel (default: 2; max 8) |
-f, --format FORMAT | source file format [csv, tsv, json, msgpack, apache, regex, mysql]; default=csv |
-C, --compress TYPE | compressed type [gzip, none, auto]; default=auto detect |
-T, --time-format FORMAT | specifies the strftime format of the time column. The format slightly differs from Ruby's Time#strftime format in that the '%:z' and '%::z' timezone options are not supported. |
-e, --encoding TYPE | encoding type [UTF-8, etc.] |
-o, --output DIR | output directory. default directory is 'out'. |
-s, --split-size SIZE_IN_KB | size of each part (default: 16384) |
-t, --time-column NAME | name of the time column |
--time-value TIME[,HOURS] | time column's value. If the data doesn't have a time column, users can auto-generate the time column's value in 2 ways. TIME: where TIME is a Unix time in seconds since Epoch; the time column value is constant and equal to TIME seconds (e.g. '--time-value 1394409600' assigns the equivalent of timestamp 2014-03-10T00:00:00 to all records imported). TIME,HOURS: where TIME is the Unix time in seconds since Epoch and HOURS is the maximum range of the timestamps in hours; this mode assigns incremental timestamps to subsequent records, incremented by 1 second per record. If the number of records causes the timestamp to overflow the range (timestamp >= TIME + HOURS * 3600), the next timestamp restarts at TIME and continues from there. |
--primary-key NAME:TYPE | pair of name and type of primary key declared in your item table |
--prepare-parallel NUM | prepare in parallel (default: 2; max 96) |
--only-columns NAME,NAME,... | only columns |
--exclude-columns NAME,NAME,... | exclude columns |
--error-records-handling MODE | error records handling mode [skip, abort]; default=skip |
--invalid-columns-handling MODE | invalid columns handling mode [autofix, warn]; default=warn |
--error-records-output DIR | write error records; default directory is 'error-records'. |
--columns NAME,NAME,... | column names (use --column-header instead if the first line has column names) |
--column-types TYPE,TYPE,... | column types [string, int, long, double] |
--column-type NAME:TYPE | column type [string, int, long, double]. A pair of column name and type can be specified like 'age:int' |
-S, --all-string | disable automatic type conversion |
--empty-as-null-if-numeric | the empty string values are interpreted as null values if columns are numerical types |
CSV/TSV Specific Options
Options | Description |
---|---|
--column-header | first line includes column names |
--delimiter CHAR | delimiter CHAR; default="," at csv, "\t" at tsv |
--escape CHAR | escape CHAR; default=\ |
--newline TYPE | newline [CRLF, LF, CR]; default=CRLF |
--quote CHAR | quote [DOUBLE, SINGLE, NONE]; if csv format, default=DOUBLE; if tsv format, default=NONE |
MySQL Specific Options
Options | Description |
---|---|
--db-url URL | JDBC connection URL |
--db-user NAME | user name for MySQL account |
--db-password PASSWORD | password for MySQL account |
REGEX Specific Options
Options | Description |
---|---|
--regex-pattern PATTERN | pattern to parse line. When 'regex' is used as source file format, this option is required |
Example
td import:upload mysess parts/* --parallel 4
td import:upload mysess parts/*.csv --format csv --columns time,uid,price,count --time-column time -o parts/
td import:upload parts/*.csv --auto-create mydb.mytbl --format csv --columns time,uid,price,count --time-column time -o parts/
td import:upload mysess mytable --format mysql --db-url jdbc:mysql://localhost/mydb --db-user myuser --db-password mypass
td import:upload "s3://<s3_access_key>:<s3_secret_key>@/my_bucket/path/to/*.csv" --format csv --column-header --time-column date_time -o parts/
td import auto
Automatically upload or re-upload files into a bulk import session. It is the functional equivalent of the 'upload' command with the 'auto-perform', 'auto-commit' and 'auto-delete' options enabled. It does not, however, enable the 'auto-create' option by default; if you want 'auto-create', you must declare it explicitly as a command option (see the equivalence sketch after the examples below).
Usage
td import:auto <session name> <files...>
Options | Description |
---|---|
--retry-count NUM | upload process will automatically retry at specified time; default: 10 |
--auto-create DATABASE.TABLE | automatically create the bulk import session using the specified database and table names. If you use the 'auto-create' option, you MUST NOT specify a session name as the first argument. |
--parallel NUM | upload in parallel (default: 2; max 8) |
-f, --format FORMAT | source file format [csv, tsv, json, msgpack, apache, regex, mysql]; default=csv |
-C, --compress TYPE | compressed type [gzip, none, auto]; default=auto detect |
-T, --time-format FORMAT | specifies the strftime format of the time column. The format slightly differs from Ruby's Time#strftime format in that the '%:z' and '%::z' timezone options are not supported. |
-e, --encoding TYPE | encoding type [UTF-8, etc.] |
-o, --output DIR | output directory. default directory is 'out'. |
-s, --split-size SIZE_IN_KB | size of each part (default: 16384) |
-t, --time-column NAME | name of the time column |
--time-value TIME[,HOURS] | time column's value. If the data doesn't have a time column, users can auto-generate the time column's value in 2 ways: a constant TIME (Unix time in seconds since Epoch) applied to every record, or TIME,HOURS to assign incremental timestamps that wrap back to TIME after HOURS hours (see import:prepare for details). |
--primary-key NAME:TYPE | pair of name and type of primary key declared in your item table |
--prepare-parallel NUM | prepare in parallel (default: 2; max 96) |
--only-columns NAME,NAME,... | only columns |
--exclude-columns NAME,NAME,... | exclude columns |
--error-records-handling MODE | error records handling mode [skip, abort]; default=skip |
--invalid-columns-handling MODE | invalid columns handling mode [autofix, warn]; default=warn |
--error-records-output DIR | write error records; default directory is 'error-records'. |
--columns NAME,NAME,... | column names (use --column-header instead if the first line has column names) |
--column-types TYPE,TYPE,... | column types [string, int, long, double] |
--column-type NAME:TYPE | column type [string, int, long, double]. A pair of column name and type can be specified like 'age:int' |
-S, --all-string | disable automatic type conversion |
--empty-as-null-if-numeric | the empty string values are interpreted as null values if columns are numerical types |
CSV/TSV Specific Options
Options | Description |
---|---|
--column-header | first line includes column names |
--delimiter CHAR | delimiter CHAR; default="," at csv, "\t" at tsv |
--escape CHAR | escape CHAR; default=\ |
--newline TYPE | newline [CRLF, LF, CR]; default=CRLF |
--quote CHAR | quote [DOUBLE, SINGLE, NONE]; if csv format, default=DOUBLE; if tsv format, default=NONE |
MySQL Specific Options
Options | Description |
---|---|
--db-url URL | JDBC connection URL |
--db-user NAME | user name for MySQL account |
--db-password PASSWORD | password for MySQL account |
REGEX Specific Options
Options | Description |
---|---|
--regex-pattern PATTERN | pattern to parse line. When 'regex' is used as source file format, this option is required |
Example
td import:auto mysess parts/* --parallel 4
td import:auto mysess parts/*.csv --format csv --columns time,uid,price,count --time-column time -o parts/
td import:auto parts/*.csv --auto-create mydb.mytbl --format csv --columns time,uid,price,count --time-column time -o parts/
td import:auto mysess mytable --format mysql --db-url jdbc:mysql://localhost/mydb --db-user myuser --db-password mypass
td import:auto "s3://<s3_access_key>:<s3_secret_key>@/my_bucket/path/to/*.csv" --format csv --column-header --time-column date_time -o parts/
td import perform
Start to validate and convert uploaded files
Usage
td import:perform <name>
Options | Description |
---|---|
-w, --wait | wait for finishing the job |
-f, --force | force start performing |
-O, --pool-name NAME | specify resource pool by name |
Example
td import:perform logs_201201
td import error records
Show records which did not pass validations
Usage
td import:error_records <name>
Example
td import:error_records logs_201201
td import commit
Start to commit a performed bulk import session
Usage
td import:commit <name>
Options | Description |
---|---|
-w, --wait | wait for finishing the commit |
Example
td import:commit logs_201201
td import delete
Delete a bulk import session
Usage
td import:delete <name>
Example
td import:delete logs_201201
td import freeze
Pause any further data uploads for a bulk import session; subsequent uploads to the session are rejected.
Usage
td import:freeze <name>
Example
td import:freeze logs_201201
td import unfreeze
Unfreeze a bulk import session
Usage
td import:unfreeze <name>
Example
td import:unfreeze logs_201201
td import config
Create a guess config from arguments.
Usage
td import:config <files...>
Options | Description |
---|---|
-o, --out FILE_NAME | output file name for connector:guess |
-f, --format FORMAT | source file format [csv, tsv, mysql]; default=csv |
--db-url URL | Database Connection URL |
--db-user NAME | user name for database |
--db-password PASSWORD | password for database |
| not supported |
| not supported |
| not supported |
| not supported |
Example
td import:config "s3://<s3_access_key>:<s3_secret_key>@/my_bucket/path/to/*.csv" -o seed.yml
Bulk Import Commands
You can create and organize bulk imports from the command line.
- td bulk_import:list
- td bulk_import:show <name>
- td bulk_import:create <name> <db> <table>
- td bulk_import:prepare_parts <files...>
- td bulk_import:upload_parts <name> <files...>
- td bulk_import:delete_parts <name> <ids...>
- td bulk_import:perform <name>
- td bulk_import:error_records <name>
- td bulk_import:commit <name>
- td bulk_import:delete <name>
- td bulk_import:freeze <name>
- td bulk_import:unfreeze <name>
For instructions on how to use the bulk import commands, refer to the Bulk Import API Tutorial.
td bulk import list
List bulk import sessions
Usage
td bulk_import:list
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td bulk_import:list
td bulk import show
Shows a list of uploaded parts
Usage
td bulk_import:show <name>
Example
td bulk_import:show logs_201201
td bulk import create
Creates a new bulk import session to the table
Usage
td bulk_import:create <name> <db> <table>
Example
td bulk_import:create logs_201201 example_db event_logs
td bulk import prepare parts
Converts files into part file format
Usage
td bulk_import:prepare_parts <files...>
Options | Description |
---|---|
-f, --format NAME | source file format [csv, tsv, msgpack, json] |
-h, --columns NAME,NAME,... | column names (use --column-header instead if the first line has column names) |
-H, --column-header | first line includes column names |
-d, --delimiter REGEX | delimiter between columns (default: (?-mix:\t|,)) |
--null REGEX | null expression for the automatic type conversion (default: (?i-mx:\A(?:null||\-|\\N)\z)) |
--true REGEX | true expression for the automatic type conversion (default: (?i-mx:\A(?:true)\z)) |
--false REGEX | false expression for the automatic type conversion (default: (?i-mx:\A(?:false)\z)) |
-S, --all-string | disable automatic type conversion |
-t, --time-column NAME | name of the time column |
-T, --time-format FORMAT | strftime(3) format of the time column |
--time-value TIME | value of the time column |
-e, --encoding NAME | text encoding |
-C, --compress NAME | compression format name [plain, gzip] (default: auto detect) |
-s, --split-size SIZE_IN_KB | size of each part (default: 16384) |
-o, --output DIR | output directory |
Example
td bulk_import:prepare_parts logs/*.csv --format csv --columns time,uid,price,count --time-column "time" -o parts/
td bulk import upload parts
Uploads or re-uploads files into a bulk import session
Usage
td bulk_import:upload_parts <name> <files...>
Options | Description |
---|---|
-P, --prefix NAME | add prefix to parts name |
-s, --use-suffix COUNT | use COUNT number of . (dots) in the source file name to the parts name |
--auto-perform | perform bulk import job automatically |
--parallel NUM | perform uploading in parallel (default: 2; max 8) |
-O, --pool-name NAME | specify resource pool by name |
Example
td bulk_import:upload_parts parts/* --parallel 4
td bulk import delete parts
Delete uploaded files from a bulk import session
Usage
td bulk_import:delete_parts <name> <ids...>
Options | Description |
---|---|
-P, --prefix NAME | add prefix to parts name |
Example
td bulk_import:delete_parts logs_201201 01h 02h 03h
td bulk import perform
Start to validate and convert uploaded files
Usage
td bulk_import:perform <name>
Options | Description |
---|---|
-w, --wait | wait for finishing the job |
-f, --force | force start performing |
-O, --pool-name NAME | specify resource pool by name |
Example
td bulk_import:perform logs_201201
td bulk import error records
Show records which did not pass validations
Usage
td bulk_import:error_records <name>
Example
td bulk_import:error_records logs_201201
td bulk import commit
Start to commit a performed bulk import session
Usage
td bulk_import:commit <name>
Options | Description |
---|---|
-w, --wait | wait for finishing the commit |
Example
td bulk_import:commit logs_201201
td bulk import delete
Delete a bulk import session
Usage
td bulk_import:delete <name>
Example
td bulk_import:delete logs_201201
td bulk import freeze
Block the upload to a bulk import session
Usage
td bulk_import:freeze <name>
Example
td bulk_import:freeze logs_201201
td bulk import unfreeze
Unfreeze a frozen bulk import session
Usage
td bulk_import:unfreeze <name>
Example
td bulk_import:unfreeze logs_201201
Result Commands
You can use the command line to list, create, show, and delete results.
td result list
Show list of result URLs
Usage
td result:list
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td result:list
td results
td result show
Describe information of a result URL.
Usage
td result:show <name>
Example
td result:show name
td result create
Create a result URL
Usage
td result:create <name> <URL>
Options | Description |
---|---|
-u, --user NAME | set user name for authentication |
-p, --password | ask password for authentication |
Example
td result:create name mysql://my-server/mydb
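A created result target can then be referenced by name from a query via the -r option of td query, for example:
td query -d example_db -w -r name "select count(*) from table1"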
td result delete
Delete a result URL.
Usage
td result:delete <name>
Example
td result:delete name
Schedule Commands
You can use the command line to schedule, update, delete, and list queries.
- td sched:list
- td sched:create
- td sched:delete
- td sched:update
- td sched:history
- td sched:run
- td sched:result
td sched list
Show list of schedules
Usage
td sched:list
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td sched:list
td scheds
td sched create
Create a schedule
Usage
td sched:create <name> <cron> [sql]
Options | Description |
---|---|
-d, --database DB_NAME | use the database (required) |
-t, --timezone TZ | name of the timezone. Only extended timezones like 'Asia/Tokyo' or 'America/Los_Angeles' are supported (no 'PST', 'PDT', etc.). When a timezone is specified, the cron schedule is interpreted in that timezone; otherwise it is interpreted in UTC. E.g. cron schedule '0 12 * * *' executes daily at 12 PM UTC (5 AM Los Angeles time) without the timezone option, and at 12 PM Los Angeles time with --timezone 'America/Los_Angeles'. |
-D, --delay SECONDS | delay time of the schedule |
-r, --result RESULT_URL | write result to the URL (see also the result:create subcommand) |
-u, --user NAME | set user name for the result URL |
-p, --password | ask password for the result URL |
-P, --priority PRIORITY | set priority |
-q, --query PATH | use file instead of inline query |
-R, --retry COUNT | automatic retrying count |
-T, --type TYPE | set query type (hive) |
Example
td sched:create sched1 "0 * * * *" -d example_db "select count(*) from table1" -r rset1
td sched:create sched1 "0 * * * *" -d example_db -q query.txt -r rset2
td sched delete
Delete a schedule
Usage
td sched:delete <name>
Example
td sched:delete sched1
td sched update
Modify a schedule
Usage
td sched:update <name>
Options | Description |
---|---|
-n, --newname NAME | change the schedule's name |
-s, --schedule CRON | change the schedule |
-q, --query SQL | change the query |
-d, --database DB_NAME | change the database |
-r, --result RESULT_URL | change the result target (see also the result:create subcommand) |
-t, --timezone TZ | name of the timezone. Only extended timezones like 'Asia/Tokyo' or 'America/Los_Angeles' are supported (no 'PST', 'PDT', etc.). When a timezone is specified, the cron schedule is interpreted in that timezone; otherwise it is interpreted in UTC. E.g. cron schedule '0 12 * * *' executes daily at 12 PM UTC (5 AM Los Angeles time) without the timezone option, and at 12 PM Los Angeles time with --timezone 'America/Los_Angeles'. |
-D, --delay SECONDS | change the delay time of the schedule |
-P, --priority PRIORITY | set priority |
-R, --retry COUNT | automatic retrying count |
-T, --type TYPE | set query type (hive) |
--engine-version ENGINE_VERSION | EXPERIMENTAL: specify query engine version by name |
Example
td sched:update sched1 -s "0 */2 * * *" -d my_db -t "Asia/Tokyo" -D 3600
td sched history
Show history of scheduled queries
Usage
td sched:history <name> [max]
Options | Description |
---|---|
-p, --page PAGE | skip N pages |
-s, --skip N | skip N schedules |
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td sched:history sched1 --page 1
td sched run
Run scheduled queries for the specified time
Usage
td sched:run <name> <time>
Options | Description |
---|---|
-n, --num N | number of jobs to run |
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td sched:run sched1 "2013-01-01 00:00:00" -n 6
td sched result
Show status and result of the last job that ran. --last [N] shows the result N jobs before the last. The other options are identical to those of the job:show command.
Usage
td sched:result <name>
Options | Description |
---|---|
-v, --verbose | show logs |
-w, --wait | wait for finishing the job |
-G, --vertical | use vertical table to show results |
-o, --output PATH | write result to the file |
-l, --limit ROWS | limit the number of result rows shown when not outputting to file |
-c, --column-header | output the columns' header when the schema is available for the table (only applies to tsv and csv formats) |
-x, --exclude | do not automatically retrieve the job result |
--null STRING | null expression in csv or tsv |
-f, --format FORMAT | format of the result to write to the file (tsv, csv, json, msgpack, and msgpack.gz) |
--last [N] | show the result N jobs before the last; default: 1 |
Example
td sched:result NAME
td sched:result NAME --last
td sched:result NAME --last 3
Schema Commands
Use the command line to work with schema in a table.
td schema show
Show schema of a table
Usage
td schema:show <db> <table>
Example
td schema:show example_db table1
td schema set
Set new schema on a table
Usage
td schema:set <db> <table> [columns...]
Example
td schema:set example_db table1 user:string size:int
td schema add
Add new columns to a table.
Usage
td schema:add <db> <table> <columns...>
Example
td schema:add example_db table1 user:string size:int
td schema remove
Remove columns from a table
Usage
td schema:remove <db> <table> <columns...>
Example
td schema:remove example_db table1 user size
Connector Commands
You can use the command line to control several elements related to connectors.
- td connector:guess
- td connector:preview
- td connector:issue
- td connector:list
- td connector:create
- td connector:show
- td connector:update
- td connector:delete
- td connector:history
- td connector:run
td connector guess
Run guess to generate a connector configuration file. Using the connector's credentials, this command examines the data and attempts to determine the file type, delimiter character, and column names. This "guess" is then written to the configuration file for the connector. This command is useful for file-based connectors.
Usage
td connector:guess [config]
Options | Description |
---|---|
--type[=TYPE] | (obsoleted) |
--access-id ID | (obsoleted) |
--access-secret SECRET | (obsoleted) |
--source SOURCE | (obsoleted) |
-o, --out FILE_NAME | output file name for connector:preview |
-g, --guess NAME,NAME,... | specify list of guess plugins that users want to use |
Example
td connector:guess seed.yml -o config.yml
Example seed.yml
in:
type: s3
bucket: my-s3-bucket
endpoint: s3-us-west-1.amazonaws.com
path_prefix: path/prefix/to/import/
access_key_id: ABCXYZ123ABCXYZ123
secret_access_key: AbCxYz123aBcXyZ123
out:
mode: append
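A typical file-based load then chains the commands documented in this section (illustrative database and table names):
td connector:guess seed.yml -o config.yml
td connector:preview config.yml
td connector:issue config.yml --database example_db --table table1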
td connector preview
Show a subset of possible data that the data connector fetches
Usage
td connector:preview <config>
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td connector:preview td-load.yml
td connector issue
Runs connector execution one time only
Usage
td connector:issue <config>
Options | Description |
---|---|
--database DB_NAME | destination database |
--table TABLE_NAME | destination table |
--time-column COLUMN_NAME | data partitioning key |
-w, --wait | wait for finishing the job |
--auto-create-table | create table and database if they do not exist |
Example
td connector:issue td-load.yml
td connector list
Shows a list of connector sessions
Usage
td connector:list
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td connector:list
td connector create
Creates a new connector session
Usage
td connector:create <name> <cron> <database> <table> <config>
Options | Description |
---|---|
--time-column COLUMN_NAME | data partitioning key |
-t, --timezone TZ | name of the timezone. Only extended timezones like 'Asia/Tokyo' or 'America/Los_Angeles' are supported (no 'PST', 'PDT', etc.). When a timezone is specified, the cron schedule is interpreted in that timezone; otherwise it is interpreted in UTC. E.g. cron schedule '0 12 * * *' executes daily at 12 PM UTC (5 AM Los Angeles time) without the timezone option, and at 12 PM Los Angeles time with --timezone 'America/Los_Angeles'. |
-D, --delay SECONDS | delay time of the schedule |
Example
td connector:create connector1 "0 * * * *" connector_database connector_table td-load.yml
td connector show
Shows the execution settings for a connector such as name, timezone, delay, database, table
Usage
td connector:show <name>
Example
td connector:show connector1
td connector update
Modify a connector session
Usage
td connector:update <name> [config]
Options | Description |
---|---|
-n, --newname NAME | change the schedule's name |
-d, --database DB_NAME | change the database |
-t, --table TABLE_NAME | change the table |
-s, --schedule [CRON] | change the schedule or leave blank to remove the schedule |
-z, --timezone TZ | name of the timezone. Only extended timezones like 'Asia/Tokyo' or 'America/Los_Angeles' are supported (no 'PST', 'PDT', etc.). When a timezone is specified, the cron schedule is interpreted in that timezone; otherwise it is interpreted in UTC. E.g. cron schedule '0 12 * * *' executes daily at 12 PM UTC (5 AM Los Angeles time) without the timezone option, and at 12 PM Los Angeles time with --timezone 'America/Los_Angeles'. |
-D, --delay SECONDS | change the delay time of the schedule |
-T, --time-column COLUMN_NAME | change the name of the time column |
-c, --config CONFIG_FILE | update the connector configuration |
--config-diff CONFIG_DIFF_FILE | update the connector config_diff |
Example
td connector:update connector1 -c td-bulkload.yml -s '@daily' ...
td connector delete
Delete a connector session
Usage
td connector:delete <name>
Example
td connector:delete connector1
td connector history
Show the job history of a connector session
Usage
td connector:history <name>
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td connector:history connector1
td connector run
Run a connector session for the specified time option.
Usage
td connector:run <name> [time]
Options | Description |
---|---|
-w, --wait | wait for finishing the job |
Example
td connector:run connector1 "2016-01-01 00:00:00"
User Commands
You can use the command line to control several elements related to users.
- td user:list
- td user:show
- td user:create
- td user:delete
- td user:apikey:list
- td user:apikey:add
- td user:apikey:remove
td user list
Show a list of users.
Usage
td user:list
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td user:list
td user:list -f csv
td user show
Show a user.
Usage
td user:show <name>
Example
td user:show "Roberta Smith"
td user create
Create a user. As part of the user creation process, you will be prompted to provide a password for the user.
Usage
td user:create <first_name> --email <email_address>
Example
td user:create "Roberta" --email "roberta.smith@acme.com"
td user delete
Delete a user.
Usage
td user:delete <email_address>
Example
td user:delete roberta.smith@acme.com
td user apikey list
Show API keys for a user.
Usage
td user:apikey:list <email_address>
Options | Description |
---|---|
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
Example
td user:apikey:list roberta.smith@acme.com
td user:apikey:list roberta.smith@acme.com -f csv
td user apikey add
Add an API key to a user.
Usage
td user:apikey:add <email_address>
Example
td user:apikey:add roberta.smith@acme.com
td user apikey remove
Remove an API key from a user.
Usage
td user:apikey:remove <email_address> <apikey>
Example
td user:apikey:remove roberta.smith@acme.com 1234565/abcdefg
Workflow Commands
You can create or modify workflows from the CLI using the following commands. The command wf can be used interchangeably with workflow.
Basic Workflow Commands
td workflow reset
Reset the workflow module.
Usage
td workflow:reset
td workflow update
Update the workflow module
Usage
td workflow:update [version]
td workflow version
Show workflow module version
Usage
td workflow:version
Local-mode commands
You can use the following commands to locally initiate changes to workflows.
Usage
td workflow <command> [options...]
Command | Description |
---|---|
init <dir> | create a new workflow project |
run <workflow.dig> | run a workflow |
check | show workflow definitions |
scheduler | run a scheduler server |
migrate (run/check) | migrate database |
selfupdate | update CLI to the latest version |
Info
To manage secrets in local mode, use the following command:
td workflow secrets --local
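For example, to store an API key as a local secret (assuming the digdag-style --set syntax):
td workflow secrets --local --set td.apikey=YOUR_API_KEY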
Server-mode commands
You can use the following commands to initiate changes to workflows from the server.
Usage
td workflow <command> [options...]
Command | Description |
---|---|
server | start server |
Client-mode commands
You can use the following commands to initiate changes to workflows from the client.
Usage
td workflow <command> [options...]
Command | Description |
---|---|
push <project-name> | create and upload a new revision |
download <project-name> | pull an uploaded revision |
start <project-name> <workflow-name> | start a new session attempt of a workflow |
retry <attempt-id> | retry a session |
kill <attempt-id> | kill a running session attempt |
backfill <schedule-id> | start sessions of a schedule for past times |
backfill <project-name> <workflow-name> | start sessions of a schedule for past times |
reschedule <schedule-id> | skip sessions of a schedule to a future time |
reschedule <project-name> <workflow-name> | skip sessions of a schedule to a future time |
projects [name] | show projects |
workflows [project-name] [workflow-name] | show registered workflow definitions |
schedules | show registered schedules |
disable <schedule-id> | disable a workflow schedule |
disable <project-name> | disable all workflow schedules in a project |
disable <project-name> <workflow-name> | disable a workflow schedule |
enable <schedule-id> | enable a workflow schedule |
enable <project-name> | enable all workflow schedules in a project |
enable <project-name> <workflow-name> | enable a workflow schedule |
sessions | show sessions for all workflows |
sessions <project-name> | show sessions for all workflows in a project |
sessions <project-name> <workflow-name> | show sessions for a workflow |
session <session-id> | show a single session |
attempts | show attempts for all sessions |
attempts <session-id> | show attempts for a session |
attempt <attempt-id> | show a single attempt |
tasks <attempt-id> | show tasks of a session attempt |
delete <project-name> | delete a project |
secrets | manage secrets |
version | show client and server version |
Common options:
Parameter | Description |
---|---|
-L, --log PATH | output log messages to a file (default: -) |
-l, --log-level LEVEL | log level (error, warn, info, debug or trace) |
-X KEY=VALUE | add a performance system config |
-c, --config PATH.properties | Configuration file (default: /Users/<user_name>/.config/digdag/config) |
--version | show client version |
Client options:
Parameter | Description |
---|---|
-e, --endpoint HOST[:PORT] | Server endpoint |
-H, --header KEY=VALUE | Additional headers |
--disable-version-check | Disable server version check |
--disable-cert-validation | Disable certificate verification |
--basic-auth <user:pass> | Add an Authorization header with the provided username and password |
Job Commands
You can view status and results of jobs, view lists of jobs and delete jobs using the CLI.
td job show
Show status and results of a job.
Usage
td job:show <job_id>
Example
td job:show 1461
Options | Description |
---|---|
-v, --verbose | show logs |
-w, --wait | wait for finishing the job |
-G, --vertical | use vertical table to show results |
-o, --output PATH | write result to the file |
-l, --limit ROWS | limit the number of result rows shown when not outputting to file |
-c, --column-header | output the columns' header when the schema is available for the table (only applies to tsv and csv formats) |
-x, --exclude | do not automatically retrieve the job result |
--null STRING | null expression in csv or tsv |
-f, --format FORMAT | format of the result to write to the file (tsv, csv, json, msgpack, and msgpack.gz) |
td job status
Show status progress of a job.
Usage
td job:status <job_id>
Example
td job:status 1461
td job list
Show list of jobs.
Usage
td job:list [max]
[max] is the number of jobs to show.
Example
td jobs --page 1
Options | Description |
---|---|
-p, --page PAGE | skip N pages |
-s, --skip N | skip N jobs |
-R, --running | show only running jobs |
-S, --success | show only succeeded jobs |
-E, --error | show only failed jobs |
--slow [SECONDS] | show slow queries (default threshold: 3600 seconds) |
-f, --format FORMAT | format of the result rendering (tsv, csv, json or table; default is table) |
td job kill
Kill or cancel a job.
Usage
td job:kill <job_id>
Example
td job:kill 1461