# CLI Commands
The Snowpack CLI provides three commands for interactive use. Run all commands
with `uv run snowpack` from the project root.

Every command accepts `--spark-host`, `--spark-port`, and `--catalog` flags.
These can also be set via the `SNOWPACK_SPARK_HOST`, `SNOWPACK_SPARK_PORT`, and
`SNOWPACK_CATALOG` environment variables. Explicit flags take precedence over
environment variables.
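Because of this precedence, you can configure a session once with environment variables and override individual settings per invocation. A minimal sketch (hostname and catalog values are placeholders):

```shell
# Configure the connection once for the shell session.
# Hostname and catalog values here are placeholders.
export SNOWPACK_SPARK_HOST=spark.internal.example.com
export SNOWPACK_SPARK_PORT=10000
export SNOWPACK_CATALOG=glue_catalog

# Picks up the environment variables above.
uv run snowpack tables

# The explicit flag overrides SNOWPACK_SPARK_HOST for this invocation only.
uv run snowpack tables --spark-host localhost
```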
## snowpack tables
Lists all Iceberg tables visible in the catalog, optionally filtered to a single database.
### Flags

| Flag | Default | Description |
|---|---|---|
| `--spark-host` | `localhost` | Spark Thrift Server hostname. |
| `--spark-port` | `10000` | Spark Thrift Server port. |
| `--catalog` | `glue_catalog` | Iceberg catalog name. |
| `--database`, `-d` | (all) | Filter to a single database. |
### Examples

List all tables across every database:

```bash
uv run snowpack tables
```

List tables in a specific database:

```bash
uv run snowpack tables --database my_database
```

Use a non-default Spark host:

```bash
uv run snowpack tables --spark-host spark.internal.example.com
```

## snowpack health
Fetches live health metrics for a single table directly from the PyIceberg catalog. Returns small file count, snapshot count, manifest count, and position delete file count.
### Flags

| Flag | Default | Description |
|---|---|---|
| `--spark-host` | `localhost` | Spark Thrift Server hostname. |
| `--spark-port` | `10000` | Spark Thrift Server port. |
| `--catalog` | `glue_catalog` | Iceberg catalog name. |
| `--verbose`, `-v` | `false` | Show detailed breakdown of each metric. |
### Positional arguments

| Argument | Description |
|---|---|
| `database` | Database name. |
| `table` | Table name. |
### Examples

Check health for a single table:

```bash
uv run snowpack health my_database my_table
```

Verbose output with full metric breakdown:

```bash
uv run snowpack health --verbose my_database my_table
```

## snowpack maintain
Submits a maintenance job for a single table. By default, Snowpack selects the
appropriate actions based on health analysis. You can override this by specifying
actions explicitly with `--action`.
### Flags

| Flag | Default | Description |
|---|---|---|
| `--spark-host` | `localhost` | Spark Thrift Server hostname. |
| `--spark-port` | `10000` | Spark Thrift Server port. |
| `--catalog` | `glue_catalog` | Iceberg catalog name. |
| `--action` | (auto) | Maintenance action to run. Repeatable: pass multiple `--action` flags to run several actions in sequence. Valid values: `rewrite_data_files`, `rewrite_position_delete_files`, `rewrite_manifests`, `expire_snapshots`, `remove_orphan_files`. |
| `--target-file-size-mb` | `512` | Target file size in MB for compaction. |
| `--min-file-size-mb` | `384` | Minimum file size in MB. Files smaller than this are candidates for compaction. |
| `--dry-run` | `false` | Log the maintenance plan without executing it. |
| `--verbose`, `-v` | `false` | Show detailed output during execution. |
### Positional arguments

| Argument | Description |
|---|---|
| `database` | Database name. |
| `table` | Table name. |
### Examples

Run automatic maintenance (Snowpack decides which actions to take):

```bash
uv run snowpack maintain my_database my_table
```

Run only compaction and snapshot expiration:

```bash
uv run snowpack maintain \
  --action rewrite_data_files \
  --action expire_snapshots \
  my_database my_table
```

Dry run with custom file size targets:

```bash
uv run snowpack maintain \
  --dry-run \
  --target-file-size-mb 256 \
  --min-file-size-mb 192 \
  my_database my_table
```

Verbose output against a remote Spark host:

```bash
uv run snowpack maintain \
  --spark-host spark.internal.example.com \
  --verbose \
  my_database my_table
```
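Since `maintain` targets one table per invocation, it composes naturally into a wrapper script. A minimal sketch that dry-runs maintenance across several tables (the database and table names here are placeholders, not part of Snowpack):

```shell
#!/usr/bin/env sh
# Hypothetical wrapper: dry-run maintenance for a fixed list of tables.
# Database and table names are placeholders; adjust to your catalog.
for table in events clicks sessions; do
  echo "Planning maintenance for my_database.${table}"
  uv run snowpack maintain --dry-run my_database "$table"
done
```

Dropping `--dry-run` would submit the jobs for real; because each invocation handles a single table, a failure on one table does not block the rest of the loop.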