
Zart is in active development — breaking API changes may occur despite our best efforts to keep contracts stable.

Deployment Options

The simplest deployment: run the worker inside your application binary using a background tokio task.

use zart::{Worker, WorkerConfig, TaskRegistry};
use zart_postgres::PostgresScheduler;
use std::time::Duration;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let scheduler = PostgresScheduler::connect(
        &std::env::var("DATABASE_URL")?
    ).await?;

    // Register all your task handlers
    let mut registry = TaskRegistry::new();
    registry.register("onboarding", OnboardingTask::new());
    registry.register("checkout", CheckoutTask::new());
    registry.register("report-gen", ReportTask::new());

    // Build the worker
    let worker = Worker::new(
        scheduler.clone(),
        registry,
        WorkerConfig {
            poll_interval: Duration::from_secs(5),
            max_tasks_per_poll: 10,
            max_concurrent_tasks: 16,
            shutdown_timeout: Duration::from_secs(30),
        },
    );

    // Spawn worker as a background task — doesn't block your app
    let worker_handle = tokio::spawn(async move {
        worker.run().await
    });

    // Your web server / app logic runs here alongside the worker
    run_axum_server(scheduler).await?;

    // Stop the worker task once the server exits
    worker_handle.abort();
    Ok(())
}

Best for:

  • Monoliths and smaller services
  • Development and testing
  • When you want the simplest possible setup
  • Applications where all tasks are handled in one binary

| Database | Crate | Notes |
| --- | --- | --- |
| PostgreSQL 14+ | zart-postgres | Recommended for production. Uses SKIP LOCKED, advisory locks, and LISTEN/NOTIFY for efficient polling. |
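For intuition, the SKIP LOCKED technique the table refers to lets many workers poll the same queue table without contending for the same rows: each claim query simply skips rows another transaction has already locked. The sketch below is illustrative only; the column names and statuses are hypothetical, not Zart's actual schema or query.

```sql
-- Illustrative claim query (hypothetical columns, not Zart's real schema):
-- atomically claim up to 10 due tasks that no other worker holds a lock on.
UPDATE zart_tasks
SET status = 'running', locked_at = now()
WHERE id IN (
    SELECT id
    FROM zart_tasks
    WHERE status = 'queued' AND run_at <= now()
    ORDER BY run_at
    LIMIT 10
    FOR UPDATE SKIP LOCKED
)
RETURNING id;
```

Because locked rows are skipped rather than waited on, adding workers increases throughput instead of creating lock queues.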

The PostgreSQL backend uses sqlx's built-in connection pool. Configure the pool via PoolConfig when connecting:

let scheduler = PostgresScheduler::connect_with_pool(
    &database_url,
    PoolConfig {
        max_connections: 20,
        min_connections: 2,
        acquire_timeout: Duration::from_secs(30),
    },
).await?;

By default Zart uses zart_tasks, zart_executions, and the other zart_* tables created by the bundled migration. If those names collide with existing tables in your database, or you need multiple logical workers sharing one database (e.g. per-environment isolation), you can override them with TableNames.

use zart_postgres::{PostgresScheduler, TableNames};

// Prefix the default names
let names = TableNames::with_prefix("myapp_")?; // → myapp_tasks, myapp_executions, …
let scheduler = PostgresScheduler::with_table_names(pool, names);

// Keep the default names, but in a dedicated schema
let names = TableNames::default().with_schema("tenant_a")?;
// → "tenant_a"."zart_tasks", "tenant_a"."zart_executions", …
let scheduler = PostgresScheduler::with_table_names(pool, names);

// Combine a prefix with a schema
let names = TableNames::with_prefix("svc_")?.with_schema("myschema")?;
// → "myschema"."svc_tasks", …
let scheduler = PostgresScheduler::with_table_names(pool, names);

For CLI or container deployments, TableNames::from_env_or_default() reads ZART_TABLE_PREFIX and ZART_SCHEMA:

ZART_TABLE_PREFIX=myapp_ ZART_SCHEMA=tenant_a ./my-worker

let names = TableNames::from_env_or_default()?;
let scheduler = PostgresScheduler::with_table_names(pool, names);

All deployment modes support graceful shutdown. When a shutdown signal is received:

  1. The worker stops polling for new tasks.
  2. In-flight tasks are allowed to complete (up to shutdown_timeout).
  3. Tasks still running after the timeout are re-queued for another worker.
WorkerConfig {
    shutdown_timeout: Duration::from_secs(30), // wait up to 30s for tasks to finish
    // ...
}
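Wired into the embedded-worker example above, shutdown can be triggered from a Unix signal. This is a sketch only: racing ctrl_c against the server with select! is our wiring, not a documented Zart API, and the abort() call relies on the re-queue behaviour described in step 3.

```rust
use tokio::signal;

let worker_handle = tokio::spawn(async move { worker.run().await });

tokio::select! {
    // Ctrl-C / SIGINT: stop the worker. abort() cancels the polling loop;
    // any task interrupted mid-flight is covered by the re-queue-on-timeout
    // behaviour described in step 3 above.
    _ = signal::ctrl_c() => worker_handle.abort(),
    // Normal exit path: the server finished on its own.
    res = run_axum_server(scheduler) => res?,
}
```

If Zart exposes a dedicated shutdown handle, prefer it over abort() so in-flight tasks get the full shutdown_timeout drain window.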

In Docker / Kubernetes, set terminationGracePeriodSeconds to at least shutdown_timeout + 10 to give the pod time to drain cleanly.
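For example, with the 30-second shutdown_timeout configured above, the corresponding pod spec fragment might look like this (a sketch; container name and image are placeholders):

```yaml
spec:
  # shutdown_timeout (30s) + 10s headroom for signal delivery and cleanup
  terminationGracePeriodSeconds: 40
  containers:
    - name: worker
      image: my-worker:latest
```

Kubernetes sends SIGTERM first, then SIGKILL once the grace period elapses, so the grace period must comfortably exceed the worker's own drain window.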