
Zart is in active development — breaking API changes may occur despite our best efforts to keep contracts stable.

Parallel Steps

This example runs health checks on three services concurrently. Each check is an independent step scheduled at the same time; zart::wait collects their results once all complete. A final sequential step aggregates the data into a summary.

Features demonstrated: #[zart_step], zart::schedule(), zart::wait(), structured output, DurableExecution trait.

use zart::prelude::*;
use zart::zart_step;

#[derive(Debug, Clone, Serialize, Deserialize)]
struct HealthCheckInput {
    services: Vec<String>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
struct ServiceResult {
    name: String,
    status: String,
    response_ms: u64,
    issues: Vec<String>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
struct HealthCheckOutput {
    services_checked: usize,
    total_issues: usize,
    results: Vec<ServiceResult>,
}

#[zart_step("check-service")]
async fn check_service(service: String) -> Result<ServiceResult, StepError> {
    // Simulated check results; a real implementation would probe each service
    let (status, response_ms, issues) = match service.as_str() {
        "auth-api" => ("healthy".into(), 42, vec![]),
        "payments" => ("degraded".into(), 156, vec!["high latency detected".into()]),
        "users-db" => ("healthy".into(), 28, vec![]),
        _ => ("unknown".into(), 0, vec!["no check configured".into()]),
    };
    Ok(ServiceResult { name: service, status, response_ms, issues })
}

struct HealthCheckTask;

#[async_trait]
impl DurableExecution for HealthCheckTask {
    type Data = HealthCheckInput;
    type Output = HealthCheckOutput;

    async fn run(&self, data: Self::Data) -> Result<Self::Output, TaskError> {
        // Schedule one step per service — all are registered before any runs
        let handles: Vec<StepHandle<ServiceResult>> = data
            .services
            .iter()
            .map(|service| zart::schedule(check_service(service.clone())))
            .collect();

        // Block until every scheduled step has a persisted result
        let results = zart::wait(handles).await?;

        let mut service_results = vec![];
        for result in results {
            let svc = result.map_err(|e| TaskError::StepFailed {
                step: "parallel-health-check".into(),
                source: e,
            })?;
            service_results.push(svc);
        }

        let total_issues = service_results.iter().map(|s| s.issues.len()).sum();
        Ok(HealthCheckOutput {
            services_checked: service_results.len(),
            total_issues,
            results: service_results,
        })
    }
}
let mut registry = TaskRegistry::new();
registry.register("health-check", HealthCheckTask);
let registry = Arc::new(registry);

// `sched` is the underlying scheduler set up earlier (not shown here)
let durable = DurableScheduler::new(sched.clone());
let output = durable
    .start_and_wait_for::<HealthCheckTask>(
        "health-check-run-1",
        "health-check",
        &HealthCheckInput {
            services: vec!["auth-api".into(), "payments".into(), "users-db".into()],
        },
        Duration::from_secs(60),
    )
    .await?;

println!("Checked {} services, {} issues", output.services_checked, output.total_issues);
Example output:

=== Zart Parallel Steps Example ===
Starting execution 'parallel-demo-...'...
Services: ["auth-api", "payments", "users-db"]
Worker started. Steps executing...
Execution completed!
Services checked: 3
Total issues: 1
  Service: auth-api — status: healthy (42ms)
  Service: payments — status: degraded (156ms)
    Issue: high latency detected
  Service: users-db — status: healthy (28ms)
// Sequential — each step waits for the previous one to finish
let a = step_a().await?;
let b = step_b().await?;
let c = step_c().await?;
// Parallel — all three are registered first, then collected together
let h1 = zart::schedule(step_a());
let h2 = zart::schedule(step_b());
let h3 = zart::schedule(step_c());
let results = zart::wait(vec![h1, h2, h3]).await?;
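The latency difference is easy to observe outside of zart. The sketch below is a plain-std analogy (threads standing in for scheduled steps, no zart APIs): three independent 50 ms waits take roughly the length of one wait when overlapped, versus three times that when chained.

```rust
use std::thread;
use std::time::{Duration, Instant};

// Run `n` simulated 50 ms checks one after another.
fn run_sequential(n: usize) -> Duration {
    let start = Instant::now();
    for _ in 0..n {
        thread::sleep(Duration::from_millis(50));
    }
    start.elapsed()
}

// Run the same checks on separate threads and join them all.
fn run_parallel(n: usize) -> Duration {
    let start = Instant::now();
    let handles: Vec<_> = (0..n)
        .map(|_| thread::spawn(|| thread::sleep(Duration::from_millis(50))))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    start.elapsed()
}

fn main() {
    let seq = run_sequential(3);
    let par = run_parallel(3);
    println!("sequential: {:?}, parallel: {:?}", seq, par);
    // Overlapping the waits should be clearly faster than chaining them
    assert!(par < seq);
}
```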

Use parallel steps when the work is independent and order does not matter. Use sequential steps when each step depends on the previous step’s output.

zart::schedule(step) — registers the step for execution without waiting. Returns a handle. The step is enqueued in the scheduler and may run on any available worker.
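The schedule-then-wait split mirrors how `std::thread::spawn` behaves: spawning registers the work and hands back a handle immediately, and only joining the handle blocks. A minimal plain-std sketch of that shape (an analogy, not zart's implementation):

```rust
use std::thread;

fn main() {
    // "Scheduling": spawn registers the work and returns a handle immediately
    let handle = thread::spawn(|| {
        // ...the step body runs on another thread, at its own pace...
        21 * 2
    });

    // The current code keeps running; nothing has been waited on yet
    println!("step scheduled, doing other work...");

    // Only joining the handle blocks for the result
    let result = handle.join().unwrap();
    assert_eq!(result, 42);
}
```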

zart::wait(handles) — suspends the current execution until every handle has a result. Returns results in the same order as the input handles.
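The ordering guarantee can be pictured with plain thread handles (again an analogy, not zart internals): joining handles in the order they were created yields results in that order, even when the underlying work finishes in a different order.

```rust
use std::thread;
use std::time::Duration;

fn main() {
    // Steps deliberately finish in reverse order (longest sleep first)
    let delays = [30u64, 20, 10];
    let handles: Vec<_> = delays
        .iter()
        .map(|&ms| {
            thread::spawn(move || {
                thread::sleep(Duration::from_millis(ms));
                ms // the "result" identifies the step
            })
        })
        .collect();

    // Collecting in input order yields results in input order,
    // regardless of which step completed first
    let results: Vec<u64> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(results, vec![30, 20, 10]);
}
```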

Each step is a separate task — scheduled steps become independent scheduler tasks. They can run on different workers, in any order, and each has its own retry policy if configured.
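Per-step retries boil down to a loop around a fallible operation. The helper below is a self-contained sketch of that idea in plain Rust (the `with_retries` name and shape are illustrative, not a zart API):

```rust
use std::cell::Cell;

// Retry an operation up to `max_attempts` times, returning the first success
// or the last error. `max_attempts` must be at least 1.
fn with_retries<T, E>(max_attempts: u32, mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut last_err = None;
    for _ in 0..max_attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("max_attempts must be at least 1"))
}

fn main() {
    // A flaky "step" that fails twice before succeeding
    let calls = Cell::new(0);
    let result = with_retries(3, || {
        calls.set(calls.get() + 1);
        if calls.get() < 3 { Err("transient failure") } else { Ok("healthy") }
    });
    assert_eq!(result, Ok("healthy"));
    assert_eq!(calls.get(), 3);
}
```

Because each scheduled step is its own task, such a policy applies per step: one flaky check retrying does not delay or fail the others.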