## Overview
Every call to `trigger()` can behave in one of three fundamentally different ways depending on the action you pass. Choosing the right action determines whether the caller blocks, whether the work is queued with retries, or whether the message is simply dispatched and forgotten.
| Action | Caller blocks? | Retries? | Returns |
|---|---|---|---|
| (none) — Synchronous | Yes | No | Function result |
| Void — Fire-and-forget | No | No | None / null |
| Enqueue — Named queue | No | Yes | `{ messageReceiptId }` |
## The Three Trigger Actions

### 1. Synchronous (no action)
When you omit the `action` field, `trigger()` performs a direct, synchronous invocation. The caller sends the request to the engine, the engine routes it to the target function, and the caller blocks until the function returns a result or the timeout expires.
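A minimal TypeScript sketch of a synchronous call. The `trigger` stub below stands in for the real SDK client (swap it for the actual import), and the `get_user` function and its payload are illustrative:

```typescript
// Stand-in for the SDK's trigger(); replace with the real import.
// Synchronous means: no third "action" argument, and the caller awaits the result.
async function trigger(fn: string, payload: unknown): Promise<any> {
  // A real engine routes to the registered target function; this stub
  // answers inline so the example is runnable.
  if (fn === 'get_user') {
    const { id } = payload as { id: number };
    return { id, name: 'Ada' };
  }
  throw new Error(`unknown function: ${fn}`);
}

async function handleRequest() {
  // The caller blocks here until get_user returns or the timeout expires;
  // errors thrown by the target propagate directly to this await.
  const user = await trigger('get_user', { id: 42 });
  return user; // the function's actual return value, not a receipt
}
```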
Use a synchronous call when:

- You need the function’s return value to continue (e.g. fetching data, validating input)
- The operation is fast and the caller can afford to wait
- You want errors to propagate directly to the caller
- Request/response APIs like HTTP endpoints that must return data to a client
### 2. Void (fire-and-forget)
`TriggerAction.Void()` tells the engine to dispatch the invocation but not wait for a result. The caller continues immediately. If the target function fails, the caller is unaware — there are no retries and no acknowledgement.
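A fire-and-forget sketch in TypeScript. The `trigger` stub stands in for the real SDK, and the `record_analytics` function name is illustrative:

```typescript
// Stand-in for the SDK; replace with the real import.
const TriggerAction = {
  Void: () => ({ kind: 'void' as const }),
};

async function trigger(fn: string, payload: unknown, action: { kind: 'void' }): Promise<null> {
  // The engine dispatches the invocation and returns immediately; the
  // caller never sees the target's result or its errors.
  queueMicrotask(() => { /* target function runs elsewhere */ });
  return null;
}

async function trackPageView(page: string) {
  // Near-zero latency on the caller's hot path: no waiting, no retries.
  const result = await trigger('record_analytics', { page }, TriggerAction.Void());
  return result; // always null for Void dispatches
}
```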
Use Void when:

- The caller does not need a response
- Losing the occasional message is acceptable (best-effort delivery)
- You want minimal latency impact on the caller’s hot path
- Side effects like logging, analytics, non-critical notifications
### 3. Enqueue (named queue)
`TriggerAction.Enqueue({ queue: 'name' })` routes the invocation through a named queue configured in `iii-config.yaml` (see the queues documentation for details on creating queues). The caller receives an acknowledgement (`messageReceiptId`) once the engine accepts the job, but does not wait for it to be processed. The queue provides retries, concurrency control, backoff, and optional FIFO ordering. If all retries are exhausted, the job moves to a dead letter queue.
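An Enqueue sketch in TypeScript. The `trigger` stub stands in for the real SDK (a real engine would persist the job with retries and backoff), and the `process_payment` function and `payments` queue are illustrative:

```typescript
import { randomUUID } from 'node:crypto';

// Stand-in for the SDK's trigger() with an Enqueue action; replace with
// the real import.
const TriggerAction = {
  Enqueue: (o: { queue: string }) => ({ kind: 'enqueue' as const, queue: o.queue }),
};

async function trigger(
  fn: string,
  payload: unknown,
  action: { kind: 'enqueue'; queue: string },
): Promise<{ messageReceiptId: string }> {
  // The engine acknowledges acceptance; a worker processes the job later.
  return { messageReceiptId: `${action.queue}-${randomUUID()}` };
}

async function submitPayment(orderId: string) {
  const { messageReceiptId } = await trigger(
    'process_payment',
    { orderId },
    TriggerAction.Enqueue({ queue: 'payments' }),
  );
  // The receipt proves the job was accepted, not that it has run.
  return messageReceiptId;
}
```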
Use Enqueue when:

- The work is expensive or slow and you do not want to block the caller
- You need automatic retries with backoff on failure
- You need concurrency control over how many jobs run in parallel
- You need FIFO ordering guarantees (e.g. financial transactions)
- You want failed jobs preserved in a dead letter queue for later inspection
## Key Differences at a Glance
| Dimension | Synchronous | Void | Enqueue |
|---|---|---|---|
| Caller blocks | Yes — waits for result | No | No |
| Returns | Function return value | null / None | { messageReceiptId } |
| Error propagation | Errors reach the caller directly | Errors are silent to the caller | Retried automatically; DLQ on exhaustion |
| Retries | None — caller handles retry logic | None | Configurable (max_retries, backoff_ms) |
| Ordering | Sequential by nature | No guarantees | Optional FIFO with message_group_field |
| Concurrency control | N/A | N/A | Configurable per queue |
| Use case | Read data, validate, RPC | Analytics, logs, non-critical side effects | Payments, emails, heavy processing |
## Real-World Scenarios

### Scenario 1: E-Commerce Order Flow
An order API must respond fast, payment processing must be reliable, and analytics can be best-effort.
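One way this flow could look in TypeScript, with a stub `trigger` in place of the real SDK; the function names, queue name, and payloads are illustrative:

```typescript
type Action = { kind: 'void' } | { kind: 'enqueue'; queue: string };
const TriggerAction = {
  Void: (): Action => ({ kind: 'void' }),
  Enqueue: (o: { queue: string }): Action => ({ kind: 'enqueue', queue: o.queue }),
};

// Stub standing in for the real SDK client.
async function trigger(fn: string, payload: unknown, action?: Action): Promise<any> {
  if (!action) return { inStock: true };            // sync stub for validate_inventory
  if (action.kind === 'void') return null;          // fire-and-forget stub
  return { messageReceiptId: `${action.queue}-1` }; // enqueue stub
}

async function placeOrder(order: { id: string; items: string[] }) {
  // 1. Synchronous: the API response depends on this result.
  const stock = await trigger('validate_inventory', { items: order.items });
  if (!stock.inStock) return { status: 'rejected' as const };

  // 2. Enqueue: payment must survive failures (retries, DLQ on exhaustion).
  await trigger('process_payment', { orderId: order.id },
    TriggerAction.Enqueue({ queue: 'payments' }));

  // 3. Void: analytics is best-effort and must not slow the response.
  await trigger('record_order_analytics', { orderId: order.id }, TriggerAction.Void());

  return { status: 'accepted' as const };
}
```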
### Scenario 2: User Registration Pipeline

Registration must return the created user (synchronous), send a welcome email reliably (enqueue), and log the event without blocking (void).
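A registration sketch using the same assumed `trigger()` API, again with a stub in place of the real SDK; the function names and `emails` queue are illustrative:

```typescript
const TriggerAction = {
  Void: () => ({ kind: 'void' as const }),
  Enqueue: (o: { queue: string }) => ({ kind: 'enqueue' as const, queue: o.queue }),
};
type Action =
  | ReturnType<typeof TriggerAction.Void>
  | ReturnType<typeof TriggerAction.Enqueue>;

// Stub standing in for the real SDK client.
async function trigger(fn: string, payload: unknown, action?: Action): Promise<any> {
  if (!action) return { userId: 'u_1', ...(payload as object) }; // sync: create_user stub
  return action.kind === 'enqueue' ? { messageReceiptId: 'r_1' } : null;
}

async function register(email: string) {
  // Synchronous: the API must return the created user to the client.
  const user = await trigger('create_user', { email });

  // Enqueue: the welcome email must eventually go out, with retries.
  await trigger('send_welcome_email', { userId: user.userId },
    TriggerAction.Enqueue({ queue: 'emails' }));

  // Void: event logging should never block or fail the registration.
  await trigger('log_signup', { userId: user.userId }, TriggerAction.Void());

  return user;
}
```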
### Scenario 3: Multi-Step Data Pipeline

An ETL pipeline where each stage hands off to the next via queues, with monitoring dispatched as void.
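A pipeline sketch with the same assumed API: each stage reports progress as Void and hands the batch to the next stage's queue via Enqueue. The stage, function, and queue names are illustrative, and the stub records hand-offs so the example is runnable:

```typescript
const TriggerAction = {
  Void: () => ({ kind: 'void' as const }),
  Enqueue: (o: { queue: string }) => ({ kind: 'enqueue' as const, queue: o.queue }),
};
type Action =
  | ReturnType<typeof TriggerAction.Void>
  | ReturnType<typeof TriggerAction.Enqueue>;

const enqueued: string[] = []; // records which queues received a hand-off

// Stub standing in for the real SDK client.
async function trigger(fn: string, payload: unknown, action: Action): Promise<any> {
  if (action.kind === 'enqueue') {
    enqueued.push(action.queue);
    return { messageReceiptId: 'r_1' };
  }
  return null; // void: monitoring events are best-effort by design
}

// Stage 1: extract, then hand the batch off to the transform queue.
async function extractStage(batchId: string) {
  await trigger('report_stage', { batchId, stage: 'extract' }, TriggerAction.Void());
  await trigger('transform', { batchId }, TriggerAction.Enqueue({ queue: 'transform' }));
}

// Stage 2: transform, then hand off to the load queue.
async function transformStage(batchId: string) {
  await trigger('report_stage', { batchId, stage: 'transform' }, TriggerAction.Void());
  await trigger('load', { batchId }, TriggerAction.Enqueue({ queue: 'load' }));
}
```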
## Decision Flowchart

Use this mental model when deciding which action to use:

1. Do you need the function’s return value right now? Use a synchronous call.
2. If not: must the work complete reliably, even across failures? Use Enqueue.
3. Otherwise, use Void for best-effort dispatch.

## Combining Actions in a Single Function
A single function can use all three actions. This is common in orchestrator functions that coordinate multiple downstream services.
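An orchestrator sketch mixing all three actions in one function, using the assumed `trigger()` API with a stub in place of the real SDK; the service names, queue name, and payloads are illustrative:

```typescript
const TriggerAction = {
  Void: () => ({ kind: 'void' as const }),
  Enqueue: (o: { queue: string }) => ({ kind: 'enqueue' as const, queue: o.queue }),
};
type Action =
  | ReturnType<typeof TriggerAction.Void>
  | ReturnType<typeof TriggerAction.Enqueue>;

// Stub standing in for the real SDK client.
async function trigger(fn: string, payload: unknown, action?: Action): Promise<any> {
  if (!action) return { priority: 'high' };                            // sync stub
  if (action.kind === 'enqueue') return { messageReceiptId: `${action.queue}-1` };
  return null;                                                         // void stub
}

async function orchestrate(jobId: string) {
  // Synchronous: we need the config before deciding how to run the job.
  const config = await trigger('load_job_config', { jobId });

  // Enqueue: the heavy work runs on a worker, with retries and a DLQ.
  const { messageReceiptId } = await trigger(
    'run_job', { jobId, config }, TriggerAction.Enqueue({ queue: 'jobs' }));

  // Void: metrics are best-effort and add no latency to the orchestrator.
  await trigger('emit_metric', { jobId, event: 'job_submitted' }, TriggerAction.Void());

  return { jobId, messageReceiptId };
}
```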
## SDK Syntax Reference
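The three call shapes, sketched in TypeScript. The `trigger` signature shown is inferred from the examples above and a stub stands in for the real SDK import; the function names, payloads, and queue name are illustrative:

```typescript
// Stub standing in for the real SDK; the three call shapes in demo()
// are the part that matters.
const TriggerAction = {
  Void: () => ({ kind: 'void' as const }),
  Enqueue: (o: { queue: string }) => ({ kind: 'enqueue' as const, queue: o.queue }),
};
async function trigger(fn: string, payload: unknown, action?: unknown): Promise<any> {
  if (!action) return { id: 42 };
  return (action as { kind: string }).kind === 'enqueue'
    ? { messageReceiptId: 'r_1' }
    : null;
}

async function demo() {
  // Synchronous: omit the action and await the function's result.
  const user = await trigger('get_user', { id: 42 });

  // Void: fire-and-forget; resolves to null immediately.
  const nothing = await trigger('log_event', { name: 'view' }, TriggerAction.Void());

  // Enqueue: returns an acknowledgement receipt, not the result.
  const { messageReceiptId } = await trigger(
    'process_payment', { orderId: 'ord_1' },
    TriggerAction.Enqueue({ queue: 'payments' }));

  return { user, nothing, messageReceiptId };
}
```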
## Common Mistakes
### Using synchronous calls for slow, non-critical work

If you call a slow function synchronously inside an HTTP handler, your API response time degrades. Use Enqueue for work that does not need to complete before responding.

### Using Void for work that must complete

Void provides no delivery guarantees. If the target function fails or the worker is unavailable, the message is lost. Use Enqueue when reliability matters.

### Enqueuing work that needs an immediate response

Enqueue returns a receipt, not the function’s result. If you need the function’s return value, use a synchronous call.
## Next Steps

- **Use Queues**: configure named queues with retries, concurrency, and FIFO ordering
- **Dead Letter Queues**: handle and redrive failed queue messages
- **Functions & Triggers**: register functions and bind triggers to them
- **Trigger Types**: deep dive into HTTP, queue, cron, log, and stream triggers