iii-queue

A module for asynchronous job processing. It supports two modes: topic-based queues (register a consumer per topic, emit events) and named queues (enqueue function calls via TriggerAction.Enqueue, no trigger registration).
For step-by-step instructions, see Use Topic-Based Queues and Use Named Queues. For DLQ management, see Manage Failed Triggers.

Queue Modes

Topic-based queues

Register consumers for a topic and emit events to it. Messages are delivered using fan-out per function: every distinct function subscribed to a topic receives a copy of each message. When multiple replicas of the same function are running, they compete on a shared per-function queue — only one replica processes each message.
  1. Register a consumer with registerTrigger({ type: 'durable:subscriber', function_id: 'my::handler', config: { topic: 'order.created' } }). This subscribes the handler to that topic.
  2. Emit events by calling trigger({ function_id: 'iii::durable::publish', payload: { topic: 'order.created', data: payload } }) or trigger({ function_id: 'iii::durable::publish', payload: { topic, data }, action: TriggerAction.Void() }) for fire-and-forget. The iii::durable::publish function fans out the payload to every subscribed function.
  3. Act on the trigger: the handler receives the data as its input. An optional queue_config on the trigger controls per-subscriber retries and concurrency.
The producer knows the topic name; consumers register to receive it. Queue settings can live at the trigger registration site.
If functions A and B both subscribe to topic order.created, each message published to that topic is delivered to both A and B. If function A has 3 replicas running, only one replica of A processes each message — they compete on a shared queue. This gives you pub/sub-style fan-out with the durability and retry guarantees of a queue.
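The three steps above can be sketched in TypeScript. This is a minimal illustration, not engine code: the 'iii-sdk' import path and the orders::notify handler id are assumptions chosen for the example.

```typescript
import { registerTrigger, trigger, TriggerAction } from 'iii-sdk'; // hypothetical import path

// 1. Subscribe a handler function to the topic.
await registerTrigger({
  type: 'durable:subscriber',
  function_id: 'orders::notify', // illustrative handler id
  config: { topic: 'order.created' },
});

// 2. Publish an event; iii::durable::publish fans the payload out
//    to every function subscribed to the topic.
await trigger({
  function_id: 'iii::durable::publish',
  payload: { topic: 'order.created', data: { order_id: 'ord_123' } },
  action: TriggerAction.Void(), // fire-and-forget
});
```

If orders::notify runs as multiple replicas, only one replica handles each published message.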

Named queues

Define queues in iii-config.yaml, then enqueue function calls directly. No trigger registration needed.
  1. Define queues in queue_configs (see Configuration).
  2. Enqueue a function call with trigger({ function_id: 'orders::process', payload, action: TriggerAction.Enqueue({ queue: 'payment' }) }). The engine routes the job to the named queue and invokes the function when a worker consumes it.
  3. Act on the trigger: the target function receives payload as its input. Retries, concurrency, and FIFO are configured centrally in iii-config.yaml.
The producer targets the function and queue explicitly. Queue configuration is centralized. For a hands-on walkthrough, see Use Named Queues.
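A named-queue enqueue can be sketched as follows. The 'iii-sdk' import path and the payload fields are assumptions for illustration; the function id and queue name match the examples above.

```typescript
import { trigger, TriggerAction } from 'iii-sdk'; // hypothetical import path

// Enqueue a call to orders::process on the 'payment' queue defined
// in iii-config.yaml. Retries, concurrency, and FIFO ordering all
// come from that queue's entry in queue_configs.
await trigger({
  function_id: 'orders::process',
  payload: { order_id: 'ord_123', transaction_id: 'tx_42' },
  action: TriggerAction.Enqueue({ queue: 'payment' }),
});
```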

When to use which

| | Topic-based | Named queues |
|---|---|---|
| Producer | Calls trigger({ function_id: 'iii::durable::publish', payload: { topic, data } }) | Calls trigger({ function_id, payload, action: TriggerAction.Enqueue({ queue }) }) |
| Consumer | Registers registerTrigger({ type: 'durable:subscriber', config: { topic } }) | No registration — function is the target |
| Delivery | Fan-out: each subscribed function gets every message; replicas compete | Single target function per enqueue call |
| Config | Optional queue_config on trigger | queue_configs in iii-config.yaml |
| Use case | Durable pub/sub with retries and fan-out | Direct function invocation with retries, FIFO, DLQ |
Both modes are valid. Named queues offer config-driven retries, concurrency, and FIFO ordering.
Named queues use the Enqueue trigger action. For a full comparison of synchronous, Void, and Enqueue invocation modes, see Trigger Actions.

Sample Configuration

- name: iii-queue
  config:
    queue_configs:
      default:
        max_retries: 5
        concurrency: 5
        type: standard
      payment:
        max_retries: 10
        concurrency: 2
        type: fifo
        message_group_field: transaction_id
    adapter:
      name: builtin
      config:
        store_method: file_based
        file_path: ./data/queue_store

Configuration

queue_configs
map[string, FunctionQueueConfig]
required
A map of named queue configurations. Each key is the queue name referenced in TriggerAction.Enqueue({ queue: 'name' }). Define a queue named default in config for the common case; reference it as TriggerAction.Enqueue({ queue: 'default' }).
adapter
Adapter
The transport adapter for queue persistence and distribution. Defaults to builtin when not specified.

Queue Configuration

Each entry in queue_configs defines an independent named queue with its own retry, concurrency, and ordering settings.
max_retries
u32
Maximum delivery attempts before routing the job to the dead-letter queue. Defaults to 3.
concurrency
u32
Maximum number of jobs processed simultaneously from this queue. Defaults to 10. For FIFO queues, the engine overrides this to prefetch=1 to guarantee ordering — see the note below.
type
string
Delivery mode: standard (concurrent, default) or fifo (ordered within a message group).
message_group_field
string
Required when type is fifo. The JSON field in the job payload whose value determines the ordering group. Jobs with the same group value are processed strictly in order. The field must be present and non-null in every enqueued payload.
backoff_ms
u64
Base retry backoff in milliseconds. Applied with exponential scaling: backoff_ms × 2^(attempt − 1). Defaults to 1000.
poll_interval_ms
u64
Worker poll interval in milliseconds. Defaults to 100.
When type is fifo, the engine sets prefetch=1 regardless of the concurrency value. This ensures only one job is in-flight at a time, which is required for strict ordering. FIFO queues also retry failed jobs inline (blocking the queue) rather than in parallel.
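As a sanity check on the backoff formula, here is the delay schedule as pure arithmetic (a sketch, not engine code — the function name is invented for illustration):

```typescript
// Retry delay per the documented formula: backoff_ms × 2^(attempt − 1).
// attempt is 1-based: the first retry waits backoff_ms, each later
// retry doubles the previous delay.
function backoffDelayMs(backoffMs: number, attempt: number): number {
  return backoffMs * 2 ** (attempt - 1);
}

// With the default backoff_ms = 1000 and max_retries = 3, the three
// attempts wait 1000 ms, 2000 ms, and 4000 ms before the job is
// routed to the dead-letter queue.
```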

Standard vs FIFO Queues

| Dimension | Standard | FIFO |
|---|---|---|
| Processing model | Up to concurrency jobs in parallel | One job at a time (prefetch=1) |
| Ordering | No guarantees — jobs may complete in any order | Strictly ordered within a message group |
| message_group_field | Not required | Required — must be present and non-null in every payload |
| Throughput | High — scales with concurrency | Lower — trades throughput for ordering |
| Use cases | Email sends, image processing, notifications | Payments, ledger entries, state machines |
| Retries | Retried independently, other jobs continue | Retried inline — blocks the queue until success or DLQ |
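For example, enqueueing two jobs to the FIFO payment queue from the sample configuration (which sets message_group_field: transaction_id) could look like this sketch — the 'iii-sdk' import path and the ledger:: function ids are assumptions:

```typescript
import { trigger, TriggerAction } from 'iii-sdk'; // hypothetical import path

// Both payloads carry the same transaction_id, so they form one
// message group and are processed strictly in enqueue order.
// Jobs with other transaction_id values are ordered independently
// within their own groups.
await trigger({
  function_id: 'ledger::debit',   // illustrative target function
  payload: { transaction_id: 'tx_42', amount: 100 },
  action: TriggerAction.Enqueue({ queue: 'payment' }),
});
await trigger({
  function_id: 'ledger::credit',  // illustrative target function
  payload: { transaction_id: 'tx_42', amount: 100 },
  action: TriggerAction.Enqueue({ queue: 'payment' }),
});
```

Omitting transaction_id from either payload would be an error, since the field must be present and non-null in every payload enqueued to a FIFO queue.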

Standard queue flow

Jobs are dequeued and processed concurrently. Each job is independent.

FIFO queue flow

Jobs within the same message group are processed one at a time, strictly in order.

Adapters

builtin

Built-in in-process queue. No external dependencies. Suitable for single-instance deployments — messages are not shared across engine instances.
name: builtin
config:
  store_method: file_based   # in_memory | file_based
  file_path: ./data/queue_store  # required when store_method is file_based
store_method
string
Persistence strategy: in_memory (lost on restart) or file_based (durable across restarts). Defaults to in_memory.
file_path
string
Path to the queue store directory. Required when store_method is file_based.

redis

Uses Redis as the queue backend for topic-based pub/sub. Enables message distribution across multiple engine instances.
The Redis adapter supports publishing to named queues but does not implement named queue consumption, retries, or dead-letter queues. It is suitable for topic-based pub/sub only. For full named queue support in multi-instance deployments, use the RabbitMQ adapter.
name: redis
config:
  redis_url: ${REDIS_URL:redis://localhost:6379}
redis_url
string
The URL of the Redis instance to connect to.

rabbitmq

Uses RabbitMQ as the queue backend. Supports durable delivery, retries, and dead-letter queues across multiple engine instances. The engine owns consumer loops, retry acknowledgement, and backoff logic — RabbitMQ is used as a transport only. Retry uses explicit ack + republish to a retry exchange with an x-attempt header, keeping compatibility with both classic and quorum queues.
name: rabbitmq
config:
  amqp_url: ${RABBITMQ_URL:amqp://localhost:5672}
amqp_url
string
The AMQP URL of the RabbitMQ instance to connect to.

Queue naming in RabbitMQ

For each named queue defined in queue_configs, iii creates the following RabbitMQ resources:
| Resource | Format | Example (payment) |
|---|---|---|
| Main exchange | iii.__fn_queue::<name> | iii.__fn_queue::payment |
| Main queue | iii.__fn_queue::<name>.queue | iii.__fn_queue::payment.queue |
| Retry exchange | iii.__fn_queue::<name>::retry | iii.__fn_queue::payment::retry |
| Retry queue | iii.__fn_queue::<name>::retry.queue | iii.__fn_queue::payment::retry.queue |
| DLQ exchange | iii.__fn_queue::<name>::dlq | iii.__fn_queue::payment::dlq |
| DLQ queue | iii.__fn_queue::<name>::dlq.queue | iii.__fn_queue::payment::dlq.queue |
Each named queue creates six RabbitMQ objects to support delayed retry and dead-lettering. For the design rationale, see Queue Architecture.

Topic-based queue naming in RabbitMQ

For topic-based queues, iii uses a fanout exchange per topic. Each subscribed function gets its own queue and DLQ bound to the exchange:
| Resource | Format | Example (topic order.created, function notify::email) |
|---|---|---|
| Fanout exchange | iii.<topic>.exchange | iii.order.created.exchange |
| Per-function queue | iii.<topic>.<function_id>.queue | iii.order.created.notify::email.queue |
| Per-function DLQ | iii.<topic>.<function_id>.dlq | iii.order.created.notify::email.dlq |
RabbitMQ’s fanout exchange natively delivers a copy of each published message to every bound queue, providing fan-out delivery.

Adapter Comparison

| | builtin | rabbitmq | redis |
|---|---|---|---|
| Retries | Yes | Yes | No |
| Dead-letter queue | Yes | Yes | No |
| FIFO ordering | Yes | Yes | No |
| Named queue consumption | Yes | Yes | No (publish only) |
| Topic-based pub/sub | Yes | Yes | Yes |
| Multi-instance | No | Yes | Yes |
| External dependency | None | RabbitMQ | Redis |

Choosing an adapter

| Scenario | Recommended Adapter | Why |
|---|---|---|
| Local development | builtin (in_memory) | Zero dependencies, fast iteration |
| Single-instance production | builtin (file_based) | Durable across restarts, no external infra |
| Multi-instance production | rabbitmq | Distributes messages across engine instances |
Regardless of which adapter you choose, retry semantics, concurrency enforcement, and FIFO ordering behave identically — the engine owns these behaviors, not the adapter.
For step-by-step queue setup, see Use Named Queues and Use Topic-Based Queues.

Builtin Functions

The queue module registers the following functions automatically when it initializes. These are callable via trigger() from any SDK or via the iii trigger CLI command.

iii::durable::publish

Publishes a message to a topic-based queue. The message is fanned out to every distinct function subscribed to that topic. Replicas of the same function compete on a shared per-function queue.
| Field | Type | Description |
|---|---|---|
| topic | string | The topic to publish to (required, non-empty) |
| data | any | The payload delivered to each subscribed function |
Returns null on success.

iii::queue::redrive

Moves all messages from a named queue’s dead-letter queue back to the main queue. Each message gets its attempt counter reset to zero. Input:
| Field | Type | Description |
|---|---|---|
| queue | string | The named queue whose DLQ should be redriven (required, non-empty) |
Output:
| Field | Type | Description |
|---|---|---|
| queue | string | The queue name that was redriven |
| redriven | number | The number of messages moved back to the main queue |
CLI example:
iii trigger \
  --function-id='iii::queue::redrive' \
  --payload='{"queue": "payment"}'
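The same redrive can be invoked from code. A sketch, assuming a TypeScript SDK exposing trigger (the 'iii-sdk' import path is an assumption):

```typescript
import { trigger } from 'iii-sdk'; // hypothetical import path

// Move every message in the payment queue's DLQ back to the main
// queue, resetting each message's attempt counter to zero.
const result = await trigger({
  function_id: 'iii::queue::redrive',
  payload: { queue: 'payment' },
});
// result is expected to match the output table above,
// e.g. { queue: 'payment', redriven: 3 }
```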
For a complete guide on inspecting DLQ messages before redriving, see Manage Failed Triggers.

Queue Flow

Retry and dead-letter flow

When a job fails, the engine retries it with exponential backoff. After all retries exhaust, the job moves to the DLQ.

Topic-based queue flow (fan-out)

When a message is published to a topic, the engine fans it out to every distinct function subscribed to that topic. Replicas of the same function compete on their shared per-function queue.