
Goal

Make your iii application fully observable: correlate every log entry to the exact trace that produced it, inspect execution timelines to find bottlenecks, and optionally export all telemetry to third-party tools like Grafana, Jaeger, or Datadog.

Why use the iii Logger

Every iii SDK ships a Logger class that emits logs as OpenTelemetry LogRecords. Each log call automatically captures the active trace ID and span ID, linking the log entry to the distributed trace that produced it. Language-native logging functions — console.log in Node, print() in Python, tracing::info! in Rust — write to stdout but are not connected to traces. This means you cannot find them in the iii Console’s trace detail view, and they are invisible to any OTLP-based observability backend.
| Approach | Where it appears | Trace correlation |
| --- | --- | --- |
| `console.log("Order created")` | stdout only | None |
| `print("Order created")` | stdout only | None |
| `tracing::info!("Order created")` | stdout only | None |
| `logger.info("Order created", { orderId })` | stdout, iii Console, OTLP backends | Automatic — linked to the active trace and span |
Avoid using console.log, print(), or tracing::info! for application logs. These bypass the OpenTelemetry pipeline and will not appear in the iii Console or any connected observability tool.

Trace-correlated logs in iii Console

When you use the iii Logger, every log entry is attached to the trace that was active when the log was emitted. In the iii Console, clicking a trace in the waterfall chart opens a detail drawer. The Logs tab shows every log entry from that exact execution — with severity, timestamp, message, and any structured data you attached.

[Screenshot: iii Console trace detail showing the Logs tab with correlated log entries for a specific trace execution]

This is the core value of using the iii Logger: you can go from a slow trace in the waterfall chart directly to the logs that explain what happened, without grepping through stdout or cross-referencing timestamps manually.

Using the Logger

1. Import and instantiate

Create a Logger instance in your function handler. No configuration is required — trace context is injected automatically.
```typescript
import { Logger } from 'iii-sdk'

const logger = new Logger()

logger.info('Worker connected')
```
2. Attach structured data

Pass a second argument with key-value data. Structured data becomes filterable attributes in the iii Console and in any OTLP-compatible backend.
```typescript
logger.info('Order processed', { orderId: 'ord_123', amount: 49.99, currency: 'USD' })
logger.warn('Retry attempt', { attempt: 3, maxRetries: 5, endpoint: '/api/charge' })
logger.error('Payment failed', { orderId: 'ord_123', gateway: 'stripe', errorCode: 'card_declined' })
logger.debug('Cache lookup', { key: 'user:42', hit: false })
```
Prefer key-value objects over string interpolation. Structured fields let you filter, aggregate, and build dashboards — string-interpolated messages do not.
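To see why structured fields matter, here is a self-contained sketch (plain TypeScript, not the iii API) contrasting a queryable structured record with a string-interpolated one. The record shape and data are hypothetical:

```typescript
// Hypothetical in-memory log records, shaped like structured log entries.
type LogRecord = { message: string; attributes: Record<string, unknown> };

const records: LogRecord[] = [
  { message: 'Order processed', attributes: { orderId: 'ord_123', amount: 49.99 } },
  { message: 'Order processed', attributes: { orderId: 'ord_456', amount: 12.5 } },
  // String-interpolated message: the order ID is trapped inside the text.
  { message: 'Order processed: ord_789 (amount 99.0)', attributes: {} },
];

// Structured fields can be filtered exactly, with no parsing.
const forOrder123 = records.filter(r => r.attributes.orderId === 'ord_123');
console.log(forOrder123.length); // 1

// The interpolated record can only be found by fragile text matching.
const byText = records.filter(r => r.message.includes('ord_789'));
console.log(byText.length); // 1
```

An aggregation like "total amount per currency" works the same way on attributes but is impossible on the interpolated message without a regex per message format.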
3. Use inside a function handler

The Logger works anywhere inside a function handler. Trace context is captured from the active invocation automatically.
```typescript
import { registerWorker, Logger } from 'iii-sdk'

const iii = registerWorker(process.env.III_URL ?? 'ws://localhost:49134')

iii.registerFunction('orders::create', async (req) => {
  const logger = new Logger()
  const { customerId, amount } = req.body
  const orderId = `order-${Date.now()}`

  logger.info('Order created', { orderId, customerId, amount })

  // ... business logic ...

  logger.info('Order processing complete', { orderId })
  return { status_code: 201, body: { orderId } }
})
```

Logger API reference

All three SDKs expose the same four methods:
| Method | Node / TypeScript | Python | Rust |
| --- | --- | --- | --- |
| Info | `logger.info(msg, data?)` | `logger.info(msg, data=None)` | `logger.info(msg, Option<Value>)` |
| Warning | `logger.warn(msg, data?)` | `logger.warn(msg, data=None)` | `logger.warn(msg, Option<Value>)` |
| Error | `logger.error(msg, data?)` | `logger.error(msg, data=None)` | `logger.error(msg, Option<Value>)` |
| Debug | `logger.debug(msg, data?)` | `logger.debug(msg, data=None)` | `logger.debug(msg, Option<Value>)` |
When OpenTelemetry is not initialized (e.g. in unit tests), the Logger falls back to console.* (Node), Python logging (Python), or tracing::* (Rust) — your logs still appear in stdout.
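The fallback can be pictured with a minimal sketch. This is not the SDK's actual internals — `makeLogger` and `otelEmit` are hypothetical names standing in for the real Logger and the OpenTelemetry log pipeline:

```typescript
// Sketch of a logger that degrades to the console when no telemetry
// pipeline is available (e.g. in unit tests).
type Emit = (severity: string, msg: string, data?: object) => void;

function makeLogger(otelEmit: Emit | null) {
  const emit: Emit = otelEmit ?? ((severity, msg, data) => {
    // Fallback path: no OTel pipeline, write to stdout instead.
    console.log(`[${severity}] ${msg}`, data ?? '');
  });
  return {
    info: (msg: string, data?: object) => emit('INFO', msg, data),
    warn: (msg: string, data?: object) => emit('WARN', msg, data),
  };
}

// With no pipeline initialized, logs still reach stdout.
const fallbackLogger = makeLogger(null);
fallbackLogger.info('Worker connected');
```

The practical upshot is that test code can call the Logger freely without mocking any telemetry setup.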

Configuring observability

The iii engine’s Observability worker (iii-observability) controls how traces, logs, and metrics are collected and exported. There are two main configurations depending on your environment.

Local development

For local development, use the memory exporter. Traces and logs are stored in the engine’s memory and can be inspected through the iii Console. This is the simplest setup and requires no external infrastructure.
iii-config.yaml

```yaml
workers:
  - name: iii-observability
    config:
      enabled: true
      exporter: memory
      logs_enabled: true
      memory_max_spans: 1000
```
| Field | Purpose | Default |
| --- | --- | --- |
| `exporter` | Where to send traces: `memory`, `otlp`, or `both` | `otlp` |
| `memory_max_spans` | Max spans kept in memory | `1000` |
| `logs_enabled` | Enable structured log storage | `true` (always initialized) |
| `logs_max_count` | Max log entries kept in memory | `1000` |
| `logs_console_output` | Also print logs to the terminal via `tracing` | `true` |
With exporter: memory, all data lives in the engine process. This is ideal for development — no collector, no database, just start the engine and open the console.
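A cap like `memory_max_spans` implies bounded-buffer behavior: once the limit is reached, the oldest spans make way for new ones. A simplified sketch of that idea (the engine's actual eviction policy is an assumption here):

```typescript
// Simplified capped buffer: keeps at most `max` items, evicting the oldest.
// This is an illustration of the memory_max_spans cap, not engine code.
class CappedBuffer<T> {
  private items: T[] = [];
  constructor(private max: number) {}
  push(item: T) {
    this.items.push(item);
    if (this.items.length > this.max) this.items.shift(); // drop oldest
  }
  size() { return this.items.length; }
  oldest() { return this.items[0]; }
}

const spans = new CappedBuffer<string>(1000); // mirrors memory_max_spans: 1000
for (let i = 0; i < 1500; i++) spans.push(`span-${i}`);
console.log(spans.size());   // 1000
console.log(spans.oldest()); // 'span-500' — the first 500 were evicted
```

The takeaway for sizing: under sustained load only the most recent `memory_max_spans` spans remain inspectable in the console, so raise the cap if you need a longer lookback window in development.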

Exporting to third-party tools

For production or when you want to send telemetry to an external system (Grafana, Jaeger, Datadog, or any OTLP-compatible collector), use the otlp exporter with an endpoint.
iii-config.yaml

```yaml
workers:
  - name: iii-observability
    config:
      enabled: true
      exporter: otlp
      endpoint: "http://collector.example.com:4317"
      service_name: my-service
      service_version: 1.0.0
      metrics_enabled: true
      logs_enabled: true
      logs_exporter: otlp
```
To keep local visibility through iii Console and export to a collector simultaneously, use exporter: both:
iii-config.yaml

```yaml
workers:
  - name: iii-observability
    config:
      enabled: true
      exporter: both
      endpoint: "http://collector.example.com:4317"
      service_name: my-service
      logs_enabled: true
      logs_exporter: both
```
The endpoint field can also be set via the OTEL_EXPORTER_OTLP_ENDPOINT environment variable. See the Observability worker reference for the full list of configuration fields and environment variable overrides.
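As a hedged sketch of how such a fallback chain typically resolves — assuming the config field wins when set, the standard `OTEL_EXPORTER_OTLP_ENDPOINT` variable is consulted next, and the conventional OTLP gRPC default applies last (the engine's exact precedence is specified in the worker reference, not here):

```typescript
// Hypothetical endpoint resolution: config value, then the standard
// OTEL_EXPORTER_OTLP_ENDPOINT variable, then the common OTLP gRPC default.
function resolveEndpoint(configEndpoint?: string): string {
  return (
    configEndpoint ??
    process.env.OTEL_EXPORTER_OTLP_ENDPOINT ??
    'http://localhost:4317'
  );
}

console.log(resolveEndpoint('http://collector.example.com:4317'));
// → http://collector.example.com:4317

delete process.env.OTEL_EXPORTER_OTLP_ENDPOINT;
console.log(resolveEndpoint()); // → http://localhost:4317
```

Keeping the endpoint in an environment variable rather than in `iii-config.yaml` lets the same config file move between environments unchanged.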

Using iii Console

The iii Console is a web UI for inspecting traces, logs, metrics, and more. It comes included with every iii installation — no separate setup required. It connects to a running iii engine and gives you full operational visibility.

Launch

Start the console while your engine is running:

```sh
iii-console
```

Then open http://localhost:3113 in your browser.

The console connects to a running iii engine instance; by default it expects the engine at 127.0.0.1:3111. Make sure your engine is started before launching the console.

Inspecting traces

Navigate to the Traces page to see all collected traces. Each trace shows its root operation, duration, service name, span count, and status. Click on a trace to open the detail view with four visualization modes:
  • Waterfall Chart — timeline showing every span by start time and duration. Best for understanding sequential and parallel flow.
  • Flame Graph — stack-based view where wider bars mean longer duration. Best for spotting time-consuming operations.
  • Service Breakdown — aggregate stats per service (total spans, average duration, error rate). Best for identifying bottleneck services.
  • Trace Map — topology graph showing cross-service communication patterns.

Inspecting logs per trace

Click on any span in the trace view to open the detail drawer. Switch to the Logs tab to see every log entry that was emitted during that span’s execution. Each entry includes the severity level, timestamp, message, and structured attributes.

[Screenshot: trace detail drawer with the Logs tab selected, showing correlated log entries]

You can also use the dedicated Logs page for a full log viewer with severity filters, full-text search, and time-range controls. If a log entry has a trace_id, you can click it to jump directly to the corresponding trace.

Identifying bottlenecks

Use the waterfall chart to spot long-running spans. Switch to the flame graph to see which operations consume the most time relative to the total trace duration. The service breakdown view aggregates performance stats so you can identify which service needs optimization. For more details on all console features, see the Console reference.

Result

Your iii application is now fully observable:
  • Structured logs are correlated to distributed traces automatically — no manual wiring.
  • Local visibility is available through iii Console with the memory exporter — no external infrastructure needed.
  • Third-party export sends traces, logs, and metrics to any OTLP-compatible backend via the otlp or both exporter.
  • Bottleneck identification is possible through waterfall, flame graph, and service breakdown views in the console.

Next steps

  • Console — Full iii Console feature reference
  • OpenTelemetry Integration — Custom spans, metrics, and telemetry utilities
  • Observability Worker — Full configuration reference for traces, logs, metrics, alerts, and sampling
  • Observability Example — End-to-end multi-step workflow with trace correlation