Concepts

Pipeline model

An Open-M pipeline is a directed graph of components connected by typed arrows. Components are containerised microservices. Arrows are Pulsar/Kafka topic-subscription pairs. Every canvas operation is an edit to the pipeline YAML, and the control plane provisions the corresponding broker objects on deploy.

Components

A component is the fundamental unit of pipeline logic. Every component is a containerised microservice with two internal layers:

Open-M Component (e.g. customer-enricher)
├─ Open-M Wrapper — subscribe · deserialise · schema-validate · UTL-X · retry · ack · log
└─ Business Logic Engine — Apache Camel route · TIBCO BW process · SDK · UTL-X mapping

The wrapper handles all platform concerns: subscribing to input topics, deserialising the MPPM envelope, validating schemas, executing any Mode 2 UTL-X transform, passing the payload to business logic, constructing the outbound envelope, publishing to the output topic, acknowledging the broker, and publishing the log event. Business logic never touches the broker directly.

The business logic engine can be one of four implementation types: Apache Camel (for the 160+ Camel-compatible connectors), TIBCO BW (shim for existing BW investments), the Open-M SDK (Go, Java, or .NET for custom logic), or a UTL-X mapping component (pure field mapping, no code).

In the pipeline YAML, a component is declared with a ref pointing to its versioned manifest in the Service Registry: namespace.category.name:version.
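A declaration might look like the following sketch. The field names (`components`, `id`, `ref`) are illustrative assumptions, not the confirmed Open-M descriptor schema; only the `namespace.category.name:version` ref format is from the text above.

```yaml
# Hypothetical pipeline-YAML fragment -- field names are illustrative.
components:
  - id: enrich
    ref: acme.enrichment.customer-enricher:1.4.2   # namespace.category.name:version
```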

Ports

Every component exposes typed ports. The port type is declared in the component's manifest and determines what schema the port accepts or emits:

output — produces payload
input — consumes payload
error — failed envelopes
log — structured log events

A component can have multiple output ports (e.g. a router that routes to different downstream components based on content) or multiple input ports (e.g. a stateful join that waits for two independent inputs). Ports are declared in the component manifest and referenced in the pipeline YAML's connections[].from.port and connections[].to.port fields.

The log port is always present on every component — it is not shown as an arrow on the canvas to avoid clutter, but appears as a small icon on the component node. Clicking it in live test mode opens an inline log tail filtered by correlation_id.
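Port declarations in a component manifest might look like this sketch. The field names are illustrative assumptions; the port types and the implicit log port are from the text above.

```yaml
# Hypothetical component-manifest fragment -- field names are illustrative.
ports:
  - name: in
    type: input
    schema: mppm.customer.v2
  - name: out
    type: output
    schema: mppm.customer-enriched.v1
  - name: err
    type: error
  # the log port is implicit on every component and need not be declared
```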

Connections

A connection (arrow) pairs one output port with one input port. Under the hood it is three broker objects: an output topic, a named durable subscription, and a route. The connection stanza in the pipeline YAML declares all three explicitly; the IDE auto-generates the stanza when the arrow is drawn.

Connections carry: schema references (what type of MPPM payload travels on this topic), delivery guarantee, retry policy, error routing, and optionally a Mode 2 UTL-X transform.
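Put together, a connection stanza might look like the sketch below. All field names and values are illustrative assumptions; only the list of concerns a connection carries (schema reference, delivery guarantee, retry policy, error routing, optional Mode 2 UTL-X transform) comes from the text above.

```yaml
# Hypothetical connection stanza -- field names are illustrative.
connections:
  - from: { component: enrich, port: out }
    to:   { component: notify, port: in }
    schema: mppm.customer-enriched.v1
    delivery: at_least_once
    retry: { max_attempts: 5, backoff: exponential }
    error: { route_to: error-handler }
    transform: mappings/enriched-to-notify.utlx   # optional Mode 2 UTL-X
```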

Canvas → broker topology

Every visual element on the canvas maps to specific broker objects that the control plane provisions on deploy:

| Canvas element | Broker object(s) provisioned |
| --- | --- |
| Component node | Kubernetes Deployment + Service, namespace placement, replica count from spec.scaling |
| Output port (circle, right side) | Pulsar/Kafka topic with declared retention policy |
| Input port (circle, left side) | Named durable subscription on the upstream topic |
| Error port (diamond, bottom) | Error topic + subscription for the connected error-handler component |
| Log icon (top of node) | Log topic with logging retention policy |
| Arrow (solid) | Route identity pairing topic + subscription. No separate broker object — the route is a pipeline-level concept for operational traceability. |
| ◇ UTL-X diamond on arrow (Mode 2) | No extra broker object. The transform runs in-process inside the receiving wrapper. |
| DLQ (not visible on canvas by default) | One DLQ topic per component, auto-named. Surfaces in the ops dashboard when messages land there. |
💡 Provisioning is idempotent. Topics and subscriptions that already exist with the same name and configuration are left unchanged, and redeploying a pipeline does not drop messages already in flight on existing subscriptions.

Three-layer topology model

The canvas has three view layers. Switching layers never changes the YAML — it renders the same descriptor through different lenses:

Layer 1 — Logical flow. The default view: components and arrows, with no cluster or replica information. This is where integration logic is designed and reviewed.

Layer 2 — Placement. Clusters appear as coloured swim lanes, and components are shown within the cluster(s) they are placed on. Multi-cluster components appear in multiple lanes connected by a federation indicator.

Layer 3 — Instance. Expands a selected component into N replica boxes, each showing pod name, host node, health, and live stats. The equivalent of Fiorano's dual-draw FT model — without the visual duplication.

Scaling is declared in the spec.scaling stanza — a separate section from the logical topology. The same pipeline YAML can run as a single-cluster sandbox (1 replica per component) and a multi-cluster production deployment (3 clusters, variable replicas) without any change to the logical flow.
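A spec.scaling stanza might look like the sketch below. The field names under `scaling` are illustrative assumptions; only the `spec.scaling` path and the separation from logical topology are stated in the text above.

```yaml
# Hypothetical spec.scaling stanza -- separate from the logical topology,
# so the same flow runs in sandbox and production unchanged.
spec:
  scaling:
    enrich:
      clusters: [eu-west, us-east, ap-south]
      replicas: 3
    notify:
      clusters: [eu-west]
      replicas: 1
```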

Component engine types

| Engine | Implemented as | Best for |
| --- | --- | --- |
| Apache Camel | Camel YAML DSL route, packaged in a Camel-runner wrapper | 160+ Camel connectors (Salesforce, ServiceNow, AWS, Azure, IBM MQ, etc.) |
| TIBCO BW | BW process; shim maps BW variables to MPPM envelope fields | Migrating existing BW investments to Open-M pipelines incrementally |
| Open-M SDK | Go, Java, or .NET library; component implements a single handler function | Custom business logic, stateful joins, complex aggregation |
| UTL-X mapping | Pure UTL-X script, no deployment — runs in the receiving wrapper (Mode 2) or as a standalone mapping component (Mode 3) | Field mapping, format conversion, structural transformation |

The wrapper in detail

The Open-M wrapper is a compiled-in library (not a sidecar) embedded in every component container, versioned separately from the business logic. The Go, Java, and .NET wrappers all implement the same lifecycle:

sequence — wrapper processing one message
broker.subscribe(input_subscription)
  → envelope = broker.dequeue()
  → wrapper.validate_schema(envelope.payload, envelope.schema_ref)
  → if connection.transform: payload = utlx.execute(transform, envelope)
  → output = business_logic.process(payload, envelope.steps)
  → new_envelope = wrapper.build_envelope(output, envelope)
  → broker.publish(output_topic, new_envelope)
  → broker.ack(envelope)                    # after publish for at_least_once
  → log_topic.publish(log_event(correlation_id, ...))
ℹ️ The wrapper is the only layer that interacts with Pulsar/Kafka. Business logic receives a plain payload and optionally the step window — it has no broker dependency and can be unit-tested with a synthetic MPPM envelope without a running broker.
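That testability claim can be illustrated with a minimal Go sketch of an SDK-style component: the handler below is plain code with no broker dependency, so it can be exercised against a synthetic envelope. All names here (Envelope, enrichCustomer, the field layout) are illustrative assumptions, not the actual Open-M SDK API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Envelope is a minimal stand-in for the MPPM envelope.
// Field names are illustrative assumptions, not the platform schema.
type Envelope struct {
	CorrelationID string          `json:"correlation_id"`
	Payload       json.RawMessage `json:"payload"`
}

// enrichCustomer is a toy business-logic handler. In a real component the
// wrapper would call it after dequeue/validate; here we call it directly.
func enrichCustomer(payload []byte) ([]byte, error) {
	var cust map[string]any
	if err := json.Unmarshal(payload, &cust); err != nil {
		return nil, err
	}
	cust["tier"] = "gold" // real enrichment logic would look this up
	return json.Marshal(cust)
}

func main() {
	// Synthetic envelope -- no running broker involved.
	env := Envelope{CorrelationID: "c-123", Payload: []byte(`{"id":"42"}`)}
	out, err := enrichCustomer(env.Payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"id":"42","tier":"gold"}
}
```

Because the handler only sees the payload (and, if it asks for it, the step window), the same function runs unchanged whether the wrapper dequeued the envelope from Pulsar or Kafka.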