Low code development

Joule is a low-code development platform designed to ideate, pilot and scale business use cases.

What we will learn on this page

We will explore the Joule platform's low-code approach and its core features.

By the end of the article, we will have a clear understanding of how Joule simplifies development through its Domain-Specific Language (DSL) and the definition of use cases.

We will learn about:

  1. Joule low-code approach: simplifying development using a YAML-based DSL.

  2. Use case definition: combining data sources, processing, and outputs into cohesive definitions.

  3. Streams: configuring data processing pipelines, such as tumbling windows.

  4. Data subscription & publishing: connecting to external data sources and publishing results.

  5. Contextual data: managing slower-changing data with in-memory caching for low-latency reads.

These concepts are introduced with high-level examples and can be explored in more detail with the linked documentation.

Joule low-code approach

The Joule platform offers a low-code development solution that reduces coding complexity through a high-level language. This is accomplished with the Joule DSL, which enables use cases to be defined in human-readable YAML syntax.

Use case definition

A use case is defined by combining source data requirements, a processing pipeline, and output destinations into a single cohesive definition.

Example

The following diagram shows the components of a use case. Each use case dependency is linked by a logical name and is defined in an independent file.

Use case components

This results in a single use case definition:
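As an illustrative sketch only, such a combined definition could take a shape like the one below. The key names (use case, sources, stream, sinks) and logical names are assumptions used for illustration, not the exact Joule DSL schema.

```yaml
# Illustrative sketch - key names are assumptions, not the exact Joule DSL schema.
# The use case binds independently defined components together by their logical names.
use case:
  name: quoteAnalytics
  sources:
    - live_quotes                       # logical name of a source connector definition
  stream: tumblingWindowQuoteStream     # logical name of the stream definition
  sinks:
    - quote_window_file                 # logical name of a sink connector definition
```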

Stream

A stream defines the actual processing requirements and sequence.

Pipelines

Example

The example below computes the minimum and maximum of the ask and bid values within a five-second tumbling window and publishes only events whose symbol is not 'A'.
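A minimal sketch of such a pipeline is shown below; the keys, aggregate function names and policy settings are assumptions used for illustration, not the exact Joule DSL syntax.

```yaml
# Illustrative sketch - keys and function names are assumptions, not the exact Joule DSL syntax.
stream:
  name: tumblingWindowQuoteStream
  processing unit:
    pipeline:
      # Exclude quotes for symbol 'A' so they are never published downstream
      - filter:
          expression: "symbol != 'A'"
      # Five-second tumbling window computing min and max of the ask and bid values
      - time window:
          emitting type: quoteWindowAnalytics
          aggregate functions:
            MIN: [ask, bid]
            MAX: [ask, bid]
          policy:
            type: tumblingTime
            window size: 5000           # window length in milliseconds
```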

Data subscription

Users can subscribe to external data events using source connectors.

Sources

Example

The example below connects to a Kafka cluster, consumes events from the quote topic, and transforms each received quote object into an internal StreamEvent object.
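A sketch of a Kafka source definition of this kind is shown below; the key names and the transformer class are assumptions for illustration, not the exact Joule DSL syntax.

```yaml
# Illustrative sketch - key names and the transformer class are assumptions,
# not the exact Joule DSL syntax.
kafkaConsumer:
  name: live_quotes
  cluster address: localhost:9092        # Kafka bootstrap server(s)
  consumer group: quote-analytics
  topics:
    - quote                              # topic the quote events are consumed from
  deserializer:
    # Hypothetical transformer mapping a received quote object to an internal StreamEvent
    transform: com.example.QuoteToStreamEventTransformer
    key deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value deserializer: org.apache.kafka.common.serialization.ByteArrayDeserializer
```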

Event publishing

Users can publish events to downstream data platforms using destination connectors.

Sinks

Example

The example below generates a quoteWindowStream.csv file from the tumblingWindowQuoteStream events.
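As a sketch, assuming illustrative key names rather than the exact Joule DSL syntax, a file sink of this kind could be expressed as:

```yaml
# Illustrative sketch - key names are assumptions, not the exact Joule DSL syntax.
file:
  name: quote_window_file
  source: tumblingWindowQuoteStream      # stream whose events are written out
  filename: quoteWindowStream            # results in quoteWindowStream.csv
  path: ./output
  formatter:
    csv formatter:
      delimiter: ","
      contentType: text/csv
```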

Contextual data

Stream processing often requires additional data, generally known as reference or contextual data, to perform analytics.

Data of this form generally updates at a much slower pace, and is therefore managed differently and held in data platforms not architected for low-latency reads. Joule overcomes this limitation with a low-latency read mechanism built on in-memory caching.

Contextual data

Example

The example below connects to Apache Geode, a distributed caching platform, for low-latency reference data reads.
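A sketch of such a contextual data definition is shown below; the key, store and region names are assumptions for illustration, not the exact Joule DSL syntax (10334 is the default Apache Geode locator port).

```yaml
# Illustrative sketch - key, store and region names are assumptions,
# not the exact Joule DSL syntax.
contextual data:
  geode stores:
    connection:
      locator address: localhost         # Apache Geode locator host
      locator port: 10334                # default Geode locator port
    stores:
      # Logical store name referenced by streams for low-latency reference data reads
      instrument_reference:
        region: instruments              # Geode region holding the reference data
        key class: java.lang.String
```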
