Low-code development

Joule is a low-code development platform designed to ideate, pilot and scale business use cases.

What we will learn on this page

We will explore the Joule platform's low-code approach and its core features.

By the end of the article, we will have a clear understanding of how Joule simplifies development through its Domain-Specific Language (DSL) and the definition of use cases.

We will learn about:

  1. Joule low-code approach: simplifying development using a YAML-based DSL.

  2. Use case definition: combining data sources, processing, and outputs into cohesive definitions.

  3. Streams: configuring data processing pipelines, such as tumbling windows.

  4. Data subscription & publishing: connecting to external data sources and publishing results.

  5. Contextual data: managing slower-changing data with in-memory caching for low-latency reads.

These concepts are introduced with high-level examples and can be explored in more detail with the linked documentation.

Joule low-code approach

The Joule platform offers a low-code development solution that reduces coding complexity by using a high-level language. This is accomplished through the Joule DSL, which enables use cases to be defined in human-readable YAML syntax.

This forms the core of the platform's low-code approach.
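
At a glance, the examples on this page use five top-level definition types, each defined independently and linked by logical name. The outline below is only an orientation aid based on the examples that follow, not a complete schema:

use case:        # binds sources, a stream and sinks into a single deployable definition
stream:          # the processing pipeline (windows, aggregate functions, emit rules)
consumer:        # data subscription via source connectors (e.g. Kafka)
publisher:       # event publishing via destination connectors (e.g. files)
reference data:  # contextual data served from low-latency caches (e.g. Apache Geode)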

Use case definition

A use case is defined by combining source data requirements, a processing pipeline, and output destinations into a single cohesive definition.

Example

The following diagram shows the components of a use case. Each use case dependency is linked by a logical name that is defined in an independent file.

This results in a single use case definition:

use case:
  name: nasdaq_buy_signaler

  constraints:
    valid from: "2024-01-01T08:00:00.000Z"
    valid to: "2030-01-01T23:59:00.000Z" # leave empty for infinite processing

  sources:
    - live_nasdaq_quotes # logical name of the data source definition

  stream name: quote_buy_signals # logical name of the stream definition

  sinks:
    - client_buy_dashboards # logical name of the destination definition

Stream

A stream defines the actual processing steps and their sequence.

Example

The example below computes the minimum and maximum of the ask and bid values within a five-second tumbling window and only publishes results for symbols other than 'A'.

stream:
  name: basic_tumbling_window_pipeline
  eventTimeType: EVENT_TIME # use event-time semantics for windowing
  sources:
    - nasdaq_quotes_stream

  processing unit:
    pipeline:
      - time window:
          emitting type: tumblingQuoteAnalytics
          aggregate functions:
            MIN: [ask, bid]
            MAX: [ask, bid]
          policy:
            type: tumblingTime
            windowSize: 5000 # five seconds, in milliseconds

  emit:
    eventType: windowQuoteEvent
    select: "symbol, ask_MIN, ask_MAX, bid_MIN, bid_MAX" # fields projected onto each emitted event
    having: "symbol !='A'" # exclude symbol 'A' from the published results

  group by:
    - symbol # aggregate per symbol within each window
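
For orientation, each emitted windowQuoteEvent carries the fields projected by the select clause, producing one event per symbol per five-second window. A sketch of the event shape (field descriptions only, not real output):

symbol: <the grouped symbol>
ask_MIN: <lowest ask observed in the window>
ask_MAX: <highest ask observed in the window>
bid_MIN: <lowest bid observed in the window>
bid_MAX: <highest bid observed in the window>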

Data subscription

Users can subscribe to external data events through source connectors.

Example

The example below connects to a Kafka cluster, consumes events from the quotes topic and transforms each received Quote object into an internal StreamEvent object.

consumer:
  name: nasdaq_quotes_stream
  sources:
    - kafkaConsumer:
        name: nasdaq_quotes_stream
        cluster address: KAFKA_BROKER:19092
        consumerGroupId: nasdaq
        topics:
          - quotes

        deserializer:
          parser: com.fractalworks.examples.banking.data.QuoteToStreamEventParser # converts Quote payloads into Joule StreamEvent objects
          key deserializer: org.apache.kafka.common.serialization.IntegerDeserializer # record keys are integers
          value deserializer: com.fractalworks.streams.transport.kafka.serializers.object.ObjectDeserializer # deserializes record values into Quote objects

Event publishing

Users can publish events to downstream data platforms through destination connectors.

Example

The example below publishes events from the basic_tumbling_window_pipeline stream to a pipe-delimited CSV file under ./data/output/analytics.

publisher:
  name: standardAnalyticsFilePublisher
  source: basic_tumbling_window_pipeline # logical name of the stream defined above
  sinks:
    - file:
        enabled: true
        filename: nasdaqAnalytic
        path: ./data/output/analytics
        batchSize: 1024
        timeout: 1000
        formatter:
          csv formatter:
            contentType: "text/csv"
            encoding: "UTF-8"
            delimiter: "|"
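
Putting the pieces together: the sketch below is a hypothetical use case definition that wires the consumer, stream and publisher shown above together by their logical names, following the same structure as the earlier nasdaq_buy_signaler example (illustrative only; see the linked documentation for the exact linkage rules):

use case:
  name: nasdaq_window_analytics # hypothetical name for this illustration
  constraints:
    valid from: "2024-01-01T08:00:00.000Z"
    valid to: # empty for infinite processing
  sources:
    - nasdaq_quotes_stream # the Kafka consumer defined above
  stream name: basic_tumbling_window_pipeline # the tumbling window stream defined above
  sinks:
    - standardAnalyticsFilePublisher # the file publisher defined above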

Contextual data

In stream processing, additional data is often required to perform analytics; this is generally known as reference or contextual data.

Data of this form generally updates at a much slower pace and is therefore managed differently, often held in data platforms that are not architected for low-latency reads. Joule overcomes this limitation with a low-latency read mechanism based on in-memory caching.

Example

The example below connects to a distributed caching platform, Apache Geode, for low-latency reference data reads.

reference data:
  name: banking market data 
  data sources:
    - geode stores:
        name: us markets
        connection:
          locator address: 192.168.86.39 # Geode locator used for member discovery
          locator port: 41111
        stores:
          nasdaqIndexCompanies:
            region: nasdaq-companies
            keyClass: java.lang.String
            gii: true
          holidays:
            region: us-holidays
            keyClass: java.lang.Integer
