Custom missing value processor

Build, deploy and apply a custom transformer


Last updated 4 months ago


Objective

We will create a simple missing-data transformer that fills in missing values either by applying a default value or by reusing the field's previous value.
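Before diving into the Joule-specific implementation, the fill strategy itself can be shown as a stand-alone sketch. This is plain Java, independent of the Joule SDK; the class and method names are illustrative only: if a value is missing, use the last seen value for that key, falling back to a default when there is no history.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-alone sketch of the fill strategy built in this tutorial.
public class FillSketch {
    private static final Map<String, Object> previous = new HashMap<>();

    static Object fill(String key, Object value, Object defaultValue) {
        if (value == null) {
            // No value supplied: reuse the previous value for this key,
            // or the default when no previous value exists
            value = previous.containsKey(key) ? previous.get(key) : defaultValue;
        }
        previous.put(key, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fill("MSFT", null, 1.0));   // no history -> default: 1.0
        System.out.println(fill("MSFT", 310.5, 1.0));  // value present -> 310.5
        System.out.println(fill("MSFT", null, 1.0));   // missing -> previous: 310.5
    }
}
```

The processor we build below wraps exactly this logic in Joule's processor lifecycle.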

Prerequisites

To get started building a custom processor, ensure you have your development environment configured.

Development steps

These instructions cover how to build, deploy and use the processor on the Joule platform.

1. Create project using the template

git clone git@gitlab.com:joule-platform/fractalworks-project-templates.git

Joule uses Gradle to manage Java dependencies. To add dependencies for your processor, manage them in the build.gradle file inside your processor's project directory.
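For example, a dependency block might look like the following. The artifact coordinates shown here are illustrative assumptions, not the real Joule coordinates — use the ones provided in the template project's build.gradle:

```groovy
dependencies {
    // Joule SDK coordinates are illustrative; copy the real ones
    // from the template project's build.gradle
    compileOnly 'com.fractalworks.streams:joule-sdk:<version>'

    // Example third-party dependency used by your processor
    implementation 'org.apache.commons:commons-lang3:3.14.0'

    // Test dependencies for the template's JUnit tests
    testImplementation 'org.junit.jupiter:junit-jupiter:5.10.2'
}
```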

2. Implement missing value transformer

Processors differ from connectors in that they do not currently require specification and builder classes, so you can jump straight in: create a class whose name reflects its processing function.

Joule provides the core logic such as batching, cloning, linking of data stores, and a unique processor UUID for event change lineage.

Key areas of implementation:

  • Define processor DSL namespace

  • Initialize and apply methods

  • Attribute setters and properties

  • Add the class definition to plugins.properties

  • Deploy and apply to a Joule runtime environment

Code implementation

package com.yourcompany.processor.transformers;

import com.fasterxml.jackson.annotation.JsonRootName;
import com.fasterxml.jackson.annotation.JsonProperty;

import com.fractalworks.streams.core.data.streams.Context;
import com.fractalworks.streams.core.data.streams.StreamEvent;
import com.fractalworks.streams.core.data.streams.Metric;

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// JsonRootName value will be used in the use case definition
@JsonRootName(value = "missing value transformer")
public class CustomMissingValueTransformer extends AbstractProcessor {

    private Map<String,Object> previousValue = new HashMap<>();

    private String key;
    private String field;
    
    private Object defaultValue;
    
    public CustomMissingValueTransformer() {
        super();
    }

    @Override
    public void initialize(Properties prop) throws ProcessorException {
        super.initialize(prop);
        // Add specific initialisation code here        
    }

    /**
    * This is where your custom code is provided
    */
    @Override
    public StreamEvent apply(StreamEvent event, Context context) 
        throws StreamsException {
        
        var value = event.getValue(field);
        if (value == null) {
            value = previousValue.containsKey(key)
                ? previousValue.get(key) : defaultValue;
            event.addValue(uuid, field, value);
            if (logger.isInfoEnabled()) {
                logger.info("Updated missing {} value with {}", field, value);
            }
        }
        previousValue.put(key, value);
        
        // JMX enabled metrics
        metrics.incrementMetric(Metric.PROCESSED);
        return event;
    }
    
    /**
    * Attribute setters and dsl property 
    */

    @JsonProperty(value = "unique key", required = true)
    public void setKey(String key) {
        this.key = key;
    }
    
    @JsonProperty(value = "field", required = true)
    public void setField(String field) {
        this.field = field;
    } 
        
    @JsonProperty(value = "default value", required = true)
    public void setDefaultValue(Object defaultValue) {
        this.defaultValue = defaultValue;
    } 
}

Note: If you would like to perform batch processing, override the method below.

public MicroBatch apply(MicroBatch batch, Context context) throws StreamsException;
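Conceptually, the batch variant just applies the same per-event fill logic to each event in the batch in order. As a self-contained analogy (a List of maps stands in for Joule's MicroBatch and StreamEvent types, which this sketch does not use):

```java
import java.util.List;
import java.util.Map;

// Illustrative only: List<Map<String,Object>> stands in for MicroBatch,
// and each Map stands in for a StreamEvent.
public class BatchSketch {
    static List<Map<String, Object>> apply(List<Map<String, Object>> batch,
                                           Object defaultValue) {
        Object previous = null;
        for (Map<String, Object> event : batch) {
            Object value = event.get("bid");
            if (value == null) {
                // Fall back to the previous event's value, then the default
                value = (previous != null) ? previous : defaultValue;
                event.put("bid", value);
            }
            previous = value;
        }
        return batch;
    }
}
```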
3. Add to plugins.properties

For Joule to load and initialise the component, the processor must be defined in the plugins.properties file under the META-INF/services directory.

Add the following line to the plugins.properties file:

com.yourcompany.processor.transformers.CustomMissingValueTransformer
4. Build, test and package

The template project provides basic JUnit tests to validate the DSL. These tests are executed during the Gradle build cycle, after which the package is deployed to your local Maven repository.

gradle build publishToMavenLocal
5. Deploy

Once your package has been successfully created, you are ready to deploy it to a Joule project.

Copy the resulting jar from the build process to the userlibs directory under a Joule project directory. For example, using the getting started project, copy the file to the quickstart/userlibs directory.

cp build/libs/<your-processor>.jar <location>/userlibs
6. Apply to a stream

Suppose we sometimes do not receive a bid value, which is needed to trigger an alert. To avoid a division by zero, we provide a default value and reuse previous values when needed.

stream:
  name: nasdaq_major_banks_stream
  eventTimeType: EVENT_TIME
        
  processing unit:
    pipeline:
    # Filter events by major banks
    - filter:
        expression: "(typeof industry !== 'undefined' && 
                      industry == 'Major Banks')"
    
    # Fill missing bid values
    - missing value transformer:
        key: symbol
        field: bid
        default value: 1.0
    
  emit:
    select: symbol, bid, ask
    
    # Spread trigger
    having: "((bid - ask) / bid) > 0.015"
    
  group by:
  - symbol
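To see why the default matters, note that the emit `having` clause divides by `bid`, which would fail on a missing or zero bid. The condition can be sketched in plain Java (the class and method names are illustrative):

```java
// Sketch of the emit 'having' condition: alert when the relative
// spread exceeds 1.5%. A defaulted bid keeps the division safe
// when the real bid is missing.
public class SpreadCheck {
    static boolean spreadAlert(double bid, double ask) {
        return ((bid - ask) / bid) > 0.015;
    }

    public static void main(String[] args) {
        System.out.println(spreadAlert(100.0, 98.0)); // spread 2% -> true
        System.out.println(spreadAlert(100.0, 99.0)); // spread 1% -> false
    }
}
```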

What we have learnt

Building our first processor, we have covered a number of key steps:

  • Build a simple transformer — used the provided template project to quick-start development and added custom code within the key processor methods.

  • Build the jar — used the Gradle build tool to build, test and deploy to the local Maven repository.

  • Deploy the jar to a Joule runtime environment — copied the jar to an existing local Joule runtime environment.

  • Apply the transformer within a use case — applied the transformer within a use case to provide consistent spread alerts in the event of missing data.

Read the environment setup documentation to get your environment ready to build.

We have provided a template project to quick-start development: the fractalworks-project-templates repository cloned in step 1. Clone the template project and copy the relevant code and structure into your own project.

Follow the same steps used in the getting started documentation to apply this processor.