Kafka

The standard Kafka consumer transport ingests data from subscribed cluster topics.


Overview

This page explains how to configure the Kafka consumer in Joule to ingest data from specified Kafka topics for use in data pipelines.

Joule's Kafka consumer deserialises Kafka streams into Joule StreamEvents, enabling them to be processed within the Joule environment. This deserialisation can be automated or customised with a transformer based on configuration settings.

Client library: org.apache.kafka:kafka-clients:2.7.0

Examples & DSL attributes

The example below illustrates how to set up a consumer that processes messages from the quotes topic on a specified Kafka broker (KAFKA_BROKER:9092).

It uses the StickyAssignor strategy for partition assignment; a note after the configuration shows how to map the broker's symbolic name in the /etc/hosts file.

kafkaConsumer:
  name: nasdaq_quotes_stream
  # Maps to the Kafka bootstrap.servers property
  cluster address: KAFKA_BROKER:9092
  # Maps to the Kafka group.id property
  consumerGroupId: nasdaq
  topics:
    - quotes

  properties:
    partition.assignment.strategy: org.apache.kafka.clients.consumer.StickyAssignor

Add the address of the Kafka host to the /etc/hosts file. This enables you to use the symbolic name rather than a specific IP address, e.g.:

127.0.0.1    KAFKA_BROKER

Attributes schema

| Attribute | Description | Data Type | Required |
| --- | --- | --- | --- |
| name | Name of the source stream | String | |
| cluster address | Kafka cluster details. Maps to bootstrap.servers | http://<ip-address>:port | |
| consumerGroupId | Consumer group to ensure an event is read only once. Maps to group.id | String | |
| topics | Message topics to subscribe to | List of strings | |
| pollingTimeout | Upper bound on the amount of time the consumer can be idle before fetching more records | Long | Default: 250ms |
| deserializer | Deserialisation configuration. See the Ingestion documentation | | |
| properties | Additional consumer properties applied to the Kafka client API. See the official Kafka documentation for available properties; the defaults applied are listed below | Properties map | |
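
For illustration, the optional pollingTimeout attribute can be set alongside the required attributes. This is a minimal sketch; the 500ms value below is an arbitrary example, not a recommendation.

kafkaConsumer:
  name: nasdaq_quotes_stream
  cluster address: KAFKA_BROKER:9092
  consumerGroupId: nasdaq
  topics:
    - quotes
  # Optional: override the default 250ms polling timeout (illustrative value)
  pollingTimeout: 500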

Ingestion deserialisation in Joule

The Kafka consumer configuration in Joule is designed to ingest and process data from Kafka topics by deserialising streams into Joule-compatible formats.

Using the consumer attributes and custom configurations, incoming messages are transformed into Joule StreamEvents that can be processed further in data pipelines. See the Ingestion documentation for how this is done.
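
As a minimal sketch of what a customised configuration might look like: the deserializer attribute is documented in the Ingestion documentation, while the parser key and the com.example.QuoteEventParser class below are hypothetical placeholders, not part of the documented DSL.

kafkaConsumer:
  name: nasdaq_quotes_stream
  cluster address: KAFKA_BROKER:9092
  consumerGroupId: nasdaq
  topics:
    - quotes
  # Hypothetical deserialiser block: a custom parser that maps raw
  # Kafka payloads to Joule StreamEvents
  deserializer:
    parser: com.example.QuoteEventParser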

Default consumer properties

enable.auto.commit=true
auto.offset.reset=latest
auto.commit.interval.ms=50
session.timeout.ms=30000
key.deserializer=org.apache.kafka.common.serialization.IntegerDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
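
Any of these defaults can be overridden through the consumer's properties map. For example, to read from the earliest available offset instead of the latest (auto.offset.reset is a standard Kafka client property; the override below is illustrative):

properties:
  auto.offset.reset: earliest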

This setup supports a range of deserialisation options for efficient and flexible integration with your data infrastructure.
