Quickstart
Go from your data sources to stream processing insights in less than 15 minutes
When you start working with Joule you will be editing files locally using a code editor and running projects using the Joule command scripts. If you prefer to build your projects within an Integrated Development Environment (IDE), clone the existing banking example project from here.
Download the banking example project here.
Just click, download, unzip and run the examples
Prerequisites
Getting started with Joule has minimal requirements, but to take full advantage of the platform some key technical capabilities are needed.
To configure Joule it is important that you know some basics of the Terminal. In particular, you should understand general bash commands so you can navigate the directory structure of your computer easily.
Install Joule using the installation instructions for your operating system.
Create a GitLab account if you don't already have one.
Install the required platform tools, see the setting up the environment document.
Add the address of the Kafka host to the /etc/hosts file
i.e. 127.0.0.1 KAFKA_BROKER
Getting started
We shall start with a simple use case that demonstrates how to get up and running with the platform using the following repeatable steps.
The use case will subscribe to a Kafka quotes topic, compute the high and low price per symbol using tumbling windows, and then publish the resulting events onto the analytics_view topic.
This project can be found on GitLab by following this link.
1. Connect to a data source
Define one or more event sources using the provided data source connectors.
Overview of the definition
Provide a logical name for the source definition
Define one or more channels to receive events
Joule will subscribe to events using the quotes Kafka topic
Received events are deserialised using a user defined parser into a Joule StreamEvent to enable processing
Example Kafka subscription
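A minimal sketch of such a Kafka source definition is shown below. The connector key, property names and parser class are assumptions made for illustration; the cloned example project contains the exact definition for your Joule release.

```yaml
kafkaConsumer:                        # Illustrative connector key; check your Joule release for the exact name
  name: nasdaq_quotes_stream          # Logical name referenced by the use case in step 2
  cluster address: KAFKA_BROKER:9092  # Host alias added to /etc/hosts in the prerequisites
  consumerGroupId: joule-quickstart
  topics:
    - quotes                          # Kafka topic Joule subscribes to
  deserializer:
    parser: QuoteToStreamEventParser  # Hypothetical user defined parser producing Joule StreamEvents
```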
2. Process events
Use case processing is defined as a pipeline of processing stages. Joule provides a set of out-of-the-box (OOTB) processors (see the documentation), along with an SDK that enables developers to extend the platform's capabilities.
Overview of the definition
A logical name is defined for the use case; this will be used in the next step
Processing constraints define when this stream can execute
Event processing will use the actual event time provided within the received event
The use case will subscribe to events from the nasdaq_quotes_stream data source configured in step 1
Event telemetry is switched on to track every event received and published
The use case applies 1 second tumbling window aggregate functions to two event attributes grouped by symbol
A simple event projection emits the computed grouped events
Example tumbling window calculations
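The use case definition might look something like the sketch below. Key names such as stream, eventTimeType, processing unit and the window policy settings, as well as the use case and attribute names, are assumptions for illustration only; the project's generated YAML and the processor documentation are the authoritative reference.

```yaml
stream:
  name: nasdaq_quotes_analytics          # Logical use case name, referenced by the publisher in step 3 (illustrative)
  eventTimeType: EVENT_TIME              # Use the event time provided within each received event
  sources:
    - nasdaq_quotes_stream               # Data source configured in step 1
  processing unit:
    pipeline:
      - time window:
          policy:
            type: tumblingTime
            window size: 1000            # 1 second tumbling window (milliseconds)
          aggregate functions:
            MIN: [ask]                   # Low price per window
            MAX: [bid]                   # High price per window
  emit:
    select: "symbol, ask_MIN, bid_MAX"   # Simple projection of the computed grouped events
  group by:
    - symbol
```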
3. Distribute processed events
Distribution of processed events can be as simple as writing to a file or dashboard, or publishing on to another streaming channel for a downstream process to perform further processing. For this example we are using the Kafka sink connector. Further information on available sinks can be found here.
Overview of the definition
Provide a logical name for the distribution definition
Bind to the use case, in this case it is streamingAnalyticsPublisher
Define one or more channels to receive events
The published event is created by mapping the internal Joule event to the domain type defined by the StockAnalyticRecordTransform transform implementation, which is then converted to JSON
Example Kafka publish connection using a translation AVRO schema
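A sketch of what this publisher definition could look like is given below. The connector key, binding key, schema file name and use case name are illustrative assumptions; consult the example project for the exact Kafka sink definition.

```yaml
kafkaPublisher:                              # Illustrative connector key
  name: streamingAnalyticsPublisher          # Logical name for this distribution definition
  use case: nasdaq_quotes_analytics          # Binding to the use case defined in step 2 (illustrative key and value)
  cluster address: KAFKA_BROKER:9092
  topic: analytics_view                      # Channel the computed analytics are published on
  serializer:
    transform: StockAnalyticRecordTransform  # Maps the internal Joule StreamEvent to the domain type
    avro schema: stock_analytic_record.avsc  # Translation AVRO schema file (illustrative name)
```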
Deployment artefact
Now we bring together each deployment artefact (source, use case and sinks) to form the desired use case. A use case is formed by a single app.env file which references these files. This method of deployment enables you to simply switch out the sources and sinks based upon your needs, i.e. development, testing and production deployments.
Example app.env file used by Joule to run a use case
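The app.env file essentially points Joule at the three definitions above. The variable names and paths below are assumptions used for illustration; the example project's own app.env is the reference.

```properties
# Illustrative app.env: variable names and paths are assumptions, check the example project for the real ones
JOULE_SOURCE_FILE=conf/sources/nasdaq_quotes_stream.yaml
JOULE_USECASE_FILE=conf/usecases/nasdaq_quotes_analytics.yaml
JOULE_SINK_FILE=conf/sinks/streaming_analytics_publisher.yaml
```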
Get the example running
Joule provides the necessary scripts and configurations to get a use case running using either a Docker image or a local unpacked installation. We shall use the local installation to get you familiar with the general directory structure; this will also help you understand the provided Docker image.
At the root of the directory we have the following structure:
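Since the exact layout comes from the example project, the sketch below is only indicative of the kind of structure to expect; the actual file and directory names may differ.

```
.
├── app.env    # Binds the source, use case and sink definitions together
├── conf/      # Source, use case and sink YAML definitions
├── bin/       # Helper scripts for Redpanda, the simulator and Joule
└── data/      # The provided nasdaq csv info file used by the simulator
```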
1. Start a local version of Redpanda Kafka
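Assuming the example project ships a Docker Compose file for Redpanda (a common way to run it locally), starting the broker could look like this:

```bash
# Start a single-node Redpanda broker in the background
# (assumes a docker-compose.yml in the project root)
docker compose up -d
```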
2. Start the data simulator
This will generate simulated quotes based upon the provided nasdaq csv info file.
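The exact script name depends on the example project; a hypothetical invocation might be:

```bash
# Hypothetical helper script: generates simulated quotes from the provided
# nasdaq csv info file and publishes them onto the quotes topic
./bin/startQuotePublisher.sh
```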
3. Start the Joule use case
This will use the app.env file to start the use case, which will publish the resulting analytic results on to the analytics_view topic
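Again the launcher script name is project specific; a typical invocation passing the app.env file might look like:

```bash
# Hypothetical launcher: starts Joule with the deployment artefacts referenced in app.env
./bin/startJoule.sh --config app.env
```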
4. View the results
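One way to inspect the published analytics from the command line is Redpanda's rpk tool, for example by exec-ing into the broker container (the container name here is an assumption):

```bash
# Consume a few records from the analytics_view topic (container name is an assumption)
docker exec -it redpanda rpk topic consume analytics_view --num 5
```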
Or from the Redpanda UI
Use this link to access the console
5. Stopping the processes
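To stop everything, terminate the Joule and simulator processes (Ctrl+C in their terminals) and shut down Redpanda; if it was started with Docker Compose, that could be:

```bash
# Stop and remove the local Redpanda container(s) started earlier
docker compose down
```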