Quickstart
Gain insights from your data sources with minimised friction using the Joule prototyping platform
Download the getting started project here. Clone the repository and follow the instructions to run the examples.
When you start working with Joule you will be editing files locally using a code editor and running use case examples using Postman. If you prefer to build your projects within an Integrated Development Environment (IDE), clone the existing banking example project from here.
Getting started with Joule has minimal requirements, but to take full advantage of the platform some key technical capabilities will be needed.
Clone the getting started project to a local directory by using the following command:
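For example (the placeholder below stands in for the repository URL linked above):

```bash
# Clone the getting started project into a local directory
# Replace the placeholder with the repository URL linked above
git clone <getting-started-repo-url> joule-getting-started
cd joule-getting-started
```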
Install the required platform tools if you want to experiment with the Joule SDK; see the setting up the environment document.
Note: To configure and run Joule it is important that you know some basics of the Terminal. In particular, you should understand general bash commands to navigate through the directory structure of your computer easily.
Joule provides the necessary scripts and configurations to get a use case running using either a Docker image or a local unpacked installation. We shall use the local installation to get you familiar with the general directory structure; this will benefit your understanding of the provided Docker image.
For this you need to change into the `quickstart` directory and run a single command.
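As a sketch, assuming the provided script is named `startup.sh` (a hypothetical name; use the script actually provided in the directory):

```bash
# Move into the quickstart directory and launch the environment
cd quickstart
./startup.sh   # hypothetical name; use the script provided in this directory
```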
This will start up the following containers ready for use case deployment:
Joule Daemon
Joule Database
Redpanda (a lightweight Kafka-compatible broker)
Both the banking and telco demo directories provide a set of examples in the form of a Postman collection and environment. These can be found under the `examples` directory. Now let's get you started running the banking `Getting Started` demo example using Postman.
First, import the use case demos and environment files from the banking-demo `examples` directory.
Set the environment to Joule.
From the `Getting started \ Deploy` folder, click `Run folder` from the menu.
Finally, execute the run order by clicking the `Run Joule - Banking demo` button.
This will deploy the source, stream, sink and the use case binding definition to the platform. Note that on a restart these settings will be rehydrated and will start automatically.
This will generate simulated quotes based upon the provided Nasdaq CSV info file.
Use this link to access the console.
Note there are many other examples within the `getting started` project. These are described within the README.md files.
If you choose to build the environment for development purposes, simply run the command below. Please ensure you have the correct build environment as set out in the setting up the environment document.
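As a sketch, assuming the project ships a build script named `build.sh` (a hypothetical name; the actual command is described in the project README):

```bash
# Build the example components and assemble the demo environment
./build.sh   # hypothetical name; see the project README for the actual command
```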
This will build both the banking and telco example components, copy across the configurations and libraries, and set up the demo environment.
The banking getting started use case demonstrates core features that are reusable across all use cases: connecting to data, processing events, and distributing events.
Hence, we shall start with a simple use case that demonstrates how to get up and running with the platform using the following repeatable steps. The use case subscribes to a Kafka `quotes` topic, computes the high and low price per symbol using tumbling windows, and then publishes the resulting events onto the `analytics_view` topic.
Define one or more event sources using the provided data source connectors. The notes below describe the key parts of the source definition; an illustrative sketch follows them.
`nasdaq_quotes_event_stream` is the logical name for the source definition.
Subscribe to events using the `quotes` Kafka topic.
Received events are deserialised using a user-defined parser into a Joule `StreamEvent` to enable processing.
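As a concrete illustration only, such a source definition might look roughly like the sketch below. The field names, broker address, and parser class are assumptions, not the exact Joule schema; consult the data source connector documentation for the real layout.

```yaml
# Illustrative sketch only - field names are assumptions, not the exact Joule schema
kafkaConsumer:
  name: nasdaq_quotes_event_stream      # logical name for the source definition
  cluster address: localhost:9092       # assumed address of the Redpanda broker
  topics:
    - quotes                            # Kafka topic to subscribe to
  deserializer:
    # hypothetical user-defined parser converting raw events into Joule StreamEvents
    parser: com.example.parsers.QuoteToStreamEventParser
```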
Use case processing is defined as a pipeline of processing stages. Joule provides a set of OOTB processors (see documentation) along with an SDK to enable developers to extend the platform capabilities. The notes below describe the pipeline definition; an illustrative sketch follows them.
`basic_tumbling_window_pipeline` is used as the logical name for the stream processing pipeline; this will be used in the next step.
Processing constraints (valid from and to dates) define when this stream can execute.
Event processing will use the event time provided within the received event.
The use case will subscribe to events from the `nasdaq_quotes_stream` data source configured in step 1.
The use case applies 1-second tumbling window aggregate functions to two event attributes, grouped by symbol.
A simple select projection emits the computed events.
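Putting those notes together, a use case definition could be sketched roughly as below. The structure, key names, and the `bid`/`ask` attribute names are all assumptions for illustration; consult the pipeline documentation for the actual DSL.

```yaml
# Illustrative sketch only - structure and attribute names are assumptions
stream:
  name: basic_tumbling_window_pipeline   # logical name, referenced in the next step
  valid from: 2024-01-01                 # processing constraints: when this
  valid to: 2030-01-01                   #   stream is allowed to execute
  event time: event                      # use the event time provided in the event
  source: nasdaq_quotes_stream           # data source configured in step 1
  processing unit:
    pipeline:
      - time window:
          policy:
            type: tumbling
            size: 1s                     # 1 second tumbling window
          group by: symbol
          aggregate functions:
            MIN: [ ask ]                 # low price per symbol per window
            MAX: [ bid ]                 # high price per symbol per window
  emit:
    select: "symbol, MIN_ask, MAX_bid"   # simple select projection
```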
Distribution of processed events can be as simple as writing to a file or a dashboard, or on to another streaming channel for a downstream process to perform further processing. For this example we are using the Kafka sink connector. Further information on available sinks can be found here. The notes below describe the sink definition; an illustrative sketch follows them.
Provide a logical name for the distribution definition.
Bind to the use case; in this case it is `streamingAnalyticsPublisher`.
Define one or more channels to receive events.
The published event is created by mapping the internal Joule event to the domain type defined by the `StockAnalyticRecordTransform` transform implementation, which is then converted to JSON.
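A matching sink definition might be sketched as below. Field names, the broker address, and the transform's package path are assumptions for illustration.

```yaml
# Illustrative sketch only - field names are assumptions
kafkaPublisher:
  name: streamingAnalyticsPublisher     # logical name bound to the use case
  cluster address: localhost:9092       # assumed address of the Redpanda broker
  topic: analytics_view                 # channel receiving the processed events
  serializer:
    # maps the internal Joule event to the domain type, which is then
    # converted to JSON for publication (package path is hypothetical)
    transform: com.example.transforms.StockAnalyticRecordTransform
```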
Now we bring together each deployment artefact (source, use case and sinks) to form the desired use case. A use case is formed by a single `app.env` file which references these files. This method of deployment enables you to simply switch out the sources and sinks based upon your needs, i.e. development, testing and production deployments.
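For example, an `app.env` file along these lines would bind the three artefacts together; the property names and file paths are illustrative assumptions:

```
# Illustrative app.env sketch - property names and paths are assumptions
SOURCES=conf/sources/nasdaq-quotes-source.yaml
USECASE=conf/usecases/basic-tumbling-window.yaml
SINKS=conf/sinks/analytics-view-sink.yaml
```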