Data priming
Prime Joule with necessary startup data
An optional feature that primes Joule with the data required by an active use case.
Overview
Advanced use cases often require contextual data to support calculations or complex business logic. Joule enables this by offering data priming at initialisation and enrichment processing stages.
The initialisation process imports data at startup from local files into an in-memory SQL database, making it immediately available for use in processing.
Initialisation process
Joule’s initialisation process leverages an embedded SQL engine, enabling powerful features like metrics, event capturing, data exporting and access to contextual data.
This imported data, typically static contextual information, plays a vital role in supporting key functions within the event stream pipeline.
Data made available through the initialisation process can be accessed through several main components:
- **Enricher processor**: adds contextual information to events.
- **Metrics engine**: performs real-time calculations and metric updates.
- **Select projection**: chooses specific fields for further processing.
- **In-memory SQL API**: provides direct data access and manipulation within Joule.
| Attribute | Description | Data Type | Required |
|---|---|---|---|
| schema | Global database schema; when set, it is used by any import definition where `schema` is not defined. Default schema | String | |
| csv | List of CSV data import configurations | List | |
| parquet | List of Parquet data import configurations | List | |
Example
The following example demonstrates how to initialise two separate data files into independent in-memory SQL database tables using the CSV and Parquet formats.
The CSV file contains Nasdaq company information; since it is static reference data, it is stored in the `reference_data` schema. The Parquet file loads pre-calculated metrics, priming the metrics engine within the `metrics` schema.
This setup enables efficient access to contextual data and metrics calculations during event processing.
This feature can load and read files from existing databases!
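A configuration along these lines could implement the setup described above. This is a sketch: the `initialisation`/`data import` wrapper keys, file paths and table names are assumptions for illustration, while the attribute names come from the tables in this section.

```yaml
initialisation:
  data import:
    schema: reference_data            # global default schema for imports
    csv:
      - table: nasdaq_companies       # hypothetical table name
        file: data/nasdaq_companies.csv
        header: true                  # first row contains column names
    parquet:
      - schema: metrics               # overrides the global schema
        table: company_metrics        # hypothetical table name
        files:
          - data/company_metrics.parquet
```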
Attributes schema
These are common DSL keywords used in both parquet and CSV importing methods.
| Attribute | Description | Data Type | Required |
|---|---|---|---|
| schema | Database schema in which to create the table and apply the import | String | |
| table | Target table to import data into | String | |
| drop table | Drop any existing table before import; this causes the table to be recreated | Boolean. Default: true | |
| index | Create an index on the created table | | |
Index
If this optional field is supplied, the index is recreated once the data has been imported.
| Attribute | Description | Data Type | Required |
|---|---|---|---|
| fields | A list of table fields to base the index on | String list | |
| unique | True to create a unique index | Boolean. Default: true | |
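As a sketch, an index definition using the attributes above might look like this; the field name is hypothetical:

```yaml
index:
  fields:
    - symbol        # hypothetical column in the imported table
  unique: true      # recreated as a unique index after import
```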
Types of import
Parquet import
Parquet formatted files can be imported into the system.
An `index` cannot be created over a view.
Example
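A minimal Parquet import sketch, assuming hypothetical schema, table and file names:

```yaml
parquet:
  - schema: reference_data
    table: us_holidays            # hypothetical table name
    drop table: true
    asView: false                 # a view would not support an index
    files:
      - data/us_holidays.parquet
```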
Attributes schema
| Attribute | Description | Data Type | Required |
|---|---|---|---|
| asView | Create a view over the Parquet files rather than importing them into a table | Boolean. Default: false | |
| files | List of Parquet files to import | String list | |
CSV import
Data can be imported from CSV files using a supported set of delimiters. The key difference from Parquet is that CSV imports let you control the table definition.
By default, Joule will try to create the target table from a sample set of data, assuming a header exists on the first row.
Example
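A CSV import sketch using the documented attributes; the schema, table and file names are hypothetical:

```yaml
csv:
  - schema: reference_data
    table: nasdaq_companies       # hypothetical table name
    file: data/nasdaq_companies.csv
    header: true                  # first row contains column names
    sample size: 1024             # rows used for type detection
    skip: 1
    auto detect: true
```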
Attributes schema
| Attribute | Description | Data Type | Required |
|---|---|---|---|
| table definition | Custom SQL table definition used when provided. This overrides the auto-generated table definition. | String | |
| file | Name and path of the file to import | String | |
| delimiter | Field delimiter to use | String. Default: \| | |
| date format | User specified date format. Many formats are possible, e.g. %d/%Y/%m or %d-%m-%Y | String. Default: System | |
| timestamp format | User specified timestamp format. Many formats are possible, e.g. %d/%Y/%m or %d-%m-%Y | String. Default: System | |
| sample size | Number of rows used to determine types and table structure | Integer. Default: 1024 | |
| skip | Number of rows to skip when auto generating a table from the sample | Integer. Default: 1 | |
| header | Flag indicating the first line of the file contains a header | Boolean. Default: true | |
| auto detect | Auto detect the table format from a sample of the data. If set to false, a table definition should be provided. | Boolean. Default: true | |
Application example
The following example primes the process with contextual and metrics data which is used for event enrichment processing.
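A combined sketch of such an application configuration, with all file, schema and table names hypothetical and the `initialisation`/`data import` wrapper keys assumed:

```yaml
initialisation:
  data import:
    schema: reference_data          # contextual reference data
    csv:
      - table: nasdaq_companies     # hypothetical table name
        file: data/nasdaq_companies.csv
        header: true
        index:
          fields:
            - symbol                # hypothetical column name
          unique: true
    parquet:
      - schema: metrics             # pre-calculated metrics data
        table: company_metrics      # hypothetical table name
        files:
          - data/company_metrics.parquet
```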