💵 Banking
The objective of this project is to demonstrate Joule's real-time analytics capabilities using time- and event-based window aggregate functions. Six use case files have been created that demonstrate various platform capabilities.
External market data events are generated by a basic simulator primed with a static Nasdaq end-of-day market data file and published to a Kafka quotes topic. Events are consumed and processed by Joule, with the resulting analytics published to an InfluxDB time series bucket. Thereafter Grafana visualisations, or any other suitable tool, can be applied.
1. Sliding windows (time- and event-based)
2. Group-by aggregate functions
3. User-defined analytics using the SDK (Bollinger Bands; see the sketch after this list)
4. Event filtering
5. Event projection
6. InfluxDB transport
7. Kafka publisher and consumer transports
8. Kafka event transformers
9. SQLTap that captures processed events within the stream
10. RestAPI to access raw and processed events
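As an illustration of the Bollinger Bands analytic referenced in item 3, the sketch below shows the standard calculation in plain Java: a simple moving average over the last n prices with bands at plus/minus k standard deviations (commonly n = 20, k = 2). This is a standalone sketch of the maths only; the class and method names are illustrative and the SDK-based analytic shipped with the example may be structured quite differently.

import java.util.Arrays;

public final class BollingerBands {

    /** Result holder for one band calculation. */
    public record Bands(double middle, double upper, double lower) { }

    /**
     * Computes Bollinger Bands over the last {@code period} prices:
     * middle = simple moving average, upper/lower = middle +/- k * stddev.
     */
    public static Bands compute(double[] prices, int period, double k) {
        double[] window = Arrays.copyOfRange(prices, prices.length - period, prices.length);
        double mean = Arrays.stream(window).average().orElse(0.0);
        double variance = Arrays.stream(window)
                .map(p -> (p - mean) * (p - mean))
                .average()
                .orElse(0.0);
        double stdDev = Math.sqrt(variance);
        return new Bands(mean, mean + k * stdDev, mean - k * stdDev);
    }
}

A typical call would be compute(midPrices, 20, 2.0) over a rolling series of mid prices.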
Build and run
By building the example locally you will have the flexibility to create new use cases, analytics, processors and transports, and to contribute back to the project. Build the project in the root directory by following the instructions below.
First, build the project:
gradle clean build && cd build/distributions \
  && unzip fractalworks-banking-example-1.0.zip \
  && cd fractalworks-banking-example-1.0 \
  && chmod +x bin/*.sh
Then start the Joule processor:
./bin/startJoule.sh
Events will be generated, published, consumed, processed and finally written to InfluxDB, ready to be visualised in Grafana.
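To confirm that analytics are landing in the time series bucket, a query against InfluxDB can be run once the processor has been started for a short while. The sketch below uses the influxdb-client-java QueryApi; the URL, token, organisation and bucket name are placeholders, not values defined by this example, so substitute whatever your local InfluxDB instance and the InfluxDB transport configuration use.

import java.util.List;
import com.influxdb.client.InfluxDBClient;
import com.influxdb.client.InfluxDBClientFactory;
import com.influxdb.query.FluxRecord;
import com.influxdb.query.FluxTable;

public class VerifyAnalytics {
    public static void main(String[] args) {
        // Placeholder connection details - replace with your local InfluxDB settings.
        InfluxDBClient client = InfluxDBClientFactory.create(
                "http://localhost:8086", "my-token".toCharArray(), "my-org");

        // Pull the last hour of processed analytics from the (assumed) bucket.
        String flux = "from(bucket: \"quote-analytics\") |> range(start: -1h)";
        List<FluxTable> tables = client.getQueryApi().query(flux);

        for (FluxTable table : tables) {
            for (FluxRecord record : table.getRecords()) {
                System.out.println(record.getTime() + " " + record.getField()
                        + " = " + record.getValue());
            }
        }
        client.close();
    }
}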
Simulated stock quote events are created using the StockQuoteSimulator driver. A total of 7514 stock quotes are generated and published on each processing cycle; update the StockQuoteSimulator class for additional fields and processing requirements. The conf/csv/nasdaq.csv file is used to prime the simulator. The following fields are available on each quote event (a publishing sketch follows the list):
- ingestTime
- eventTime
- symbol
- bid
- ask
- volume
- volatility
- date
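To give a feel for what the simulator produces, the sketch below models a quote event with the fields listed above and publishes it to a Kafka topic with the standard Kafka producer client. The broker address, the "quotes" topic name, the field types and the delimited serialisation format are assumptions for illustration only; the StockQuoteSimulator class and the Kafka publisher configuration remain the authoritative source.

import java.time.Instant;
import java.time.LocalDate;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class QuotePublisher {

    /** Quote event shape mirroring the fields listed above; types are assumed. */
    public record StockQuote(long ingestTime, long eventTime, String symbol,
                             double bid, double ask, long volume,
                             double volatility, LocalDate date) { }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        StockQuote quote = new StockQuote(
                Instant.now().toEpochMilli(), Instant.now().toEpochMilli(),
                "AAPL", 189.95, 190.05, 1200, 0.18, LocalDate.now());

        // Simple comma-delimited payload, keyed by symbol so quotes for one
        // instrument stay on one partition.
        String value = String.join(",",
                String.valueOf(quote.ingestTime()), String.valueOf(quote.eventTime()),
                quote.symbol(), String.valueOf(quote.bid()), String.valueOf(quote.ask()),
                String.valueOf(quote.volume()), String.valueOf(quote.volatility()),
                quote.date().toString());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("quotes", quote.symbol(), value));
        }
    }
}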
The Kafka configuration can be found in conf/simulator/kafkapublisher.yaml.
The use case is defined using the Joule low-code approach:
stream:
  name: standardQuoteAnalyticsStream
  enabled: true
  validFrom: 2020-01-01
  validTo: 2025-12-31
  eventTimeType: EVENT_TIME
  sources: [ nasdaq_quotes_stream ]
  processingUnit:
    pipeline:
      - timeWindow:
          emittingEventType: coreQuoteAnalytics
          aggregateFunctions:
            MIN: [ ask ]
            MAX: [ bid ]
            SUM: [ volume ]
            MEAN: [ volatility ]
          policy:
            type: slidingTime
            slideSize: 500
            windowSize: 2500
  select:
    expression: "symbol, ask_MIN, bid_MAX, volume_SUM, volatility_MEAN"
  groupBy:
    - symbol
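For reference, the timeWindow above groups quotes by symbol and, assuming slideSize and windowSize are expressed in milliseconds, re-evaluates a 2500 ms window every 500 ms, emitting a coreQuoteAnalytics event per symbol containing the minimum ask, maximum bid, summed volume and mean volatility. The plain-Java sketch below illustrates that per-window aggregation only; it is not how the Joule engine implements windowing, and the QuoteAggregator class and its input shape are hypothetical.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class QuoteAggregator {

    public record Quote(String symbol, double bid, double ask, long volume, double volatility) { }

    public record QuoteAnalytics(String symbol, double askMin, double bidMax,
                                 long volumeSum, double volatilityMean) { }

    /** Aggregates one window's worth of quotes, grouped by symbol. */
    public static Map<String, QuoteAnalytics> aggregate(List<Quote> window) {
        return window.stream()
                .collect(Collectors.groupingBy(Quote::symbol))
                .entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey, e -> {
                    List<Quote> quotes = e.getValue();
                    double askMin = quotes.stream().mapToDouble(Quote::ask).min().orElse(Double.NaN);
                    double bidMax = quotes.stream().mapToDouble(Quote::bid).max().orElse(Double.NaN);
                    long volumeSum = quotes.stream().mapToLong(Quote::volume).sum();
                    double volatilityMean = quotes.stream().mapToDouble(Quote::volatility)
                            .average().orElse(Double.NaN);
                    return new QuoteAnalytics(e.getKey(), askMin, bidMax, volumeSum, volatilityMean);
                }));
    }
}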