
Alooma Architecture Overview

Abstract

An overview of the Alooma architecture, including descriptions of the data flow from input, through the Alooma service, to output.

(Diagram: the Alooma data flow; the numbered stages correspond to the sections below: data sources/inputs (1), the Alooma service (2 and 3), optional long-term retention (4), and the target/output (5).)

The basic data flow in Alooma starts with the data source, or "input" (on the left in the diagram above). Data goes through the Alooma Service (transformations, encryption, etc.) into Alooma's staging, and from there, gets loaded into the customer's target data destination, or "output".

Each of these stages is described in more detail below.

Data sources/inputs (1)

Data sources (called inputs in the user interface) are treated differently based on their type. Generally, data from databases, files, applications, repositories, etc. is pulled on a regular schedule, with the frequency depending on how the input is configured.

Data streams/events are pushed to Alooma as they are generated.
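
For example, a stream-style source might push each event over HTTPS as it is generated. The sketch below is a minimal illustration in Python; the endpoint URL, token, and event fields are all placeholders, not a real Alooma endpoint (the actual URL for a push-style input comes from that input's configuration in the user interface):

    import requests

    # Placeholder endpoint and token, taken from the input's configuration.
    INPUT_URL = "https://inputs.alooma.example/rest/<your-input-token>"

    # Hypothetical event payload.
    event = {
        "event_type": "user_signup",
        "user_id": 12345,
        "timestamp": "2019-01-01T00:00:00Z",
    }

    # Push the event as it is generated; the HTTPS (SSL) connection keeps
    # it encrypted in transit.
    response = requests.post(INPUT_URL, json=event)
    response.raise_for_status()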

All connections are SSL-encrypted, and Alooma supports additional secure connectivity via SSH/Reverse-SSH tunnels, VPC peering, and site-to-site VPN.
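
As an illustration of the SSH-tunnel option, a pull input can reach a database that is not publicly exposed by forwarding its port through an encrypted SSH connection. A minimal sketch using the Python sshtunnel package; all hosts, paths, and ports are placeholders, and in practice Alooma manages its side of the tunnel:

    from sshtunnel import SSHTunnelForwarder

    # Placeholder hosts and credentials.
    tunnel = SSHTunnelForwarder(
        ("bastion.example.com", 22),                # publicly reachable SSH host
        ssh_username="alooma",
        ssh_pkey="/path/to/private_key",
        remote_bind_address=("db.internal", 5432),  # private database host/port
        local_bind_address=("127.0.0.1", 5432),
    )

    tunnel.start()
    # The private database is now reachable on 127.0.0.1:5432, with all
    # traffic carried inside the encrypted SSH connection.
    tunnel.stop()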

Alooma service (2 and 3)

Within the Alooma service, data is transformed and prepared for loading into the target (output) before being sent to Alooma's staging bucket. While in staging, data is encrypted using a customer-specific key.
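
Transformations are written as Python code in Alooma's Code Engine: a transform function receives each event as a dict and returns the (possibly modified) event, or None to drop it. A minimal sketch, with hypothetical field names:

    def transform(event):
        # Drop internal test traffic (hypothetical field and value).
        if event.get("env") == "test":
            return None

        # Normalize a field before it is loaded into the output
        # (hypothetical field).
        if "email" in event:
            event["email"] = event["email"].strip().lower()

        # Returned events continue on to staging and then to the target.
        return event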

(Optional) Long-term data retention (4)

For longer-term data retention, customers can configure their own (additional) S3 bucket, where Alooma can optionally store the raw data of all events automatically. Events are stored exactly as they are received, before being processed by Alooma.
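
Because the retention bucket belongs to the customer, the raw events can be read back with standard S3 tooling. A minimal sketch using boto3; the bucket name and key prefix are hypothetical:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and prefix; events are stored as received,
    # before any processing by Alooma.
    BUCKET = "my-company-alooma-retention"
    PREFIX = "raw-events/2019/01/01/"

    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    for obj in resp.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        print(obj["Key"], len(body), "bytes")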

Target/output (5)

The data is loaded from Alooma's staging into the target data destination (called an output in the user interface): the customer's cloud data warehouse (Azure SQL Data Warehouse, Google BigQuery, Amazon Redshift, Snowflake, etc.) or another kind of storage (S3, etc.). The data remains SSL-encrypted in transit.
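
For a sense of what this load step looks like, warehouses such as Amazon Redshift bulk-load staged files from S3 with a COPY statement. The sketch below is illustrative only; Alooma performs the load internally, and the cluster, table, bucket, and IAM role shown here are placeholders:

    import psycopg2

    # Placeholder connection details for the target warehouse.
    conn = psycopg2.connect(
        host="example-cluster.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="loader",
        password="...",
    )

    # Bulk-load the staged files from S3 into the target table.
    copy_sql = """
        COPY events
        FROM 's3://example-staging-bucket/events/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleCopyRole'
        FORMAT AS JSON 'auto';
    """
    with conn, conn.cursor() as cur:
        cur.execute(copy_sql)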
