Build data pipelines in minutes

Any input, any output, any scale. Real-time, with zero data loss.
We’ll handle the hassle for you.

Load data to your own data warehouse

Integrate all your data sources into Amazon's petabyte-scale data warehouse. Stream your data to Redshift tables to derive real-time insights. Alooma takes care of all the complexity so you don't have to.

Real-time ETL to Amazon Redshift

Any input

Import your data from any source. Alooma natively supports dozens of the most popular data sources: MySQL, Postgres, MongoDB, iOS, Android, Salesforce, REST, Segment, Mixpanel, Localytics, and many more. New data sources are added weekly.
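
And if your source is not on the list, the REST input accepts events over plain HTTP. Here is a minimal sketch, assuming a token-authenticated JSON endpoint; the URL, token, and header format are placeholders for illustration, not Alooma's documented API:

```python
import requests

# Placeholder endpoint and token; substitute the URL and credentials
# from your own input configuration.
ALOOMA_INPUT_URL = "https://inputs.example-company.alooma.io/rest"
API_TOKEN = "YOUR_TOKEN"

# One JSON event from a custom source.
event = {
    "event_type": "signup",
    "user_id": 12345,
    "plan": "pro",
}

response = requests.post(
    ALOOMA_INPUT_URL,
    json=event,
    headers={"Authorization": "Bearer " + API_TOKEN},
    timeout=10,
)
response.raise_for_status()
```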


Quick integration

Within minutes of starting to use Alooma, you will have a working real-time data pipeline to Redshift. Our cloud-based solution means a minimal setup process, and a friendly user interface will guide you along the way.
Quick integration to Amazon Redshift

Manage schema changes

Get notified when a new field appears in your data, or when an existing field arrives in a different format. Your pipeline will not break and you will not lose data. Events with the new schema are kept for you until you decide how you would like them loaded into Redshift.
Handle schema changes in your ETL
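
As a sketch of what a format change looks like in practice, suppose a timestamp that used to arrive as an ISO-8601 string starts arriving as a Unix epoch number. The `transform(event)` convention and the field name below are illustrative; normalizing the field in-stream keeps the existing Redshift mapping valid:

```python
from datetime import datetime, timezone

def transform(event):
    # Hypothetical fix for a field whose format changed upstream:
    # 'created_at' used to be an ISO-8601 string and now sometimes
    # arrives as a Unix epoch number. Coercing both to one format lets
    # the held events load with the existing mapping.
    created = event.get("created_at")
    if isinstance(created, (int, float)):
        event["created_at"] = datetime.fromtimestamp(
            created, tz=timezone.utc
        ).isoformat()
    return event
```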

Data Protection Guarantee

Alooma is built on a highly available, fault-tolerant distributed architecture. We make sure you will not lose any data or end up with duplicates, even in the event of failure. Events that experience errors are kept for you on a separate queue. Our event sampler and in-stream code let you fix any errors and load your data to Redshift without data loss or duplication.
Distributed architecture
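
For example, an event that fails to load because a numeric column received a formatted string can be repaired in-stream and replayed. This is a sketch, reusing the dict-in/dict-out `transform(event)` convention from above; the field name and format are hypothetical:

```python
def transform(event):
    # Hypothetical repair for events held on the error queue: 'amount'
    # sometimes arrives as a string like "$1,234.50", which cannot be
    # loaded into a numeric Redshift column. Cleaning it in-stream lets
    # the held events load without data loss or duplication.
    amount = event.get("amount")
    if isinstance(amount, str):
        event["amount"] = float(amount.replace("$", "").replace(",", ""))
    return event
```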

Code Engine

With Alooma's built-in Python interpreter, it's easy to manipulate and filter your data stream, correct errors, collect metrics, and raise notifications. Enrich your data with our built-in SDKs (e.g. geolocation) or with your own. Your code runs in-stream, so your data reaches Redshift exactly as you intended, in real time. No more lengthy, high-latency data preparation jobs.
In-stream Python code provides flexibility for your ETL to Amazon Redshift
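
Here is a sketch of what in-stream code might look like. The `transform(event)` signature, the field names, and the `lookup_country` helper are illustrative stand-ins, not Alooma's documented API:

```python
def lookup_country(ip):
    # Placeholder for a geolocation enrichment; a real implementation
    # would consult a geo-IP database or the built-in lookup.
    return "US" if ip.startswith("8.") else "unknown"

def transform(event):
    # Filter: drop internal test traffic before it reaches Redshift.
    if event.get("environment") == "test":
        return None

    # Enrich: derive a country from the client IP.
    ip = event.get("client_ip")
    if ip:
        event["country"] = lookup_country(ip)

    # Correct: coerce a commonly mistyped numeric field.
    if isinstance(event.get("session_length"), str):
        event["session_length"] = int(event["session_length"])

    return event
```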

Mapping made easy

Define how you would like your data to be loaded to Redshift. Map every field of your structured or semi-structured data to a Redshift table column. Let Alooma auto-map your data for you, or use our data exploration tools to guide you to the optimal mapping.
Map semi-structured (JSON) data to table columns
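
For instance, a nested JSON event can be flattened onto typed columns. The dotted paths and column definitions below are an illustrative notation, not Alooma's actual mapping format:

```python
# A semi-structured event as it arrives...
event = {
    "user": {"id": 42, "plan": "pro"},
    "page": "/pricing",
    "ts": "2016-01-15T09:30:00Z",
}

# ...and one way its fields could map onto Redshift columns
# (hypothetical notation: JSON path -> (column name, column type)).
mapping = {
    "user.id":   ("user_id",   "BIGINT"),
    "user.plan": ("user_plan", "VARCHAR(16)"),
    "page":      ("page",      "VARCHAR(256)"),
    "ts":        ("ts",        "TIMESTAMP"),
}
```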

Live monitoring

Gain visibility into your data pipeline: track incoming throughput, latency, loading rates, and error rates. Web and email notifications provide you with actionable information about any warnings or errors in your pipeline.
Live monitoring
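
The same numbers could also be pulled programmatically. A sketch, assuming a token-authenticated metrics endpoint; the URL, parameter names, and response shape are placeholders rather than a documented API:

```python
import requests

# Placeholder monitoring endpoint for your deployment.
METRICS_URL = "https://app.example-company.alooma.io/api/metrics"

resp = requests.get(
    METRICS_URL,
    params={"metrics": "incoming_events,latency,loading_rate,error_rate"},
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    timeout=10,
)
resp.raise_for_status()
for name, value in resp.json().items():
    print(name, value)
```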

Integrate all your data in minutes!

Start now