Documentation

Welcome to Alooma!

We're excited you're here!

Alooma is a modern, cloud-based data pipeline. Use it to import data from any of a wide range of inputs, map and transform the data as necessary, and then load it into your output, or "data destination" (a data warehouse such as Amazon Redshift, Google BigQuery, Microsoft Azure, or Snowflake, or a cloud-based data store such as Amazon S3), for analysis.

Getting started

Alooma makes it easy to get data from your data source to your data destination.

The process looks like this:

  1. Create an output ("Where will your data live after extraction?").

  2. Create an input ("What is your data source or stream?").

  3. That's it! Your data starts to flow from your input(s) to your output right away. (A rough scripted version of the same flow appears below.)
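
If you'd rather script the setup than click through the UI, the sketch below shows the general shape of those steps against a REST-style API. The host, endpoint paths, payload fields, and input/output types here are illustrative assumptions, not the documented Alooma API; check your account's API documentation for the real values.

    import requests

    # Hypothetical host and token, for illustration only.
    ALOOMA_HOST = "https://app.alooma.com"

    session = requests.Session()
    session.headers.update({"Authorization": "Bearer <your-api-token>"})

    # Step 1: create an output -- where your data will live after extraction.
    output_config = {
        "type": "redshift",                      # hypothetical output type
        "hostname": "example.redshift.amazonaws.com",
        "port": 5439,
        "database": "analytics",
        "username": "alooma_user",
    }
    session.post(f"{ALOOMA_HOST}/rest/output", json=output_config)

    # Step 2: create an input -- your data source or stream.
    input_config = {
        "type": "postgresql",                    # hypothetical input type
        "name": "orders_db",
        "configuration": {"hostname": "db.example.com", "database": "orders"},
    }
    session.post(f"{ALOOMA_HOST}/rest/inputs", json=input_config)

    # Step 3: nothing left to do -- events begin flowing from the input
    # to the output, with schema mapping applied automatically.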

Schema mapping is done automatically (though you can easily customize it via the Mapper). If you want to transform your data before it's loaded, use our powerful Code Engine. And if anything unexpected happens, don't worry: any events that fail to load are captured in the Restream Queue, where you can fix and restream them.
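
To give a taste of the Code Engine: a transform is an ordinary Python function that receives each event as a dict and returns the (possibly modified) event, or None to discard it. The field names in this sketch are hypothetical; only the transform(event) shape reflects what the Code Engine expects.

    def transform(event):
        # Each event arrives as a Python dict. Return it to keep it,
        # modify it in place to change it, or return None to drop it.

        # Drop internal test events ('is_test' is a hypothetical field).
        if event.get('is_test'):
            return None

        # Normalize a field before it's loaded to the output.
        if 'email' in event:
            event['email'] = event['email'].strip().lower()

        return event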

Where to go next

Take a quick tour.

See an overview of Alooma's architecture.

Learn more about creating an output.

Learn more about creating an input.

Learn more about basic Alooma concepts.

More questions? Take a look at the FAQ.
