Using Alooma to Migrate Oracle Data to the Cloud

Rami Amar  •  10 min read  • 27 Dec 2017

The Oracle LogMiner is a feature of the Oracle data warehouse that enables ordinary SQL clients to read DML and DDL statements directly from the Oracle transaction logs, also known as the redo logs. It is a powerful mechanism, which we described extensively in part 1 of this post. We studied the LogMiner thoroughly before building our integration, which leverages it to synchronize an Oracle data warehouse with a cloud data warehouse. Creating a reliable, real-time synchronization between your on-prem and cloud data warehouses is a crucial step that lays the foundation for migrating an enterprise’s data infrastructure to the cloud. Such synchronization enables the enterprise to migrate its data applications one by one, with minimal interruption of service to its employees. In this post we will describe how Alooma’s Oracle integration works and how it can be used for reliable, real-time synchronization with a cloud data warehouse like Redshift, BigQuery, or Snowflake.


Let’s do a short recap of some key terms necessary to use the Oracle LogMiner:

  • Redo Logs, or archive logs - these are the files Oracle writes to record all changes to the data warehouse
  • SCN - the system change number, or the internal Oracle tick count, used for keeping order of transactions
  • DBMS_LOGMNR.START_LOGMNR - the stored procedure required to initialize a LogMiner session
  • V$LOGMNR_CONTENTS - the view that enables reading the redo logs with standard SQL. Some of its columns include:
    • SCN - the system change number at which the change was made
    • CSCN - the SCN indicating when the change was committed (only available when reading committed transactions)
    • RS_ID, SSN - the record set identifier and the SQL sequence number, which together, uniquely identify a recorded change
    • SQL_REDO - the actual DML statement which can be used to re-apply the recorded changes (insert/update/delete)
  • The LogMiner Dictionary - translates table and column IDs into table and column names
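
To make these terms concrete, here is a minimal SQL sketch of opening a LogMiner session and reading from it. The log file path, schema name, and SCN values are illustrative, and your installation may require different options:

    -- Register a redo log file to mine (path is illustrative); repeat per file,
    -- or rely on automatic log discovery where your Oracle version supports it
    BEGIN
      DBMS_LOGMNR.ADD_LOGFILE(
        LOGFILENAME => '/u01/app/oracle/oradata/ORCL/redo01.log',
        OPTIONS     => DBMS_LOGMNR.NEW);
    END;
    /

    -- Open a LogMiner session over an SCN range, resolving table and column IDs
    -- with the online catalog as the dictionary
    BEGIN
      DBMS_LOGMNR.START_LOGMNR(
        STARTSCN => 2764500,
        ENDSCN   => 2765000,
        OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /

    -- Read the recorded changes with ordinary SQL
    SELECT scn, rs_id, ssn, operation, sql_redo
    FROM   v$logmnr_contents
    WHERE  seg_owner = 'MYSCHEMA'   -- hypothetical schema
    AND    operation IN ('INSERT', 'UPDATE', 'DELETE');

    -- Close the session cleanly to release its resources
    BEGIN
      DBMS_LOGMNR.END_LOGMNR;
    END;
    /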

Alooma’s Oracle connector is composed of two components which run in parallel:

  • LogMiner reader - manages the LogMiner sessions, reads DML statements and breaks them down to record granularity
  • Table Snapshot reader - reads initial snapshots of all tables configured for replication

Here’s how the integration works:

  1. The LogMiner reader starts reading the redo logs from the latest SCN
  2. The Tables Snapshot reader starts reading snapshots of all tables, one by one
  3. For every replicated table, a staging table and a final table are created on the cloud data warehouse
  4. Records from both components are loaded to the staging tables
  5. Periodically the staging tables are consolidated into the final tables

Let’s describe these steps in greater detail.

Reading from the LogMiner

The LogMiner reader starts from the most recent SCN and initiates a LogMiner session limited to a range of SCNs. Iterating over the DML statements, it parses the SQL and converts each record to a separate Alooma event. For every event, the following metadata is included: SCN, timestamp, schema and table names, row_id, and operation (insert/update/delete). When the LogMiner reader finishes iterating over a range of SCNs, it cleanly closes the LogMiner session, stores its own state, and then repeats the process. To be able to continue exactly where it left off, the LogMiner reader stores in its state the last SCN it read, along with the RS_ID and SSN.
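
As a rough illustration of the resume logic (not Alooma’s actual code), the next session can be opened from the persisted SCN, and the RS_ID and SSN can be used to skip the boundary record that was already replicated. The bind variables stand for the stored state, and the range size is arbitrary:

    -- Open the next session from the last persisted SCN (range size is arbitrary)
    BEGIN
      DBMS_LOGMNR.START_LOGMNR(
        STARTSCN => :last_scn,
        ENDSCN   => :last_scn + 10000,
        OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /

    -- Fetch changes with the metadata Alooma attaches to each event, skipping
    -- the record at the boundary SCN that was already emitted last time
    SELECT scn, timestamp, seg_owner, table_name, row_id, operation, sql_redo, rs_id, ssn
    FROM   v$logmnr_contents
    WHERE  operation IN ('INSERT', 'UPDATE', 'DELETE')
    AND    NOT (scn = :last_scn AND rs_id = :last_rs_id AND ssn = :last_ssn);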

Reading initial table snapshots

The initial snapshot, taken by the Table Snapshot reader, is fairly straightforward. It runs a select * query against every table configured for replication to retrieve its entire contents. Each record is converted to an Alooma event, and the following metadata is attached to it: timestamp, schema and table name, row_id, operation, and SCN. The operation metadata field is always “created”, and the SCN is the one captured at the beginning of the snapshot. This SCN enables correct ordering of records during the merge (union, not join) with the events generated by the LogMiner reader. More on that in Step 5: Periodic Consolidation.
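
A minimal sketch of this snapshot step follows; the schema and table names are hypothetical. The SCN is captured once, up front, and then attached to every record read from the table:

    -- Capture the SCN at the start of the snapshot; every snapshot event is
    -- stamped with this value so it orders correctly against LogMiner events
    SELECT current_scn FROM v$database;

    -- Read the full contents of one replicated table; ROWID doubles as the
    -- row_id metadata attached to each event
    SELECT ROWID AS row_id, t.* FROM myschema.orders t;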

Creating tables on the cloud data warehouse

As events are being generated by the two components of the Oracle integration, the Mapper component of the Alooma pipeline analyzes the events and their schema. Behind the scenes, the Mapper periodically wakes up and performs the following procedure:

  1. Check for new or modified Event Types (every table’s schema will be represented by a single Event Type)
  2. Retrieve the original schema from the source (the Oracle data warehouse) and carefully translate it to a schema readable by the target data warehouse
  3. Retrieve the entire schema from the target data warehouse and verify that the new tables or columns do not already exist
  4. Create new tables or add columns to existing tables
    • Since the Oracle integration transmits a change log, the Mapper knows it needs to create (or update) two tables: a staging table, and a final table
  5. Update the mapping, indicating to the processing engine to which tables the new/modified events should be loaded
    • The events from the Oracle integration will be loaded to the staging tables
  6. Trigger the Restream Queue to reprocess all events that were set aside while the above steps were performed
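
To illustrate step 4, here is roughly what the two tables might look like on the target warehouse for a hypothetical source table named orders. The metadata column names and types are illustrative, not Alooma’s exact schema:

    -- Staging table: one row per recorded change, plus replication metadata
    CREATE TABLE orders_log (
        scn          BIGINT,           -- used to order the changes
        changed_at   TIMESTAMP,        -- when the change was recorded
        row_id       VARCHAR(18),      -- Oracle ROWID of the affected record
        operation    VARCHAR(10),      -- insert / update / delete / created
        deleted      BOOLEAN,          -- true when the change is a deletion
        order_id     BIGINT,           -- source columns follow
        customer_id  BIGINT,
        amount       DECIMAL(18,2)
    );

    -- Final table: mirrors the source table's schema
    CREATE TABLE orders (
        order_id     BIGINT,
        customer_id  BIGINT,
        amount       DECIMAL(18,2)
    );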

Loading data to the staging tables

The events generated by the LogMiner reader and the Table Snapshot reader all end up in the same queue (a Kafka topic), and from there they are processed by Alooma’s processing engine. The processing engine performs three basic steps:

  1. User defined processing, written in Python and run by the Code Engine
  2. Format and schema conversions defined by the mapping and run by the Mapper
  3. Loading to the destination tables, as defined by the mapping

As previously mentioned, the events are loaded to staging tables, which have the suffix _log. Depending on the target data warehouse, the loading mechanism may vary. For example, with Redshift and Snowflake, data is first copied to S3 as CSV files and then loaded with a COPY command. With BigQuery, the streaming API is used. In all cases, loading is optimized to handle records rather than SQL statements, which is why the Oracle integration parses the DML statements and breaks them up into individual records. Lastly, data from all tables is loaded in parallel (as much as the data warehouse’s performance permits) to keep overall latency to a minimum.
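
For instance, a Redshift load of one batch of staged records might look roughly like the following. The bucket path and IAM role are placeholders, and the exact COPY options may differ:

    -- Load one batch of gzipped CSV files from S3 into the staging table
    COPY orders_log
    FROM 's3://example-staging-bucket/orders_log/batch_0001/'       -- placeholder path
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'    -- placeholder role
    CSV
    GZIP
    TIMEFORMAT 'auto';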

Periodic consolidation of the staging tables into the final tables

Once records from the Oracle data warehouse are loaded to the staging tables, suffixed with _log, they require one final step in order to form a replica of the original tables: Consolidation. We will briefly describe how the consolidation process works, and you may read more about it in our documentation.

Consolidation is the process of transforming the records in the staging tables, which record changes such as insertions, modifications, and deletions, into an exact copy of the original tables. Consolidating these change records into the actual table requires a complex query, sketched after the list below:

  1. For every table T:
    • Merge the staging table T_log with the final table T, into a temporary table T_temp
      • For records which were modified or deleted, T_temp will have duplicates
  2. Dedup table T_temp in a way that leaves only the most recent version of every record, and discards all of the deleted records
    • The deduplication is done over the primary key of the record
    • The recency of records is denoted by the SCN value
    • Deleted records are distinguished by a boolean column called deleted

These queries are set up automatically and run periodically with a configurable frequency from 15 minutes to 8 hours. Failures are reported with notifications to the dashboard and your email. Whenever schemas change, these queries update automatically, and, on supported data warehouses, they will even reconstruct any view which is dependent on the final tables.

The short video below is a demo of how Alooma replicates Oracle data to Redshift, to be analyzed in a BI tool.


Oracle is a powerful and complicated database, and working closely with its internals comes with several difficulties:

  • Memory constraints - Most Oracle installations run on-premises on real servers. This means the CPU and memory (RAM) are limited and may easily be exhausted. For example, if you open a LogMiner session that spans a large range of SCNs, the database may run out of memory for holding the redo logs. In addition, LogMiner sessions which are not cleanly closed may create a resource leak, leading to failures which are very hard to debug.
  • LogMiner sessions with all transactions vs. only committed transactions - when reading from the redo logs, it is tempting to request only committed transactions. This may result in omitted data, since transactions are not committed in the same order as they are initiated. In other words, while transaction X has been committed, transaction X-1 may still be in progress. This makes it very difficult to keep track of the last transaction that was already replicated.
  • Missing primary keys - in some Oracle installations, tables may be defined without primary keys. This is a data model design choice made by the Oracle DBA (and a valid one; it is not necessarily a bad practice). Fortunately, Oracle generates a row_id for every record in every table. This row_id can be retrieved and used when querying for a single record.
  • Keeping track with RS_ID & SSN - more than one transaction can be registered at the same SCN. Therefore, the SCN alone is not an accurate indicator of which transactions have already been read and replicated from the LogMiner. Instead, the RS_ID and the SSN can be used to distinguish every transaction from the rest, and to order transactions according to execution order.
  • The LogMiner Dictionary - the LogMiner view displays identification numbers for tables and columns, instead of the actual table names and column names. The LogMiner feature enables the Oracle user to populate the IDs with names in two ways: dump the definitions dictionary into the redo logs, or query the data model on the fly. The first option may become outdated as the model changes, because the dictionary dump stays the same. The second option may carry a high performance cost. Our integration implements a third way: it queries the data model every time an unknown ID is encountered and caches the results to improve performance.
  • Partial Supplemental Logging - In our first post, we discussed supplemental logging as a required step for configuring the LogMiner to work. Without supplemental logging, the Oracle data warehouse doesn’t write enough information into the redo logs. Once configured, supplemental logging has two modes: partial and full. Full logging results in DML statements that include whole records, requiring no additional information for replication. Partial logging results in DML statements that contain only the modified fields. When partial logging is configured, it may be necessary to query the original table for the rest of the values in the record. Without expanding the partially logged record into a complete record, it might be impossible to consolidate the changes. The sketch below shows how the two modes are enabled.
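
For reference, this is roughly how supplemental logging is enabled on the Oracle side; the table name is illustrative. Minimal (partial) logging is set at the database level, while full logging can be enabled per table or database-wide:

    -- Minimal supplemental logging at the database level
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

    -- Full logging for a single table: every column of a changed row is written
    -- to the redo logs, so the original row never has to be queried back
    ALTER TABLE myschema.orders ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

    -- Or full logging for the entire database
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;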


Sophisticated platforms require sophisticated solutions. The Oracle data warehouse is an extremely powerful database and is still one of the most widely adopted databases in the world. Being able to replicate data out of Oracle and into the cloud data warehouse of your choice is a capability that can play an important role in keeping your BI, and even your whole organization, nimble and agile.

We hope we managed to explain how the LogMiner works, and also get into the gory details of building a reliable and continuous replication mechanism. Feel free to reach out for a demo, or with any technical questions.


Get your data flowing today!
Contact us for a demo or free trial.