What is Data Profiling?

by Garrett Alley  
4 min read  • 15 Jan 2019

Data profiling is the process of examining data from an existing source and summarizing information about that data. You profile data to determine the accuracy, completeness, and validity of your data. Data profiling can be done for many reasons, but it is most commonly used to assess data quality as part of a larger project. Commonly, data profiling is combined with an ETL (Extract, Transform, and Load) process to move data from one system to another. When done properly, ETL and data profiling can be combined to cleanse, enrich, and move quality data to a target location.

For example, you might want to perform data profiling when migrating from a legacy system to a new system. Data profiling can help identify data quality issues that need to be handled in the code when you move data into your new system. Or, you might want to perform data profiling as you move data to a data warehouse for business analytics. Often when data is moved to a data warehouse, ETL tools are used to move the data. Data profiling can help identify which data quality issues must be fixed in the source, and which can be fixed during the ETL process.

Why profile data?

Data profiling allows you to answer the following questions about your data:

  • Is the data complete? Are there blank or null values?
  • Is the data unique? How many distinct values are there? Is the data duplicated?
  • Are there anomalous patterns in your data? What is the distribution of patterns in your data, and are these the patterns you expect?
  • What range of values exist, and are they expected? What are the maximum, minimum, and average values for given data? Are these the ranges you expect?

Answering these questions helps you ensure that you are maintaining quality data, which — companies are increasingly realizing — is the cornerstone of a thriving business. For more information, see our post on data quality.
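As a rough illustration, here is a minimal sketch of how you might answer several of these questions with Python and pandas. The file name (customers.csv) and its columns are hypothetical, and a real profiling pass would go deeper than this:

```python
import pandas as pd

# Load the data to be profiled (file and column names are hypothetical).
df = pd.read_csv("customers.csv")

for col in df.columns:
    series = df[col]
    print(f"--- {col} ---")
    # Completeness: how many values are blank or null?
    print("null values:", series.isna().sum())
    # Uniqueness: how many distinct values are there?
    print("distinct values:", series.nunique())
    # Range: minimum, maximum, and average for numeric columns.
    if pd.api.types.is_numeric_dtype(series):
        print("min / max / mean:", series.min(), series.max(), series.mean())

# Duplication across whole rows.
print("duplicate rows:", df.duplicated().sum())
```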

How do you profile data?

Data profiling can be performed in different ways, but there are three basic methods used to analyze the data.

Column profiling counts the number of times every value appears within each column in a table. This method helps to uncover the patterns within your data.
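For example, a simple frequency count over each column might look like the following sketch (again using pandas on a hypothetical table):

```python
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical source table

# Count how many times each value appears in every column,
# including nulls, to surface the dominant patterns.
for col in df.columns:
    print(f"--- {col} ---")
    print(df[col].value_counts(dropna=False).head(10))
```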

Cross-column profiling looks across columns to perform key and dependency analysis. Key analysis scans collections of values in a table to locate a potential primary key. Dependency analysis determines the dependent relationships within a data set. Together, these analyses determine the relationships and dependencies within a table.
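A minimal sketch of both analyses, assuming a hypothetical orders.csv table and column names, might look like this: a column is a candidate key if its values are unique and non-null, and one column functionally depends on another if each value of the first maps to at most one value of the second.

```python
import pandas as pd

df = pd.read_csv("orders.csv")  # hypothetical table

# Key analysis: a column is a candidate primary key if its values
# are non-null and unique across all rows.
for col in df.columns:
    if df[col].notna().all() and df[col].nunique() == len(df):
        print(f"{col} is a candidate primary key")

# Dependency analysis: column b depends on column a if every value
# of a maps to at most one value of b (a functional dependency).
def depends_on(df, a, b):
    return (df.groupby(a)[b].nunique(dropna=False) <= 1).all()

print(depends_on(df, "zip_code", "city"))  # hypothetical columns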

Cross-table profiling looks across tables to identify potential foreign keys. It also attempts to determine the similarities and differences in syntax and data types between tables to determine which data might be redundant and which could be mapped together.
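As a sketch of the idea, a column in one table is a potential foreign key if all of its values appear in a candidate key column of another table. The table and column names below (orders, customers, customer_id, id) are hypothetical:

```python
import pandas as pd

orders = pd.read_csv("orders.csv")        # hypothetical tables
customers = pd.read_csv("customers.csv")

# A column is a potential foreign key if all of its non-null values
# exist in a candidate key column of another table.
child = orders["customer_id"].dropna()
parent = customers["id"]

if child.isin(parent).all():
    print("orders.customer_id looks like a foreign key to customers.id")

# Comparing data types can also reveal columns that could be mapped together.
print(orders["customer_id"].dtype, customers["id"].dtype)
```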

Rule validation is sometimes considered the final step in data profiling. This is a proactive step of adding rules that check for the correctness and integrity of the data that is entered into the system.
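In practice, such rules are often just named boolean checks run against the data. Here is a minimal sketch with made-up rules and column names (email, age, signup_date); a real rule set would come from your own business requirements:

```python
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical table

# Each rule is a name plus a boolean check over the whole table.
rules = {
    "email is present": df["email"].notna().all(),
    "age is within a plausible range": df["age"].between(0, 120).all(),
    "signup_date is not in the future": (
        pd.to_datetime(df["signup_date"]) <= pd.Timestamp.now()
    ).all(),
}

for name, passed in rules.items():
    print(("PASS" if passed else "FAIL"), "-", name)
```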

These different methods may be performed manually by an analyst, or they may be performed by a service that can automate these queries.

Data profiling challenges

Data profiling is often difficult due to the sheer volume of data you’ll need to profile. This is especially true if you are looking at a legacy system. A legacy system might have years of older data with thousands of errors. Experts recommend that you segment your data as a part of your data profiling process so that you can see the forest for the trees.

If you manually perform your data profiling, you’ll need an expert to run numerous queries and sift through the results to gain meaningful insights about your data, which can eat up precious resources. In addition, you will likely only be able to check a subset of your overall data because it is too time-consuming to go through the entire data set.

How Alooma can help

If you are performing data profiling on a large data source, consider coupling it with a tool like Alooma to help streamline and automate the process of cleansing your data.

Alooma is a modern ETL tool that can help automate cleansing and transforming data before moving it to a target store. As a part of the assessment of your data, you can identify which errors can be fixed at the source, and which errors Alooma can repair while the data is in the pipeline.

Alooma can help you plan. Once you decide what data you want to profile and move, our data experts can help you plan, execute, and maintain your data pipeline.

Alooma is secure. Alooma specializes in securely moving your data. Alooma encrypts data in motion and at rest, and is proudly 100% SOC 2 Type II, ISO27001, HIPAA, and GDPR compliant.

Are you ready to see how Alooma can help you profile and clean your data? Contact us today!
