Hundreds of thousands of organizations build data integration pipelines to extract and transform data. They establish data quality rules to ensure the extracted data is of high quality for accurate business decisions. These rules assess the data based on fixed criteria reflecting current business states. However, when the business environment changes, data properties shift, rendering these fixed criteria outdated and causing poor data quality.
For example, a data engineer at a retail company established a rule that validates daily sales must exceed a 1-million-dollar threshold. After a few months, daily sales surpassed 2 million dollars, rendering the threshold obsolete. The data engineer couldn't update the rules to reflect the latest thresholds due to lack of notification and the effort required to manually analyze and update the rule. Later in the month, business users noticed a 25% drop in their sales. After hours of investigation, the data engineers discovered that an extract, transform, and load (ETL) pipeline responsible for extracting data from some stores had failed without producing errors. The rule with outdated thresholds continued to operate successfully without detecting this issue. The ordering system that used the sales data placed incorrect orders, causing low inventory for future weeks. What if the data engineer had the ability to set up dynamic thresholds that automatically adjusted as business properties changed?
We're excited to talk about how to use dynamic rules, a new capability of AWS Glue Data Quality. Now, you can define dynamic rules and not worry about updating static rules on a regular basis to adapt to varying data trends. This feature allows you to author dynamic rules that compare the current metrics produced by your rules with their historical values. These historical comparisons are enabled by using the last(k) operator in expressions. For example, instead of writing a static rule like RowCount > 1000, which might become obsolete as data volume grows over time, you can replace it with a dynamic rule like RowCount > min(last(3)). This dynamic rule will succeed when the number of rows in the current run is greater than the minimum row count from the most recent three runs for the same dataset.
This is Part 7 of a seven-part series of posts that explain how AWS Glue Data Quality works. Check out the other posts in the series:
Earlier posts explain how to author static data quality rules. In this post, we show how to create an AWS Glue job that measures and monitors the data quality of a data pipeline using dynamic rules. We also show how to take action based on the data quality results.
Solution overview
Let's consider an example data quality pipeline where a data engineer ingests data from a raw zone and loads it into a curated zone in a data lake. The data engineer is tasked with not only extracting, transforming, and loading data, but also identifying anomalies compared against data quality statistics from historical runs.
In this post, you will learn how to author dynamic rules in your AWS Glue job so that you can take appropriate actions based on the outcome.
The data used in this post is sourced from NYC yellow taxi trip data. The yellow taxi trip records include fields capturing pickup and dropoff dates and times, pickup and dropoff locations, trip distances, itemized fares, rate types, payment types, and driver-reported passenger counts. The following screenshot shows an example of the data.
Set up resources with AWS CloudFormation
This post includes an AWS CloudFormation template for a quick setup. You can review and customize it to suit your needs.
The CloudFormation template creates the following resources:
- An Amazon Simple Storage Service (Amazon S3) bucket (gluedataqualitydynamicrules-*)
- An AWS Lambda function that creates the following folder structure within the S3 bucket:
raw-src/
landing/nytaxi/
processed/nytaxi/
dqresults/nytaxi/
- AWS Identity and Access Management (IAM) users, roles, and policies. The IAM role GlueDataQuality-* has AWS Glue run permissions as well as read and write permissions on the S3 bucket.
To create your resources, complete the following steps:
- Sign in to the AWS CloudFormation console in the us-east-1 Region.
- Choose Launch Stack:
- Select I acknowledge that AWS CloudFormation might create IAM resources.
- Choose Create stack and wait for the stack creation to complete.
Upload sample data
- Download the dataset to your local machine.
- Unzip the file and extract the Parquet files into a local folder.
- Upload the Parquet files under the prefix raw-src/ in the S3 bucket (gluedataqualitydynamicrules-*).
Implement the solution
To start configuring your solution, complete the following steps:
- On the AWS Glue Studio console, choose ETL jobs in the navigation pane and choose Visual ETL.
- Navigate to the Job details tab to configure the job.
- For Name, enter GlueDataQualityDynamicRules.
- For IAM Role, choose the role starting with GlueDataQuality-*.
- For Job bookmark, choose Enable.
This allows you to run the job incrementally. To learn more about job bookmarks, refer to Tracking processed data using job bookmarks.
- Leave all the other settings at their default values.
- Choose Save.
- After the job is saved, navigate to the Visual tab, and on the Sources menu, choose Amazon S3.
- In the Data source properties – S3 pane, for S3 source type, select S3 location.
- Choose Browse S3 and navigate to the prefix /landing/nytaxi/ in the S3 bucket starting with gluedataqualitydynamicrules-*.
- For Data format, choose Parquet and choose Infer schema.
- On the Transforms menu, choose Evaluate Data Quality.
You now implement validation logic in your process to identify potential data quality problems originating from the source data.
- To accomplish this, specify the following DQDL rules on the Ruleset editor tab (an example ruleset is shown after these steps).
- Select Original data to output the original input data from the source and add a new node below the Evaluate Data Quality node.
- Choose Add new columns to indicate data quality errors to add four new columns to the output schema.
- Select Data quality results to capture the status of each configured rule and add a new node below the Evaluate Data Quality node.
- With the rowLevelOutcomes node selected, choose Amazon S3 on the Targets menu.
- Configure the S3 target location to /processed/nytaxi/ under the bucket name starting with gluedataqualitydynamicrules-*, and set the output format to Parquet and compression type to Snappy.
- With the ruleOutcomes node selected, choose Amazon S3 on the Targets menu.
- Configure the S3 target location to /dqresults/ under the bucket name starting with gluedataqualitydynamicrules-*.
- Set the output format to Parquet and compression type to Snappy.
- Choose Save.
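For reference, the following is an example ruleset matching the rules explained in the next section. Treat it as a sketch reconstructed from those descriptions: the dynamic expressions follow DQDL syntax, but verify the CustomSql entry in particular against the DQDL reference before using it.

Rules = [
    CustomSql "select passenger_count from primary where passenger_count > 0" with threshold > 0.9,
    Mean "trip_distance" < max(last(3)) * 1.5,
    Sum "total_amount" between min(last(3)) * 0.8 and max(last(3)) * 1.2,
    RowCount between min(last(3)) * 0.9 and max(last(3)) * 1.2,
    Completeness "fare_amount" >= avg(last(3)) * 0.9,
    DistinctValuesCount "ratecodeid" between avg(last(3)) - 1 and avg(last(3)) + 2,
    DistinctValuesCount "pulocationid" > avg(last(3)) * 0.8,
    ColumnCount = max(last(2))
]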
Up to this point, you have set up an AWS Glue job, specified dynamic rules for the pipeline, and configured the target locations for both the original source data and the AWS Glue Data Quality results to be written to Amazon S3. Next, let's examine dynamic rules, how they function, and provide an explanation of each rule we used in our job.
Dynamic rules
You can now author dynamic rules to compare the current metrics produced by your rules with their historical values. These historical comparisons are enabled by using the last() operator in expressions. For example, the rule RowCount > max(last(1)) will succeed when the number of rows in the current run is greater than the most recent prior row count for the same dataset. last() takes an optional natural number argument describing how many prior metrics to consider; last(k) where k >= 1 will reference the last k metrics. The rule has the following conditions:
- If no data points are available, last(k) will return the default value 0.0
- If fewer than k metrics are available, last(k) will return all prior metrics
For example, if the values from previous runs are (5, 3, 2, 1, 4), max(last(3)) will return 5.
AWS Glue supports over 15 types of dynamic rules, providing a robust set of data quality validation capabilities. For more information, refer to Dynamic rules. This section demonstrates several rule types to showcase the functionality and help you apply these features in your own use cases.
CustomSQL
The CustomSQL rule provides the capability to run a custom SQL statement against a dataset and check the return value against a given expression.
The following example rule uses a SQL statement in which you specify a column name in your SELECT statement, against which you compare with some condition to get row-level results. A threshold condition expression defines a threshold of how many records should fail in order for the entire rule to fail. In this example, more than 90% of records should contain a passenger_count greater than 0 for the rule to pass:
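A sketch of what such a rule could look like, assuming the row-level CustomSql form with a fractional threshold (verify the exact syntax against the DQDL CustomSql reference):

CustomSql "select passenger_count from primary where passenger_count > 0" with threshold > 0.9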
Note: CustomSQL also supports dynamic rules. The following is an example of how to use it in your job:
CustomSql "choose depend(*) from main" between min(final(3)) * 0.9 and max(final(3)) * 1.2
Mean
The Mean rule checks whether the mean (average) of all the values in a column matches a given expression.
The following example rule checks that the mean of trip_distance is less than 1.5 times the maximum value of the trip_distance column over the last three runs:
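Based on that description, the rule could be written as follows (a sketch using the trip_distance column from the sample dataset):

Mean "trip_distance" < max(last(3)) * 1.5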
Sum
The Sum rule checks the sum of all the values in a column against a given expression.
The following example rule checks that the sum of total_amount is between 80% of the minimum of the last three runs and 120% of the maximum of the last three runs:
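A sketch of this rule, assuming the total_amount column name from the sample dataset:

Sum "total_amount" between min(last(3)) * 0.8 and max(last(3)) * 1.2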
RowCount
The RowCount rule checks the row count of a dataset against a given expression. In the expression, you can specify the number of rows or a range of rows using operators like > and <.
The following example rule checks if the row count is between 90% of the minimum of the last three runs and 120% of the maximum of the last three runs (excluding the current run). This rule applies to the entire dataset.
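Expressed in DQDL, this check might look like the following sketch:

RowCount between min(last(3)) * 0.9 and max(last(3)) * 1.2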
Completeness
The Completeness rule checks the percentage of complete (non-null) values in a column against a given expression.
The following example rule checks if the completeness of the fare_amount column is greater than or equal to 90% of the average of the last three runs:
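A sketch of this rule based on the description above:

Completeness "fare_amount" >= avg(last(3)) * 0.9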
DistinctValuesCount
The DistinctValuesCount rule checks the number of distinct values in a column against a given expression.
The following example rules check for two conditions (sketches of both rules follow this list):
- If the distinct count for the ratecodeid column is between the average of the last three runs minus 1 and the average of the last three runs plus 2
- If the distinct count for the pulocationid column is greater than 80% of the average of the last three runs
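Sketches of these two rules, assuming the column names from the sample dataset:

DistinctValuesCount "ratecodeid" between avg(last(3)) - 1 and avg(last(3)) + 2
DistinctValuesCount "pulocationid" > avg(last(3)) * 0.8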
ColumnCount
The ColumnCount rule checks the column count of the primary dataset against a given expression. In the expression, you can specify the number of columns or a range of columns using operators like > and <.
The following example rule checks if the column count is equal to the maximum of the last two runs:
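A sketch of this rule:

ColumnCount = max(last(2))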
Run the job
Now that the job setup is complete, we're ready to run it. As previously indicated, dynamic rules are evaluated using the last(k) operator, with k set to 3 in the configured job. This means that data quality rules will be evaluated using metrics from the previous three runs. To assess these rules accurately, the job must be run a minimum of k+1 times, requiring a total of four runs to fully evaluate the dynamic rules. In this example, we simulate an ETL job with data quality rules, starting with an initial run followed by three incremental runs.
First job (initial)
Complete the following steps for the initial run:
- Navigate to the source data files made available under the prefix /raw-src/ in the S3 bucket starting with gluedataqualitydynamicrules-*.
- To simulate the initial run, copy the day one file 20220101.parquet under /raw-src/ to the /landing/nytaxi/ folder in the same S3 bucket.
- On the AWS Glue Studio console, choose ETL jobs in the navigation pane.
- Choose GlueDataQualityDynamicRules under Your jobs to open it.
- Choose Run to run the job.
You can view the job run details on the Runs tab. It will take a few minutes for the job to complete.
- After the job successfully completes, navigate to the Data quality tab.
You can observe the data quality rules, rule status, and evaluated metrics for each rule that you set in the job. The following screenshot shows the results.
The rule details are as follows:
- CustomSql – The rule passes the data quality check because 95% of records have a passenger_count greater than 0, which exceeds the set threshold of 90%.
- Mean – The rule fails due to the absence of previous runs, resulting in a default value of 0.0 when using last(3), with an overall mean of 5.94, which is greater than 0. If no data points are available, last(k) returns the default value of 0.0.
- Sum – The rule fails for the same reason as the Mean rule, with last(3) resulting in a default value of 0.0.
- RowCount – The rule fails for the same reason as the Mean rule, with last(3) resulting in a default value of 0.0.
- Completeness – The rule passes because 100% of records are complete, meaning there are no null values for the fare_amount column.
- DistinctValuesCount “ratecodeid” – The rule fails for the same reason as the Mean rule, with last(3) resulting in a default value of 0.0.
- DistinctValuesCount “pulocationid” – The rule passes because the distinct count of 205 for the pulocationid column is higher than the set threshold, which evaluates to 0.00 because avg(last(3)) * 0.8 results in 0.
- ColumnCount – The rule fails for the same reason as the Mean rule, with last(3) resulting in a default value of 0.0.
Second job (first incremental)
Now that you have successfully completed the initial run and observed the data quality results, you're ready for the first incremental run to process the file from day two. Complete the following steps:
- Navigate to the source data files made available under the prefix /raw-src/ in the S3 bucket starting with gluedataqualitydynamicrules-*.
- To simulate the first incremental run, copy the day two file 20220102.parquet under /raw-src/ to the /landing/nytaxi/ folder in the same S3 bucket.
- On the AWS Glue Studio console, repeat Steps 4–7 from the first (initial) run to run the job and validate the data quality results.
The following screenshot shows the data quality results.
On the second run, all rules passed because each rule's threshold was met:
- CustomSql – The rule passed because 96% of records have a passenger_count greater than 0, exceeding the set threshold of 90%.
- Mean – The rule passed because the mean of 6.21 is less than 9.315 (6.21 * 1.5, meaning the mean from max(last(3)) is 6.21, multiplied by 1.5).
- Sum – The rule passed because the sum of the total amount, 1,329,446.47, is between 80% of the minimum of the last three runs, 1,063,557.176 (1,329,446.47 * 0.8), and 120% of the maximum of the last three runs, 1,595,335.764 (1,329,446.47 * 1.2).
- RowCount – The rule passed because the row count of 58,421 is between 90% of the minimum of the last three runs, 52,578.9 (58,421 * 0.9), and 120% of the maximum of the last three runs, 70,105.2 (58,421 * 1.2).
- Completeness – The rule passed because 100% of the records have non-null values for the fare_amount column, exceeding the set threshold of 90% of the average of the last three runs.
- DistinctValuesCount “ratecodeid” – The rule passed because the distinct count of 8 for the ratecodeid column is between the set threshold of 6, which is the average of the last three runs minus 1 ((7)/1 = 7 – 1), and 9, which is the average of the last three runs plus 2 ((7)/1 = 7 + 2).
- DistinctValuesCount “pulocationid” – The rule passed because the distinct count of 201 for the pulocationid column is greater than 80% of the average of the last three runs, 160.8 (201 * 0.8).
- ColumnCount – The rule passed because the number of columns, 19, is equal to the maximum of the last two runs.
Third job (second incremental)
After the successful completion of the first incremental run, you're ready for the second incremental run to process the file from day three. Complete the following steps:
- Navigate to the source data files under the prefix /raw-src/ in the S3 bucket starting with gluedataqualitydynamicrules-*.
- To simulate the second incremental run, copy the day three file 20220103.parquet under /raw-src/ to the /landing/nytaxi/ folder in the same S3 bucket.
- On the AWS Glue Studio console, repeat Steps 4–7 from the first (initial) run to run the job and validate the data quality results.
The following screenshot shows the data quality results.
Similar to the second run, the data file from the source didn't contain any data quality issues. As a result, all the defined data validation rules were within the set thresholds and passed successfully.
Fourth job (third incremental)
Now that you have successfully completed the first three runs and observed the data quality results, you're ready for the final incremental run for this exercise, to process the file from day four. Complete the following steps:
- Navigate to the source data files under the prefix /raw-src/ in the S3 bucket starting with gluedataqualitydynamicrules-*.
- To simulate the third incremental run, copy the day four file 20220104.parquet under /raw-src/ to the /landing/nytaxi/ folder in the same S3 bucket.
- On the AWS Glue Studio console, repeat Steps 4–7 from the first (initial) run to run the job and validate the data quality results.
The following screenshot shows the data quality results.
In this run, there are some data quality issues from the source that were caught by the AWS Glue job, causing the rules to fail. Let's examine each failed rule to understand the specific data quality issues that were detected:
- CustomSql – The rule failed because only 80% of the records have a passenger_count greater than 0, which is lower than the set threshold of 90%.
- Mean – The rule failed because the mean of trip_distance is 71.74, which is greater than 1.5 times the maximum of the last three runs, 11.565 (7.70 * 1.5).
- Sum – The rule passed because the sum of total_amount is 1,165,023.73, which is between 80% of the minimum of the last three runs, 1,063,557.176 (1,329,446.47 * 0.8), and 120% of the maximum of the last three runs, 1,816,645.464 (1,513,871.22 * 1.2).
- RowCount – The rule failed because the row count of 44,999 is not between 90% of the minimum of the last three runs, 52,578.9 (58,421 * 0.9), and 120% of the maximum of the last three runs, 88,334.1 (72,405 * 1.2).
- Completeness – The rule failed because only 82% of the records have non-null values for the fare_amount column, which is lower than the set threshold of 90% of the average of the last three runs.
- DistinctValuesCount “ratecodeid” – The rule failed because the distinct count of 6 for the ratecodeid column is not between the set threshold of 6.66, which is the average of the last three runs minus 1 ((8+8+7)/3 = 7.66 – 1), and 9.66, which is the average of the last three runs plus 2 ((8+8+7)/3 = 7.66 + 2).
- DistinctValuesCount “pulocationid” – The rule passed because the distinct count of 205 for the pulocationid column is greater than 80% of the average of the last three runs, 165.86 ((216+201+205)/3 = 207.33 * 0.8).
- ColumnCount – The rule passed because the number of columns, 19, is equal to the maximum of the last two runs.
To summarize the outcome of the fourth run: the rules for Sum and DistinctValuesCount for pulocationid, as well as the ColumnCount rule, passed successfully. However, the rules for CustomSql, Mean, RowCount, Completeness, and DistinctValuesCount for ratecodeid failed to meet the criteria.
Upon inspecting the Data Quality evaluation results, further investigation is necessary to identify the root cause of these data quality issues. For instance, in the case of the failed RowCount rule, it's imperative to determine why there was a decrease in record count. This investigation should examine whether the drop aligns with actual business trends or whether it stems from issues within the source system, the data ingestion process, or other factors. Appropriate actions must be taken to rectify these data quality issues or to update the rules to accommodate natural business trends.
You can expand this solution by implementing and configuring alerts and notifications to promptly address any data quality issues that arise. For more details, refer to Set up alerts and orchestrate data quality rules with AWS Glue Data Quality (Part 4 in this series).
Clean up
To clean up your resources, complete the following steps:
- Delete the AWS Glue job.
- Delete the CloudFormation stack.
Conclusion
AWS Glue Data Quality offers a straightforward way to measure and monitor the data quality of your ETL pipeline. In this post, you learned about authoring a Data Quality job with dynamic rules, and how these rules eliminate the need to update static rules as source data evolves in order to keep the rules current. Data Quality dynamic rules enable the detection of potential data quality issues early in the data ingestion process, before downstream propagation into data lakes, warehouses, and analytical engines. By catching errors upfront, organizations can ingest cleaner data and take advantage of advanced data quality capabilities. The rules provide a robust framework to identify anomalies, validate integrity, and ensure accuracy as data enters the analytics pipeline. Overall, AWS Glue dynamic rules empower organizations to take control of data quality at scale and build trust in analytical outputs.
To learn more about AWS Glue Data Quality, refer to the following:
About the Authors
Prasad Nadig is an Analytics Specialist Solutions Architect at AWS. He guides customers in architecting optimal data and analytics platforms that leverage the scalability and agility of the cloud. He is passionate about understanding emerging challenges and guiding customers to build modern solutions. Outside of work, Prasad indulges his creative curiosity through photography, while also staying up to date on the latest technology innovations and trends.
Mahammadali Saheb is a Data Architect at AWS Professional Services, specializing in Data Analytics. He is passionate about helping customers drive business outcomes through data analytics solutions on the AWS Cloud.
Tyler McDaniel is a software development engineer on the AWS Glue team with diverse technical interests, including high-performance computing and optimization, distributed systems, and machine learning operations. He has eight years of experience in software and research roles.
Rahul Sharma is a Senior Software Development Engineer at AWS Glue. He focuses on building distributed systems to support features in AWS Glue. He has a passion for helping customers build data management solutions on the AWS Cloud. In his spare time, he enjoys playing the piano and gardening.
Edward Cho is a Software Development Engineer at AWS Glue. He has contributed to the AWS Glue Data Quality feature as well as the underlying open-source project Deequ.