99% improvement in data availability
83% improvement in data processing times
Faster data processing using iterator components
Challenge
Red Classic Transportation needed more timely data in Snowflake; once-a-day refreshes were no longer enough.
Source systems with limited or non-existent audit and change tracking made it difficult to capture changes (including deletes), and the team wanted to streamline its ELT processes to reduce errors and simplify error handling.
Solution
To identify and extract the necessary tables and columns, Red Classic first had to overcome a unique challenge: traditional high-water mark columns in the source system were inconsistent. The answer was the MSSQL rowversion system column, which they converted to BIGINT so it could be compared and tracked reliably. All tables are appended to a persistent staging area (PSA) layer that can be queried directly without flattening, and load dates added at ingest time form an audit trail. SQL queries built on these elements keep the production layer a mirror image of the source systems, while API calls monitor and terminate runaway processes.
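The approach can be illustrated with two short SQL sketches. These are illustrative only, not Red Classic's actual pipelines: the schema, table, and column names (dbo.orders, order_id, row_ver, psa, prod, load_date) and the @last_high_water_mark parameter are hypothetical.

-- Source side (SQL Server): extract only rows whose rowversion value,
-- cast to BIGINT, exceeds the high-water mark recorded by the previous run.
SELECT
    o.order_id,
    o.customer_id,
    o.status,
    CAST(o.row_ver AS BIGINT) AS row_ver_bigint
FROM dbo.orders AS o
WHERE CAST(o.row_ver AS BIGINT) > @last_high_water_mark;

-- Snowflake side: rebuild the production mirror from the append-only PSA,
-- keeping only the most recent version of each key by load date.
CREATE OR REPLACE TABLE prod.orders AS
SELECT *
FROM psa.orders
QUALIFY ROW_NUMBER() OVER (
    PARTITION BY order_id
    ORDER BY load_date DESC
) = 1;

Because the PSA is append-only, every prior version of a row is retained and the load date provides the audit trail; reflecting deletes in the mirror would require an additional comparison against the source key set, which this sketch omits.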
Results
With the new pipelines in place, Red Classic saw a 99% improvement in data availability and an 83% improvement in data processing times, with iterator components keeping data processing fast.