10 Best Practices for Maintaining Data Pipelines, Part 2

Building data pipelines has become easier than ever. However, ask most data engineers and they'll tell you they spend most of their time maintaining pipelines, and data quality often takes a hit when those pipelines are not regularly maintained. In the world of data engineering, the integrity and efficiency of your data pipelines are essential.
It’s me again — Jean Mandarin, Senior Manager of Data Insights at Matillion. In our previous blog post, we delved into the first five best practices for maintaining robust data pipelines. Now, we're back with the remaining five, ensuring you have a comprehensive guide to bolster your data pipeline strategy.
Best Practice 6: Use Views and Temporary Tables for Intermediate Tables
Cost efficiency is a critical aspect of pipeline maintenance that is often overlooked. While storage costs have significantly decreased over the past few decades, they still represent a tangible expense. In this context, we recommend breaking down complex code into smaller, more manageable code modules. This approach naturally results in the creation of multiple staging tables. These staging tables are typically temporary and become obsolete once their respective operations conclude. Consequently, there's no justification for retaining these tables in your database, incurring unnecessary storage costs for your organization.
Modern data platforms, including Amazon Redshift, Databricks, and Snowflake, provide solutions such as temporary tables, transient tables (in the case of Snowflake), and even views to tackle this issue. Another often-overlooked advantage of using temporary tables or views in your data pipelines is their role in mitigating the risk of a data swamp. A data swamp arises when your data environment becomes cluttered with outdated, irrelevant, undocumented, and inaccurate data. This often occurs when the pipelines responsible for creating permanent tables either no longer exist or have undergone significant changes.
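As a minimal sketch of the idea, the Python snippet below issues Snowflake-style SQL through a standard DB-API cursor to hold intermediate results in a temporary table (dropped automatically at session end) or a view (no storage at all). The schema, table, and column names are purely illustrative, not a Matillion feature.

```python
def build_intermediate_objects(cursor):
    """Keep staging results out of permanent storage (illustrative names)."""
    # Temporary table: lives only for the current session, so it never
    # lingers in the database or accrues long-term storage costs.
    cursor.execute("""
        CREATE TEMPORARY TABLE stg_orders_enriched AS
        SELECT o.order_id, o.customer_id, c.region, o.amount
        FROM raw.orders o
        JOIN raw.customers c ON c.customer_id = o.customer_id
    """)

    # View: nothing is materialized at all; downstream steps query it
    # like a table and always see fresh upstream data.
    cursor.execute("""
        CREATE OR REPLACE VIEW analytics.v_orders_by_region AS
        SELECT c.region, SUM(o.amount) AS total_amount
        FROM raw.orders o
        JOIN raw.customers c ON c.customer_id = o.customer_id
        GROUP BY c.region
    """)
```

Either option keeps intermediate logic out of permanent storage, which is exactly where stale, undocumented tables tend to accumulate.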
Best Practice 7: If you cannot quality check your pipeline, then you shouldn’t build it
Data quality is one of the primary challenges created by today's data proliferation. While we've highlighted in this blog that building pipelines has become relatively straightforward, data quality is often the first casualty when those pipelines aren't consistently maintained.
Most modern pipeline tools, including Matillion, come equipped with built-in features like auto-fixes for schema drift and basic observability through logs and alerts. However, when operating at a large scale, these tools may fall short of comprehensive data quality assurance. As a result, additional pipelines are required to monitor the production pipelines themselves.
Data teams frequently develop regression test pipelines to scrutinize aspects such as data completeness, freshness, and uniqueness. These pipelines tend to be more intricate and may even involve data science modeling for specific operations like data profiling. In environments with high code complexity, the need for a dedicated data quality tool becomes evident to simplify the intricacies of data quality assurance.
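To make the idea concrete, here is a minimal sketch of such checks in Python with pandas. The column names (order_id, amount, loaded_at) and the 24-hour freshness threshold are assumptions for illustration; in practice the results would feed an alerting or orchestration layer rather than a return value.

```python
from datetime import datetime, timedelta, timezone
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    checks = {}

    # Completeness: key business columns should not contain nulls.
    checks["completeness"] = bool(df[["order_id", "amount"]].notna().all().all())

    # Freshness: the newest record should be less than 24 hours old.
    latest = pd.to_datetime(df["loaded_at"], utc=True).max()
    checks["freshness"] = bool(latest > datetime.now(timezone.utc) - timedelta(hours=24))

    # Uniqueness: the primary key must not contain duplicates.
    checks["uniqueness"] = bool(df["order_id"].is_unique)

    return checks
```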
However, the advantage of low-code tools, as opposed to high-code environments, lies in their simplicity. Low-code tools streamline the integration of new pipelines into the observability framework, making both construction and management notably more accessible.
Best Practice 8: Test as you build
Efficient pipeline management hinges on robust initial construction, which means ongoing unit testing during development. Unit testing data pipelines is a well-established practice, rooted in the fundamentals of software and data engineering.
The critical factor is the ease with which a data platform facilitates these unit tests. Productivity soars when data teams can swiftly sample their work while building pipelines. If a data professional must execute an entire pipeline or switch tools merely to assess a single calculation, their productivity diminishes. In contrast, those who can test specific transformations in-situ enjoy heightened efficiency.
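As a sketch of what testing a single transformation in isolation can look like in a high-code setting, the example below unit-tests one calculation with pytest. The add_net_amount function, its tax_rate parameter, and the sample data are hypothetical, introduced only for illustration.

```python
import pandas as pd
import pytest

def add_net_amount(df: pd.DataFrame, tax_rate: float = 0.2) -> pd.DataFrame:
    # The transformation under test: derive a net amount from a gross amount.
    out = df.copy()
    out["net_amount"] = out["gross_amount"] / (1 + tax_rate)
    return out

def test_add_net_amount():
    # Exercise only this one calculation, without running the whole pipeline.
    sample = pd.DataFrame({"gross_amount": [120.0, 240.0]})
    result = add_net_amount(sample, tax_rate=0.2)
    assert result["net_amount"].tolist() == pytest.approx([100.0, 200.0])
```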
The choice of tools plays a pivotal role: without the right tool, a data team may experience testing as a large, disruptive effort, whereas the same tests in a more efficient tool feel like a small, routine one.
Best Practice 9: Build in kill-switch and retry mechanisms
Data failures are an inevitable part of the data landscape, and modern data teams are actively engaged in proactive detection and automated rectification. Earlier in this blog, we explored the concept of data quality observability. However, simply having these observability tools in place isn't enough; they must serve a more proactive purpose than being mere ornamental dashboards in a visualization tool.
Our objective is to leverage the insights gained from observability to trigger data-driven actions that can autonomously rectify issues within data pipelines. This is where orchestration logic comes into play, offering control mechanisms like If/Else statements, retry protocols, and conditional branching.
In most scenarios, when a specific operation encounters an issue, a straightforward retry is the desired course of action. These retry mechanisms play a vital role in ensuring the reliability and graceful recovery of data processing tasks, significantly reducing the troubleshooting workload on the data team. Nonetheless, these mechanisms can become intricate, especially in complex business domains. The core principle remains consistent: automating the resolution of transient errors.
Conversely, if an error persists beyond the retry attempts, the flow should halt to prevent further issues. When constructing new pipelines, data teams must carefully consider how to implement these flow management operations and develop contingency plans for addressing major data failures.
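A minimal Python sketch of this retry-then-halt pattern is below: transient failures are retried with a backoff, and once the attempts are exhausted the flow raises and stops rather than letting bad data continue downstream. The task callable, attempt count, and backoff values are illustrative assumptions, not Matillion components.

```python
import logging
import time

logger = logging.getLogger("pipeline")

def run_with_retry(task, max_attempts: int = 3, backoff_seconds: float = 30.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception as exc:  # in practice, catch only known transient error types
            logger.warning("Attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                # Kill switch: halt the flow instead of continuing downstream.
                raise RuntimeError("Task failed after retries; halting pipeline") from exc
            # Simple linear backoff before the next attempt.
            time.sleep(backoff_seconds * attempt)
```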
Best Practice 10: Define your change management before building your first pipeline
DataOps has matured into a well-defined and comprehensive domain that encompasses various sub-disciplines. These include the integration of Agile methodologies, CI/CD practices, and robust security measures. The advantages of DataOps are undeniably poised to enhance pipeline maintenance significantly.
Consequently, it is imperative that, prior to constructing pipelines, data teams give thoughtful consideration to how these initiatives align with the overarching DataOps framework. This involves addressing fundamental questions, ranging from managing version control and traceability to fostering cross-functional collaboration.
Harness Matillion's Data Productivity Cloud for Streamlined Data Pipeline Management
Matillion's Data Productivity Cloud streamlines and simplifies data movement, reconfiguring data pipelines into more controllable and sustainable structures. Utilizing its user-friendly interface, Matillion equips data teams with tools for comprehensive documentation, logic breakdown, code organization, and efficient unit testing. It provides a comprehensive platform for both data loading and transformation, nurturing an environment of dynamic data utilization. Furthermore, Matillion offers features for data quality assessments, orchestration logic, and change management, all integrated into a unified platform.
Ready to dive in?
With Matillion, you're not just constructing data pipelines; you're enabling your entire data team to achieve heightened productivity. You can kickstart your journey in minutes with a complimentary trial of our platform.
About the Writer
With a career spanning over 15 years in the data field, Jean Mandarin’s expertise has evolved through diverse databases and data analytics technologies. Now, in the heart of Matillion's data landscape as the leader of the Data Insights team, Jean plays a pivotal role in driving innovation within the Data Productivity Cloud and advancing groundbreaking initiatives in the data-driven realm. Jean's commitment to data-driven progress is not just professional; it's a passionate journey towards a future where data empowers and transforms.

Jean Mandarin
Senior Manager, Data Insights