
10 Best Practices for Maintaining Data Pipelines (Part 1)

In today's data-driven world, data teams build more and more data pipelines to meet increasing demand. Pipelines are the lifeblood that enables data to flow seamlessly from various sources to your data warehouse, allowing businesses to gain valuable insights and make informed decisions. Yet these very pipelines can quickly become monsters that are hard to maintain and scale. Building data pipelines is only half the battle; maintaining them is equally critical for long-term success.

My name is Jean Mandarin, Senior Manager of Data Insights at Matillion. In this blog series, I leverage my 15+ years of experience in the data field to share the top 10 best practices for maintaining data pipelines. As we delve into these insights, we'll also see how Matillion's Data Productivity Cloud can enhance your data pipeline journey.

Why Building and Maintaining Data Pipelines Matters

As data teams create an increasing number of data pipelines to meet business needs, they often face a familiar challenge: technical debt accumulates like structural cracks and requires specialized resources to repair. This struggle leaves data teams spending a significant portion of their time managing and troubleshooting pipelines, hampering their productivity and the value they deliver to the business. When choosing a data pipeline tool, data leaders must therefore consider not only its ability to create pipelines but also the essential aspects of maintenance, including data governance, quality, cost efficiency, resource allocation, and talent management.

In the era of the "pipeline arms race," many vendors claim to offer quick ways to build pipelines. However, Matillion stands out by providing a productive platform that streamlines maintenance, making it accessible to a broader range of data professionals. 

Best Practice 1: Good Documentation

Pipeline maintenance lives and dies by the quality of the accompanying documentation. Good documentation is a balancing act: not too little, but not too much. Extensive documentation on a shared drive is essential for audits, particularly in regulated settings. However, Data Engineers are unlikely to keep such documentation up to date, especially in a fast-paced CI/CD environment with frequent sprint changes. In traditional high-code environments, teams often relied on comments within the code; but when code branches into multiple jobs, its high-level purpose can become obscured, leading to abstraction and reduced clarity. Low-code platforms with graphical interfaces offer a clear advantage by making branching transparent, enhancing the readability of the entire pipeline, not just the code itself.
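In a high-code setting, one practical way to keep that high-level purpose visible is to attach a short, purpose-level note to each step rather than scattering low-level comments through the logic. The sketch below is illustrative only: the step name and body are hypothetical, and this is not a Matillion API. It shows a brief docstring that a standard documentation generator could harvest, much in the spirit of the canvas notes described below.

# A minimal sketch (not Matillion-specific): one purpose-level docstring per
# pipeline step that a documentation generator such as pydoc can harvest.

def load_salesforce_opportunities(run_date: str) -> str:
    """Land raw Salesforce opportunity data for run_date into the staging area.

    Purpose note (kept to a few lines, like a canvas note): this step only
    copies data; all business rules are applied downstream in the transforms.
    """
    # The body is illustrative; a real step would call an extract/load API.
    target_table = f"s_salesforce_opportunities_{run_date.replace('-', '_')}"
    return target_table


if __name__ == "__main__":
    # Keeping the narrative this close to the code makes it more likely to
    # stay up to date between sprints.
    print(load_salesforce_opportunities("2024-06-01"))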

In Matillion, documentation is automatically generated by scanning the canvas, creating concise to moderately detailed documentation, significantly saving Data Teams time. We also encourage Matillion's data teams to include brief notes on canvas components. These notes serve a dual purpose, aiding auto-documentation and providing valuable insights to new team members about the pipeline's purpose and functionality. These notes are often just a few lines but collectively weave a comprehensible data narrative, even for non-technical business users. They are generated as HTML documents, which can be conveniently added to document repositories.

Best Practice 2: Breaking Down Complex Data Logic

Breaking large code into smaller parts has been a well-established industry practice for decades, offering various advantages such as improved comprehension, enhanced data quality checks, and streamlined maintenance. However, in practice, even with the best intentions, smaller, decoupled code can become complex when data teams compromise on best practices to adapt to rapidly changing business rules.

In environments with a substantial amount of code, the task of decoupling code is typically undertaken by the most experienced team member, given their expertise in doing so efficiently. Consequently, in high-code environments, the process of breaking down large code into smaller segments can be more costly due to the need for specialized resources.

Conversely, low-code platforms with intuitive graphical interfaces facilitate a more cost-effective approach to this task. At Matillion, the data team regularly delegates code decoupling to less experienced members, thanks to the user-friendly canvas interface. This simplifies the process, making it more straightforward without requiring an in-depth understanding of complex coding intricacies.
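To make decoupling concrete, here is a minimal, hypothetical sketch: one notional opportunity-reporting transformation split into three small steps, each with a single responsibility. The table and column names are invented for illustration, and the same structure applies whether the steps are functions, SQL models, or jobs on a canvas.

# A hypothetical monolithic transformation, broken into three small steps that
# can be understood, tested, and maintained independently.

from typing import Dict, List

Row = Dict[str, object]


def clean_opportunities(rows: List[Row]) -> List[Row]:
    """Drop rows with no amount and normalise the stage name."""
    return [
        {**r, "stage": str(r["stage"]).strip().lower()}
        for r in rows
        if r.get("amount") is not None
    ]


def enrich_with_region(rows: List[Row], region_by_account: Dict[str, str]) -> List[Row]:
    """Attach a sales region to each opportunity."""
    return [{**r, "region": region_by_account.get(r["account_id"], "unknown")} for r in rows]


def summarise_by_region(rows: List[Row]) -> Dict[str, float]:
    """Aggregate opportunity amounts per region for the reporting layer."""
    totals: Dict[str, float] = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + float(r["amount"])
    return totals


if __name__ == "__main__":
    raw = [
        {"account_id": "a1", "stage": " Closed Won ", "amount": 100},
        {"account_id": "a2", "stage": "Open", "amount": None},
    ]
    print(summarise_by_region(enrich_with_region(clean_opportunities(raw), {"a1": "EMEA"})))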

Best Practice 3: Organizing Your Code

Modern data tools offer diverse solutions for navigating a growing codebase, a crucial requirement as data teams frequently shift between pipelines. Consolidating all your code into a single large folder can lead to chaos.

To enhance organization, it's advisable to group code by function. For instance, data extraction scripts belong in the "loading" folder, data preparation in the "staging" folder, and so on. The same principle applies to the code that builds fact and dimension tables, as well as data regression tests. Even this organization benefits from a standardized naming convention, particularly in complex organizations, and teams often find it helpful to add a numbering system that clarifies the execution order.

For instance:

Loading folder
  1_load_aws_data.sql
  2_load_salesforce_data.sql
  3_load_azure_data.sql
Transform folder
  1_transform_aws_data.sql
  2_transform_salesforce_data.sql
  3_transform_azure_data.sql

However, in some cases the execution order is not linear: a script in one folder may depend on another in a different folder. This introduces the branching mentioned earlier. To maintain productivity, data tools should let teams delve into sub-jobs, recognizing that pipeline code is layered, with parent jobs calling child jobs several levels deep. Many tools fall short, offering navigation but no way to drill into those sub-jobs. A robust data tool should let users drill down, drill up, and move both forward and backward, ensuring efficient code management.
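To illustrate how the numbering convention can drive execution order, here is a small, hypothetical runner sketch in Python. It assumes a folder layout mirroring the example above (a loading folder and a transform folder), orders the .sql files by their numeric prefix, and simply prints the order it would use; wiring it to a real warehouse connection is out of scope.

# A minimal sketch: the numeric prefix in each file name determines execution
# order, and the folders themselves run in a fixed sequence.

from pathlib import Path


def numbered_sql_files(folder: Path) -> list:
    """Return the .sql files in a folder, ordered by their numeric prefix."""
    return sorted(folder.glob("*.sql"), key=lambda p: int(p.name.split("_", 1)[0]))


def run_pipeline(root: Path) -> None:
    # Load first, then transform; each folder's files run in numbered order.
    for folder_name in ("loading", "transform"):
        for sql_file in numbered_sql_files(root / folder_name):
            print(f"would execute {sql_file}")


if __name__ == "__main__":
    run_pipeline(Path("pipelines"))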

Best Practice 4: Data Loading and Transformation on One Platform

While discussing data pipelines, we've primarily focused on the movement of data from external sources into cloud data warehouses. However, we should broaden our perspective to include local data movements within the data warehouse itself. For instance, the transfer of data from one schema to another can be viewed as a data pipeline. It's not uncommon for data teams to employ separate tools for data loading and data transformation. When this happens, two independent platforms must be kept in sync, creating cross-tool dependencies and making it harder to establish consistent workflows. Ultimately, this hurts the data team's efficiency when it comes to maintenance.
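As a small illustration of treating an in-warehouse movement as a pipeline step, the sketch below copies data from a staging schema into a warehouse schema with a single SQL statement. The schema and table names are assumptions that follow the convention introduced in Best Practice 5 below, and the cursor stands in for any DB-API style database cursor; this is not a specific platform's API.

# A hypothetical local (warehouse-internal) pipeline step: move data from the
# staging schema into the warehouse schema in one statement.

MOVE_STAGE_TO_DWH = """
    INSERT INTO db_dwh.f_opportunities (opportunity_id, amount, close_date)
    SELECT opportunity_id, amount, close_date
    FROM db_stage.stg_salesforce_opportunities
"""


def move_stage_to_dwh(cursor) -> None:
    """Run the schema-to-schema movement as a single pipeline step."""
    cursor.execute(MOVE_STAGE_TO_DWH)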

A more efficient approach is to use a single tool for both data loading and transformation. This not only benefits the data team but also extends to data analysis and data science teams. Employing one platform across various teams, encompassing low-coders and high-coders, fosters a vibrant data community with shared skills and knowledge. Such organizations facilitate transitions across the data spectrum, from data engineering to data science, creating a dynamic data culture that promotes streamlined data pipelines and, as a result, easier maintenance.

Best Practice 5: Adding Naming Conventions to Database Objects

As organizations expand, so does the volume of data within their pipelines, resulting in a multitude of objects, including databases, schemas, and tables. Consider a typical scenario where data teams extract, transform, and create fact and dimension tables from a data source. Multiple tables would be created or updated for each stage. Similar to the code-level navigation we discussed earlier, it's crucial to logically organize these objects. 

For example, this is how our own data team at Matillion structures its database objects:

The schema db_source will contain all raw data from the sources and will have the prefix ‘s_’, e.g. s_salesforce_opportunities and s_marketo_leads. Similarly:

  1. db_stage will contain transformed tables with the prefix ‘stg_’
  2. db_dwh will contain fact tables with the prefix ‘f_’ and dimension tables with the prefix ‘d_’
  3. db_report will contain tables used for analysis and data science

Implementing a robust naming convention for your database objects expedites troubleshooting data-related issues. Furthermore, it enables a decoupled architecture where data within each schema can theoretically be processed separately, leveraging a shared-nothing database architecture like Snowflake. These benefits contribute to enhanced performance optimization, monitoring, and alerting.
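One way to keep such a convention honest is to check it automatically. The sketch below is a hypothetical validator: it maps each schema to the table prefixes we expect (the ‘rpt_’ prefix for db_report is an assumption, since no prefix is named above) and reports any table that breaks the rule. A check like this could run as a lightweight test step inside the pipeline.

# Expected table prefixes per schema; the db_report prefix is an assumption.
EXPECTED_PREFIXES = {
    "db_source": ("s_",),
    "db_stage": ("stg_",),
    "db_dwh": ("f_", "d_"),
    "db_report": ("rpt_",),
}


def violations(tables_by_schema: dict) -> list:
    """Return (schema, table) pairs whose names break the convention."""
    bad = []
    for schema, tables in tables_by_schema.items():
        prefixes = EXPECTED_PREFIXES.get(schema, ())
        for table in tables:
            if prefixes and not table.startswith(prefixes):
                bad.append((schema, table))
    return bad


if __name__ == "__main__":
    example = {
        "db_source": ["s_salesforce_opportunities", "marketo_leads"],
        "db_dwh": ["f_opportunities", "d_accounts"],
    }
    print(violations(example))  # [('db_source', 'marketo_leads')]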

Leveraging Matillion's Data Productivity Cloud for Effective Data Pipeline Maintenance

Matillion's Data Productivity Cloud simplifies and automates data movement, transforming data pipelines into more manageable and maintainable structures. With its low-code approach, Matillion enables data teams to document, break down logic, organize codes, and perform unit testing efficiently. It offers an all-in-one platform for data loading and transformation, fostering a dynamic data culture. Additionally, Matillion supports data quality checks, orchestration logic, and change management, all within a unified platform.

Ready to Get Started?

With Matillion, you're not just building data pipelines; you're empowering your entire data team to achieve more. You can begin in minutes with a free trial of our platform. Unlock the power of data productivity today!

Jean Mandarin

Senior Manager, Data Insights

With a career spanning over 15 years in the data field, Jean Mandarin’s expertise has evolved through diverse databases and data analytics technologies. Now, in the heart of Matillion's data landscape as the leader of the Data Insights team, Jean plays a pivotal role in driving innovation within the Data Productivity Cloud and advancing groundbreaking initiatives in the data-driven realm. Jean's commitment to data-driven progress is not just professional; it's a passionate journey towards a future where data empowers and transforms.