- Blog
- 08.16.2024
- Product, Data Fundamentals
Using Variables in a Matillion pipeline

Matillion Data Productivity Cloud (DPC) provides the ability to use variables to create metadata-driven, parameterized pipelines. DPC offers variables at two levels, project level and pipeline level, each providing a different scope within the project.
- Project level variables are similar to global variables in a traditional programming language. These variables have a scope visible to every pipeline within the project.
- Pipeline level variables are similar to local variables in a programming language. These variables have a scope that is visible only to the pipeline that they are defined in within the project.
Project variables support two data types: TEXT or NUMBER. This variable style is known as a Scalar variable and is designed to hold a single value.
Pipeline variables support GRID variables in addition to Scalar variables. A GRID variable is designed to hold a two-dimensional array of data. The ability to handle both of these variable styles is critical for creating metadata-driven pipelines.
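As a conceptual sketch of the distinction, a Scalar variable holds one value while a GRID variable behaves like a two-dimensional array, i.e. a list of rows. The following is plain Python, not Matillion syntax, and all names are illustrative:

```python
# Conceptual sketch only: plain Python structures, not Matillion
# syntax. All variable names here are illustrative assumptions.

# A Scalar variable (TEXT or NUMBER) holds a single value.
proj_environment = "production"   # TEXT
pipe_batch_size = 5000            # NUMBER

# A GRID variable holds a two-dimensional array of data,
# modeled here as a list of rows.
gv_table_metadata = [
    # [source_table, target_table, key_column]
    ["orders_raw",    "orders",    "order_id"],
    ["customers_raw", "customers", "customer_id"],
]

for source, target, key in gv_table_metadata:
    print(f"Load {source} -> {target} keyed on {key}")
```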
In part one, we will walk you through the steps for creating Project and Pipeline variables, and cover which data types Scalar and GRID variables support. Bear in mind that both variable styles support TEXT and NUMBER.
Six steps to creating a Project variable
The steps to add a Project variable are:
1. Open your project.
2. Click on the Variables button in the sidebar.
3. Highlight Project and click the Add button.
4. Select the Project variable radio button to indicate this as a project variable.
5. Select the type for the variable, either TEXT or NUMBER for Scalar, and click Next.
6. Fill in the Variable name and Default value, then click the Create button. For the Variable name, it is best practice to identify project variables with a prefix such as “proj_,” although your company may have a different preferred format. In addition, you can override the environment default for Project variables.
Your Project Variable is now complete.
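The prefix naming convention mentioned in step 6 can be checked programmatically if you generate variable names elsewhere. This is a plain Python sketch, not part of Matillion, and the function name is an assumption:

```python
# Illustrative helper (not part of Matillion) that checks the
# suggested "proj_" prefix convention for project variable names.
def follows_prefix_convention(name: str, prefix: str = "proj_") -> bool:
    """Return True if the name starts with the prefix and has a body."""
    return name.startswith(prefix) and len(name) > len(prefix)

print(follows_prefix_convention("proj_environment"))  # True
print(follows_prefix_convention("environment"))       # False
```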
Nine steps to creating a Pipeline variable
The steps to add a Pipeline variable are:
1. Open your project.
2. Click on the Variables button in the sidebar.
3. Highlight Pipeline and click the Add button.
4. Select the Pipeline variable radio button to indicate this is a pipeline variable.
5. Select the type for the variable, either TEXT or NUMBER for Scalar or GRID, and click Next.
6. If you select TEXT or NUMBER and click the Next button, you’ll next fill in the Variable name and Default value, select the Public or Private radio button under Visibility, and then click the Create button. It is best practice to identify pipeline variables with a prefix such as “pipe_,” although your company may have a different preferred format.
7. If you select GRID and click Next, you’ll fill in the Variable name, select the Public or Private radio button under Visibility, and click the Next button. It is best practice to identify pipeline variables by a prefix such as “gv_,” although your company may have a different format they prefer.
8. Now, you’ll define the columns for the GRID variable. Enter each column’s name and select TEXT or NUMBER as its type. You can add additional columns by clicking the green + button. Once you’ve added all the columns to the GRID variable, click the Next button.
9. Click the green + sign at the bottom of the screen to add default values for each variable.
Your variables are now successfully created.
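Conceptually, steps 8 and 9 define a GRID variable as a set of typed columns plus default rows of values. This sketch models that in plain Python (illustrative names only, not Matillion syntax):

```python
# Conceptual sketch of a GRID variable's definition: typed
# columns (step 8) plus default rows of values (step 9).
# Plain Python with illustrative names, not Matillion syntax.
gv_columns = [
    ("table_name", "TEXT"),
    ("load_order", "NUMBER"),
]

gv_default_values = [
    ["orders",    1],
    ["customers", 2],
]

# Every default row must supply one value per defined column.
for row in gv_default_values:
    assert len(row) == len(gv_columns)

print(f"{len(gv_columns)} columns, {len(gv_default_values)} default rows")
```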
How variables are used
Once you’ve followed the steps for creating Project and Pipeline variables, they can be used in Data Productivity Cloud pipelines. The pipeline can use these variables to create a metadata-driven workflow that allows for dynamic value changes.
Part two provides an overview of how to incorporate these variables into an actual pipeline that uses GRID variables to manage table metadata and Scalar variables for table names. The goal of the pipeline is to demonstrate a simple incremental load pattern.
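As a preview of that pattern, the idea can be sketched in plain Python: a GRID-style structure supplies per-table metadata, and scalar-style values carry individual table names on each iteration. This is a hypothetical illustration of the pattern, not Matillion syntax, and the real pipeline would use DPC components:

```python
# Hypothetical sketch of a metadata-driven incremental load.
# A GRID-style structure supplies per-table metadata; scalar-style
# values hold one table's names per iteration. Illustrative only.

# GRID-style metadata: source table, target table, watermark column.
gv_table_metadata = [
    ["orders_raw",    "orders",    "updated_at"],
    ["customers_raw", "customers", "updated_at"],
]

def incremental_load_sql(source: str, target: str, watermark_col: str) -> str:
    """Build a simple incremental-load statement for one table."""
    return (
        f"INSERT INTO {target} "
        f"SELECT * FROM {source} "
        f"WHERE {watermark_col} > (SELECT MAX({watermark_col}) FROM {target})"
    )

# Iterate over the grid, binding scalar-style values for each table.
for pipe_source, pipe_target, pipe_watermark in gv_table_metadata:
    print(incremental_load_sql(pipe_source, pipe_target, pipe_watermark))
```

Only rows newer than the target's current watermark are loaded, which is the essence of an incremental load.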
Mike Terrell
Sales Engineer