Blog · 09.10.2024 · Product, Data Fundamentals
Creating Custom Connector Endpoints for DPC API

Following part one on OAuth setup, it’s time to create custom endpoints for the Matillion Data Productivity Cloud (DPC) API. This blog will guide you through setting up the first API endpoint using the Custom Connector.
You can either add a new Custom Connector or begin by selecting Data Productivity Cloud from the Flex Connector Library.
Add Custom Connector
- Navigate to the Connectors tab and click "Add connector".
- Enter a name for the Connector and for its first endpoint. In this example, we will create an endpoint for Execute Published Pipelines.
[NOTE: eu1 in the URL should be replaced with us1 for DPC accounts hosted in the US region.]
- Authentication: Set the Authentication Type field to OAuth and, in the Authentication field, select the DPC API OAuth entry we created in Part 1.
- Parameters: Set up the projectId parameter as Configurable and enter the ID of the project that contains the pipeline you want to execute. The projectId can be found in the Designer URL when you are inside the project; it is the GUID that follows "matillion.com/project/".
- Headers: Add a header parameter for Content-Type and set its value to application/json. This parameter can be set as a Constant.
- Body: Supply the JSON request body the endpoint expects. For Execute Published Pipelines, this typically identifies the pipeline and environment to run; check the Matillion DPC API documentation for the exact schema.
- Once these properties are set up, you can configure OAuth in your Designer project to use the DPC API Custom Connector endpoints.
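To make the configuration above concrete, here is a minimal Python sketch of the HTTP request this endpoint produces. The exact endpoint path and body field names (`pipelineName`, `environmentName`) are assumptions about the DPC API's shape, not taken from this post; verify them against the Matillion DPC API documentation.

```python
def build_pipeline_execution_request(project_id, pipeline_name,
                                     environment_name, access_token,
                                     region="eu1"):
    """Return the URL, headers, and JSON body for an Execute Published
    Pipelines call, mirroring the Custom Connector setup above.

    NOTE: the path and body field names below are assumed for
    illustration -- confirm them against the DPC API docs.
    """
    # projectId is the GUID that follows "matillion.com/project/"
    # in the Designer URL; region is eu1 or us1 depending on hosting.
    url = (f"https://{region}.api.matillion.com/dpc/v1/projects/"
           f"{project_id}/pipeline-executions")
    headers = {
        "Content-Type": "application/json",         # the Constant header
        "Authorization": f"Bearer {access_token}",  # from the OAuth setup
    }
    body = {
        "pipelineName": pipeline_name,        # assumed field name
        "environmentName": environment_name,  # assumed field name
    }
    return url, headers, body


# Hypothetical usage -- values are placeholders, not real credentials:
url, headers, body = build_pipeline_execution_request(
    "1a2b3c4d-0000-0000-0000-000000000000",
    "my_pipeline", "prod", "ACCESS_TOKEN")
```

You could pass the returned pieces to any HTTP client (for example `requests.post(url, headers=headers, json=body)`) to reproduce what the Custom Connector sends on your behalf.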
You’ve now successfully set up a custom endpoint for the Matillion DPC API, streamlining data integration. Ready for part three? In the third installment, we will dive into how to set up OAuth within Designer so you can start using this Custom Connector in real-world data orchestration workflows. Read here!
Part Four: Pipeline Execution and Status Retrieval Framework
Check out Matillion DPC capabilities by signing up for a free trial!
Naval Bal
Solution Architect Manager