Yes, Matillion Democratizes AI for Snowflake Users

Matillion is excited to announce support for Snowflake Cortex and Snowpark Container Services, jointly helping customers deliver on their AI initiatives.

With Matillion’s Snowflake Cortex components, we bring a no-code approach to Snowflake’s generative AI and machine learning solutions. This integration empowers data engineers to drive significant value from their data with generative AI using Matillion's unified data pipeline platform. What sets Matillion apart is that data engineers can get started with our Cortex components instantly, without additional skills, specialized AI knowledge, or extensive coding.

Matillion’s Snowpark Container Services (SPCS) integration allows customers to securely run virtually any open-source AI model directly within their Snowflake account, ensuring data privacy and enabling customization for specific use cases. Matillion brings AI capabilities directly into data pipelines, and with our SPCS integration, Snowflake users gain the flexibility to adapt as their needs change, potentially moving to smaller, dedicated models for improved cost, accuracy, and performance.

Quick Background on Snowflake’s AI Offerings

Snowflake Cortex is an intelligent, fully managed service that offers machine learning and AI solutions to Snowflake users. Snowflake Cortex capabilities include:

  • LLM Functions: SQL and Python functions that leverage large language models (LLMs) for understanding, querying, translating, summarizing, and generating free-form text.
  • ML Functions: SQL functions that perform predictive analysis using machine learning to help you gain insights into your structured data and accelerate everyday analytics.

(Check out Snowflake’s documentation to learn more.)

These functions can be easily invoked through Matillion’s graphical components, which generate SQL that is pushed down into Snowflake’s native architecture, allowing users to incorporate AI without managing complex infrastructure.
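For illustration, here is the kind of Cortex SQL such a component resolves to under the hood. The table and column names are hypothetical; the function itself is part of Snowflake Cortex:

```sql
-- Summarize free-form text in place with a Cortex LLM function.
-- TICKETS and BODY are illustrative names for your own data.
SELECT
    ticket_id,
    SNOWFLAKE.CORTEX.SUMMARIZE(body) AS summary
FROM tickets;
```

Because the statement runs entirely inside Snowflake, no data leaves the platform and no separate AI infrastructure is involved.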

Snowpark Container Services (SPCS) allows customers to run their own Docker containers and LLMs within their Snowflake accounts. This gives customers full control and data sovereignty over their models, ensuring no data or prompts leave the customer's Snowflake instance, which makes it ideal for sensitive use cases. Additionally, SPCS enables compute on GPUs, which significantly accelerates AI workloads. Because pushed-down Matillion workloads consume Snowflake credits, you keep complete control over your budget while leveraging GPUs for AI workloads.
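As a hedged sketch of what hosting an open-source model in SPCS looks like, the statements below create a GPU compute pool and a service from a container image. The pool, service, and image names are all hypothetical placeholders:

```sql
-- Provision a small GPU pool, then run an LLM container on it.
-- LLM_POOL, LLM_SERVICE, and the image path are illustrative names.
CREATE COMPUTE POOL llm_pool
    MIN_NODES = 1
    MAX_NODES = 1
    INSTANCE_FAMILY = GPU_NV_S;

CREATE SERVICE llm_service
    IN COMPUTE POOL llm_pool
    FROM SPECIFICATION $$
spec:
  containers:
    - name: llm
      image: /my_db/my_schema/my_repo/llama2:latest
$$;
```

The model then serves inference from inside your Snowflake account, so prompts and responses never cross its boundary.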

No-Code for Cortex 

Matillion brings simplicity and ease of use to Snowflake Cortex. With Matillion, customers can get up and running with Cortex LLM and Cortex ML in minutes, without extensive setup or lines of SQL code. Matillion's graphical components make it easy to incorporate AI into data workflows, regardless of technical expertise.

Key Benefits:

  • Easy setup without infrastructure: Matillion components integrate with Cortex, enabling users to leverage LLM and ML functions in minutes, without IT involvement.
  • Mix classic data engineering with AI: Use Cortex components alongside Matillion's transformation components directly within a pipeline, addressing AI use cases in a pushdown ELT fashion.
  • Simplified Cortex usage: No need to write complex SQL; use no-code, out-of-the-box components within Matillion data pipelines.

Matillion’s Snowflake Cortex LLM Components

Matillion provides transformation components specifically designed for Cortex LLM functions, empowering customers to incorporate LLM capabilities into their data workflows with ease. Matillion's Cortex components offer a selection of pre-built large language models, such as Llama 2, Mistral, Mixtral, and Google Gemma, readily available without additional setup.

Cortex LLM Components now available in Matillion Data Productivity Cloud:

Cortex Completions:

  • Generates creative content or transforms existing text into different formats.
  • Use Cases: Generating product descriptions, ad copy, and content marketing.

Cortex Extract Answers: 

  • Extracts specific information from text data.
  • Use Cases: Extracting key details from customer reviews, contracts, or reports.

Cortex Sentiment: 

  • Performs sentiment analysis on text data.
  • Use Cases: Analyzing sentiment from reviews, social media, or surveys.

Cortex Summarize: 

  • Generates concise summaries of text data.
  • Use Cases: Summarizing lengthy reports or documents.

Cortex Translate: 

  • Translates text data from one language to another.
  • Use Cases: Tailoring reviews or content for global audiences in different languages.
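For reference, the five components above map onto Cortex LLM functions that can be called directly in SQL. The REVIEWS table and its columns are hypothetical; the functions and the 'mistral-7b' model name come from Snowflake Cortex:

```sql
-- One illustrative call per component, over a hypothetical REVIEWS table.
SELECT
    SNOWFLAKE.CORTEX.COMPLETE('mistral-7b',
        'Write a one-line blurb for: ' || product_name)      AS blurb,      -- Completions
    SNOWFLAKE.CORTEX.EXTRACT_ANSWER(review_text,
        'Which product is mentioned?')                       AS answer,     -- Extract Answers
    SNOWFLAKE.CORTEX.SENTIMENT(review_text)                  AS sentiment,  -- score near -1 (negative) to 1 (positive)
    SNOWFLAKE.CORTEX.SUMMARIZE(review_text)                  AS summary,    -- Summarize
    SNOWFLAKE.CORTEX.TRANSLATE(review_text, 'en', 'de')      AS review_de   -- Translate
FROM reviews;
```

Matillion's components generate this SQL for you, so the pipeline stays no-code while the work is pushed down to Snowflake.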

WATCH A DEMO

Check out this blog to learn more about getting started with Cortex LLM in Matillion!

Cortex ML Integration in Matillion

Customers can leverage Cortex ML capabilities within Matillion, bringing flexibility and extensibility to their pipelines. Cortex ML offers machine learning functions for tasks such as forecasting, anomaly detection, and outlier identification, empowering customers to perform advanced analytics and predictive modeling.
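As a sketch of the forecasting case, a Cortex ML function can train on a time series and project it forward. The DAILY_SALES table and its TS and AMOUNT columns are hypothetical names standing in for your data:

```sql
-- Train a forecasting model on a hypothetical daily sales table,
-- then project 14 periods ahead.
CREATE SNOWFLAKE.ML.FORECAST sales_model(
    INPUT_DATA        => SYSTEM$REFERENCE('TABLE', 'daily_sales'),
    TIMESTAMP_COLNAME => 'ts',
    TARGET_COLNAME    => 'amount'
);

CALL sales_model!FORECAST(FORECASTING_PERIODS => 14);
```

Within Matillion, the equivalent happens through graphical components rather than hand-written statements.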

WATCH A DEMO

Check out this blog to learn more about getting started with Cortex ML in Matillion!

Ensure Data Sovereignty and Security Like Never Before

Snowpark Container Services Prompt in Matillion

Matillion's Snowpark Container Services Prompt empowers customers to adopt AI while maintaining data privacy, security, and control. It lets customers run customizable, task-specific models for better performance, accuracy, and cost-effectiveness.

Key benefits:

  1. Private AI and Data Security: Running LLMs inside Snowflake accounts ensures data remains secure, with no records leaving the customer's account.
  2. Flexibility and Customization: Run virtually any open-source LLM within the Snowflake account, choosing the most suitable model for the use case.
  3. Seamless Integration: Matillion's Snowpark Container Services Prompt allows customers to leverage their LLMs as part of their Matillion data pipelines, making it convenient to incorporate AI capabilities into data workflows.
  4. Ability to Adapt and Evolve: As customers embrace generative AI, they may find that large general-purpose LLMs aren't always the optimal fit. Snowpark Container Services Prompt enables adopting smaller, dedicated models for specific use cases.

Business Use Cases:

  • Personally Identifiable Information (PII): Ensure data security when dealing with sensitive data containing PII. Smaller, dedicated models trained for PII detection can be more effective and secure.
  • Domain-Specific Classification or Extraction: For tasks like classifying customer support tickets or extracting entities from legal contracts, smaller models trained on domain-specific data can be more effective.
  • Highly Regulated Industries: Healthcare, finance, and government have strict requirements around data privacy and security (e.g., HIPAA or GDPR). Snowpark Container Services provides the necessary control and compliance.

RAG with Matillion + Snowpark Container Services

Matillion simplifies building Retrieval Augmented Generation (RAG) pipelines with Snowflake, empowering data practitioners to create contextually rich AI applications without writing complex code. RAG improves large language models (LLMs) by retrieving relevant data from vector stores containing your business information (e.g., product documentation or knowledge base), providing context for more accurate question-answering, summarization, and content generation. An example RAG use case would be to enhance customer support by using the company's internal knowledge base to automatically prepare accurate, context-aware answers for unstructured customer support tickets.

By leveraging Matillion's vector store connectivity and LLM inference capabilities, you can load and look up data, bring business knowledge into your pipelines, and contextualize AI insights with external knowledge. This powerful combination enables you to quickly develop and deploy end-to-end RAG use cases, such as customer support automation or sentiment analysis, all within the Snowflake platform.
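The retrieval step described above can be sketched with Snowflake's vector features: embed the user's question, then rank stored document chunks by similarity. The DOCS table, its CHUNK and EMBEDDING columns, and the question text are all hypothetical; the embedding function and similarity function are Snowflake features:

```sql
-- Retrieve the 5 chunks most relevant to a question, assuming
-- EMBEDDING is a VECTOR column populated with the same model.
SELECT chunk
FROM docs
ORDER BY VECTOR_COSINE_SIMILARITY(
    embedding,
    SNOWFLAKE.CORTEX.EMBED_TEXT_768('e5-base-v2', 'How do I reset my password?')
) DESC
LIMIT 5;
```

The retrieved chunks are then concatenated into the prompt of an LLM call (for example via Cortex COMPLETE or an SPCS-hosted model) so the answer is grounded in your business knowledge.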

WATCH A DEMO

Read more about this in this blog.

Democratizing AI with Existing Data Engineering Skills 

Matillion is committed to democratizing AI and making it accessible to organizations of all sizes. By integrating with Snowflake Cortex, Matillion empowers data engineers and analysts to leverage the power of AI without requiring specialized skills or extensive coding knowledge. With Matillion's no-code approach to Cortex integration, customers can seamlessly incorporate AI into their data workflows, driving innovation and uncovering new insights.

Address Enterprise AI Needs by Integrating to Private LLMs in Snowflake

Matillion's Snowpark Container Services (SPCS) integration empowers users to leverage private, customizable LLMs within their Snowflake accounts. By running AI workloads directly in Snowflake, you maintain full control over your data, ensuring that no sensitive information ever leaves your environment. SPCS enables you to use Matillion's AI components with your chosen LLM, such as Llama 2 or Mistral, and fine-tune it for specific tasks. With Matillion's pushdown capabilities, all workloads execute using Snowflake's compute resources and credits, giving you complete visibility and control over your budget while benefiting from GPU acceleration. Although this approach is geared towards technical practitioners, it is the most secure way to utilize AI models: data stays in your account, and you can select the specific models (LLMs or smaller models) that best fit your use cases for the highest accuracy and precision.

Why Matillion + Snowflake

Matillion and Snowflake offer a powerful combination for data engineers looking to streamline their data workflows and leverage the full potential of Snowflake's platform. With Matillion, you can push down all your workloads into Snowflake's native infrastructure, ensuring your data stays within Snowflake, minimizing data movement, and optimizing performance. By pushing down Matillion workloads to Snowflake, you consume Snowflake credits directly, giving you full control and visibility over your budget. This integration enables you to leverage Snowflake's scalability, performance, and cost-effectiveness while benefiting from Matillion's intuitive interface and extensive library of pre-built components.

Get Started Today!

  1. Schedule a demo with our expert team to see the integration in action and learn how it can benefit your organization.
  2. Contact our sales team to discuss your specific requirements and explore how Matillion can help you unlock the value of Snowflake's AI capabilities.
  3. Spin up a proof of concept and witness the transformative potential of combining Matillion's data integration, transformation, and generative AI capabilities with Snowflake's AI functionalities.

Coming to Snowflake Data Cloud Summit? Meet us at one of our booths - #1517 in the South Hall and #2502 near the North Hall escalator. Join us for food, fun, and ping pong at SpinFest on Tuesday night! Sign up HERE!

Molly Sandbo

Director of Product Marketing at Matillion

Molly Sandbo has years of product marketing experience in the data tech space - from business intelligence to data integration. Molly leads portfolio marketing at Matillion, helping to bring the Data Productivity Cloud and Matillion’s AI offering to market.