AI, For Real: First Draft Responses to Customer Support Tickets

The potential for AI applications seems boundless – so much so that it can feel overly theoretical for individuals and companies who wanted to start benefiting from the technology yesterday. The AI hurdle is real: the potential is clear in theory, but how do you make it practical?

I have personally found it helpful to read through practical applications and use cases of any concept that feels easy to “get” but hard to “do.” The following article explains how a company used AI and Matillion’s Data Productivity Cloud to build a customer support dashboard that generated first-draft responses to all customer support tickets.

Spoiler alert: the company is us, it’s Matillion. Teams across the company have been thinking about incorporating AI into their business cases. One of the first use cases to be prioritized was augmenting the customer support team with AI to improve response times and handle higher ticket volume.

Why Customer Support? 

Customer support is a critical area that often faces resource constraints. Customers expect rapid responses, especially when dealing with business-critical issues. AI offers a solution by harnessing technical resources across systems—such as product documentation, knowledge base articles, and historical data—to help support agents address issues more efficiently.

Who Built It?

Matillion teams across data engineering and the office of the CTO collaborated with the customer success and support team to ensure the solution provided correct responses and overall direction for the end users. The team successfully developed the proof of concept within three weeks, followed by ongoing improvements over the next month and a half.

The Technical Solution

The team built a pipeline that populated a dashboard integrated with Salesforce that provides support agents with a first-draft response to customer support tickets. These responses are automatically generated by a Large Language Model (LLM) using company documentation and knowledge base articles. Agents then review and refine these responses before sending them to customers, ensuring quality and accuracy.

“These are not ‘how do I reset my password?’ questions,” says Julian Wiffen, Chief of AI and Data at Matillion and advisor to the project. “These are responses that take multiple hours to compose correctly.”

Here’s a step-by-step breakdown of how the system was constructed:

1. Build a data pipeline using Matillion’s pre-built Salesforce connector that automatically pulls tickets from Salesforce on a schedule, for example once per hour.

2. Load a Pinecone vector database with product documentation, knowledge base articles, and existing cases with answers, using Matillion’s Vector Upsert Component.

3. Chunk the documentation and articles so the RAG process can efficiently look up customer support answers across all of the documentation and knowledge articles. Because the documentation sources are constantly updated, this step also pulls in new documentation as part of the scheduled task.

4. Build prompts for the LLM using Matillion’s Prompt Component:

  • Prompt 1: Extract relevant keywords from the ticket, which are then fed into Matillion’s vector query component.
  • Prompt 2: Prepare a draft response to the support ticket using the Prompt Component with RAG mode enabled, and add a link to relevant public-facing documentation. Linking the public documentation acts as a guardrail against hallucinations.
  • This process, known as Retrieval Augmented Generation (RAG), was built by a data engineer without needing additional AI training or expertise.

5. Use Matillion’s Salesforce Output Reverse ETL component to push the newly generated answers back into Salesforce.
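The steps above are built with Matillion’s low-code components, but the underlying two-prompt RAG flow can be sketched in plain Python. Everything below is an illustrative stand-in, not Matillion’s actual implementation: the keyword extractor and draft generator substitute for the LLM prompts, the keyword-overlap lookup substitutes for a Pinecone embedding query, and the document URLs are made up.

```python
# Conceptual sketch of the two-prompt RAG flow: keywords -> retrieval -> draft.
# A real build would call an LLM for prompts 1 and 2 and Pinecone for retrieval.

def extract_keywords(ticket_text):
    """Prompt 1 stand-in: pull candidate keywords from the ticket."""
    stopwords = {"the", "a", "an", "is", "my", "to", "and", "when", "i"}
    words = [w.strip(".,?!").lower() for w in ticket_text.split()]
    return [w for w in words if w and w not in stopwords]

def query_vector_store(keywords, documents, top_k=1):
    """Vector-query stand-in: rank docs by keyword overlap rather than
    embedding similarity, keeping only docs with at least one match."""
    scored = []
    for doc in documents:
        score = sum(1 for kw in keywords if kw in doc["text"].lower())
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def draft_response(ticket_text, context_docs):
    """Prompt 2 stand-in: assemble a draft grounded in retrieved docs,
    always citing the public documentation link as a hallucination guardrail."""
    if not context_docs:
        return "No relevant documentation found; escalate to an agent."
    context = "\n".join(d["text"] for d in context_docs)
    links = ", ".join(d["url"] for d in context_docs)
    return (f"Draft (agent review required):\n{context}\n"
            f"Relevant documentation: {links}")

# Hypothetical documentation snippets standing in for the vector database.
docs = [
    {"text": "To configure the Salesforce connector, set the OAuth credentials first.",
     "url": "https://docs.example.com/salesforce-connector"},
    {"text": "Pipeline schedules are managed under Project Settings.",
     "url": "https://docs.example.com/schedules"},
]

ticket = "My Salesforce connector fails when I set credentials."
matches = query_vector_store(extract_keywords(ticket), docs)
print(draft_response(ticket, matches))
```

The key design point the sketch mirrors is that the draft is composed only from retrieved context and always carries a documentation link, so the reviewing agent can verify the source before sending anything to a customer.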

After the process above was completed, every AI-generated draft was manually reviewed by a support agent. This review ensured accuracy, informed edits to the knowledge base and documentation, and freed agents to focus on more advanced, high-priority cases and on customer experience. This human-in-the-loop approach is critical to any AI use case implementation.

Results and Impact

The new AI capabilities quickly yielded significant benefits:

  • Within four weeks, 50% of the AI-generated answers were useful to support agents, a figure that has since risen to 66% following documentation improvements.
  • The system now processes 150 tickets per week, allowing support agents to focus on higher-priority issues and enhancing their overall efficiency.

Lessons Learned and Best Practices

The project offered valuable insights applicable to various AI business use cases:

Comprehensive Documentation: Incorporate all relevant documentation sources into the vector database and RAG process.

Clear Model Boundaries: Define what the model can and cannot access to avoid blind spots.

Mixed Data Formats: Use both structured and unstructured data for more effective prompts.

Model Experimentation: Continuously test and compare different language models to find the best fit.

Chunking Documentation: Split documents into appropriately sized chunks for efficient vector indexing and retrieval.

Audit Trails: Maintain a history of how responses were generated.

Agent Feedback: Implement a rating system for agents to evaluate model accuracy.
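To make the chunking lesson above concrete, here is a minimal sketch of overlapping character-window chunking, a common approach for preparing documents for a vector index. The chunk size and overlap values are illustrative defaults, not the settings Matillion used.

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split a document into overlapping character windows so that a
    sentence straddling a chunk boundary still appears whole in one chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping the overlap
    return chunks

# A 1200-character document yields three chunks with 100 characters of overlap.
doc = "x" * 1200
print(len(chunk_text(doc)))
```

Overlap trades a little index size for retrieval quality: without it, an answer split across two chunks may never be retrieved intact.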

The team plans to further enhance the AI-powered support system by incorporating log files and screenshots into the response generation process. 

Takeaways 

Building an AI-powered customer support system within three weeks showcases the practical application of AI in addressing real-world business challenges. By leveraging AI to generate first-draft responses, Matillion’s support team was able to significantly improve efficiency and customer satisfaction. 

Interested in reading about additional AI use cases? Read Suite Success: Transform Multilingual Reviews into Actionable Insights with Snowflake Cortex and Matillion. Or, if you’re ready to see all of Matillion’s AI features, book a demo.

Kelsey Bernius

Senior Technical Content Manager