Corrected Transcript

Vik Iyer: Hi. I'm Vik Iyer, CIO Marketing Services. Data productivity is no longer a nice-to-have. It's the difference between data teams that scale AI initiatives and those stuck in backlogs. Organizations leveraging agentic automation are seeing data work that once took weeks or months completed in hours or days, directly accelerating their AI roadmap and turning data operations from a cost center into a strategic advantage. In this CIO webcast with Matillion, you'll hear how leading enterprises are achieving measurable ROI through three strategic shifts: scaling capacity without adding headcount, consolidating fragmented tool stacks, and accelerating AI delivery. But before we start, let's find out more about our speakers: Dr. Malte Polley, Dan Adams, and Julian Wiffen. Malte, it's great to have you on this webcast. It's gonna be a fascinating discussion, but just tell us a little bit about yourself. Dr. Malte Polley: Yeah. Thanks for having me, and thanks for your introduction. I've been working with AWS for eight years and with Snowflake for five, mainly focusing on delivering data products, within the field of DevOps in the end. And I'm doing this now for a German FSI insurance broker [MRH Trowe]. So we are focusing on delivering valuable insights around our businesses here. Vik Iyer: Thank you. And, Dan, welcome along, and same question to you. Dan Adams: Hi, Vik. Yeah. Hi, everyone. Thanks for having me. So my name is Dan Adams. I'm the Global Analytics Manager here at Edmund Optics. We're an industrial optical components manufacturer. I lead a small analytics team primarily focused on delivering reliable business intelligence to the wider org, but I also explore opportunities to leverage AI and ML as well. Vik Iyer: And finally, Julian, great to be with you, and tell us a little bit about yourself. Julian Wiffen: Yeah. Thanks. I'm Matillion's Chief of AI & Data.
My team's remit is primarily to look at how we can bring AI into our product to help people working with data—both to assist the system's users and to help them transform that data. Vik Iyer: Thank you, Julian. And I guess it's such a big topic, but the most obvious question really to start off with: What challenges are data teams facing, and where is AI playing a part? Dan, perhaps you could kick us off. What are you seeing right now? Dan Adams: I think the appetite for data within the enterprise has massively increased, right, in the last few years. So data teams are stretched, being asked to pull in more data, do more with less, get data ready faster, and just more of everything. Right? And that obviously hits us from a capacity point of view. On the other hand, you know, we're being asked to support new AI applications as well and use lots of unstructured data sources, which are messy and, you know, a lot harder to work with than the nice clean SQL tables we're perhaps more used to. So, yeah, that is a real challenge for us. Vik Iyer: Thank you, Dan. Malte, I would love to find out a little bit more about what you're seeing in your part of the industry. Dr. Malte Polley: Yeah. Basically, data quality is and will remain an issue. And even with AI, you will not get rid of it. Actually, it's getting more critical: with a human in the loop, someone can still make sense of the data or spot a questionable decision. In the world of autonomous agents, you really need to have best-in-class data. So this is still something which we all need to work on. Getting the clean SQL table Dan was talking about is still the first job—the "job zero"—in the end. I think what is really tricky is the expectation around AI across every IT function. It's this mystical thinking that with AI, everything is getting easier, faster, and better. But it's still the old law: If you have bad input, you will get bad output, no matter whether you use AI or not.
Julian Wiffen: Yeah. That's—I mean, what you gentlemen touched on there reminds me of an old saying from a friend of mine who works in medical stats: that [there is only] one word to describe a dataset with no quality issues, and that's "fake." We're never gonna get away from the data quality stuff, and it's never gonna be magically solved. But it does resonate with what we're hearing—that every data analytics team and data engineering team in the world generally has a backlog that overwhelms them, has more demand for their products than they can fulfill. And as Dan says, the generative AI boom has just increased this, because more and more data is being created and there's more and more demand for data to enable GenAI projects that are often really big, high priorities for the executive leadership. And they need different types of data that we're not necessarily as used to working with. You know, we've all been working with tables and SQL for decades, but working with audio and video clips is probably quite new for most of us in the industry. And that's where Matillion comes in, in a couple of ways. We've started to build pieces to help you wrangle that unstructured data and work with it. And primarily, we've built Maia, a virtual data engineer that helps and empowers the user—it makes the user a lot more productive, lets them scale, and gives them a fighting chance to get on top of that demand and that backlog, and not be the villain who gets the blame if the AI project doesn't run as fast as the CEO wants. And, hopefully, my colleagues here can talk a little bit about the impact it's had on their teams. Vik Iyer: Yes. Absolutely, Julian. And that's what we're gonna do now. We're gonna zero in on Maia and how data productivity is being impacted. Dan, I understand you achieved tremendous productivity gains by adopting Maia. So can you tell us a little bit more about your story? Dan Adams: Yeah. Absolutely.
So I think it's important context to note that I have one full-time engineer in my team. So my team is very small. We're four people, with one full-time data engineer. Okay? And since adopting Maia and starting to use it for our pipeline development a few months ago, we've probably seen a 5 to 10x increase in productivity and a massive increase in speed of development. So the time it takes us to get a pipeline from concept to production is just, you know, vanishing before our eyes, which is wonderful and really helps my team get through a lot more of the backlog, a lot more of the requests that we're getting, and cope with that long list Julian was referencing that we get from the wider organization. I can think of one specific example to illustrate this. We have to connect to lots of external APIs for a lot of our marketing data sources, and one particular API for one of our core marketing tools was proving really troublesome. We'd been working on this API and trying to find a solution for probably over a year. We'd engaged consultants on a couple of occasions. We'd outsourced the problem to a specialist provider who was doing data-as-a-service for us, and we still weren't satisfied with what we were getting. We decided to give it a go with Maia, and we managed to get a working solution in less than a day, with my experienced engineer working alongside Maia. So that gives you some idea of the kind of productivity gains we're seeing when you put a tool like this in the hands of an experienced engineer. Julian Wiffen: To add to that, I'd chime in and say that there's been a lot of focus, I think, in the world generally on generative AI and LLMs' ability to write, but a lot of the power here comes from the ability to read. So in the case of a complex API spec, it's the ability of the model to go through and read the full length of all that documentation and pull out what a human might miss on the first pass.
Dan Adams: Exactly. And that's what—yeah. That's what we did. We fed it the API documentation. And instead of my human engineer having to spend hours and hours reading through all of that information and trying to figure out which REST API calls he was gonna use, how to format them, and what to put in them, Maia figured all of that out for us. It wasn't a fully working pipeline, but it got us 90% of the way there. And then, within a few hours of debugging and fixing, we had something working, which is just incredible. Julian Wiffen: One of the most satisfying moments I heard earlier this year was Customer Success saying they had instructions to close a case with the reply: "We didn't fully understand your instructions, but Maia did, so it got the problem solved, and you can now close the case." And that was very pleasing to hear. Dr. Malte Polley: And maybe some international folks are also watching this webcast. We are talking in English, right? But Maia is able to respond in your language. So the barrier between the user and the technology platform is also lowered by the generative AI service, just by adding one instruction—"Respond in German," or whatever your language is—to your Maia context file. Right? So this is really, really striking. Julian Wiffen: Yeah. It's pleasing to hear. We've got users working in Japanese and the like, and—well, I'd like to take credit for a lot of that, but it really just comes with the generative AI models. We didn't even plan for that. We just tried it and discovered that, yes, it was very, very polyglot. Vik Iyer: And, Dan, I just wanted to ask—I mean, there is always talk about, you know, how to achieve AI success. For example, which large language model should data leaders use? Yet one of the key challenges you mentioned—or I think I referred to—was around unstructured data.
Now, can you tell us a little bit more about what you did with regards to that? Dan Adams: Yeah. So we did actually release a generative AI application for internal use into production last year. My experience of going through that process is that the models are incredible things. They're very, very powerful, but they're increasingly becoming commoditized. Like, during the development of that project, we actually swapped out the model two or three times, right, to get the one that was working best for us at that time. The real challenge was actually creating the knowledge base from all this unstructured data that we had. So we had maybe three to five thousand PDFs, image files, chat transcripts, comments, and things like that which had been posted on intranet walls. And trying to get all that data into a vector store for the LLM to reference when it was trying to create an answer—that was the challenge. That was the problem. As for the model—I would urge people not to worry so much about which model they're using, whether it should be Llama or whether it should be ChatGPT; it's really about getting your data AI-ready. Being successful with an AI application is a data engineering problem, rather than a matter of having the right model. That's certainly what I found. Julian Wiffen: Yeah. We did some internal tests for a bit of fun where we made the model sit our certification exam—it's a multiple-choice exam, really easy to mark, which is why we picked it. And the difference in scores didn't come from the different brands of model. The difference came from whether you used a vector store or not, and whether you used a knowledge graph on top of that vector store. That was what was making the real impact: wrangling the data, feeding it to the right model, and then having a good way of scoring and judging which setup was performing better.
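The knowledge-base step Dan and Julian describe—getting document chunks into a vector store so a model can retrieve relevant context—can be sketched in miniature. This is an illustrative toy, not either company's implementation: the bag-of-words "embedding" below is a hypothetical stand-in for a real embedding model, and the store is just an in-memory list.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words token counts.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory vector store: index text chunks, retrieve by similarity."""
    def __init__(self):
        self.chunks = []  # list of (original text, vector) pairs

    def add(self, text):
        self.chunks.append((text, embed(text)))

    def retrieve(self, query, k=1):
        # Rank every stored chunk against the query and return the top k texts.
        qv = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(qv, c[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

# Index a couple of "documents" and pull back the most relevant one for a question;
# in a real pipeline the retrieved text would be fed to the LLM as context.
store = VectorStore()
store.add("Lens coating reduces reflection on optical components.")
store.add("Quarterly revenue figures are posted on the intranet.")
context = store.retrieve("how does the coating affect reflection?")
```

The engineering effort Dan describes lives around this core: parsing thousands of PDFs and transcripts into clean chunks is the hard part, while the retrieval mechanics stay this simple.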
Certainly, getting the data into the right shape was the key to success or failure there—literally, passing or failing. Dr. Malte Polley: Nevertheless, in the end, from a software perspective, data engineering needs to introduce some kind of unit testing. Right? So if you want to move from one version to another, or step from one model provider to another, unit testing against a ground truth you've already labeled—where you know this is your benchmark—is key to keeping your systems up and running. Yeah. Vik Iyer: Thank you, Malte. And Malte, I wanna bring your story into this now, because your company has some really complex challenges given its acquisition strategy. So how has Maia helped you? Dr. Malte Polley: Yeah. Basically, we have up to twenty acquisitions per year. And all the companies we buy, or have bought, have their own CRM system. To drive fast business decisions, we decided to implement Matillion with Maia to bring a very fast staging area into Snowflake. The standard process is post-merger integration, and if you are familiar with this process, you know it takes some time to integrate a complete company into yours. But if you focus on the data itself, you can make real progress just by introducing the staging area, creating additional benefits for your business—moving contracts from A to B, etcetera. And here Maia is able to help us quickly understand the table structures, the schemas. We can easily talk to business owners. Maia maps terms that are similar but maybe not exactly the same. And with that, you can really accelerate this first step of your PMI process. There we see really huge benefits, as we integrate databases much faster than in the past. Julian Wiffen: Yeah.
That's always a huge challenge for that type of operation, and it's where the LLM helps by handling ambiguity a little bit—being able to take the free-text notes you've had from a handover and translate them into something useful that Maia can feed off, rather than having to get everything into something very structured. Dr. Malte Polley: Yeah. And, obviously, we are in a deep conversation with Matillion and Julian's team. Right? So we think, for example, that adding text and descriptions to your tables, your columns, etcetera—especially in combination with Snowflake—enables Maia to understand the business model behind the data product. And with that, you can get even further in your natural-language interaction with an IT system. Julian Wiffen: Yeah. And that's the focus of a lot of our upcoming roadmap, actually: better capturing that business knowledge and semantic knowledge—how you map from a business term to the underlying tables and all the rules that sit in the background. We provide some ability to do that now in terms of sharing context files, but there'll be more and more coming. You know, how do you teach Maia that your company has a different definition for when somebody becomes a customer or stops being a customer? Or how that might have varied a year ago—those sorts of scenarios. Vik Iyer: And, Malte, I just wonder, are you also seeing almost a democratization effect, you know, where business analysts can become part of the data team, and perhaps work in their native language as well? I'd just like to get your thoughts on that one. Dr. Malte Polley: Yeah. Actually, this is one of the things we're thinking about. Right? So we are creating, at the moment, onboarding paths to a diverse range of IT tools. Like, we have an onboarding path to Power BI, where we want business analysts to interact via this tool with data already residing in Snowflake.
We also go further: we don't want to give direct access to Snowflake. But in the end, with a safe space and a clear rule set, supported by Snowflake and by Matillion, analysts can create their own data products with their own thinking, in the form of ad-hoc analysis. And we are also a very small team. Right? We have more than 2,500 employees, and we are three data engineers and one team lead. So obviously, we cannot create every data pipeline in the company or answer every insight question. And with this native language support, we can put the tools into the hands of the business drivers and the decision-makers. We can coach. We can mentor. We can create communities. But in the end, the work is at the fingertips of the business analysts. And this is something we are really keen to explore in the next year. Julian Wiffen: Yeah. There's another angle to that native language capability, which is that the models underneath Maia can obviously understand interactions in German or similar, but they can also understand different coding languages. So we've seen from a bunch of customers that they wanna translate from legacy systems. They've got a long set of SQL scripts, or a bunch of Informatica or Alteryx jobs, or similar. It's easy for a user to chuck these in and say, "Can you replicate this, please?" and bring them out into graphical structures that are easy to understand. So there's a whole lot of ability to capture knowledge in the same way with the business analysts—you're getting closer to that subject matter expertise, the domain knowledge they have that a data engineer maybe doesn't. Dan Adams: Yeah. And... sorry. In a small way, I'd also just like to chime in that we don't have quite the same ambitions as Malte does with rolling this out to analysts all over the company, but just within my team. Right?
Like, it allows me to turn my BI analysts, right, my BI developers, into part-time data engineers when the need arises. With a tool like this, with such a low barrier to entry, they don't need to learn new coding languages or anything like that. They can simply interact in natural language and tell it what to do. They can have it describe an existing pipeline to explain what's going on. And just that extra flexibility, when you're running such a small team, is really, really useful as well. Julian Wiffen: That descriptive ability was actually our first step into this space, because our first experiments were like: couldn't an LLM just describe what a pipeline's doing? We didn't expect it to work, because our pipelines are written in a bespoke YAML format that's not public, but we were very pleasantly surprised by how well it did, and that's what led us on the path to Copilot and then Maia. Dr. Malte Polley: In the end, we are talking today about Maia, and we can already look back. I don't know, Julian, whether you know the exact date of Maia's first release. But we have been working with the Copilot within Matillion now for more than a year, and the quality has accelerated and the capabilities have expanded day by day. So the future's bright, I think, for this agentic coding support. Yeah. Vik Iyer: Yes. Absolutely. And I want to bring you all into this, because we also know that documentation is really important for data teams as workflows grow more complex. So how can businesses tackle that particular challenge? Maybe, Dan, I'll bring you in on this one. Dan Adams: Yeah. Sure. So I think I speak for most data people when I say that documentation is—we know it's so important, but it usually falls off the bottom of the list. Right? Because something has stopped working, you know, or an urgent request comes in. And so it's all very much been an "ideal world" sort of thing up until now.
With GenAI, with a product like Maia that has a deep understanding of the tool you're working with, it's possible to generate quite effective documentation in minutes, for large numbers of pipelines. Or, like I just alluded to, you can simply ask Maia, at the time you need it, to create the documentation or answer a question you might have about a specific part of the pipeline: "What is this component doing? Why is this here?" for example. Which is hugely powerful and means that the solution is far more robust. You know, it's not locked away inside someone's head. If that person leaves the company, things don't break, right, which I think we've all seen happen in the past as data people. And no more mysterious scripts propping up the entire system, written by some mystical engineer twenty-five years ago. Right? I think we can move towards this open ecosystem of data products and data artifacts that everyone understands and everyone can manage, and you can update that documentation really regularly as well—it's so easy to generate. So documentation really is a fantastic secondary use case for the product, I would say. Julian Wiffen: Yeah. It blurs into the day-to-day work too: it auto-generates commit messages, so that's kept on top of. As you say, you can do it after the fact—you can go back to an existing pipeline and document it. It's a good sanity check too: if the documentation comes out matching what you actually intended to build, it becomes a proof that, yes, the pipeline does what you meant. And it's even reached the point where we're starting to explore using Maia in a tutorial mode with the documentation to, like, build exercises or build introductions for a new user. That's, again, kinda coming soon, but it's just about showing people.
And, of course, it's always got the latest version of the docs. Dr. Malte Polley: That's what I wanted to underline. Right? One part of the documentation question is "What did I build, and what do I want to document?" The other is "What do I need to understand from the documentation of an IT platform to create an artifact like a pipeline?" And this is somewhere Maia has saved us hours. Right? Even for Matillion, it's critical to have up-to-date documentation. But the question is: do I understand the message Matillion is trying to send via that documentation? And Maia actually helps us there as a 24/7 support engineer, always up to date with the latest component descriptions and property settings. Julian Wiffen: It's actually something we learned quite early on in the GenAI days: documentation well written for humans is also good for large language models, so it's encouraging better discipline in keeping the docs up to date. Pre-Maia—separate from Maia—we had an IT support case-answering bot pipeline, and we had one scenario where the model got it wrong. The answer came back that you couldn't do the thing the person was asking about, and four of us on a call examined the docs that the model had read. Two people said, "The model's right—you can't do this thing," and two people said, "No, you can." What it showed us is that we had an ambiguous support doc about this particular feature and needed to tighten it up. And I think that's probably a big lesson for anybody working in GenAI: as Dan said, the quality of the content you feed in is where you get the power.
Your team spends less time actually doing the work, because the LLM—or the model system—is doing it for you, but you need to spend that freed-up time making sure the quality of the instructions and the quality of the data going in is as good as you can make it. Ideally, you get a virtuous cycle: you improve the quality, the model answers more for you, and you keep going. Dr. Malte Polley: And before everyone starts thinking that Maia now takes over everything and you have an uncontrolled system: in the end, Matillion has introduced a lot of software-engineering features. Right? There's staging, there are environments, you have a Git repository. And one critical thing, at least one we learned, is never forget to commit before handing Maia the next run. Because this is the easiest way to revert all of the changes and stay in line with your own mental model, which might sometimes differ from the one Maia has. Vik Iyer: Yeah. Absolutely. And, of course, we have two people in this webcast who have seen quantifiable efficiencies and money savings from generative AI, which is fantastic. So what advice would you give to our audience looking to deploy? Any steps you think they should take? Perhaps, Malte, you could start. Dr. Malte Polley: I think introducing agentic coding in general does not remove the need for a concrete software development understanding. Right? So this is step one. You can introduce non-software-development people to an agentic system. But as Dan already said, an experienced engineer, no matter what technology he or she has used in the past, will get even more benefit from agentic support. Right? And with that, I can just say: try it out. Figure out what the market offers. Obviously, there are strengths of Maia related to SQL; obviously, other agentic systems are better in different coding languages. Right?
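Malte's tip—commit before every agent run so you can cleanly revert—can be sketched as a small wrapper around Git. The helper names here are illustrative, not part of Matillion's tooling; the sketch assumes a local working copy and the `git` CLI on the path.

```python
import subprocess

def git(repo, *args):
    # Illustrative helper, not part of Matillion's tooling: run a git command
    # in the given working copy and fail loudly on errors.
    return subprocess.run(["git", "-C", repo, *args],
                          check=True, capture_output=True, text=True).stdout

def snapshot(repo, message="checkpoint before agent run"):
    """Commit all pending changes so the agent's edits start from a known-good point."""
    git(repo, "add", "-A")
    if git(repo, "status", "--porcelain"):  # only commit if something changed
        git(repo, "commit", "-m", message)

def revert_agent_changes(repo):
    """Throw away everything the agent did since the last snapshot."""
    git(repo, "reset", "--hard", "HEAD")
    git(repo, "clean", "-fd")  # also drop untracked files the agent created
```

Calling `snapshot()` before each agent run and `revert_agent_changes()` when the result diverges from your mental model gives you exactly the safety net Malte describes.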
Play them together, and you will accelerate your work a lot. Vik Iyer: And, Dan, I'd love to get your perspective as well. Dan Adams: Yeah. I would just underline: you need to put these tools in the hands of an experienced engineer who knows what they're doing to really see that massive productivity gain. In fact, I would go as far as to say that putting them in the hands of people who don't know what they're doing is probably quite dangerous initially, because these tools get you really close to a finished solution, but usually not quite all the way there. So you do still need that human in the loop, and I think it's important to recognize that. People shouldn't be sat there thinking, "Oh, I don't need to hire any data engineers now. I can just get my analysts to run a team of agents." Right? That's probably not gonna work out too well. And the other thing, which we alluded to earlier in the discussion: it always comes back to the data. Now, I would say that as a data person—of course I would; I am biased—but I do think that you won't get much value out of your AI tools and generative AI applications unless you have solid data behind you. It's all about context. Right? I'm probably stealing that quote from somebody else, but it's all about context. AI without context is just very, very generic. To understand your business, it needs detailed information that is specific to your business, from your business. So really, really focus on that, rather than on which model you're gonna use. And, yeah, give it a try. You've got to be quick—the space is moving so, so fast. To echo what Malte said, you need to just be trying this stuff all the time. So be brave with it. Give it a try. See how it works, then stay on top of the latest developments, and I'm sure you'll find some value. Dr.
Malte Polley: And if you really have developers on your team and you think your company is not using agentic coding at the moment, you will find out that they are already doing it. Right? We have a term in Germany—"shadow IT." I don't know whether that's the correct translation into English. But, yeah: make it convenient inside your company. Otherwise, the shadow version will take over. Dan Adams: Yeah. Shadow AI is definitely taking over from shadow IT, 100%—I can agree with that. So using well-controlled, governed systems that sit within a reliable platform like Matillion is a really good way to go if you're just wanting to get started with this. Vik Iyer: Yep. It's a very good message: your employees will go off on their own agenda if you don't give them the tools that they need, I guess. Now, before we finish, let's see Maia in action. (Video Segment) Joe Herbert: Hi there. My name is Joe, Principal Solution Architect here at Matillion for Maia, our agentic data team. I'm gonna kick off this short demonstration by putting Maia to work. I've given it a prompt to read through my data landscape file, which tells it a little bit about some of the data that I have across my source systems. I want it to ingest a table from S3, which has some JSON data in it. I've also given it another context file, which describes the connection strings to an on-premises SQL Server. I wanna load the two remaining tables from that SQL Server and then join all of this data together in a star schema model. So I'm asking Maia to build the ingestion engine as well as the transformation layer, and we're doing all of this in a push-down fashion, executing inside of my AWS Elastic Container Service, moving that data from source into target—which in today's example is Snowflake, though we support Redshift and Databricks as well. And I've asked it to go one step further than just creating this end-to-end ELT.
I'd also like it to use variables where appropriate, test and document its work, and push all of the changes from the local branch where I'm building here to my remote branch for testing in different environments. And I've asked it to go one step further still and build a prediction model, with some results and recommended next steps based on what it finds in this data. So let's see how Maia's getting on. First of all, I've toggled it to use "plan mode," which means it's gonna simply read through that instruction and gather all those pieces of information up, just like any good human data engineer would, so that it has a plan of attack for how it's gonna go about building the pipelines to ingest and move that data across. You can see it's found the relevant file path for that sales transactions data inside of my S3 bucket, and it's also got the relevant SQL Server table information here as well. Now it's generated a human-readable plan. At this point, I could read through it line by line and make edits, but I'm just gonna go ahead and click "accept plan" here at the bottom and have Maia start building out that series of pipelines. Now, whilst it starts building, let's read through the plan it generated. Here you can see a summary of its task. We've got the key performance indicator selection: it's identified some relevant sales and revenue fields from across that data landscape. It's identified the relevant architecture flows—the movement of data from those source systems into the target. And you'll see that it's added this _staging suffix. That's because in my pipeline building standards file, I've asked it to add _stage as a way of conforming to the naming conventions that I have across my target cloud data warehouse. And you can see this pipeline standards file here.
This allows engineers to bring in their naming conventions, building standards, use of variables, and preferences for when to use code versus the out-of-the-box components. And you can actually onboard Maia into your existing data engineering team so that it behaves just like your human data engineers would. The benefit is that you can put Maia to work across multiple different tasks all at the same time, and you become more of a pipeline reviewer—ensuring and checking that data validity and the movement of data from source to target, or inside of that target environment, is taking place in the way you would expect. So Maia helps you level up from the day-to-day tasks of building these pipelines. Whilst Maia's building these pipelines out, it's using the components that we have across the Data Productivity Cloud, such as the S3 load. This is able to ingest CSV, JSON, Parquet, or XML file types, and it's able to move between different environments—dev, test, and prod—when the pipeline itself gets moved between those environments. And it's creating the table using the VARIANT data type in Snowflake to land that semi-structured data. All of this is human-readable. I can come in here on the right-hand side and make changes as a human engineer, and I can also just overwrite things if I'd like. Now Maia's going through, and as it builds this out, it's also gonna test its work as it goes. So it's gonna ensure that the data is loading correctly and bring those results back here on the left-hand side. At any point in time, I can come to the drop-down and see where Maia's up to with this particular task. And you can see it's now adding those SQL Server load components onto the canvas, loading those two data tables in.
Now I have asked it to use variables as well, so I'm hoping that it comes back and adds a nice little variable with an iterator to loop through all of those tables. And if it doesn't, I can tell it to do so after the fact.

Maia's really good for updating and maintaining existing pipelines, or simply understanding and exploring code that you have across your estate. You can connect to your external Git repository, just as we have done in this project, and bring in not just the orchestration and transformation pipelines you may already have, but also any of the code types you have across your organization: YAML, Python, SQL, as well as Alteryx files, Talend files, and existing code bases. Once connected, you can view and visualize that code here inside of the designer, and have Maia read and understand the business intent of those pipelines as well.

So Maia's finished building its ingestion. It's loaded that data from our S3 and our SQL Server into Snowflake, and now it's gonna build a star schema model to map and join all of these different tables together. And as you can see, whilst Maia's carrying on with this transformation pipeline, it's gonna generate SQL that pushes down inside of our Snowflake. I can come back and document this pipeline for other human users to come in and inspect and understand the logic that's been built; Maia's able to generate human-readable documentation on the fly. I can add that into the canvas, and let's say I wanna commit this change. I can generate a commit message with Maia; it's gonna summarize the work that we've done so far, and I can commit and then push all of these changes out to that remote repository. Now Maia's finished building the load, and it's asking here to sample and test the transformation pipeline.
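The variable-plus-iterator pattern the narrator hopes for earlier in this step can be sketched as follows: a variable holding the list of source tables, looped to produce one load per table instead of hand-building each. The table names and statement shape are hypothetical, not the demo's actual pipeline.

```python
# Minimal sketch of the iterator pattern: a pipeline variable listing
# source tables, looped to generate one load statement per table.
# Table names and the SQL shape are illustrative assumptions.

tables = ["customers", "orders"]  # would come from a pipeline variable

def load_statements(tables: list[str]) -> list[str]:
    """One load statement per source table, targeting a _STAGE table."""
    return [
        f"INSERT INTO {t.upper()}_STAGE SELECT * FROM sqlserver.{t};"
        for t in tables
    ]

for stmt in load_statements(tables):
    print(stmt)
```

The win is maintenance: adding a new source table means appending one name to the variable rather than adding another component to the canvas by hand.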
So you can see it's parsing that JSON data out, and it's checking its work as it goes. I can always inspect the SQL that Maia's generating here inside of the canvas by validating the code and then previewing the generated SQL in the calculations window, or perhaps en masse here as well. After it's done that, I can also sample the data to get a sense of what it looks like at this point, as well as view the metadata of those columns inside of our Snowflake warehouse. Thanks for watching. Maia's gonna carry on building this star schema, and I'm gonna go and grab a cup of coffee. (End of Video)

Vik Iyer: Well, that was pretty impressive seeing Maia there. It really gives a clear picture of where data engineering is headed. And that does bring us to the end of this CIO webcast with Matillion on how data teams can be transformed. I'd like to thank Malte, Dan, and Julian for sharing their expert insight, as well as everyone who watched. If you would like more information on Maia, be sure to visit matillion.com/maia. Goodbye.
Agentic Data Engineering: How Edmund Optics & MRH Trowe Automated 80% of Data Work
In this CIO webcast, you’ll hear how Maia, the agentic data team, is helping leaders from Edmund Optics and MRH Trowe close the gap between AI ambition and data engineering capacity. You’ll learn:
- How enterprise teams are experiencing 5x–10x productivity gains
- The impact of automating 80% of repetitive pipeline work, including builds, optimizations, and documentation
- How teams prepare AI-ready data faster by handling schema drift, complex APIs, and metadata mapping
- How business analysts and engineers alike can now build governed, production-grade pipelines with natural language guidance
The webinar wraps up with a demo of Maia in action.