The Cleanest CRM Data in the Business

feat. Jon Krangel

Jon Krangel of Retool shares how they keep the cleanest Salesforce data in the business and some of the valuable things they do with it. He discusses the importance of hiring people with technical skills in RevOps, then provides a live demo of how his team works in their dbt and Retool instances to keep data clean. He also discusses the philosophy behind verifying data's cleanliness after it has been created rather than enforcing rules at the time it is created. Finally, he shares the Goals object his team created in Salesforce to show the benefit of all his work on producing clean data.

Transcript

0:27 Derek Steer: Jon

0:29 Jon Krangel: Derek,

0:30 DS: Thanks for joining me.

0:33 JK: Yeah, good to see you as well. Thanks for having me on.

0:36 DS: Why don't you start by introducing yourself. What do you do now? How'd you get there? And why did you choose to do this?

0:46 JK: Yeah, sure. So right now I am the Revenue Operations lead at Retool. So I lead a team of folks who cover all the go-to-market functions across strategy, operations, systems, and other things like variable compensation and deal desk. It's the greatest job in the world. I love it. We have an amazing team, and I'm fortunate to be able to work on incredible infrastructure with them every single day and help the business make a ton of important decisions.

I got here through a bunch of interesting opportunities, one of which was when we worked together at Mode, where I think I was the Head of Operations or something, which was basically a bunch of stuff including sales ops and strategy and a bunch of other things.

And really it was a combination of experiences, being in the front of the house selling and doing customer-facing work as well as the back of the house after I got into tech, that culminated in the experience I have right now, which is this wide-angle perspective across the go-to-market organization.


1:56 DS: What I remember really well from when we worked together was your technical bent – your ability to work with systems more effectively than anyone else I've seen in this particular role. I always find it interesting when we talk about this to remember that you at one point were a seller because I think of you as so technically advanced, but maybe talk about how you got some of those technical skills and how you became so systems oriented.

2:28 JK: Yeah, totally. So from a very young age, I've always been kind of an engineer at heart. I take things apart. I remember I got in trouble once when I was like eight because I took apart our television, and my dad was not happy with that. But I wanted to see how it worked, and that was just part of my psyche, and that continues to this day.

When I got into business eventually, I always found myself applying technical skills, whether it was building really large and unwieldy VBA-driven spreadsheets to help us fix some accounting or finance issue. And so that became a thing that I started to lean into over time. And what I found was that bringing technical skills, even if they're not the most technical – I would never call myself a software engineer – to a non-technical function, or vice versa, is a superpower, because you wind up being able to stand out among your peers.

So as an example, I have always been very SQL literate. It is one of the most important technical skills in my toolkit and for my team as well. And when I was a seller, that was something I really leaned on. So there were zero sales reps that I knew who could write SQL or could do any sort of more advanced analytics. But I could, so when I became one, that was what powered all of my sales pitches. So I had queries and Tableau dashboards and all kinds of things that I would bring to customers to show them how they were stacking up against the competition and things like that.

I certainly was not the strongest seller from a sales-fundamentals perspective. I probably didn't do great discovery. I probably didn't do all the things you're supposed to do as a seller, though I learned a little bit about that. But what I did bring to the table was a whole different perspective to sales, which was very data-driven: being able to troubleshoot their problems – and when I say they, I mean the customer's problems – without them having to call a support hotline or something like that.

4:29 DS: When you say you have always been very SQL literate, when does always start? How did you even get those skills in the first place and how would you recommend that someone go and get those skills today?

4:41 JK: Yeah, so I think I learned SQL for the first time right after college when I was working at Deloitte Consulting. My client was a major financial institution that everybody's heard of, and I was working deep inside the bowels of their Oracle general ledger and consolidations and reporting system – basically how balance sheets balance and come together. I taught myself basic SQL in order to actually interface with that database, because I needed to get ledger entries out and do some light analysis.

I don't remember exactly how I learned it; I probably Googled a little bit, maybe W3Schools or some equivalent. It's a shame – I don't think Mode SQL School existed back then. If it had, that would've probably been my first port of call. But it was definitely a really powerful weapon for figuring out how to speak to databases, get data out of them, and understand what was going on inside of them.

5:44 DS: Yeah. The thing about this that's interesting to me is you learned it out of necessity in a different job; a fundamentally different job from what you do today, but the carryover has been really direct and a powerful influence on the way that you do your job today.

5:58 JK: Very much so, and I would even say that the actual SQL syntax is 5% of it. It's actually learning how to think like a database; how tables come together. When I look at an application right now and I see the front end of it, I actually see "what are all the objects behind the scenes that are coming together to model all of this stuff," and it helps you understand how software comes together. And if you see how software comes together, it makes it easier for you to analyze the behavior of that software or understand how to dig into or query it. And then, of course, all of this is in service of answering questions about the business, because at the end of the day, that's why we look at data: to answer questions, give us insight, help us predict what's gonna happen in the future.

6:49 DS: I had this experience, too, working in data. At the first software company I worked at, through working with the data, I started to understand how software applications are structured – you know, the way that an application will read data and then display it to you on the screen, as you're saying. So now I do the same thing: when I see a web app or a website, I just deconstruct it mentally into all the components that would be displayed and how you would structure that, and there is a real utility to that when you are doing data analysis. One thing I've also noticed is that there's a real utility in thinking about a system like Salesforce, which is essentially just a big database.

There are a lot of UI components on top of it, and there are validation rules and all the other things that go into a Salesforce instance, but at its heart it is a data structure. And so I'm curious if there are any specific ways where you've seen that kind of knowledge or expertise play out in the Salesforce systems world, now that you are firmly in that RevOps seat.

7:56 JK: Definitely, I mean, Salesforce – really the answer is Salesforce most of the time – but the CRM is the beating heart of the go-to-market organization. It is the center, it's the brain, and understanding not only the kind of Salesforce-specific nuances (or whatever CRM you choose to use), but more importantly how those things interface with the actual business concepts that are important to you and to your business is incredibly important.

And mapping those two things together, it's kind of like when you were talking about web applications: somebody has to build the connective tissue between the thing that the users are interacting with – the front end and the logic that they care about – and the technical data that goes underneath all of it. So having that perspective, I think, is very important. Ultimately, at the end of the day, when you're in revenue operations you really are just a product and design and EPD team whose customer is the go-to-market. That is how I think about our role: we have to be totally obsessed with the customer, so our AEs, our SDRs, our sales leadership – even the board is a customer, because they eventually consume all of the outputs of what the go-to-market produces. So we have to think about what kind of data structures need to underlie the business in order to give us the answers or the visibility that we need, and then that becomes Salesforce configuration.

9:27 DS: So you're getting into a little bit of the job function of RevOps: who your customers are and what you need to provide to them. This is something that I'm very interested in because I also have found that people have different perspectives depending on their backgrounds.

And so I'm curious how you think of the role of Revenue Operations and really what it is you're trying to achieve. Two years from now at Retool, what will make you say "that was a great two years; I was successful"?

9:58 JK: Yeah. Yeah, absolutely. At the end of the day, my personal goal for the RevOps team is that we are the stewards: we are building the ultimate go-to-market machine for Retool. Now, every single one of those words has meaning to me. We wanna aspire to build something really incredible. We also want this thing to literally be a machine. And a machine is elegant, it is efficient, it functions well, it is something that produces a known output given certain inputs. So that's the way that we think about our job, and about the infrastructure and the processes and everything that go into it.

At the end of the day, what we're trying to do is accelerate the business – so help people work better, faster, stronger – and accelerate high-quality decision making. Because one thing that we've both felt the pain of is trying to make a decision in a business context where, let's be honest, best case you have 8% of the information you feel like you need in order to make a high-quality decision in any scenario. But most of the time we feel like we have 1 or 2% of the information that we could have, and that's the stuff that's really frustrating: when we feel like the business is not giving us clear and concise answers to things that we think it ought to. And so it's our job to make sure that that happens, and that it happens in a way that is not obnoxious to the team.

So I've been a part of organizations where the operations team and the systems team feel like they're totally running the show in a way that is not productive and they have systems that are built in ways that sellers hate, that managers don't really know how to interface with. We don't want to do that here at Retool. We want this thing to feel kind of like… our aspiration is like when the original iPhone came out, Steve Jobs was very keen on saying that it didn't come with a manual because you shouldn't need one. You should turn it on – there was the one button at the top – and it should just make sense.

12:04 DS: So I think it's an admirable goal: you want your business systems to just make sense when people use them; Salesforce and anything else by extension. But it's a really hard line to walk. You nailed it that you want to make things easy for sellers, but in my experience they want as little responsibility for this clean data as possible, because that responsibility is not fun. It feels like extra work. You know, the common refrain is "Hey, I'm supposed to be selling, not punching data into my CRM." So how do you find that line? Where do you draw it so that you get what you need to make those decisions as a business, but also don't slow down the sales process too much?

12:51 JK: I hear reps say all the time that "I don't wanna fill in this field or that field, it's just gonna slow me down." And those are often the sorts of things that managers want populated because they help us make decisions about how we drive the business. But the reality is, if you dig a little deeper behind this pushback, what I have found is there's often a lot more nuance to the story. So, as an example, reps don't generally hate – as much as other things – filling in context about deals, particularly when it is to their benefit. So they're writing down things like MEDDPICC as an example, or BANT, or things along those lines. That's stuff that's familiar to them. They understand it, it's about deal strategy, they actually see the inherent value there. It communicates to their managers. It actually helps them work with other people on the account team. So I think an average rep is actually going to see some value in that.

What is incredibly laborious for a rep, and just incredibly difficult for them to wrap their heads around in my experience, is the more esoteric stuff: populating deal-type information, or figuring out a booking type, or something along those lines. Looking it up on a Confluence page and then putting in "this is an expansion deal" versus a "this" versus a "that," or getting the term start and end dates just right – all of those sorts of things. But the reality is, those sorts of things follow business rules. And if they follow business rules, computers are often best suited – way better than humans – to figure out what they look like. So in our environment we ask reps to figure out exactly none of those things. We ask them very basic human-level questions.

So as an example, when you go to move an opportunity out of an unqualified state into a qualified state (this is our stage two), there's a ton of information we gotta figure out at that time in order to pipeline the deal. Is it expansion? Is it not expansion? How does it fit into our booking structure? All that. We don't ask reps any of that. What we do is we scan the Salesforce Account record, we look to see if this is an existing customer, and then we ask them plain-text questions: "Is this related to this specific business unit that's already active, or is this a new business unit?" Then they answer that question, and we take that answer and fill in all the blanks for them.

So that's the sort of thing that they can get behind because we're not asking them to understand the difference between an expansion and a new subsidiary and all these weird, esoteric things. We're just asking them a question that a person can answer based on the context they have in their head at that moment.
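The question-to-fields pattern Jon describes could be sketched roughly like this; all field names, answer values, and mapping rules below are hypothetical illustrations, not Retool's actual configuration:

```python
# Sketch: derive the "esoteric" booking fields from a plain-language rep
# answer plus what the system already knows about the account.
# Every field name and business rule here is a hypothetical illustration.

def derive_booking_fields(account_is_customer: bool, rep_answer: str) -> dict:
    """Map a simple human question ("existing business unit or new one?")
    to fields reps would otherwise have to look up on a wiki."""
    if not account_is_customer:
        # Brand-new logo: nothing to expand, so no question needed.
        return {"deal_type": "New Business", "booking_type": "New Logo"}
    if rep_answer == "existing_business_unit":
        return {"deal_type": "Expansion", "booking_type": "Upsell"}
    if rep_answer == "new_business_unit":
        # Existing customer, but a new subsidiary or business unit.
        return {"deal_type": "New Business", "booking_type": "New Business Unit"}
    raise ValueError(f"Unrecognized answer: {rep_answer!r}")

print(derive_booking_fields(True, "existing_business_unit"))
```

The point of the sketch is that the rep supplies only the human-level judgment (the `rep_answer` argument), and the automation resolves the classification deterministically.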

15:34 DS: So the key thing is "don't make 'em go look stuff up." Let them just answer questions in the normal stream of their work, and then it's the RevOps burden to produce all the rest of the information that's necessary to properly allocate that deal to the correct rep.

15:53 JK: Exactly. I have a strong point of view that anytime a manager or a leader says “hey, can you document how this works in Confluence or in the Wiki or whatever the case is…” if I actually have to do that for a day-to-day thing that a rep needs to do, that means I've probably failed at delivering a robust and high-quality user experience for that rep.

Now, we do have some documentation here at Retool, but usually it's the long tail of stuff: the nuances of your comp policy that we don't generally need to talk about unless we have to adjudicate some sort of conflict or a really long tail exception process or something like that. But to just work a deal, you shouldn't ever have to pull up a Confluence page.

16:40 DS: Yeah, I think most people would agree with you on that. Although, to your point, that's the way it gets set up in a lot of cases.

16:47 JK: It is, and the tools now are really, really powerful. So Salesforce has evolved to the point where you can do a bunch of really amazing things with Lightning record pages. So right now, in our Salesforce instance, as you advance opportunities through stages, the few bits of enablement you should need – you need a technical evaluation template, okay, great – a link to that shows up in the upper right hand corner of your deal at stage three when you're about to write the technical evaluation for that deal. And it goes away when you advance to stage four, once you've already submitted that. So what we do is we say, “okay, what are the things that are going to help the reps?” And then let's surface exactly those pieces of information to them at exactly the time that they need it, and then make it go away.

17:32 DS: You said something to me when we talked recently about earning the right to do an analysis. And I would love for you to tell that story or explain that philosophy because I think it's very related to exactly this: data capture and having the right types of data and the right level of data cleanliness.

17:54 JK: When I joined Retool back in 2021, we were just really getting started.

We had a handful of sellers on the sales team and we really didn't understand a whole lot about what our sales process would look like. So our bias was not to get in the way. We wanted to have the lowest-friction sales processes possible because, like I said, we didn't really know what it looked like yet.

So we did things that I don't think are long-term best practice but were pragmatic at the time. There was no gating or really any regulation between stage two, when we pipelined a deal, and stage six, when we were saying the deal was getting signed; very few validation rules or really anything there.

Reps could move forward in stages, backward in stages, do all kinds of stuff. And POCs would happen sometimes and often we'd forget to move it into the POC stage and all of that sort of normal thing. But that was a bit by design because we were really trying to figure out what was happening.

About a year and a half on, I was in our go-to-market leadership channel and one of our new joiners, a new leader, asked a very reasonable question: "how many deals die at the POC stage?" Stage four – what does our conversion rate to Closed Won from there look like? And a few of the other managers piled on and said "oh yeah, we should definitely look at that and diagnose what's going on – I think we're losing this and that. Jon, can you go and dig into this?"

And I said, no. Absolutely not. I explained that we had no governance, we had no gating. We had come a long way: we had a sales process we believed in, we understood what it looked like, but we had not actualized that in our processes – not only in our systems, but our managers weren't holding reps accountable to making sure stages were where they needed to be, etc. And my comment in that moment was "we have not earned the right to ask the business that question." And I said "if we want to earn the right to ask the business that question, I need from all of you to know exactly what the exit criteria are for all of these deals, to make sure that reps understand that deals move forward and not backwards,

to build all of these things into the system in a robust, easy-to-capture way, and for all of us to lock arms and say: this is how we sell. We can change that over time, but this is how we sell right now, and we're gonna actualize that." And so we did that three months later, and a few months after that – after I built the timestamps and all of these things – I said "okay, great. Now we've got three or four months' worth of data. I can answer that question."
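Once stage-entry timestamps like the ones Jon built exist, the POC question becomes a short aggregation. A minimal sketch with made-up data; the schema and stage names are assumptions, not Retool's actual model:

```python
# Sketch: with stage-entry timestamps captured, "how many deals die at the
# POC stage?" becomes a simple aggregation. Data and schema are illustrative.

deals = [
    # (opportunity_id, entered_stage4_poc_at, outcome)
    ("opp-1", "2023-01-10", "Closed Won"),
    ("opp-2", "2023-01-12", "Closed Lost"),
    ("opp-3", None,         "Closed Won"),   # never hit the POC stage
    ("opp-4", "2023-02-01", "Closed Lost"),
]

# Only deals that actually entered stage four count toward the denominator.
reached_poc = [d for d in deals if d[1] is not None]
won_from_poc = [d for d in reached_poc if d[2] == "Closed Won"]
conversion = len(won_from_poc) / len(reached_poc)
print(f"POC -> Closed Won conversion: {conversion:.0%}")  # 33%
```

Without reliable timestamps, the `entered_stage4_poc_at` column is full of gaps and lies, which is exactly why the question couldn't be answered earlier.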

20:31 DS: So just to put a finer point on it, the specific challenge was that before adding that structure, reps would leave a deal in stage two probably longer than it should have been. They might have met what should have been the exit criteria, though there were no official exit criteria at the time.

They would've sufficiently advanced the deal but not bothered to change it in the CRM, either because they just didn't really care to and there was nothing requiring them to, or because they weren't necessarily aware of those exit criteria. But it sounds like the key there was hardening the criteria for getting from one stage to the next: making sure that it's necessary for reps to advance the deal at the appropriate points through the stages in Salesforce in order for them to eventually get paid on the other end. And then ultimately that's what's gonna produce some clean data for you to go answer that question.

21:40 JK: Exactly. And that's the key: clean data isn't just a thing that we do. It's not a vanity exercise. Any clean data that you collect should be in service of helping the business answer questions or understand something about itself. Otherwise, what's the point? Then you definitely shouldn't have reps – or anybody, really – wasting time maintaining it.

22:04 DS: Let's talk about clean data for a second. What does clean data even mean to you? Like if I say the phrase “clean data,” is that important? Is it important to have clean data? Why is that good and why does it matter?

22:19 JK: Yeah, it's the lifeblood of the business at the end of the day. And I know that sounds like a cliché, but clean data really is. What clean data means to me is that it makes sense; it all hangs together. So if you think about a dataset that is a bunch of addresses, the cities listed in various rows need to all be parts of the states they are listed next to. Those states need to all actually be parts of the countries they are listed next to. And when you have New Jersey listed, as an example, it should always be capitalized in exactly the same way, not slight variations; it shouldn't say USA in one row and U.S.A. in another. Because those are the sorts of things which, honestly, elongate and make very difficult the process of asking questions of the business. I have been part of any number of data and analytics teams where somebody comes to you and says, "I wanna know X, and I think this is really straightforward to get because of Y" – some sort of business question. Then the analyst pulls it, takes it away, and says "okay, this makes sense."

But they know, because they're part of an organization that does not value this, that they're gonna have to spend the first half of the project just getting the dataset to the point where they can actually analyze it. So that's the dirty little truth about being on a data and analytics team: you spend a really unfortunate amount of time in most contexts just cleaning and scrubbing data – not even doing the fun stuff, which is analyzing it and talking to the data and seeing what it says – but just getting it to a place where it's not gross. And that's because those who produce the data do not see themselves as accountable for producing clean, consistent data.
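The kind of cleanup Jon is describing – an analyst collapsing variants like "USA" and "U.S.A." before any analysis can start – might look something like this; the alias table is an illustrative stub, not a complete mapping:

```python
# Sketch of the normalization an analyst ends up re-doing when the source
# system doesn't enforce it: collapsing "USA", "U.S.A.", "usa", etc. into
# one canonical value. The alias mapping here is an illustrative stub.

def normalize_country(raw: str) -> str:
    # Strip whitespace and punctuation, then upper-case: "U.S.A." -> "USA"
    cleaned = raw.strip().replace(".", "").upper()
    aliases = {
        "USA": "United States",
        "US": "United States",
        "UNITED STATES": "United States",
        "UNITED STATES OF AMERICA": "United States",
    }
    # Fall back to the trimmed original for values we don't recognize.
    return aliases.get(cleaned, raw.strip())

for value in ["USA", "U.S.A.", " united states "]:
    print(normalize_country(value))  # all three print "United States"
```

The real cost Jon points at is that this logic lives in every analyst's query instead of once, upstream, where the data is produced.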

24:14 DS: So you bring up an interesting point here, which is that someone's cleaning the data at some point; it's really just a question of who. It could be that we get people to put clean data into the system. It could be that there are some automatic things within the system to help clean up the data – so, for example, stripping all of the periods out of U.S.A. so that it matches with USA; normalizing data in that kind of way. And then it could also just be that "hey, you know, we've put the burden on analysts. We're gonna have messy data in our CRM, and every time we want to answer a question, an analyst is gonna essentially redo the same work of cleaning up what all of the country names look like," as just one example.

But there are many possible ways in which that flavor of messy data can creep in. Is there one of those ways that's better than the others? How do you think about these three things together, these three possibilities?

25:19 JK: Yeah, so I think the most common example I see is that analysts wind up doing the data cleansing – the folks at the very end, very downstream of all of this. The challenge with that is multifold. The first problem is that a data analyst – if you're lucky, you have that capacity on the revenue operations team, but most people don't, and they will look to a data team or maybe a finance team or something like that – is too far removed from the day-to-day minutiae of the business and why data structures exist the way they do. And so they wind up having to make a bunch of assumptions, some of which they're able to validate with the business, but most of which they just have to make.

And those assumptions can lead to really, really different outcomes in the way that they're answering the question that's being asked. So in my experience, it's actually a self-preservation issue. If I want other consumers to be able to take the data that we produce and do things that are not damaging with that data, I need to make sure it's as squeaky clean as possible because they're always gonna have to understand the business.

So a data analyst on a data team needs to know the difference between an Account Executive and an SDR, and the difference between recurring revenue and a sales-qualified opportunity, and those sorts of things. That is already a huge education they have to get through that is a little bit separate from their day job of being a really robust analyst. But if you have to then, on top of that, understand all the weird edge cases and nuances and corner cases, it's no good. I mean, an example that you and I experienced directly: at Mode, I remember in the early days one of the folks wrote the query that was to calculate our revenue.

And I remember pulling it up and showing it to you and saying “Derek, why is this 500 lines? It's revenue. It should be like 10 lines with like two things in the where clause.” And you agreed that, you know, every little weird nuance in that query was an opportunity for something to break. But that was what had to happen in order to ingest the data structures that were at the time in our CRM.

27:36 DS: I’m sad to say that I do remember that.

27:37 JK: It's common. It's so common. I see this all the time across companies everywhere. In fact, every company I've ever worked for had exactly this sort of situation when I arrived: they felt like they could not count basic business operating metrics without doing a huge amount of gymnastics that, frankly, nobody really understood, because those queries would get written and then left alone – not looked at – and all the institutional knowledge for why they existed in the first place would melt away.

28:10 DS: The punchline to that, I think, is “and that's why it's so critical for Revenue Operations to own this particular set of data and make it outstanding.” Because if there is an owner, it certainly seems to me that Revenue Operations is the right one for owning the data that is input and then ultimately presented through the CRM.

28:33 JK: I do think Revenue Operations is the correct owner for the go-to-market data, just writ large, all of it, making sure that it is clean, consistent, and all of that. And that is why, over the last 10 years, I think there has been a trend of Revenue Operations teams not only coming together (being born of siloed Sales Ops, Success Ops, Support Ops, and things along those lines), but coming together and saying "hey, this go-to-market organism is something that we need to manage together." Because if you don't bring those teams together, they're gonna self-organize.

But also, those teams need to have SQL knowledge at a minimum. I would not hire somebody on my team today who did not have SQL knowledge, and I think that was probably a hot take a few years ago; it's becoming less of one now. The second thing is that Revenue Operations teams are having to build their own data engineering and analytics skill and capacity in house, and that's not always because there is no data analytics or data engineering skill at the organization. Often there is a team like that – we have one at Retool – but we can be much better partners to that team if we show up as good customers for them and not as totally ignorant ones. So as an example, everybody on my team, including myself, has a license to dbt Cloud; we commit to our dbt analytics repos. We understand the stack – how all the data flows together – because we need to understand what's gonna happen with our data in order to make really good decisions about how we model and construct our data upstream. So we're very tight with our data analytics team and our data engineering team, and it's important that we also have our own access to those tools. As an example, we use Polytomic right now, which is our reverse ETL vendor that helps move all kinds of data to all kinds of places. But we use it, not just our data team.

The data team uses it to move bits around and handle a bunch of ETL-type jobs. We actually have our own space inside of our data warehouse and our own set of several hundred Polytomic jobs that manage our housekeeping, whether it's data prep, data cleanliness, moving things between our own systems, or handling how things move into our broader analytics stack. And that's super important for us because we need to be able to move quickly, and we need to be able to manage the 42 systems that are under our umbrella.

31:02 DS: Jon, I think that's a good segue into something we had talked about before, which is some of the customized tooling that you've built that connects the data warehouse and Salesforce in a way that allows you to understand the business a little bit better, but also maintain that clean data. So I would love for you to jump into that and show us a little bit of what you've built: the pot of gold at the end of the rainbow, so to speak; what people can look forward to and what they can gain in their businesses by having clean data, but also how you even got there.

31:42 JK: Common theme: this all really starts with having clean, consistent data that's running through the system. We talked a little bit about how we can get that clean data from a business process perspective by making sure that we're asking reps questions using automations and systems to parse the answers to those questions and make decisions about things like booking types and all of that.

But that isn't necessarily sufficient. That gets you part of the way there. The reality is that a business with all of its nuances and decisions and things like that is a living, breathing organism that has very analog-style instincts. Everything doesn't fit neatly into a validation rule or some sort of structure.

So in order to get to a world with really clean data, you actually need a multi-pronged defense strategy. One is around process, which we just talked about. The other is around observability: making sure that your CRM doesn't become this system that has a million validation rules in it, such that every time your reps try to move a little bit, they feel like they're in a straitjacket, because that's no good either.

But you still need to make sure that data is consistent and looks the way it's supposed to look. So in our CRM we actually use precious few validation rules – there are probably four or five, I would say, primarily on the Account and Opportunity objects. And they're for things that absolutely cannot ever go wrong, and they fire almost never. So they're really just traffic stoppers, if you will. The way that I maintain really clean data across the organization has to do with a testing infrastructure that I built a couple of years ago, after I just got really frustrated – generally it was cross-object stuff that was inconsistent, or even within the same object – and I just didn't wanna go down the validation rule path.

So the testing infrastructure is fairly straightforward. It is a set of several hundred SQL queries that are all declarations about our environment. They run in our data warehouse against a replica of our Salesforce database that we dump into the warehouse like most people do, and they look for things that might go wrong. They can be really tiny things, like a missing piece of data somewhere that there really should be. Or maybe an inconsistency, like we're saying an opportunity belongs to the Mid-Market segment, but it's currently owned by an Enterprise rep. That's the sort of thing that's a yellow flag. It might be wrong, it might also just be an exception that we made because we decided to make an exception that day for whatever reason. But it looks a little strange and it looks a little funny and we should talk about it.

So there are two key parts to this, assuming you have an analytics team that can get that read replica going in your data warehouse and a dbt project. And inside our dbt repo, like I said, we have several hundred declarations that all say something about how our business should function. So as an example right now, this declaration makes sure that the owner of the opportunity, their attribution that's on that opportunity – that they are in the sales team and they're on the Account Executive sub-team and they're on the Enterprise sub-team – matches what is actually written in the object.
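To make the pattern concrete, here is a minimal sketch of a declaration written as a SQL query that returns rows only when something is wrong. The table and column names are invented for illustration (a real Salesforce replica looks quite different), and sqlite stands in for the warehouse:

```python
import sqlite3

# Hypothetical, simplified schema -- a real Salesforce replica has far
# more columns and different names.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT, sub_team TEXT);
    CREATE TABLE opportunities (id TEXT PRIMARY KEY, owner_id TEXT, segment TEXT);
    INSERT INTO users VALUES ('u1', 'Rep A', 'Enterprise'), ('u2', 'Rep B', 'Mid-Market');
    INSERT INTO opportunities VALUES
        ('o1', 'u1', 'Enterprise'),   -- consistent
        ('o2', 'u2', 'Enterprise');   -- owner is Mid-Market: flag it
""")

# The declaration: every Enterprise opportunity should be owned by an
# Enterprise rep. Rows returned = failures; zero rows = the claim holds.
FAILING_ENTERPRISE_OWNERS = """
    SELECT o.id, u.name, u.sub_team
    FROM opportunities o
    JOIN users u ON u.id = o.owner_id
    WHERE o.segment = 'Enterprise'
      AND u.sub_team != 'Enterprise'
"""
failures = con.execute(FAILING_ENTERPRISE_OWNERS).fetchall()
print(failures)  # one row: opportunity o2, owned by a Mid-Market rep
```

In a dbt project, a query of this shape would be scheduled as a test: an empty result set on each run means the declaration about the environment still holds.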

This allows me to go and observe when there are problems. Maybe there are issues with our automations or whatever it is, but without bothering the reps, without creating a validation rule that stops them in their tracks. It also allows me to move a lot faster. One thing that a lot of organizations do, and it's become an increasing trend inside of Salesforce development, is actually developing directly in production.

And Salesforce's product has actually pushed us in this direction by creating better debugging tools in production, in Salesforce Flow and in other things, because they realize that you can move a lot faster than if you have multiple layers of sandboxes and deployments and things like that. All of that is still very powerful and very useful for very large organizations that have to coordinate. But if you are several hundred employees and you have a business that's still very fast growing, this sort of thing can be huge because it means I can move faster without having to test every little thing, knowing that my testing infrastructure is gonna tell me when something goes wrong – it's basically the set of indicator lights that go off flashing and tell me if I broke something. Again, not something I would advise if you are working at a publicly traded company or some sort of much larger organization. But for a fast-growing startup, I think that's the right agility trade-off. So all of these tests run every couple of hours against our organization.

And then once we've got those running, we actualize them inside of this Retool app. We call it our data testing app. And what this lets us do is this lets us look and see by object – Opportunity, Account, etc. – what's wrong. So you'll see these are the results of these tests and these are all things that I have to look through and fix.

These are inconsistencies with some of our finances. This is telling me that we have one deal that hasn't been marked as signed in our CPQ solution, etc. Now these are all, like I said, yellow flags. Sometimes they're red flags and I address them right away. And once I do, I can click on the record, open it, investigate. Okay, great. I took care of it. The Retool dashboard now writes back the fact that I took care of it, and it snoozes the record. This is all captured in a custom object in Salesforce.

Sometimes I wanna say “you know what? Yes. This rep went to EMEA for March.” And so what I might do is I might go and say, I'm gonna snooze this until this date. Great. And then this test will not show up again, or this failure will not show up again, until that snooze date. So I can say “Hey, I just wanna put this on ice for a while,” and I can go and review those and see all of those exceptions. This has saved probably two or three headcount in my organization by making sure that I always have somebody – the testing infrastructure – looking over the data and making sure everything is consistent.
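The snooze behavior can be sketched in a few lines. The record shapes here are hypothetical, invented for illustration (in Jon's setup, snoozes are captured in a custom Salesforce object):

```python
from datetime import date

# Hypothetical failure records as the warehouse tests might emit them.
failures = [
    {"test": "opp_owner_segment_match", "record_id": "o2"},
    {"test": "cpq_contract_signed", "record_id": "c9"},
]
# (test, record_id) -> snooze-until date set by the reviewer.
snoozes = {
    ("opp_owner_segment_match", "o2"): date(2024, 4, 1),
}

def actionable(failures, snoozes, today):
    """Hide failures whose snooze date is still in the future."""
    out = []
    for f in failures:
        until = snoozes.get((f["test"], f["record_id"]))
        if until is None or today >= until:
            out.append(f)
    return out

# Before the snooze date, only the un-snoozed CPQ failure surfaces;
# after it, the snoozed failure comes back automatically.
print(actionable(failures, snoozes, today=date(2024, 3, 15)))
```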

38:13 DS: When you say this has saved headcount, you mean that to manually go and review all of this data would take two or three full-time people, is that what you're saying?

38:25 JK: It would be both two or three full-time people, and I think about the fire drills and the fire alarms saved when things break and we have to rally all the troops to get some data together for a board meeting or something like that because things break at inopportune times. We don't have those problems because we monitor things constantly.

So we make sure that things are always ready to be analyzed and looked at. You can see this is anonymized, but it's live data. We do have issues in our stack right now, but they are manageable and muted and things that we will handle. And a lot of these are actually things that are pending project work that we're doing to make sure that they get fixed, so they're within acceptable tolerances.

39:08 DS: So let's say I’m on the Revenue Operations team, maybe I'm a Salesforce Administrator. I'm a little earlier in my career. I see this and think “oh wow, that's really valuable. Clearly we should have that at my company.” How would I go about recreating this? And I mean what skills do I need to learn in order to do this?

You mentioned dbt, you mentioned the data warehouse. How would you advise someone who doesn't know how to do any of this stuff on where to get started and how to end up at this place?

39:46 JK: Yeah. I mean, a lot of the technologies that we just talked about make this process simpler and easier, but at their heart, all we're doing is running a bunch of SQL queries, and those SQL queries return results if something's wrong, and don't return results if nothing's wrong. So if you can work with your data team – or even get a data warehouse stood up yourself, which is not that hard to do these days with Amazon Redshift or something like that – and buy something like Stitch or Fivetran off the shelf (also pretty easy to do) to dump your Salesforce data directly into it, then it's just a matter of running these queries. Every single one of the tools I just mentioned has the ability to schedule running SQL queries, so you run them and you get an output and you look at it.

That's all this is at the end of the day. A lot of this stuff like the app that we're staring at right now, this is workflow management. This is taking that output and giving me the ability to say, I addressed this problem. I didn't address this problem – that sort of thing. This is, in my mind, how this sort of infrastructure becomes collaborative, how it becomes auditable, and all of that.

But at its core, it's really that. And if you really, really wanna bake this down to its core, it's downloading data into, I don't know, Google sheets and doing some VLOOKUPs and writing some formulas and making sure things match.

41:03 DS: Yeah, but your point – I think the broader point – is this is a job that's easier to do outside of Salesforce rather than, in particular, imposing validation rules in Salesforce. So the key insight here is you've chosen to evaluate data after it has already been produced so that it is lower friction to reps rather than putting a bunch of blockers in front of them by having a validation rule on every single one of these things. And then you can go and address these things at some later date, you know, based on how important they actually are.

And like you said, in some cases there will be exceptions so you'll want to handle those manually. You don't necessarily want to have the validation rule because you will have exceptions here and there.

41:49 JK: That's exactly right. I think each of the things that we're talking about here – whether they're validation rules, analyzing your data after the fact, or even other tools in Salesforce like dependent picklists – is a form of data validation. All of these things should be tools in a modern RevOps person's toolkit and we should apply them in very different ways.

Another form of data validation, frankly, has to do with UI. So I was talking to a member of my team. We have a button on our account record that allows you to click that button and create a new contract on the account. Very simple thing to do.

Well, you have a lot of choices about how you design the button. Does everybody see the button? Does just the account owner see the button? Does just somebody who's on the contracts team see the button? Well, all of these things lead to slightly different UIs and UXs. If everybody sees the button, then you've gotta make sure that the right people are clicking the button and the wrong people aren't clicking the button. And if the wrong people click the button, then it can create data issues downstream. So, okay, let's hide the button from those.

But now if I hide the button from folks, sometimes they're gonna start pinging you and they're gonna be like “well, I don't see this button.” And then you're gonna have to explain to them “well, you don't see the button because this account you're allowed to create contracts on, but that account you're not allowed to create contracts on.” So in this instance, I might have the button visible to create a good UX, but when you click on it, it provides you with an error that says “Hey, you need to move the account into a different state before you're able to create a contract.”

So the little nuance there: is it hidden when it's not relevant, or is it visible when it might be relevant? And then you throw an error that tells the rep how to fix it – it's very important because in the former case, I wind up with a Slack message from the rep saying “I can't find the button you just told me about.” In the latter case, yeah, I create a little bit of friction, but I educate the rep at the same time.

And what I just described and thinking through that UX is ultimately about clean data because it allows only the right people to interact with the system in exactly the way I want them to and/or take the corrective actions I need and educate them on it.

44:13 DS: This goes to what you were saying right at the beginning of our conversation about Steve Jobs and how he declared that the iPhone shouldn't have a manual, it should just work. And how RevOps teams really are like the software and design and UX teams for the internal customers of salespeople, sales, leadership, customer success, and so forth. That's, I think, a good way to highlight exactly what you're talking about.

44:45 JK: Exactly, and that's why it's so important, I think, for revenue operations folks not to be just Salesforce admins or strategy people or whatever. We are building a product and we all have to think of it exactly that way. We all have to think about the nuances. We all have to really sweat the details about this stuff.

And as an example, if I were to share my Salesforce view unvarnished and unanonymized right now, you would see tons of buttons and tons of fields and tons of things that if you were a rep, you wouldn't see. Because I'm an administrator and so I need lots of knobs and switches and dials to pull and all kinds of weird, esoteric stuff that I wanna look at. All of that should be abstracted away from a seller, 100%. So a seller will only see the one or two things they need to do at the moment.

45:42 DS: You've shown us how you keep data clean. What do you get when you have this clean data? Do you have an example of, like I said, the pot of gold at the end of the rainbow? What's the huge benefit that you get with all these systems in place?

45:57 JK: So here's one of the main work products that we get. You talk about analysis and analyzing the data and understanding the data. Well, what I'm about to describe is a common challenge that oftentimes data teams are asked to solve, which is taking raw data and modeling it in a way that is consistent. The way we calculate win rate should be exactly the same way every single time. The way that we calculate coverage ratios and pipeline and all of those sorts of things should be the same all the time. A lot of times, folks will turn to Looker with LookML or layers in dbt or something like that.

The challenge with all that – and that's all great – is that you're now pulling the expertise that is defining those metrics further away from the people who actually understand the nuance behind all of those metrics and the people who are incentivized to maintain it all. So what we've done here is rather than outsource that to the data team, we have a custom object in Salesforce called the Goal Object, which started as a way for us to understand what everybody's goals were.

So you're an SDR, you've been here for 10 months, there's 10 rows in the Goal object with your name next to each row, and for every single month that you've been here it's got the number of sales qualified opportunities you're supposed to deliver. I built this a long time ago because, again, at every company I've ever worked for, one area of very bad data has always actually been the rep roster.

So who's on what team? What quota do they have? What manager do they work for? What role were they in? Oftentimes we know what that looks like right now, but for analysis we actually care most about understanding what that looks like going back into history. So I built this object myself to map all of that stuff out.

And then what this object became over time was bigger than that because if you had the goals on a single object for every single month, period, you also could figure out the actuals. You could figure out performance, you could figure out how people were trending, and you could build a whole bunch of basic transformations on that data that allow you to very quickly get to answers of common questions that folks ask and then make that self-service.

People knock Salesforce reporting as an example, and they should 'cause it's pretty lousy. But at the end of the day, it's just an easy way to pull records out of a database and visualize them. So if you do a little bit of the modeling ahead of time, Salesforce reporting can actually be quite powerful.

So as an example, our Goal object has all of the things that I just described. So if we pull a very quick query of our goal objects, we can see here: this is exactly what I was saying for a given rep – and I've anonymized this data so you know, this is all the same rep – I know for this month, this period – so this was May – what was their ramp percentage? What number month were they in seat in that role? What team, sub-team, level, segment, territory did they support? And then I've got their goals. So this was this particular rep's ARR goal, the number of sales qualified opportunities that they needed to produce, and so on and so forth.

Now, if you go into the query here and you look, millions of transformations are available. So I can look at their sales qualified opportunities quarter to date. I can look at them half to date. I can look at them year to date. I can do all of these things. You can see deal counts are available, average deal size, all of these permutations that are all things that you might ask.
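As a rough illustration of those rollups – with invented field names, not Retool's actual Goal object schema – the period-to-date transformations are just windowed sums over the monthly rows:

```python
# Hypothetical monthly Goal rows for one rep; field names are invented.
goals = [
    {"period": "2023-01", "sqo_goal": 10, "sqo_actual": 12},
    {"period": "2023-02", "sqo_goal": 10, "sqo_actual": 8},
    {"period": "2023-03", "sqo_goal": 12, "sqo_actual": 11},
    {"period": "2023-04", "sqo_goal": 12, "sqo_actual": 14},
]

def to_date(rows, as_of, months_back):
    """Sum goal vs. actual over the trailing window ending at as_of."""
    year, month = map(int, as_of.split("-"))
    window = []
    for _ in range(months_back):
        window.append(f"{year:04d}-{month:02d}")
        month -= 1
        if month == 0:
            year, month = year - 1, 12
    picked = [r for r in rows if r["period"] in window]
    return (sum(r["sqo_goal"] for r in picked),
            sum(r["sqo_actual"] for r in picked))

# Quarter-to-date as of March (Jan through Mar): goal 32, actual 31.
print(to_date(goals, "2023-03", 3))
```

Half-to-date and year-to-date are the same computation with a wider window, which is why precomputing them on the object makes so many common questions answerable directly from a report.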

Now, this is really great raw data to have access to, and this is a hundred percent automated, but it turns into nice looking reports that look like this, that are easily viewable in Salesforce and are very exec-friendly. So, revenue pacing, you can see folks' goals, et cetera. This is for an AE team. This is for an SDR team that we have: how many sales qualified opportunities there are. So this stuff is what actually allows us to answer questions really quickly. And then, with the folks on my team being literate in SOQL, which is the Salesforce query language, they can hook this stuff up to spreadsheets real fast.

And so I remember when the world started turning sideways about a year ago, and all of the macroeconomic contagions started, we, like many businesses said “hold the phone – we gotta figure out what's going on here.” Also we, like many businesses, had our BizOps team do a full-on analysis to figure out what was going on.

And I delivered this goal object to the BizOps team, and they were able to produce something like a 300-page analysis and a look-back in about 48 hours using this data because it was easy and clean and it had every rollup that they needed. And they just threw it into charts, reviewed it, and we've actually been able to rinse and repeat that.

Now, it's a lot of pages and we've since gotten it down to about 70 that matter. But the reality is that's where the value in clean data is. Because if you had asked somebody to go on a fact-finding mission in some of the other organizations that don't have these sorts of things, that would've been a month-and-a-half-long exercise, it would've been multi-part, lots of folks cleaning and scrubbing and standing it all up, and then to do it again the next quarter would've been really hard. So those are the hidden ways that clean data really accelerates the business.

51:35 DS: You've talked a lot about the value of these things to management and to the leadership team in making decisions and being able to very quickly get analysis done that is gonna drive some of these decisions. But why should reps care? Or should they? Does that even matter?

51:54 JK: I think at the end of the day, reps probably shouldn't necessarily care about the clean data. If you get this right, they shouldn't have to worry about it. They should be doing their jobs, entering the information that makes sense to them about the deal. So things like MEDDPICC and deal strategy and all of those sorts of things, which they care about entering because their managers will ask them about it – will ask them about what's going on with the deal – and they will care about entering that information if they're a bit more senior and have an account team that they're working with and wanna work well with others.

So if they're collaborative folks, they're gonna care about that sort of stuff. Reps absolutely should not care about how your bookings come together. They shouldn't care about attribution. They shouldn't care about any of the esoteric things that do drive a lot of the decision making. Which is why, in my view, a Revenue Operations systems team’s efforts need to go towards only interfacing with the reps to ask them the plain-text questions that they absolutely need to ask them in order to make those decisions, and then have everything else automated and automatically categorized by the system.

However, if you wanna get a little bit more meta about this, reps should care about management decision making because – and this is what I tell my reps when they ask these questions – this is how I get you more money. This is how I get your quotas lowered when times are tough. This is how I make the case that our ASPs are increasing and our deal velocities are increasing.

And that means that we can support higher OTEs, and we're producing more, and all of these sorts of things which they do care about. Or, as an example, if we have really good data about professional services and how it's working with our subscription business, that might give me the ability to go to finance and argue that you should be making a small bounty on every professional services deal that we sell.

There's that sort of thing; they do care about those things. So if you can link clean data back to their pay, to their day-to-day experience, then you definitely get folks caring about it.

53:59 DS: All right. Thanks, Jon. This has been a really great session. I appreciate the deep-dive and showing some of the stuff that you've built directly in Salesforce and Retool. I think this is gonna give people really good inspiration for what they can someday aspire to in the world of clean data, or maybe give people the pathway there now.

54:24 JK: Awesome. Yeah, this was fun.




1:56 DS: What I remember really well from when we worked together was your technical bent – your ability to work with systems more effectively than anyone else I've seen in this particular role. I always find it interesting when we talk about this to remember that you at one point were a seller because I think of you as so technically advanced, but maybe talk about how you got some of those technical skills and how you became so systems oriented.

2:28 JK: Yeah, totally. So from a very young age, I've always been kind of an engineer at heart. I take things apart. I remember I got in trouble once when I was like eight because I took apart our television and my dad was not happy with that. But I wanted to see how it works and that was just like part of my psyche and that continues to this day.

When I got into business eventually, I always found myself applying technical skills, whether it was really large and unwieldy VBA-driven spreadsheets to help us fix some accounting or finance issue, and so that became a thing that I started to lean into over time. And what I found was bringing technical skills, even if they're not the most technical – I would never call myself a software engineer – but bringing some technical skills to a non-technical function or vice versa is a superpower because you wind up being able to stand out among your peers.

So as an example, I have always been very SQL literate. It is one of the most important technical skills in my toolkit and for my team as well. And when I became a seller, that was something I really leaned on. There were zero sales reps that I knew who could write SQL or could do any sort of more advanced analytics. But I could, and that was what powered all of my sales pitches. So I had queries and Tableau dashboards and all kinds of things that I would bring to customers to show them how they were stacking up against the competition and things like that.

I certainly was not the strongest seller from a kind of sales fundamentals perspective. I probably didn't do great discovery. I probably didn't do all the things you're supposed to do as a seller, but I learned a little bit about that. But what I did bring to the table was a whole different perspective to sales, which was very data-driven and being able to troubleshoot their problems – and when I say they, I mean the customer's problems – without them having to call a support hotline or something like that.

4:29 DS: When you say you have always been very SQL literate, when does always start? How did you even get those skills in the first place and how would you recommend that someone go and get those skills today?

4:41 JK: Yeah, so I think I learned SQL for the first time right after college when I was working at Deloitte Consulting – my client was a major financial institution that everybody's heard of – and I was working deep inside the bowels of their Oracle general ledger and consolidations and reporting system. Like basically how balance sheets balance and come together. I taught myself basic SQL in order to actually interface with that database because I needed to get ledger entries out and do some light analysis.

I don't remember exactly how I learned it; I probably Googled a little bit, maybe W3Schools or some sort of equivalent thing. It's a shame; I don't think Mode SQL School existed back then. If it had, that would've probably been my first port of call. But it was definitely a really powerful weapon for figuring out how to speak to databases, get data out of them, and understand what was going on inside of them.

5:44 DS: Yeah. The thing about this that's interesting to me is you learned it out of necessity in a different job; a fundamentally different job from what you do today, but the carryover has been really direct and a powerful influence on the way that you do your job today.

5:58 JK: Very much so, and I would even say that the actual SQL syntax is 5% of it. It’s actually learning about how to think like a database. How tables come together; when you look at an application seeing… you know, when I look at an application right now and I see the front end of it, I actually see “what are all the objects behind the scenes that are coming together to model all of this stuff,” and it helps you understand how software comes together. And if you see how software comes together, it makes it easier for you to analyze the behavior of that software or understand how to dig into or query it. And then, of course, all of this is all in service of answering questions about the business, because at the end of the day, that's why we look at data: to answer questions, give us insight, help us predict what's gonna happen in the future.

6:49 DS: I had this experience, too, working in data where the first software company that I worked at, I, through working with the data, started to understand how software applications are structured. You know, the way that an application will read data and then display it to you on the screen as you're saying. So now I do the same thing: when I see a web app or a website I just deconstruct it mentally into all the components that would be displayed and how you would structure that, and there is a real utility to that when you are doing data analysis. One thing I've also noticed is that there's a real utility in thinking about a system like Salesforce that is essentially just a big database.

There are a lot of UI components on top of it, and there are validation rules and all the other things that go into a Salesforce instance, but at its heart it is data structure and understanding data structure. And so I'm curious if there are any specific ways where you've seen that kind of knowledge or expertise play out in the Salesforce systems world now that you are firmly in that RevOps seat.

7:56 JK: Definitely, I mean, Salesforce – really the answer is Salesforce most of the time – but the CRM is the beating heart of the go-to-market organization. It is the center, it's the brain, and understanding not only the kind of Salesforce-specific nuances (or whatever CRM you choose to use), but more importantly how those things interface with the actual business concepts that are important to you and to your business is incredibly important.

And mapping those two things together – it's kind of like when you were talking about web applications: somebody has to build the connective tissue between the thing that the users are interacting with – the front end and the logic that they care about – and the technical data that goes underneath all of it. So having that perspective, I think, is very important, and ultimately, at the end of the day, when you're in revenue operations you really are just a product and design and EPD team whose customer is the go-to-market. So that is how I think about our role: we have to be totally obsessed with the customer – so our AEs, our SDRs, our sales leadership, all of these folks. Even the board is a customer, because they consume all of the outputs eventually of what the go-to-market produces. So we have to think about what kind of data structures need to underlie the business in order to give us the answers that we need or the visibility that we need, and then that becomes Salesforce configuration.

9:27 DS: So you're getting into a little bit of the job function of RevOps: who your customers are and what you need to provide to them. This is something that I'm very interested in because I also have found that people have different perspectives depending on their backgrounds.

And so I'm curious how you think of the role of Revenue Operations and really what it is you're trying to achieve. Two years from now at Retool, what will make you say “that was a great two years; I was successful”?

9:58 JK: Yeah. Yeah, absolutely. At the end of the day, my personal goal for the RevOps team is that we are the stewards. We are building the ultimate go-to-market machine for Retool. Now, every single one of those words has meaning to me. We wanna aspire to build something really incredible. We also want this thing to literally be a machine. And a machine is elegant, it is efficient, it functions well, it is something that produces a known output given certain inputs. So that's the way that we think about our job and we think about the infrastructure and the processes and everything that go into it.

At the end of the day, what we're trying to do is we're trying to accelerate the business – so help people work better, faster, stronger – and accelerate high-quality decision making. Because one thing that we both felt the pain of is trying to make a decision in a business context and – let's be honest – best case, you have 8% of the information you feel like you need in order to make a high-quality decision in any scenario. But most of the time we feel like we have one or 2% of the information that we could have, and that's the stuff that's really frustrating: when we feel like the business is not giving us clear and concise answers to things that we think it ought to. And so it's our job to make sure that that happens and that it happens in a way that is not obnoxious to the team.

So I've been a part of organizations where the operations team and the systems team feel like they're totally running the show in a way that is not productive and they have systems that are built in ways that sellers hate, that managers don't really know how to interface with. We don't want to do that here at Retool. We want this thing to feel kind of like… our aspiration is like when the original iPhone came out, Steve Jobs was very keen on saying that it didn't come with a manual because you shouldn't need one. You should turn it on – there was the one button at the top – and it should just make sense.

12:04 DS: So I think it's an admirable goal: you want your business systems to just make sense when people use them; Salesforce and anything else by extension. But it's a really hard line to walk. You nailed it that you want to make things easy for sellers, but they want just as little responsibility for this clean data as possible, in my experience, because that responsibility is not fun. It feels like extra work. You know, the common refrain is “Hey, I'm supposed to be selling, not punching data into my CRM.” So how do you find that line? Where do you draw the line so that you get what you need to make those decisions as a business, but also don't slow down the sales process too much?

12:51 JK: I hear reps say all the time that “I don't wanna fill in this field or that field, it's just gonna slow me down.” And those are often the sorts of things that managers want populated because they help us make decisions for how we drive the business. But the reality is, if you dig a little bit deeper behind this pushback, what I have found is there's often a lot more nuance to the story. So, as an example, reps don't generally hate – as much as other things – filling in context about deals, particularly when it is to their benefit. So they're writing down things like MEDDPICC as an example, or BANT, or things along those sorts of lines. That’s stuff that's familiar to them. They understand it, it's about deal strategy, they actually see the inherent value there. [It] communicates to their managers. It actually helps them work with other people on the account team that they want to. So I think an average rep is actually going to see some value in that.

What is incredibly laborious for a rep and just incredibly difficult for them to wrap their heads around in my experience, is the more esoteric stuff. So populating deal type information or figuring out a booking type or something along those lines. Looking it up on a Confluence page and then putting in, well this is an expansion deal versus a “this,” versus a “that” or getting the term start and end dates just right: all of those sorts of things. But the reality is those sorts of things follow business rules. And if they follow business rules, computers are often best suited – way better than humans – to figure out what they look like. So in our environment we ask reps to figure out exactly none of those things. We ask them very basic human-level questions.

So as an example, when you go to move an opportunity out of an unqualified state into a qualified state (this is our stage two), there's a ton of information we gotta figure out at that time in order to pipeline the deal. Is it expansion? Is it not expansion? How does it fit into our booking structure? All that. We don't ask reps any of that. What we do is we scan the Salesforce Account record, we look to see if this is an existing customer, and then we ask them plain text questions: “Is this related to this specific business unit that's already active or is this a new business unit?” Then they give an answer to that question, and then we take that answer and we fill in all the blanks for them.

So that's the sort of thing that they can get behind because we're not asking them to understand the difference between an expansion and a new subsidiary and all these weird, esoteric things. We're just asking them a question that a person can answer based on the context they have in their head at that moment.
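The translation Jon describes – a plain human-level question in, the esoteric booking fields out – can be sketched roughly like this. The field names, answer strings, and rules below are invented for illustration; they are not Retool's actual schema or logic:

```python
# Hypothetical sketch: the rep answers one plain question, and business
# rules derive the booking fields a computer is better suited to decide.
def derive_booking_fields(account: dict, rep_answer: str) -> dict:
    """Translate a human-level answer into CRM booking fields."""
    is_existing_customer = bool(account.get("active_contracts", 0))
    if not is_existing_customer:
        # Brand-new customer: no question needed at all.
        return {"deal_type": "New Business", "booking_type": "New Logo"}
    if rep_answer == "existing business unit":
        return {"deal_type": "Expansion", "booking_type": "Upsell"}
    # A new business unit at an existing customer books differently.
    return {"deal_type": "New Business", "booking_type": "New Subsidiary"}
```

The rep never looks up “expansion” versus “new subsidiary” on a wiki; the rules encode that once, centrally.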

15:34 DS: So the key thing is “don't make 'em go look stuff up.” I'll let them just answer questions in the normal stream of their work, and then it's the RevOps burden to produce all the rest of the information that's necessary to properly allocate that deal to the correct rep.

15:53 JK: Exactly. I have a strong point of view that anytime a manager or a leader says “hey, can you document how this works in Confluence or in the Wiki or whatever the case is…” if I actually have to do that for a day-to-day thing that a rep needs to do, that means I've probably failed at delivering a robust and high-quality user experience for that rep.

Now, we do have some documentation here at Retool, but usually it's the long tail of stuff: the nuances of your comp policy that we don't generally need to talk about unless we have to adjudicate some sort of conflict or a really long tail exception process or something like that. But to just work a deal, you shouldn't ever have to pull up a Confluence page.

16:40 DS: Yeah, I think most people would agree with you on that. Although, to your point, that's the way it gets set up in a lot of cases.

16:47 JK: It is, and the tools now are really, really powerful. So Salesforce has evolved to the point where you can do a bunch of really amazing things with Lightning record pages. So right now, in our Salesforce instance, as you advance opportunities through stages, the few bits of enablement you should need – you need a technical evaluation template, okay, great – a link to that shows up in the upper right hand corner of your deal at stage three when you're about to write the technical evaluation for that deal. And it goes away when you advance to stage four, once you've already submitted that. So what we do is we say, “okay, what are the things that are going to help the reps?” And then let's surface exactly those pieces of information to them at exactly the time that they need it, and then make it go away.
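That stage-scoped surfacing – show a resource only at the stage where it's useful, hide it afterward – amounts to a simple lookup. This is a toy sketch; the stage numbers, labels, and URL are all made up:

```python
# Hypothetical mapping of opportunity stage -> enablement links to show.
# In Salesforce this would be driven by Lightning record page visibility
# rules; here it's just a dict for illustration.
ENABLEMENT = {
    3: [("Technical evaluation template", "https://wiki.example.com/tech-eval")],
}

def links_for_stage(stage: int) -> list:
    """Return only the links relevant to the current stage."""
    return ENABLEMENT.get(stage, [])
```

At stage three the template link appears; at stage four, `links_for_stage(4)` is empty and it goes away.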

17:32 DS: You said something to me when we talked recently about earning the right to do an analysis. And I would love for you to tell that story or explain that philosophy because I think it's very related to exactly this: data capture and having the right types of data and the right level of data cleanliness.

17:54 JK: When I joined Retool back in 2021, we were just really getting started.

We had a handful of sellers on the sales team and we really didn't understand a whole lot about what our sales process would look like. So our bias was not to get in the way. We wanted to have the lowest-friction sales processes possible because, like I said, we didn't really know what it looked like yet.

So we did things that I don't think are long-term best practice but were pragmatic at the time, like there was no gating or really regulation between stage two when we pipelined a deal and stage six when we were saying the deal was getting signed; very few validation rules or really anything there.

Reps could move forward in stages, backward in stages, do all kinds of stuff. And POCs would happen sometimes and often we'd forget to move it into the POC stage and all of that sort of normal thing. But that was a bit by design because we were really trying to figure out what was happening.

About a year and a half on, I was in our go-to-market leadership channel and one of our new joiners, a new leader, asked a very reasonable question, which was, well “how many deals die at the POC stage?” Stage four, what's our conversion rate look like to Closed Won from that. And a few of the other managers piled on and said “oh yeah, we should definitely look at that and we should diagnose and figure out what's going on – I think we're losing this and that. Jon, can you go and dig into this?”

And I said, no. Absolutely not. I explained that we had no governance, we had no gating. We had now come a long way where we had a sales process we believed in, we understood what it looked like, but we had not actualized that in our processes – not only our systems, but our managers weren't holding reps accountable to making sure stages were where they needed to be, etc. And my comment in that moment was “we have not earned the right to ask the business that question.” And I said “if we want to earn the right to ask the business that question, I need from all of you to know exactly what the exit criteria are for all of these deals, to make sure that reps understand that deals move forward and not backwards,

to build all of these things into the system in a robust, easy to capture way, and for all of us to lock arms and say, this is how we sell. We can change that over time, but this is how we sell right now and we're gonna actualize that.” And so we did that three months later and a few months after that – after I built the timestamps and all of these things – I said “okay, great. Now we've got three or four months worth of data. I can answer that question.”

20:31 DS: So just to put a finer point on it, the specific challenge was that before adding that structure in, reps would leave a deal in stage two, probably longer than it should have been. They might have met what should have been the exit criteria, though there was no official exit criteria at the time.

They would've sufficiently advanced the deal but not bothered to change it in the CRM either because they just didn't care to really and there was nothing requiring them to, or because they weren't necessarily aware of that exit criteria. But it sounds like the key there was hardening the criteria for getting from one stage to the next, making sure that it's necessary for reps to do that in order for them to eventually get paid on the other end; to advance the deal at the appropriate points through the stages in Salesforce. And then ultimately that's what's gonna produce some clean data for you to go answer that question.

21:40 JK: Exactly. And that's the key is that clean data isn't just a thing that we do. It's not a vanity exercise. Any data that you collect that is clean should be in service of helping the business answer questions or understand something about itself. Otherwise, what's the point? Then you definitely shouldn't have reps wasting time or anybody really wasting time maintaining it.

22:04 DS: Let's talk about clean data for a second. What does clean data even mean to you? Like if I say the phrase “clean data,” is that important? Is it important to have clean data? Why is that good and why does it matter?

22:19 JK: Yeah, it's the lifeblood of the business at the end of the day. And I know that sounds like a cliche, but clean data really is. What clean data means to me is that it makes sense. It all hangs together. So if you think about a data set that is a bunch of addresses, the cities that are listed in various rows need to all be parts of the states that they are listed next to. Those states need to all be actually parts of the countries that they are listed next to, and when you have New Jersey, as an example, listed, it should always be capitalized in exactly the same way, not slight variations on things. It shouldn't say USA, U dot S dot A dot; all of those things. Because those are the sorts of things which honestly elongate and make very difficult the process of asking questions of the business. I have been part of any number of data and analytical teams where somebody comes to you and you say, “I wanna know X and I think this is really straightforward to get because of Y”; some sort of business question. Then the analyst takes it away, pulls it, and says “okay, this makes sense.”

But they know that because they're part of an organization that does not value this, they're gonna have to spend the first half of the project just getting the dataset to the point where they can actually analyze it. So that's the dirty little truth about being on a data and analytics team is that you spend a really unfortunate amount of time in most contexts just cleaning and scrubbing data, not actually even doing the fun stuff, which is analyzing it and talking to the data and seeing what it says, but just getting it to a place where it's not gross. And that's because those who produce the data do not see themselves as accountable for producing clean, consistent data.
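The “USA versus U.S.A.” problem Jon mentions is exactly the scrubbing analysts end up redoing. A minimal sketch of that normalization step, assuming a hand-rolled alias map (real pipelines would lean on a reference table or library instead):

```python
def normalize_country(raw: str) -> str:
    """Collapse common variants of a country name to one canonical form.
    The alias map here is illustrative, not exhaustive."""
    # Strip periods and whitespace, then compare case-insensitively.
    cleaned = raw.replace(".", "").strip().upper()
    aliases = {
        "USA": "USA",
        "US": "USA",
        "UNITED STATES": "USA",
        "UNITED STATES OF AMERICA": "USA",
    }
    # Unknown values fall through with consistent capitalization.
    return aliases.get(cleaned, cleaned.title())
```

Run once upstream, this is a few lines; redone downstream by every analyst who touches the data, it's the first half of every project.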

24:14 DS: So you bring up an interesting point here, which is that someone's cleaning the data at some point. It's really just a question of who. It could be that we could get people to put clean data into the system. It could be that there are some automatic things within the system to help clean up the data. So, for example, stripping all of the periods out of U dot S dot A so that it matches with USA;. normalizing data in that kind of way. And then it could also just be that “hey, you know, we've put the burden on analysts. We're gonna have messy data in our CRM and every time we want to answer a question, an analyst is gonna essentially go and redo the same work that they do every time of cleaning up what all of the country names look like” as just one example.

But there are many possible ways in which that flavor of messy data can creep in. Is there one of those ways that's better than the others? How do you think about these three things together, these three possibilities?

25:19 JK: Yeah, so I think the most common example I see is that analysts wind up doing the data cleansing. The folks at the very end, very downstream of all of this. The challenge with that is multifold. So the first problem with that is that a data analyst – if you're lucky, you have that capacity on the revenue operations team, but most people don't and they will look to a data team or maybe a finance team or something like that – they're too far removed from the day-to-day minutiae of the business and why data structures exist the way they do. And so they wind up having to make a bunch of assumptions, some of which they're able to validate with the business, but most of which they just have to make.

And those assumptions can lead to really, really different outcomes in the way that they're answering the question that's being asked. So in my experience, it's actually a self-preservation issue. If I want other consumers to be able to take the data that we produce and do things that are not damaging with that data, I need to make sure it's as squeaky clean as possible because they're always gonna have to understand the business.

So a data analyst on a data team needs to know the difference between an Account Executive and an SDR and the difference between recurring revenue and a sales qualified opportunity and those sorts of things. That is already a huge education that they have to figure out that is a little bit separate from their day job of being a really robust analyst. But if you have to then, on top of that, understand all the weird edge cases and nuances and corner cases, it's no good. I mean, an example that you and I experienced directly: at Mode, I remember in the early days we had the revenue query that one of the folks wrote to calculate our revenue.

And I remember pulling it up and showing it to you and saying “Derek, why is this 500 lines? It's revenue. It should be like 10 lines with like two things in the where clause.” And you agreed that, you know, every little weird nuance in that query was an opportunity for something to break. But that was what had to happen in order to ingest the data structures that were at the time in our CRM.

27:36 DS: I’m sad to say that I do remember that.

27:37 JK: It's common. It's so common. I see this all the time across companies everywhere. In fact, every company I've ever worked for had exactly this sort of situation when I arrived, which was that they felt like they could not count basic business operating metrics without doing a huge amount of gymnastics that, frankly, nobody really understood because those queries would get written and then they would get left alone – not looked at – all the institutional knowledge would melt away for why they existed in the first place.

28:10 DS: The punchline to that, I think, is “and that's why it's so critical for Revenue Operations to own this particular set of data and make it outstanding.” Because if there is an owner, it certainly seems to me that Revenue Operations is the right one for owning the data that is input and then ultimately presented through the CRM.

28:33 JK: I do think revenue operations is the correct owner for the go-to-market data, just writ large, all of it, making sure that it is clean, consistent, and all of that. And that is why, over the last 10 years, I think there has been a trend of Revenue Operations teams not only coming together (so being born of siloed Sales Ops, Success Ops, Support Ops, things along those lines), but coming together and saying “hey, this go-to-market organism is something that we need to manage together.” Because if you don't bring those teams together, they're gonna self-organize.

But also, those teams need to have SQL knowledge at a minimum. I would not hire somebody on my team that did not have SQL knowledge today and I think that was probably a hot take a few years ago. It's becoming less of a hot take now. The second thing is Revenue Operations teams are having to build their own data engineering and analytics skill and capacity in house, and that's not always because there is no data analytics or data engineering skill at the organization. Often there is a team like that; we have that at Retool, but we can be much better partners to that team if we show up as good customers for them and not as totally ignorant customers for them. So as an example, everybody on my team, including myself, we have licenses to dbt Cloud, we commit to our dbt analytics repos. We understand the stack – how all the data flows together – because we need to understand what's gonna happen with our data in order to make really good decisions about how we model and construct our data upstream. So we're very tight with our data analytics team and our data engineering team, and it's important that we also have our own access to those tools. So we use Polytomic as an example right now, which is our reverse ETL vendor that helps move all kinds of data to all kinds of places. But we use it, not just our data team.

The data team uses it to move bits around and handle a bunch of ETL type jobs. We actually have our own space inside of our data warehouse and our own set of several hundred Polytomic jobs that manage our housekeeping. So whether it's data prep, data cleanliness, moving things between our own systems or handling how things move into our broader analytics stack. And that's super important for us because we need to be able to move quickly and we need to be able to manage the 42 systems that are under our umbrella.

31:02 DS: Jon, I think that's a good segue into something we had talked about before, which is some of the customized tooling that you've built that connects the data warehouse and Salesforce in a way that allows you to understand the business a little bit better, but also maintain that clean data. So I would love for you to jump into that and show us a little bit of what you've built, showing, the pot of gold at the end of the rainbow, so to speak; what people can look forward to and what they can gain in their businesses by having clean data, but also how you even got there.

31:42 JK: Common theme: this all really starts with having clean, consistent data that's running through the system. We talked a little bit about how we can get that clean data from a business process perspective by making sure that we're asking reps questions using automations and systems to parse the answers to those questions and make decisions about things like booking types and all of that.

But that isn't necessarily sufficient. That gets you part of the way there. The reality is that a business with all of its nuances and decisions and things like that is a living, breathing organism that has very analog-style instincts. Everything doesn't fit neatly into a validation rule or some sort of structure.

So in order to get to a world with really clean data, you actually need a multi-pronged defense strategy. So one is around process, which we just talked about. The other one is around observability and making sure that your CRM doesn't become this system that has a million validation rules in it such that every time your reps try to move a little bit, they feel like they're in a straitjacket, because that's no good either.

But you still need to make sure that data is consistent and looks the way it's supposed to look. So in our CRM we actually use precious few validation rules. There's probably four or five, I would say, on the Account and the Opportunity objects primarily. And they're for things that absolutely cannot ever go wrong, and they fire almost never. So they're really just kind of traffic stoppers if you will. The way that I maintain really clean data looking across the organization has to do with a testing infrastructure that I built a couple of years ago after I just got really frustrated with… generally it was cross object stuff that was inconsistent or even within the same object and I just didn't wanna go down the validation rule path.

So the testing infrastructure is fairly straightforward. It is a set of several hundred SQL queries that are all declarations about our environment. They run in our data warehouse and they run against a replica of our Salesforce database that we dump into the data warehouse like most people do. And they look for things that might go wrong. And they're things that could be really tiny, like there is a missing piece of data somewhere that there really should be. Or maybe an inconsistency, like we're saying an opportunity belongs to the Mid-Market segment, but it's currently owned by an Enterprise rep. That's the sort of thing that’s a yellow flag. It might be wrong, it might also just be an exception that we made because we decided to make an exception that day for whatever reason. But it looks a little strange and it looks a little funny and we should talk about it.

So there are two key parts to this, assuming you have an analytics team that can get that read replica going in your data warehouse and a dbt project. And inside our dbt repo, like I said, we have several hundred declarations that all say something about how our business should function. So as an example right now, this declaration makes sure that the owner of the opportunity, their attribution that's on that opportunity – that they are in the sales team and they're on the Account Executive sub-team and they're on the Enterprise sub-team – matches what is actually written in the object.
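The pattern behind every one of these declarations is the same: a query that returns rows only when the rule is violated, and nothing when the environment is healthy. A toy version against an in-memory database (the table, columns, and rule are invented for illustration, not Retool's schema):

```python
import sqlite3

# One "declaration" test: Enterprise opportunities must be owned by
# Enterprise reps. Rows come back only when the data breaks the rule.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE opportunity (id TEXT, segment TEXT, owner_team TEXT);
    INSERT INTO opportunity VALUES
        ('006A', 'Enterprise', 'Enterprise'),
        ('006B', 'Enterprise', 'Mid-Market');  -- the yellow flag
""")
failures = conn.execute("""
    SELECT id FROM opportunity
    WHERE segment = 'Enterprise' AND owner_team != 'Enterprise'
""").fetchall()
# An empty result means the declaration holds; any rows are flags to review.
```

Scale that shape up to several hundred queries scheduled in a warehouse and you have the observability layer Jon is describing, with no validation rules in the reps' way.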

This allows me to go and observe when there are problems. Maybe there are issues with our automations or whatever it is, but without bothering the reps, without creating a validation rule that stops them in their tracks. It also allows me to move a lot faster. One thing that a lot of organizations do, and it's become an increasing trend inside of Salesforce development, is actually developing directly in production.

And Salesforce's product has actually pushed us in this direction by creating better debugging tools in production, in Salesforce flow and in other things, because they realize that you can move a lot faster than if you have multiple layers of sandboxes and deployments and things like that. All of that is still very powerful and very useful for very large organizations that have to coordinate. But if you are several hundred employees and you have a business that's still very fast growing, this sort of thing can be huge because it means I can move faster without having to test every little thing, knowing that my testing infrastructure is gonna tell me when something goes wrong – it's basically a set of indicator lights that start flashing to let me know if I broke something. Again, not something I would advise if you are working at a publicly traded company or some sort of much larger organization. But for a fast-growing startup, I think that's the right agility trade off. So all of these tests run every couple of hours against our organization.

And then once we've got those running, we actualize them inside of this Retool app. We call it our data testing app. And what this lets us do is this lets us look and see by object, opportunity account, etc. what's wrong. So you'll see these are the results of these tests and these are all things that I have to look through and fix.

These are inconsistencies with some of our finances. This is telling me that we have one deal that hasn't been marked as signed in our CPQ solution, etc. Now these are all, like I said, yellow flags. Sometimes they're red flags and I address them right away. And once I do, I can click on the record, open it, investigate. Okay, great. I took care of it. The Retool dashboard now writes back the fact that I took care of it, and it snoozes the record. This is all captured in a custom object in Salesforce.

Sometimes I wanna say “you know what? You know, yes. This rep went to EMEA for March.” And so what I might do is I might go and say, I'm gonna snooze this until this date. Great. And then this test will not show up again, or this failure will not show up again until that snooze date. So I can say “Hey, I just wanna put this on ice for a while,” and I can go and review those and see all of those exceptions. This has saved probably two or three headcount in my organization by making sure that I always have something – the testing infrastructure – looking over the data and making sure everything is consistent.
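The snooze behavior reduces to a simple filter over the test failures: surface a record only if it has no snooze date or its snooze date has passed. A sketch with made-up field names (in Retool's case this state lives in a custom Salesforce object):

```python
from datetime import date

def visible_failures(failures: list, today: date) -> list:
    """Hide failures whose snooze date is still in the future."""
    return [
        f for f in failures
        if f.get("snoozed_until") is None or f["snoozed_until"] <= today
    ]

failures = [
    {"record": "006A", "snoozed_until": None},
    {"record": "006B", "snoozed_until": date(2030, 4, 1)},  # rep in EMEA
]
# Only 006A surfaces until the snooze date arrives.
```

The same list, re-filtered on each run, is what makes the dashboard an exception queue rather than a wall of noise.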

38:13 DS: When you say this has saved headcount, you mean that for some people to manually go and review all of this data, it would be two or three full-time people, is that what you're saying?

38:25 JK: It would be both two or three full-time people, and I think about the fire drills and the fire alarms saved when things break and we have to rally all the troops to get some data together for a board meeting or something like that because things break at inopportune times. We don't have those problems because we monitor things constantly.

So we make sure that things are always ready to be analyzed and looked at. You can see this is anonymized, but it's live data. We do have issues in our stack right now, but they are manageable and muted and things that we will handle. And a lot of these are actually things that are pending project work that we're doing to make sure that they get fixed, so they're within acceptable tolerances.

39:08 DS: So let's say I’m on the Revenue Operations team, maybe I'm a Salesforce Administrator. I'm a little earlier in my career. I see this and think “oh wow, that's really valuable. Clearly we should have that at my company.” How would I go about recreating this? And I mean what skills do I need to learn in order to do this?

You mentioned dbt, you mentioned the data warehouse. How would you advise someone who doesn't know how to do any of this stuff on where to get started and how to end up at this place?

39:46 JK: Yeah. I mean, a lot of the technologies that we just talked about make this process simpler and easier, but at their heart, all we're doing is running a bunch of SQL queries, and those SQL queries return results if something's wrong and don't return results if nothing's wrong. So if you can work with your data team – or even if you can get a data warehouse stood up yourself, which is not that hard to do these days with Amazon Redshift or something like that – buy something like Stitch or Fivetran off the shelf (also pretty easy to do) to dump your Salesforce data directly into it, then it's just a matter of running these queries. Every single one of the tools I just mentioned has the ability to schedule running SQL queries, so you run them and you get an output and you look at it.

That's all this is at the end of the day. A lot of this stuff like the app that we're staring at right now, this is workflow management. This is taking that output and giving me the ability to say, I addressed this problem. I didn't address this problem – that sort of thing. This is, in my mind, how this sort of infrastructure becomes collaborative, how it becomes auditable, and all of that.

But at its core, it's really that. And if you really, really wanna boil this down to its core, it's downloading data into, I don't know, Google Sheets and doing some VLOOKUPs and writing some formulas and making sure things match.

41:03 DS: Yeah, but your point – I think the broader point – is this is a job that's easier to do outside of Salesforce rather than, in particular, imposing validation rules in Salesforce. So the key insight here is you've chosen to evaluate data after it has already been produced so that it is lower friction to reps rather than putting a bunch of blockers in front of them by having a validation rule on every single one of these things. And then you can go and address these things at some later date, you know, based on how important they actually are.

And like you said, in some cases there will be exceptions so you'll want to handle those manually. You don't necessarily want to have the validation rule because you will have exceptions here and there.

41:49 JK: That's exactly right. I think each of the things that we're talking about here, whether they're validation rules or analyzing your data after the fact, or even other tools in Salesforce like dependent picklists is a form of data validation. All of these things should be tools in a modern RevOps person's toolkit and we should apply them in very different ways.

Another form of data validation, frankly, has to do with UI. So I was talking to a member of my team. We have a button on our account record that allows you to click that button and create a new contract on the account. Very simple thing to do.

Well, you have a lot of choices about how you design the button. Does everybody see the button? Does just the account owner see the button? Does just somebody who's on the contracts team see the button? Well, all of these things lead to slightly different UIs and UXs. If everybody sees the button, then you've gotta make sure that the right people are clicking the button and the wrong people aren't clicking the button. And if the wrong people click the button, then it can create data issues downstream. So, okay, let's hide the button from those.

But now if I hide the button from folks, sometimes they're gonna start pinging you and they're gonna be like “well, I don't see this button.” And then you're gonna have to explain to them “well, you don't see the button because this account you're allowed to create contracts on, but that account you're not allowed to create contracts on.” So in this instance, I might have the button visible to create a good UX, but when you click on it, it provides you with an error that says “Hey, you need to move the account into a different state before you're able to create a contract.”

So the little nuance there – is it hidden when it's not relevant, or is it visible when it might be relevant, and then you throw an error that tells the rep how to fix it – is very important because in the former case, I wind up with a Slack message from the rep saying “I can't find the button you just told me about.” In the latter case, yeah, I create a little bit of friction, but I educate the rep at the same time.
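The visible-but-guarded choice Jon lands on can be sketched as a single check at click time. Everything here is hypothetical (the account states, messages, and function are illustrative, not real Salesforce APIs):

```python
# Keep the "New Contract" button visible to everyone, but guard the
# action and teach the rep why it failed, instead of hiding the button.
def click_new_contract(account: dict) -> str:
    if account.get("state") != "Contract Ready":
        # The error doubles as enablement: it names the fix.
        return (
            "Error: move the account into the 'Contract Ready' state "
            "before creating a contract."
        )
    return "Contract created."
```

A hidden button avoids the friction but trades it for a Slack thread; the guarded button spends a little friction to keep the downstream data clean.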

And what I just described and thinking through that UX is ultimately about clean data because it allows only the right people to interact with the system in exactly the way I want them to and/or take the corrective actions I need and educate them on it.

44:13 DS: This goes to what you were saying right at the beginning of our conversation about Steve Jobs and how he declared that the iPhone shouldn't have a manual, it should just work. And how RevOps teams really are like the software and design and UX teams for the internal customers of salespeople, sales leadership, customer success, and so forth. That's, I think, a good way to highlight exactly what you're talking about.

44:45 JK: Exactly, and that's why it's so important, I think, for revenue operations folks not to think of themselves as just Salesforce admins or strategy people or whatever. We are building a product and we all have to think of it exactly that way. We all have to think about the nuances. We all have to really sweat the details about this stuff.

And as an example, if I were to share my Salesforce view unvarnished and unanonymized right now, you would see tons of buttons and tons of fields and tons of things that, if you were a rep, you wouldn't see. Because I'm an administrator, and so I need lots of knobs and switches and dials to pull and all kinds of weird, esoteric stuff that I wanna look at. All of that should be abstracted away from a seller, 100%. So a seller will only see the one or two things they need to do at that moment.

45:42 DS: You've shown us how you keep data clean. What do you get when you have this clean data? Do you have an example of, like I said, the pot of gold at the end of the rainbow? What's the huge benefit that you get with all these systems in place?

45:57 JK: So one of the main work products that we get – you talk about analysis and analyzing the data and understanding the data – well, what I'm about to describe is a common challenge that oftentimes Data teams are asked to solve, which is taking raw data and modeling it in a way that is consistent. The way we calculate win rate should be exactly the same way every single time. The way that we calculate coverage ratios and pipeline and all of those sorts of things should be the same all the time. A lot of times, folks will turn to Looker with LookML or layers in dbt or something like that.

The challenge with all that – and that's all great – is that you're now pulling the expertise that is defining those metrics further away from the people who actually understand the nuance behind all of those metrics and the people who are incentivized to maintain it all. So what we've done here is rather than outsource that to the data team, we have a custom object in Salesforce called the Goal Object, which started as a way for us to understand what everybody's goals were.

So you're an SDR, you've been here for 10 months – there are 10 rows in the Goal object with your name next to each row, and for every single month that you've been here, it's got the number of sales qualified opportunities you're supposed to deliver. I built this a long time ago because, again, at every company I've ever worked for, one area of very bad data has always been the rep roster.

So who's on what team? What quota do they have? What manager do they work for? What role were they in? Oftentimes we know what that looks like right now, but for analysis we actually care most about understanding what that looks like going back into history. So I built it myself to map all of that stuff out.

And then what this object became over time was bigger than that, because if you had the goals on a single object for every single month and period, you also could figure out the actuals. You could figure out performance, you could figure out how people were trending, and you could build a whole bunch of basic transformations on that data that allow you to very quickly get to answers to common questions that folks ask, and then make that self-service.

People knock Salesforce reporting as an example, and they should 'cause it's pretty lousy. But at the end of the day, it's just an easy way to pull records out of a database and visualize them. So if you do a little bit of the modeling ahead of time, Salesforce reporting can actually be quite powerful.

So as an example, our Goal object has all of the things that I just described. So if we pull a very quick query of our Goal objects, we can see here: this is exactly what I was saying for a given rep – and I've anonymized this data, so you know, this is all the same rep – I know for this month, this period – so this was May – what was their ramp percentage? What number month were they in seat in that role? What team, sub-team, level, segment, territory did they support? And then I've got their goals. So this was this particular rep's ARR goal, the number of sales qualified opportunities that they needed to produce, and so on and so forth.

Now, if you go into the query here and you look, millions of transformations are available. So I can look at their sales qualified opportunities quarter to date. I can look at them half to date. I can look at them year to date. I can do all of these things. You can see deal counts are available, average deal size, all of these permutations that are all things that you might ask.
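Conceptually, the quarter-to-date / half-to-date / year-to-date figures Jon describes are just windowed sums over the monthly Goal rows. A minimal sketch of the idea – the field names (`sqo_actual`, `sqo_goal`) are made up for illustration, not Retool's actual schema:

```python
# Illustrative rollup over monthly Goal rows; field names are hypothetical.
def rollup(rows, field, months_back):
    """Sum `field` over the last `months_back` rows (rows sorted oldest to
    newest) – e.g. months_back=3 gives quarter-to-date at a quarter end."""
    return sum(r[field] for r in rows[-months_back:])

rep_rows = [
    {"month": "2023-01", "sqo_actual": 8,  "sqo_goal": 10},
    {"month": "2023-02", "sqo_actual": 12, "sqo_goal": 10},
    {"month": "2023-03", "sqo_actual": 9,  "sqo_goal": 10},
]

qtd_actual = rollup(rep_rows, "sqo_actual", 3)  # 29
qtd_goal = rollup(rep_rows, "sqo_goal", 3)      # 30
attainment = qtd_actual / qtd_goal              # ~0.97
```

Average deal size, deal counts, and the other permutations he mentions are the same shape: a different field, a different window, computed once on the object so every consumer sees identical numbers.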

Now, this is really great raw data to have access to, and this is a hundred percent automated, but it turns into nice looking reports that look like this, that are easily viewable in Salesforce and are very exec-friendly. So, revenue pacing, you can see folks' goals, et cetera. This is for an AE team. This is for an SDR team that we have – how many sales qualified opportunities are there. So this stuff is what actually allows us to answer questions really quickly. Then, with the folks on my team being literate in SOQL, which is the Salesforce query language, they can hook this stuff up to spreadsheets real fast.
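SOQL reads a lot like SQL, so pulling a rep's Goal rows for a year might look something like the sketch below. The object and field API names (`Goal__c`, `Rep__c`, and so on) are invented for illustration – Retool's actual schema isn't shown in the episode:

```python
# Build a hypothetical SOQL query against a custom Goal object.
# Object and field API names (Goal__c, Rep__c, ...) are invented examples.
def goal_query(rep_id, year):
    fields = ["Period__c", "ARR_Goal__c", "SQO_Goal__c", "SQO_Actual__c"]
    return (
        f"SELECT {', '.join(fields)} "
        f"FROM Goal__c "
        f"WHERE Rep__c = '{rep_id}' AND CALENDAR_YEAR(Period__c) = {year} "
        f"ORDER BY Period__c"
    )

soql = goal_query("005XXXXXXXXXXXX", 2023)
# A client such as simple_salesforce could then run sf.query_all(soql),
# and the returned rows drop straight into a spreadsheet or DataFrame.
```

Because the modeling already lives on the Goal object, a one-line query like this is all it takes to feed a spreadsheet, which is the "real fast" workflow Jon is pointing at.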

And so I remember when the world started turning sideways about a year ago, and all of the macroeconomic contagion started, we, like many businesses, said “hold the phone – we gotta figure out what's going on here.” Also we, like many businesses, had our BizOps team do a full-on analysis to figure out what was going on.

And I delivered this goal object to the BizOps team, and they were able to produce something like a 300-page analysis and a look-back in about 48 hours using this data because it was easy and clean and it had every rollup that they needed. And they just threw it into charts, reviewed it, and we've actually been able to rinse and repeat that.

Now, it's a lot of pages, and we've since gotten it down to about 70 that matter. But the reality is that's where the value in clean data is. Because if you had asked somebody to go on a fact-finding mission in some of the other organizations that don't have these sorts of things, that would've been a month-and-a-half-long exercise, it would've been multi-part, lots of folks cleaning and scrubbing and standing it all up, and then to do it again the next quarter would've been really hard. So those are the hidden ways that clean data really accelerates the business.

51:35 DS: You've talked a lot about the value of these things to management and to the leadership team in making decisions and being able to very quickly get analysis done that is gonna drive some of these decisions. But why should reps care? Or should they? Does that even matter?

51:54 JK: I think at the end of the day, reps probably shouldn't necessarily care about the clean data. If you get this right, they shouldn't have to worry about it. They should be doing their jobs, entering the information that makes sense to them about the deal. So things like MEDDPICC and deal strategy and all of those sorts of things, which they care about entering because their managers will ask them about it – will ask them about what's going on with the deal – and they will care about entering that information if they're a bit more senior and have an account team that they're working with, because they wanna work well with others.

So if they're collaborative folks, they're gonna care about that sort of stuff. Reps absolutely should not care about how your bookings come together. They shouldn't care about attribution. They shouldn't care about any of the esoteric things that do drive a lot of the decision making. Which is why, in my view, a Revenue Operations systems team's efforts need to go towards only interfacing with the reps to ask them the plain-text questions they absolutely need to ask in order to make those decisions, and then have everything else automated and automatically categorized by the system.

However, if you wanna get a little bit more meta about this, reps should care about management decision making because – and this is what I tell my reps when they ask these questions – this is how I get you more money. This is how I get your quotas lowered when times are tough. This is how I make the case that our ASPs are increasing and our deal velocities are increasing.

And that means that we can support higher OTEs, and we're producing more, and all of these sorts of things which they do care about. Or, as an example, if we have really good data about professional services and how it's working with our subscription business, that might give me the ability to go to finance and argue that you should be making a small bounty on every professional services deal that we sell.

There's that sort of thing; they do care about those things. So if you can link clean data back to their pay, to their day-to-day experience, then you definitely get folks caring about it.

53:59 DS: All right. Thanks, Jon. This has been a really great session. I appreciate the deep-dive and showing some of the stuff that you've built directly in Salesforce and Retool. I think this is gonna give people really good inspiration for what they can someday aspire to in the world of clean data, or maybe give people the pathway there now.

54:24 JK: Awesome. Yeah, this was fun.
