Growth@Scale – Episode 19 – Rich Skinner

MAVAN, January 8, 2024

0:00:05 - (Matt Widdoes): Welcome to Growth@Scale. I'm your host, Matt Widdoes. This is a podcast for leaders who want to bring sustainable, predictable, scalable growth to their businesses. Every episode, I sit down with world-class growth experts across product, marketing, finance, operations, you name it. The hope is that these conversations will give you real, actionable advice for building and sustaining company growth.

0:00:33 - (Matt Widdoes): Welcome to this week's episode of Growth@Scale. I'm your host, Matt Widdoes, and today we're joined by an expert who's been an integral part of data teams at Google, Zynga, Zillow, and many more. He's now our head of data at MAVAN. Rich Skinner, welcome to the podcast.

0:00:47 - (Rich Skinner): Thanks, man. What a wonderful intro.

0:00:49 - (Matt Widdoes): Yeah, you're welcome. Well, for people who don't know you, tell us, who are you? Where have you been? What do you do?

0:00:54 - (Rich Skinner): Where to begin with that question? So, who am I? I'm Rich Skinner. I love everything data, as that intro probably alluded to. But who I am is someone who honestly really loves helping people. So I actually got my start not in any kind of data career. I actually got my start doing aerospace engineering. My father was a mechanic and aerospace engineer who worked at McDonnell Douglas and at Boeing. And I thought that's what I wanted for my career. And that's where I began, up until I realized that my first jobs out of school were actually creating weaponry. And so that was kind of antithetical to what I'd hoped to do with my life.

0:01:36 - (Rich Skinner): I decided that what I really wanted to do was help people. So I actually changed my last semester in college to focus on green engineering. That's not really that popular of a major, but it got me into classes that focused on sustainability and preserving the world, covering a lot of different topics. And then I actually went into environmental consulting, which was able to mix my math and engineering background with solving critical problems around energy.

0:02:04 - (Rich Skinner): So what I was able to translate there is an ability to help people through data, and I think that really kick started my entire career.

0:02:12 - (Matt Widdoes): Cool. Well, I love the background, and did not know about the experience in weaponry. We'll have to talk about that more, maybe on a different podcast, about all the declassified things you worked on there. But I'm curious. A lot of our listeners are CEOs of early-stage ventures. Many are growth experts in other areas, and many of them, if not all of them, work really closely with data teams. But they themselves are not on the data side in the traditional sense.

0:02:36 - (Matt Widdoes): When thinking about growth, what are most people likely to overlook on the data side, particularly non-traditional data people? And what are some of the core elements that would be really valuable but don't get put in, or don't get put in early enough?

0:02:49 - (Rich Skinner): Yeah, I would say most of it. But generally, I find that there are two different types of people. The one is the experienced technologist. So whether you're a product manager, or you have worked in customer service and you're starting a new company, or you're in operations, people who have had proximity to some kind of data talent within their group or their company understand the benefits of it. And it's really, really easy to espouse why data is important.

0:03:19 - (Rich Skinner): And the reason I call this out is I think there is a difference between those who eventually start their own companies and have had that experience and background, and have seen not just the prevalence of data, but also the power of it. They have seen it firsthand. You no longer need to convert them to understanding why some of the best practices are important. The other side are people who probably haven't touched it, or may have touched it very, very lightly. They have never worked with anyone in data analytics, not even Excel spreadsheets; maybe they're creating a business that's a person-to-person business. So, you mentioned I worked at Zillow, and we saw that a lot with agents who were starting their own business: very averse to using data, very averse to quantifying outcomes, wanting to move quickly or make decisions through gut feel. It ended up punishing them down the road, because that aversion to measuring what matters ended up permeating their entire business.

0:04:11 - (Rich Skinner): So financially, they weren't keeping a great look at their books. They pretty much knew that they were surviving. And if there was more money coming in than coming out, everything was okay. So they didn't need to measure their conversion rates, they didn't need to measure their funnels, they didn't need to measure reactivation campaigns. They weren't interested. And I think there is a very big difference between somebody who has already affixed themselves to not wanting to do it versus somebody who understands its value.

0:04:38 - (Rich Skinner): And I think that leads into the rest of the things that are missing, and what's important when you get there. So, one, I think data readiness, a data culture, and an understanding that you don't know everything lead into how you create the best possible architecture or practices that will help you get to where you want to go. And I think the place to start is just being inquisitive and admitting that you don't know certain things. The only real way to know for certain is some kind of causal measurement, and you really are only as good as what your data is able to tell you. And I think that goes pretty well with the current quick-iteration, lean-startup methodologies. Depending on if you're just starting out in your company, or if you've been long entrenched in a position of leadership, it is critical to be able to iterate quickly, learn quickly, and promulgate those learnings. And there's certain infrastructure (we can go deeper, if you'd like) to make sure that when you do have to make those decisions and you do need to iterate quickly, you're getting the right answer instead of the wrong answer, which can lead you astray.

0:05:43 - (Matt Widdoes): Yeah. So you've covered a few elements there that I think are worth calling out. So, one, sometimes people, because of their industry or because of the volume of information, like the example of a real estate agent or a seller, lack the ability to capture that in a low-friction way, versus having the breadth of data points to be able to infer something. Because you can just be like, okay, I lost ten deals. I gained one deal.

0:06:10 - (Matt Widdoes): What am I to infer from that? And so it's like, where are you tracking it? How are you tracking it? And this is probably not the case, but I would think most people generally agree that more data is better than no data, and that data is helpful in making decisions. It's like a prerequisite to really making decisions, because so much can be boiled down to gut decisions. And we see this all the time, and have seen it many times before.

0:06:32 - (Matt Widdoes): Companies with tons of data are still operating based on their gut, or based on the feeling of the founder or the feeling of some business leader, and, one, nobody's stopping to question it. Or, two, they don't actually have the data to fight it, which is why they're not making decisions with the data to begin with. And so when you think through early stage, let's take somebody who's got a digital business, because that'll make it easier.

0:06:53 - (Matt Widdoes): And they have some customers, and they have lots of points in their funnel that they need to track. They have different things, like emails, that they need to think about. The consumers are coming in and out of the product, or engaging with the software or company at all sorts of different points, and they're buying different things. And there's more of a matrix of stuff to track.

0:07:16 - (Matt Widdoes): For the sake of this example, let's say the person has properly integrated Google Analytics and they more or less are tracking these things. What are some of the early things that people often overlook there where they have data, but they just have this bank of figures and they're not really necessarily slicing that up or it's not able to serve them yet. What are kind of some of those gaps?

0:07:37 - (Rich Skinner): Yeah. So I think you're largely right that more data is generally better, insofar as that data is organized and accessible. If it isn't, then you're probably going to spend a lot of money on data that you're not using, and that will cause its own problems. If you're starting, say, a digital business, one of the things that we see pretty often is just a lack of measurement around conversion, a lack of understanding about the scientific method, and a key struggle to understand the levers of growth and churn. And it's basic kinds of things about product, right? So I think most people can understand a level of friction when it comes to a product, but they may not understand that adding in a couple of steps will actually reduce the amount of traffic they get to a page. And so I see this all the time with companies that have a find-a-store locator two or three steps down a funnel, and then that tool on its own has some friction, because it has many funnels embedded within it, just for someone to potentially sign up for their product. And they haven't fully considered how much friction they're putting into their current funnel. But overall, I see a ton of clients these days who say that they're trying to measure conversion for a particular product or service, and then their ability to let a customer buy that product or service is so convoluted or so difficult that they haven't architected a way to measure that and tie those two things together.

0:09:01 - (Rich Skinner): And you see this predominantly with companies that have an ecommerce presence, but maybe that presence isn't a predominant part of the revenue, and they're still largely retail. And tying those two things back together to grow their ecommerce business, which will have higher margins, is really difficult. They're not entirely understanding the questions they need to ask themselves in order to understand how to get from someone landing on their web page to someone buying in store. And how do you actually measure that entire journey? And what are the most important things to measure? What are the critical indicators to let you know, say, even if you can't measure conversion directly? Though you should obviously still be measuring conversion. So that's what I see most often. And then I would say lastly, it's really around constant data quality. It's kind of like people who make a decent amount of money and then wonder why they're living paycheck to paycheck.

0:09:51 - (Rich Skinner): Data is very similar. You actually need to look at your reports as well. This is going to sound incredibly basic, but a lot of clients that we talk to do not actually look at any of the reports that are created. They don't see any of the insights. They don't understand their different segmentations, who's buying the product and who's not buying the product. Even looking at it once a month: I would say there are plenty of customers who don't look at something like Google Analytics that often.

0:10:15 - (Rich Skinner): They might look at it once a quarter, maybe once a year. And that is far, far too infrequent to really understand your business. And so my key call-out for those kinds of companies is: make sure you understand what is critical to your business surviving, and whatever that is, you're looking at it every day.

0:10:31 - (Matt Widdoes): Yeah. And I think that, in the general sense, what I've seen in this respect is it's not that people don't care about it, or that they don't realize that there's value in it. It's that everything, especially if they're not a data person full stop, can appear to be overwhelming. From setting it up correctly (and that's a huge asterisk that is scary), to making sure it's organized and can be leveraged and can be trusted, right? So it is set up correctly in that the pipes are flowing, and it is accurate in that it has been tested and back-tested and revisited many times. And every time you add something new in: is that actually now tracking, and should it be tracked, and where should it live? To the segmentation of that in reporting, and the automation of that report, to the analysis that comes from that. So even if you have a great report, to your point, if you're not looking at it regularly... I think even then a lot of people are like, oh man, it's so overwhelming. And it's way easier, or more natural, for a non-technical, non-data person to gravitate towards the next product feature, or the next button change, or a new page on the website, or the next blog post, or the next anything that isn't data that seemingly, or their gut tells them, will get them a step closer to more revenue or more clicks or widgets or whatever it is that they're pursuing. I think that's why it often gets overlooked. And data people, by and large, are also fairly expensive in the grand scheme of things. And so I think oftentimes people try to wing it, and then it's one of those things that they don't do a really great job at.

0:12:04 - (Matt Widdoes): And they kind of convince themselves that they're doing an okay job of it, even though they haven't done much of anything to back that up. And part of that comes from ignorance. Part of that comes from a lack of experience, or a real sense of what is actually good versus what I think is good as a non-data person. And you mentioned, on the metrics side, thinking about the data points that matter. And I'm curious, granted, every business is different. So if you're selling real estate as a real estate agent, or you're Zillow pushing new offers and trying to level up things that people are interested in, or you are door to door, whatever, those things are going to be different. But at the foundational level, generally speaking, what are the main data points that people should be looking at at a high level?

0:12:43 - (Rich Skinner): Yeah. And I think you touched on that a little bit, on how custom a solution is. Even the example you gave, for Zillow, which is a lead generation business.

0:12:51 - (Matt Widdoes): Right?

0:12:51 - (Rich Skinner): So if you get into the lead generation business, say I want to sell advertising to customers. One, you need to be able to understand how many customers you have coming to your platform, but also how many people are advertising on your platform. And at a basic level, even if you're doing this highly inefficiently, and you say, if you want to advertise in this particular geography, it's a dollar, you want the most number of people paying that dollar. And so if you're looking at what matters most, at the end of the day, I just need people paying me the dollar. And more people, more people, more people paying me the dollar.

0:13:24 - (Rich Skinner): And we're just going to assume, theoretically, that the unit economics is great on this.

0:13:28 - (Matt Widdoes): Sure.

0:13:28 - (Rich Skinner): You just want more of those people paying you the dollar. And so if you were to say, like, okay, what metrics matter to me, it's going to be like, okay, I just need people paying me the dollar. Yes, there are ways to make this more efficient. Yes, there are ways to improve audience on the buy side of this. But really what you want is kind of like that key metric. It's just like you work at McDonald's, whether you're ecommerce or not, you need more people coming through the door to buy things. Now, do you eventually want your average order size to be larger, 100%? Do you want that average order size to be high margin?

0:13:58 - (Rich Skinner): You absolutely do. But those are secondary to the primary key of getting people in the door and selling. And so whatever your business's analogy is for how do I get people paying me for the thing that's going to keep this business alive and thriving, that's usually the first step that we try to level-set with our clients, or what I've done in the past with stakeholders I've worked with. It's just reorienting them to a ground-up approach.

0:14:22 - (Rich Skinner): And it seems simplistic over time, but everything really is. I've done incredibly complicated math courses, but they're all predicated on you knowing addition, subtraction, multiplication and division. And that's exactly what this is. And so it seems very basic to begin with, but you look at yourself a year or two later, and you find that you're making much more sophisticated choices, much more deliberate concessions around what you end up selling people, or the avenues you might go down, once you have that key metric in view every single day. And then you continue to focus on more and more things as you go along.

0:14:56 - (Rich Skinner): And maybe eventually that key metric no longer matters to you as much. And you saw this with a lot of growth companies that were focused on gaining audience, right? Audience was their currency, especially in the VC world, hoping that that audience eventually translates into money. But once you do hit critical mass, what do you need to do? You need to actually monetize those users better. And so I think every business goes through this maturation cycle of eventually hitting a point where you're needing to grow in order to scale, and then you need to create efficiencies around that scale, or create ancillary products to service other opportunities in that market that you haven't previously considered. And so for anyone looking to get started: start with the most fundamental, basic thing that you can possibly start with, and move out from there; move out from what is actually going to make this business hit a specific goal. So say I want $1,000 in revenue, because $1,000 in revenue means $500 in profit. And again, we're going to pretend that $1,000 is materially important and has good economics. And so eventually you make that $1,000 for six straight months. You're probably going to say, okay, well, I realize I'm spending 20% of that $1,000 on this thing that could be done more efficiently. So if I build this particular thing, it reduces that cost by 20%. So now you're making $1,000 with $300 in costs. Or you're going to say, hey, I can make $1,000, but if I add ten more people over the year, it's going to amount to an additional $120, which allows me to invest in more things. I could reinvest that $120 in something else. So this is somewhat of a long-winded answer: the basics get you to the stepwise function to get to where you eventually want to go, which is a much more sophisticated solution. But no need to boil that ocean early. 
And I think any data person will tell you the earlier it is, the simpler you should be. You should not be complicated early on. And it's typically why you don't see a lot of early companies with a modern data stack or you don't see them with tremendously complex infrastructure, because it's really like the early wins that get you to that eventual efficiency.

0:16:54 - (Matt Widdoes): Yeah. Well, and I think, going back to starting with what your goal is and what your desired outcome is, and then working backwards, even with limited information or purely with assumptions when you're starting, even if you have zero data to work off of, it is important. It's that classic: if you don't know where you're heading, it doesn't matter which way you go. Because take the most basic example of, say, somebody who's, I don't know, selling anything, and they know that, okay, if I talk to ten people, I can get one sale. Okay, well, how many sales do I need to make for this to matter? This many sales. Okay, then it's that many sales times ten is how many people I need to talk to.

0:17:30 - (Matt Widdoes): And then what do I need to do to get that many people to talk to me? This. And you really are looking at the overall funnel, making assumptions or leveraging the data that you already have from what you've done historically, and then you get into this area where you can start making optimizations to every part of that. And so that's where it can get, not complicated, but a little bit more broad in its scope. And I think so many people are just making choices all up and down the funnel at all times, where they're like, okay, we're going to try this new offer, or we're going to try this new channel to attract first calls, or we're going to put this new web page up. And they may not even be operating on the area of that funnel that is most efficient to increase what they actually care about at the bottom.
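The backwards math Matt walks through, from a sales goal to the top-of-funnel activity required, can be sketched in a few lines. The close rate and sales target below are illustrative assumptions, not figures from the episode:

```python
import math

def required_conversations(target_sales: int, close_rate: float) -> int:
    """If `close_rate` of conversations become sales, how many conversations
    are needed to hit `target_sales`? Rounds up, since partial talks don't count."""
    return math.ceil(target_sales / close_rate)

# e.g. one sale per ten conversations, and a goal of 25 sales:
needed = required_conversations(target_sales=25, close_rate=0.10)
print(needed)  # 250 conversations
```

The same shape of calculation can be chained backwards through each funnel step (visits to calls, calls to sales) to see which stage's rate most needs improving.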

0:18:10 - (Matt Widdoes): And I think if you take the most basic version of that, and again, hopefully this isn't oversimplifying things, but you think of a kid running a lemonade stand. It's like, okay, well, you're probably going to way overestimate what you think you're going to sell, right? Because you don't know. And that's okay. We're all, in some ways, kids at lemonade stands doing something. It's just a different problem.

0:18:29 - (Matt Widdoes): And so it's like, okay, let's make enough lemonade to sell 100 cups. And so we do. And then we go sit out somewhere, and we realize that first day we sold ten cups. Okay, well, you could track what time of day those cups were sold. And you might have found that they were all sold between three and four. Okay, that's important. You might find that they were actually completely dispersed, but that'll change your approach. If you realize that everything's being sold from three to four, then it's like, okay, well, now I just need to do only an hour a day and I'll sell ten cups. And that's maximally efficient. You may test the location where you put the stand. You may test the day, right? Maybe Saturday isn't better than Sunday.

0:19:04 - (Rich Skinner): Probably not great lemonade prospects during the week.

0:19:07 - (Matt Widdoes): Yeah, exactly. And so there are all these things that start to emerge if and when you're looking at the data. And then if you look at that and say, okay, well, in order for me to care about this lemonade stand, I need to make $100 a month. And then you think of saturation, other things, right? Like over time, you start tracking it and you're like, I'm not selling lemonade anymore. I'm doing all the things the same way, but you realize that nobody wants to keep buying lemonade from you every single Saturday.

0:19:30 - (Matt Widdoes): So that's, again, a very simplified business. But in some ways, hopefully that's helpful, because everybody else is working with way more complicated stuff. And so maybe let's talk a little bit. I mean, you've spent a ton of time on experimentation, thinking through not only things like statistical significance, but identifying where in the funnel there are clearer opportunities for optimization.

0:19:50 - (Matt Widdoes): Sometimes that might be on the tool side. Sometimes that might be a faster-loading page for the website. Sometimes that might be copy. Sometimes that might be better data extraction or reporting, visualizations, et cetera. It could be any number of things. Let's talk about that. So, generally, with experimentation, I think this is another one of those things where everybody agrees: well, yeah, you should experiment. You should change things up. You should try limeade, you should try a Saturday instead of a Monday.

0:20:14 - (Matt Widdoes): Most people are like, yeah, I get it. But I think oftentimes, how people think about experimentation, like how they think about data, or how they think about anything that they're not super deep in, they're often thinking maybe too surface-level about it. So talk a little bit about that, in no particular order. Why is experimentation important generally? And maybe what are some examples of early testing that businesses can do if they're not really running a consistent experimentation process today?

0:20:40 - (Rich Skinner): Yeah, great question. Why experimentation? Why is it important? Well, I think everyone has probably heard the adage: correlation does not equal causation. And really, what most businesses are looking for is, did X cause Y? So, if I create this promotion, will it boost sales? You're trying to answer those kinds of questions. And that's where A/B testing, or experimentation, can really give you a better, deeper look at whether that happened at random, or whether it happened because of the intervention that you introduced. And that kind of question is key, I think, to driving a lot of business value. It's why a lot of companies focus on learning over results, because sometimes what doesn't work is just as important as what does.

0:21:28 - (Rich Skinner): So that kind of causal methodology is pretty critical to saying definitively, or I should say with much higher confidence than otherwise, what might actually be happening on your web page or what might be happening in your store. And I think what a lot of people struggle with there is keeping it simple. These tests don't need to be huge, but what we often see is huge redesigns. You think something is wrong with your website, so let's change the entire website into something that looks so much better than the existing website does, and has all these new fancy modules, and looks like an architecture magazine website with a bunch of features and widgets, when in actuality that probably makes it a little bit more difficult. And so, for those who maybe are just starting, there are probably two camps people fall into.

0:22:15 - (Rich Skinner): This might get me into a little bit of trouble, too, but: people with enough volume and those without enough volume. And generally speaking, you're going to make changes that are probably relative to best practices in whatever domain you're looking to be in. So we can use core conversion rate optimization as an example here. Best practices around your hero copy and subcopy, and making sure that the positioning actually shows your value proposition and differentiation, are critically important. And changing that one thing could be what leads customers to actually converting on your website versus not. But testing that on its own lets you know not just that changing copy can have an impact on your website, but that changing people's understanding of your product means there are probably better ways to talk about your product. There's probably something lost in translation there. Most people spend a ton of time in that hero section, and the vast majority of people don't actually scroll down to the bottom of a page. And so I think sometimes it is overlooked how important that hero section is. So if you're a company who's just starting out, maybe you started your own e-commerce company.

0:23:19 - (Rich Skinner): Easy things for you to try: just make sure that you uniquely talk about your value prop, talk about your differentiation, but also make it emotionally appealing. And the same thing goes with imagery. This might seem a little reductive, but having people in your images, in your hero section, actually really, really helps. Sometimes it's forgotten that we sell to people, and there can be tremendous gains in conversion rate when you actually see people in an image performing an action you want. So if you're selling a soccer camp, show kids having a great time kicking the soccer ball around in that hero image, and you might find that it boosts your conversion rate substantially, because it translates what you're actually selling, along with that copy, more instinctually than a large paragraph full of text that people are probably going to gloss over once it's too long. So I think, from that lens, it's the best practices that really rule the day.

0:24:09 - (Rich Skinner): Making sure that, for example, in the body of your website, you're clearly showing how your product works, examples of how it's going to help someone. We could probably go on a long conversation about the jobs to be done framework, but that's a framework that I traditionally use when helping people with their own landing pages or their home pages, making sure that they are speaking to what job that they are solving for their prospective customer.

0:24:33 - (Rich Skinner): And so, from that vantage point, best practices are key. On the more quantitative side: say you have hundreds of thousands of data points, to millions or potentially billions or trillions, you've probably done some testing. But if you're starting out, you have a lot of leeway to do frequentist testing. And so the scary p-value and the difficult-to-comprehend statistical significance: you can use those techniques to understand:

0:24:59 - (Rich Skinner): How do I get from my website, now with potentially hundreds of thousands of users, to having a meaningful conversion? And that's just going through the scientific method, which I referred to before: take a hypothesis. Most use it as an educated guess, and it really is, in this instance: I expect that changing that hero image from just a soccer ball to kids playing soccer will impact my conversion rate by 5%.

0:25:25 - (Rich Skinner): You can then measure whether that is an actual, impactful change by testing it versus the old version, just the soccer ball, and determining whether there is a difference. Now, I'll say A/B testing is not easy. A/B testing is a very difficult thing for anybody to do correctly. There are a ton of biases you need to watch out for. So if you run a test, you should run it for, at minimum, seven days. If you don't run it for at least seven days, you get day-of-week biases. So in the lemonade example, imagine most of your customers come on Saturdays, but you only ran your test Monday through Friday. You're probably not going to understand the impact of that result as well as if you had actually ruled out day-of-week biases.
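The frequentist comparison Rich describes, control versus new hero image, is often done with a two-proportion z-test. A minimal sketch, where the traffic and conversion counts are made up for illustration:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """z statistic and two-sided p-value for the difference between two
    conversion rates (pooled standard error, normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the error function, so no SciPy is needed.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Illustrative week of traffic: control converts 500/10,000,
# the new hero image converts 560/10,000.
z, p = two_proportion_z(500, 10_000, 560, 10_000)
print(round(z, 2), round(p, 3))
```

If the p-value clears your chosen threshold (conventionally 0.05) the difference is unlikely to be random noise, though, per the seven-day point above, only a full-week window makes the comparison fair.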

0:26:04 - (Rich Skinner): There are also other selection biases, depending on how you actually set something up. So, for example, at Zillow, we had issues where everybody is put into a test or experiment based on their cookie. If you erase your cookies, or if your cookie has expired, you might actually get put into a different experiment. Which means that now the two groups you hoped to test against each other at 50/50 are no longer 50/50.

0:26:26 - (Rich Skinner): And now you actually need to do another test to determine whether the sizes of those groups are close enough to 50/50 to still be able to run your test. There are also a ton of novelty effects. So you might see that people really, really enjoy your product the first week compared to the control, and then find out over time that they don't actually like it so much now that they have used it for quite a bit of time. The bright new shiny thing isn't all that bright and shiny; they actually miss some of the functionality of the previous version.
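The "are my groups still close enough to 50/50" check Rich describes is commonly called a sample ratio mismatch (SRM) test, often done as a chi-square goodness-of-fit test against the intended split. A minimal sketch; the counts and alarm threshold below are illustrative:

```python
import math

def srm_p_value(n_control: int, n_treatment: int) -> float:
    """Chi-square (1 degree of freedom) p-value that an observed
    control/treatment split really came from a 50/50 assignment."""
    total = n_control + n_treatment
    expected = total / 2
    chi2 = ((n_control - expected) ** 2 / expected
            + (n_treatment - expected) ** 2 / expected)
    # For 1 dof, P(chi2 > x) = erfc(sqrt(x / 2)).
    return math.erfc(math.sqrt(chi2 / 2))

# 50,600 vs 49,400 users looks close to even, but at this volume
# it is a strong signal that assignment is broken (e.g. cookie churn):
p = srm_p_value(50_600, 49_400)
print(p < 0.001)  # SRM alarm fires
```

A common practice is to treat any SRM p-value below roughly 0.001 as a broken experiment and investigate the assignment mechanism before trusting any metric from the test.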

0:26:55 - (Rich Skinner): And so getting started with A/B testing on the more volume-rich side comes with, I should say, a little more caution. You need to make sure that when you run your test, your test gives you the results you expect. And there is this law that I think everyone should abide by whenever seeing anything surprising. It's called Twyman's law, which says that if you see anything unusual in a test, it probably is wrong. And so if you've ever seen the instances where somebody changed the color of a button and it changed conversion rates 10%, you should remain very, very skeptical about those kinds of changes. And then even vice versa: if you launch a new feature and something is down. We actually saw this at Pandora. We launched an A/B test, and our video ad revenue dropped 97%.

0:27:44 - (Rich Skinner): Believe it or not, we did a web redesign, and the new Pandora page dropped ad revenue 97%. Now, did we actually think that the new experience was 97% worse? Absolutely not. What we realized is that in developing that new Pandora website, we hadn't included as many of the ad-based triggers as we normally would have. So, for example, in the old Pandora, if you listened to a song five times, that would trigger an event for a video ad. In the new Pandora, you could listen to a song 20 times and never get that video ad. And so what we found is that it wasn't the new experience. The experience wasn't inferior and causing a drop in ad revenue. It was that there were missing mechanisms in the product. And so it's really easy to introduce bugs into an experience.

0:28:33 - (Rich Skinner): It's really easy to miss some of the biases that can plague your tests. So the key thing for anyone with more volume is setting up tests based on tried and true methodologies. Making sure that people are equally split into a control and a treatment, avoiding the biases you may introduce, catching bugs. Things of that nature are usually what get people who are A/B testing.

0:28:58 - (Rich Skinner): And then I think the final thing I'll call out here is that most A/B tests are flat. About 90% of them are going to be flat, and I think most people expect that when they launch a test there will be significant results. The key thing is that you're building muscle around learning, and you're ruling out the things you probably shouldn't test anymore.

0:29:12 - (Matt Widdoes): Yeah, it's those minor things that add up over time. One thing you touched on is that people have this tendency, I feel like, to forget how they are as consumers. You mentioned how important the hero image, the top of the page above the fold, the first thing people see on the website, is to the success of that page. And with each new pull of the scroll wheel, you're having to build more reasons to do that again versus close the window. And I see so many people try to pack so much stuff onto a page. And granted, there are pages that scroll forever because they're like, hey, if you're still scrolling at this point, we're going to show you our press page or something that isn't as important. But, like, you got here; we didn't think you'd ever land here. And I think a lot of people take the approach where they assume everybody that lands on the site is going to read every word, look at the whole page, and look at all those links, whereas they themselves don't even consume that way. And I think a big piece of that is they spend so much time with the content themselves that it's hard to separate themselves as a consumer. And then another big piece, which seems crazy to me, but I've seen people do it so many times, is they don't have a proper control.

0:30:20 - (Matt Widdoes): As you mentioned, in fact, they may have literally no control. They just change the site and see if it did better. And now, talking about biases, there's all sorts of mess in that. And I think if you draw the parallel to a real lab experiment in the physical world, you would never be playing around with petri dishes with dirty hands. There's hygiene as a forethought. You'd be thinking of, again, the biases or other things that might introduce a problem. You'd be clearly mapping out what you expect to see, and calling that out even if you're guessing; you'd be taking a stance. Versus somebody hand-waving and saying, make that bigger. I think that needs to be bigger. And then they're like, okay, we made it bigger. And then they're like, all right, let's see if it's doing better. And, it's doing better, make that other thing bigger. It's like, wait, hold on. Going back to the causality and correlation stuff.

0:31:07 - (Matt Widdoes): And then the other thing on trusting the outcome, particularly with larger shifts, that I think so many people overlook is running an A/A test: we split the group the same way, run the exact same page on both sides, and you should see practically no difference. And if you saw that one was 4% better, and then you ran it as an A/A and it's still 4% better, it's like, okay, we might have a sample size issue, or any number of other things might have entered into this and made it indeterminate. And the real risk, and I think that's another thing you would think of in a real-world experiment in an actual lab, is how do I avoid false positives and negatives, and what things might introduce that type of risk.
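Matt's A/A point is easy to demonstrate with a quick simulation. This is a sketch with made-up traffic numbers: both arms draw from the same 10% conversion rate, so any "significant" result is by definition a false positive, and at a 5% significance level roughly 5% of A/A tests will still "win".

```python
import math
import random

random.seed(7)

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for conversion counts out of n users."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / std_err

# Run 1,000 simulated A/A tests with 2,000 users per arm and an
# identical true conversion rate of 10% on both sides.
trials, n, p = 1_000, 2_000, 0.10
false_positives = 0
for _ in range(trials):
    conv_a = sum(random.random() < p for _ in range(n))
    conv_b = sum(random.random() < p for _ in range(n))
    if abs(two_proportion_z(conv_a, n, conv_b, n)) > 1.96:  # alpha = 0.05
        false_positives += 1

false_positive_rate = false_positives / trials
# Roughly 1 in 20 identical-experience tests still looks "significant".
```

Which is exactly why a surprising 4% "win" that survives an A/A rerun should make you suspect the instrumentation rather than celebrate the result.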

0:31:49 - (Matt Widdoes): And so the scarier thing for me is when I see decisions being made based on either limited data or a dirty test, decisions that are now, for lack of a better word, just canon within the company, where they're like, that's settled now. That was 75% better forever ago, and we're never testing that again because that was amazing. And it's like, that was never amazing. There was something wrong with that test, but everybody was moving so quickly, or there was no control, or any number of other things poisoned the well, with very limited scientific rigor around how we run experimentation, et cetera.

0:32:20 - (Rich Skinner): Just to go back to your earlier point, I think it's a great callout: very rarely do you get something for free in A/B testing. Making an image or some text bigger and bigger means that something else needs to be less prominent. The visual hierarchy of the page changes, which means that some other part of the app now doesn't perform as well.

0:32:44 - (Rich Skinner): And this is kind of an understated part of A/B testing, right? Everyone thinks you're just looking at one primary metric, which you'll often hear called the overall evaluation criterion. I'll try not to use terms like that on this podcast, but you might see it listed as that. A key primary metric is the thing most people are focused on. But there are many, many other metrics that are impacted by launching any test.

0:33:05 - (Rich Skinner): And really what you're doing at that point is trying to determine whether or not the trade-offs are worth it. This is why super sophisticated companies will create what are called do-no-harm thresholds. Say you want to run a test, but you know that if engagement on a particular page goes down, it will have an adverse effect on some of your other products, and you can't afford that harm. So you might create a threshold and say, hey, this test also fails if engagement on this other page drops 5%.
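A do-no-harm threshold is simple to express in code. This is a hypothetical sketch (the function name and thresholds are mine, not from any particular experimentation platform): the test only passes if the primary metric is up and no guardrail metric drops past its allowed limit.

```python
def evaluate_test(primary_lift, guardrails):
    """Hypothetical do-no-harm evaluation.

    primary_lift: relative change in the primary metric (0.03 = +3%).
    guardrails: list of (observed_change, allowed_drop) pairs, where
        allowed_drop is the maximum tolerated decline, as a fraction.
    """
    if primary_lift <= 0:
        return "fail: primary metric flat or down"
    for observed_change, allowed_drop in guardrails:
        if observed_change < -allowed_drop:
            return "fail: do-no-harm threshold breached"
    return "ship"

# Primary metric up 3%, but engagement on another page fell 6% against
# a 5% do-no-harm threshold: the test fails anyway.
verdict = evaluate_test(0.03, [(-0.06, 0.05)])
```

The same call with a tolerable 1% engagement dip (`evaluate_test(0.03, [(-0.01, 0.05)])`) would return `"ship"`, which is the trade-off decision Rich describes.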

0:33:29 - (Rich Skinner): And I think those are critical parts of testing, because you don't get anything for free.

0:33:35 - (Matt Widdoes): It's like whack-a-mole.

0:33:36 - (Rich Skinner): But usually you're okay making the trade-off, right? So you might say, hey, we're okay with this test if revenue drops 1%, because we think this will bring in more users that we will eventually monetize. That's what happened at Pandora with the example where we dropped 97%. The reason we kept going rather than rolling back is that we needed to make an architecture decision: we had to move to HTML5. I can't even believe it was still a thing, but we were a Flash-based website, on Flash.

0:34:03 - (Rich Skinner): When you think about that change, it's like, yes, literally everything else downstream depends on this, because we can't keep developing on our platform if we don't do it. And so there are sometimes dependencies that you just need to take into account. If you're going on gut feel, it will fail you, because you don't realize what else is happening in your business. There are these silent, latent metrics, things happening to users through their user experience, that you just can't capture through gut feel.

0:34:29 - (Matt Widdoes): And I'm curious, because nothing really happens in a vacuum. What advice would you have for people who are like, okay, great, I want to run some landing page tests, and we feel good about the data. But at the same time, our media buying team is running tests constantly that are hard to pin down, because who knows what they're doing; they just have to operate, and they're working against their own number.

0:34:51 - (Matt Widdoes): Our lifecycle teams have new things going out all the time, and they're running some weird subtests too. So maybe you were in the A group in one test and got a worse-off B variant in another. How do you account for the fact that there are so many touchpoints with the consumer that might feed into that? Not to mention the fact that, hey, there's some new feature change we just rolled out, or the onboarding. And we later found out that onboarding was actually really wonky and people were dropping off, but that was eight weeks later, and it probably impacted that old test.

0:35:21 - (Matt Widdoes): How do you create cleanliness when there are so many people coming in and out, all working on different parts of the patient, if you will?

0:35:28 - (Rich Skinner): Yeah. So when it comes to multiple moving parts, I think that's kind of the beauty and power of an A/B test. Say you've got a landing page you're sending traffic to, and you're testing all kinds of different creative. Theoretically, if you are truly splitting 50/50 between test and control, both should get the same treatment, which means your control gets that new media spend and so does your treatment. And what you'll see is whether or not it performs better or worse.

0:35:58 - (Rich Skinner): I think maybe the silent part of your question is that there are effects that happen at a point in time that may exacerbate a lift or decline on a particular test. And this happens with seasonality, right? At Christmas, people buy a lot more, so you might have a landing page that works better during high seasons than low seasons. So normally what happens is you need to actually test over a longer period of time. There's really no shortcut; there's no free lunch with A/B testing.

0:36:27 - (Rich Skinner): There's no magical technique to control for that. You pretty much just need to run the test again outside of that high season to determine whether or not there are any issues once you get back to normal. I'll continue using Zillow examples because they're top of mind. We had high seasonality in the summers, low seasonality going into winter, and then high again at the beginning of the new year and in the summer. If you test something, you might realize, hey, maybe summer exacerbated this for the treatment; it's much more likely to be successful in the summer than in the fall.

0:36:59 - (Rich Skinner): So you run that test again in October or November and see whether or not you get the same delta you saw in the summer. And that's one way you can figure out whether seasonality, a particular creative, or a particular outside intervention is going to impact you long term. There's also what a lot of people talk about as interaction effects. You're running something on the other side of the house, and you don't know if it's going to impact your side of the house, so to speak.

0:37:31 - (Rich Skinner): And what you really want to look for is what we call sample ratio mismatch. So I kind of lied when I said I wasn't going to come out with these terms, but I'll try to keep it light. A sample ratio mismatch check is just determining whether or not you have equivalent splits across your tests. And there is a concept called orthogonality. It means, and this is a layman's definition, so any technologists, don't hold me to the rigorous version of this, that you are splitting across all of these different tests equally.

0:38:02 - (Rich Skinner): So do you have an equal distribution of people in your test and control over all the different tests and controls? You can look at it as a matrix. Say you're running ten tests, each with two variations, so one control and one treatment. You'll want to make sure that people are equally distributed across each one of those tests. Otherwise, if in test number one you have that 50/50 split, but 90% of test one's control lands in the treatment of test number two, that's probably going to weaken the effect you detect in the treatment of test number two.
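That matrix view of orthogonal assignment can be sanity-checked in a few lines. This is a sketch with simulated users, where independent random assignment stands in for whatever hashing a real platform uses: with orthogonal splits, each of the four cells across two concurrent tests should hold close to 25% of users.

```python
from collections import Counter
import random

random.seed(42)

N_USERS = 100_000

# Assign every user independently to control ("C") or treatment ("T")
# in each of two concurrent tests, then count the four combinations.
cells = Counter(
    (random.choice("CT"), random.choice("CT")) for _ in range(N_USERS)
)

shares = {cell: count / N_USERS for cell, count in cells.items()}
# Each of ('C','C'), ('C','T'), ('T','C'), ('T','T') should be near 0.25.
# A cell far from 25% means one test's split is leaking into the other's.
```

In Rich's 90%-leak scenario, one of these cells would balloon while its neighbor collapsed, and the check would flag it immediately.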

0:38:35 - (Rich Skinner): And so that plays a pretty critical role in ensuring that your tests aren't polluting each other over time, and that you can actually measure, with reasonable confidence, that what you tested produced the effect you spent all that time analyzing and launching. And then sometimes what you really want to figure out is the true impact of all of these tests over the year. Maybe you've had some interaction effects between tests, but maybe you're not so worried about it, because you feel like what you did overall was better for the organization.

0:39:03 - (Rich Skinner): The opportunity that exists there, especially if you have enough traffic, is holdout groups. Holdout groups are great. All that means is you take a subsegment of the population that gets neither the control nor the treatment. They're kind of like the original control. So at the beginning of the year, and most people plan annually, which is why I'm starting with the beginning of the year, you hold some users out, and then you can actually see how all of your changes manifested their impact on that OEC, that overall evaluation criterion. Say it was conversions over the year, and we held out 5% of users. Those 5% of users got absolutely nothing new; they kept the exact same experience the entire year. And relative to those people who kept the exact same experience the entire year, from a conversion standpoint, we got a lift, based on everything we tried, of 20%.

0:39:44 - (Rich Skinner): And that is a way of at least understanding, at a top level, how all of the impacts of the tests you've run have accumulated. If you're not able to truly understand some of the individual impacts because of these issues, all is not technically lost. There are ways to measure it in aggregate, but you do lose some fidelity.
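The year-end holdout arithmetic Rich describes reduces to one line. The numbers here are hypothetical, just to show the shape of the calculation:

```python
def cumulative_lift(holdout_rate, launched_rate):
    """Relative lift of the launched experience over a year-long holdout
    that never saw any of the changes."""
    return launched_rate / holdout_rate - 1

# Say the 5% holdout converted at 2.0% over the year, while everyone
# who got the full stream of launches converted at 2.4%.
lift = cumulative_lift(0.020, 0.024)
# That ratio is the cumulative lift across everything you shipped: 20%.
```

It only attributes impact in aggregate, which is the fidelity trade-off he mentions: you learn what the whole year of testing was worth, not what any single test contributed.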

0:40:02 - (Matt Widdoes): Yeah, I think there are a lot of parallels with personal health and going to the gym. People think, okay, I'll go to the gym for, like, a month, and I'll get that beach body ready in a month. It's like, no, it doesn't work like that. And it's like, okay, well, I'm making something up here, but doing push-ups is really driving all this chest growth. It's like, no, that's driving some of it, but everything is so interconnected that if you're just focusing on one thing, or the classic skipping leg day, you're going to have a problem.

0:40:28 - (Matt Widdoes): And it's really hard to pinpoint one particular day in the gym or one particular exercise or motion. In fact, you might actually be doing more damage with some things, where you've got a clicky elbow or something and it's like, you're doing that the wrong way, or you shouldn't be doing that; you should be doing this other lower-impact thing to drive results. And really, success is measured over the course of many different tests and experiments over long periods of time, where, especially at scale, it's these incremental improvements that are really adding up to that overall performance. It's not about any one day. And I think a lot of people treat it like, okay, it's gym week, and we're going to go do experimentation. It's like, no, this should always be on, and it should be a core part of the business. Would you agree with that analogy?

0:41:11 - (Rich Skinner): And you're going to start getting me into one of my other passions, which is lifting.

0:41:15 - (Matt Widdoes): I know. Yeah.

0:41:18 - (Rich Skinner): I mean, 100%. It's that muscle you're building. And you would imagine, right, you go to the gym one day and you try to bench press 200 pounds, and you fail. That could be the equivalent of trying to do this incredibly complicated test the first time, and it fails. What happens next? In real life, in that example, you just lift less weight, hopefully. Or you never go back to the gym.

0:41:38 - (Matt Widdoes): Or you never go back to the gym.

0:41:39 - (Rich Skinner): Because with this A/B testing stuff, it's like the juice isn't worth the squeeze. But in the A/B testing example, you probably say, okay, what we need to do is fix this so we can actually lift this proverbial weight. So you might say, hey, we had issues with sample ratio mismatch; we had issues with these kinds of biases. How do we make sure we don't do that next time? And you build a process and you build the muscle, so that each time you lift, you get more and more out of it. That's exactly what happens, I think, in the gym.

0:42:07 - (Rich Skinner): And you follow best practices with A/B testing, just like you follow best practices in the gym by using appropriate technique. And if you look at the stimuli from certain exercises, some are better at promoting muscle growth in your chest than others. Push-ups are a very reasonable exercise, but you'll find that dumbbell incline benches are actually the most performant exercise for the chest, so you probably want to be doing those more than push-ups. And so the type of muscle you're building in the gym is a similar type of muscle that you end up building in A/B testing as well.

0:42:39 - (Matt Widdoes): And if you were in front of a CMO at a major company or maybe a CEO of an earlier stage venture, what high level piece of advice would you give them in relation to experimentation or data? So for everybody listening, if there's kind of like one piece that they should take away or really high level element you'd want to share, what would that be?

0:42:59 - (Rich Skinner): That's a very difficult one, just because.

0:43:01 - (Matt Widdoes): There'S so much to pull from multidiscipline.

0:43:03 - (Rich Skinner): But I would say the biggest impediment I have found around A/B testing and experimentation has been culture. Without a culture of testing, without a culture of being okay with being wrong, you'll never build the muscle to actually test. If everyone is so afraid of coming out with the wrong product and having it not perform well, you'll never figure out what could be. And so you need a culture of testing, of being open to learning, of having that iterative process, just like the engineering world has adopted with CI/CD, or the marketing world with testing a bunch of different types of creative or content. This is very, very similar: you need to create that culture before you can actually see it through to the end. It's almost like not building the habit of going to work out and then expecting the muscle to be there. It's why many bodybuilders or trainers will tell you, instead of going in and trying to lift as much as you can the first day, to just go into the gym, walk around, just make it there.

0:43:58 - (Matt Widdoes): Just be there for one minute. If you just show up and set that habit of being there and then walk away, that's way better. Being in the gym every day for five minutes is better than hit or miss for an hour at a time. Exactly.

0:44:09 - (Rich Skinner): Charles Duhigg, author of The Power of Habit, talks about this a lot, and obviously a lot of trainers espouse it as well. Just getting in the building, being around people who lift, creating that habit loop of going there every day, seeing the people who lift, seeing the people you want to look or perform more like, is critically important to building the muscle. And the same thing goes with A/B testing.

0:44:29 - (Rich Skinner): The more days you can get in the gym, the more days you can try. And just starting out doing something, even if it's a little, is going to move you a long way. It's almost like the compound interest analogy: you start with a little bit every day, and in two years you'll find that you've made substantial gains in whatever metrics you're interested in and whatever business you're trying to grow.

0:44:49 - (Matt Widdoes): Cool. Well, thanks for your time today, and thanks to everybody for tuning in to this week's episode of Growth at Scale with Rich Skinner. I hope our listeners enjoyed today's conversation about leveraging data to maximize growth as much as I did. And Rich, thanks again. It's always a pleasure.

0:45:02 - (Rich Skinner): Thanks for having me on. Appreciate it, Matt.

0:45:07 - (Matt Widdoes): If you enjoyed this conversation, why not subscribe so you catch every episode of Growth at Scale? See you next time.
