
User Testing IoT Products



IoT For All

- Last Updated: November 14, 2023



https://youtu.be/6T_zw_MpO0w

Luke Freiler, CEO of Centercode, joins Ryan Chacon on the IoT For All Podcast to discuss user testing IoT products. They talk about how connected products have changed, the benefits of product testing and how it's done, the different kinds of product testing, the challenges of user testing, testing software versus hardware, how to know when a product is ready, and how to do user testing well.

Episode 320's Sponsor: DAIN Studios

Are you looking to optimize your IoT data strategy and harness the full value of your data? Partner with DAIN Studios, the experts in data-driven solutions. They specialize in building robust IoT data strategies that fully leverage the potential of your data. Their team can assist you in systematically identifying, prioritizing, and implementing data opportunities to drive your business forward.

By establishing key enablers such as data governance and architecture, they ensure the successful implementation of your IoT ambitions. Learn more and visit them at dainstudios.com.

About Luke Freiler

Luke Freiler, CEO and co-founder of Centercode, provides user testing solutions to leading tech companies. With a background in software development, he spearheads the design of the Centercode Platform, a SaaS platform that facilitates continuous audience engagement throughout product development. As a tech idealist, Luke aims to use technology to reduce friction and solve real problems. He is dedicated to connecting product creators and their audiences to actualize this vision, one product at a time.

Interested in connecting with Luke? Reach out on LinkedIn!

About Centercode

Centercode believes that creating exceptional products that truly meet customer needs requires a robust and effective user testing program. They developed a powerful delta testing and feedback management platform that helps companies around the world unlock the full potential of their products.

With their platform, you can identify and recruit the right testers for your product, efficiently manage the feedback process, and analyze the data to gain valuable insights that can inform future product development decisions. Their platform is trusted by some of the world's leading brands, including Microsoft, GoPro, and Bose, to name just a few.

Key Questions and Topics from this Episode:

(00:38) Introduction to Luke Freiler and Centercode

(04:15) How have connected products changed?

(05:55) Benefits of product testing and how it's done

(07:41) Different kinds of product testing

(08:50) Challenges of user testing

(10:49) Testing software vs hardware

(12:58) How do you know when a product is ready?

(16:58) How to do user testing well

(19:56) Learn more and follow up


Transcript:

- [Ryan] Welcome Luke to the IoT For All Podcast. Thanks for being here this week.

- [Luke] Thank you, Ryan. Happy to be here.

- [Ryan] Yeah. It's great to have you. Exciting conversation I know we have planned, but I wanted to kick it off and just have you give an introduction to the audience about yourself and the company. 

- [Luke] My name is Luke Freiler. I'm the CEO of a company called Centercode. This has been a passion project for me for some time. I actually started it when I was very young, around 2001. Prior to that, I spent some time in corporate America. I worked for Samsung. I worked for Ericsson, and it was through those experiences that I fell in love with what was at the time called usability but eventually became more widely recognized as user experience. And I just fell in love with the idea that technology should be accessible. It should solve real problems for real people. And I just zeroed in on that. While doing that, I was running an early web team at Ericsson. A product manager came to me and said, hey, I need you to run a beta test for this product we've all collectively been working on, this big investment by Ericsson. And I said, okay, what does that mean exactly? And he said, oh, we get a bunch of customers and we try it. I was like, no, I understand what a beta test means. I don't know what it means in Ericsson's context. We have a process for absolutely everything. What is our process? What do I do? Where are the steps? And he said, look, man, we don't have any. At first I didn't believe him. This was a hundred-year-old, hundred-thousand-person tech company, and it just made no sense to me for years, honestly. But I zeroed in and started talking to a lot of people, and I found out that there was this very interesting gap in the market that was ultimately being perceived as a necessary evil.

Everybody understood that testing real products in real environments with real people is essential to figuring out how that product is going to perform out in the real world, but there's a lot of friction. Part of the problem is that you're basically giving, you know, a knowingly broken, unfinished product to a group of strangers and then asking them to provide you meaningful feedback, and there are just so many things in that that are tricky. So for me, I fell in love with solving that problem, the idea that we can be an orchestrator or a facilitator for the relationship between companies and their customers to ultimately build something that's better for everyone. So I started the company, bootstrapped it initially, and built it from there. And to this day, we work with a lot of the biggest companies in tech.

- [Ryan] And when we're talking about connected products, is this more the consumer side or the enterprise side, or is it a mixture of both?

- [Luke] It's a mixture of both, and we test both hardware and software, but IoT connected products are the sweet spot. They certainly make up the majority, and part of that is because there's always a combination of hardware, software, and a service component in the definition of IoT. So as a result, there's a lot that can go wrong. So that is where we see a lot of our business.

- [Ryan] Yeah, I was going to ask you about the focus for you all, whether it's hardware, software, or both, because for, let's say, enterprise, and I guess even the consumer side, there's always that software component, whether it's an app or a web interface, something that allows you to interact with the device or see the data and things like that. So I assume you're handling the testing on both the hardware and the software side for both.

- [Luke] Yeah, absolutely. I would say 90 percent of what we touch has what you would consider to be software. And it may be software embedded into the product itself. It may be through an app. Virtually everything has an app now. So there's almost always software involved and then there's definitely a lot of hardware. 

- [Ryan] With the growth of connected products over the years, how have they changed in general? Obviously this will play into questions around testing, how it's evolved, its importance, and so forth. But just generally speaking, if we're looking at connected products, what have been the biggest changes that have influenced the area you work in?

- [Luke] The biggest and probably most obvious is that products, just by the nature of being connected, become iterative. We often say that people don't think of products as products anymore; they think of them as services, right? Back in the day, a product was more of a fire-and-forget strategy: I'm going to put out this speaker and it's going to do speaker things. Or maybe the best example is, I'm going to put out this AC unit and it's going to do air conditioning things, HVAC things. And then there was this turning point when they got connected, and now you don't just expect it to do what it's doing forever. You're expecting it to evolve, to iterate, gain new features, solve new issues, connect with new products that didn't exist when it was created. That connected nature created an iterative development process. It fueled Agile into the hardware space, which probably would have been unthinkable 20 years ago but now is effectively necessary.

I'll never forget a conversation I had at Bose where they said they had to change their entire culture as a company because everything used to be an 18-to-24-month release cycle. They'd put out a great pair of headphones or a great speaker and then never touch it again. And they said they just cannot afford to think like that. If they do that, their competitors will run circles around them. So they had to change the entire mindset of the organization to be more agile in response to connected products.

- [Ryan] For people listening out there, I think at a high level people understand the value of testing, but if you were to talk about the overall benefits of product testing, especially in the IoT space, what are those benefits and how is testing done? Without getting into too granular detail, just take us through what that really means.

- [Luke] Yeah. In our space, the basic idea is exactly what it sounds like. You're going to find real people who have the real problem that your product solves, you're going to distribute it to them, and they're going to use it. And really what you're trying to do is study the issues they have, the ideas that they feel would complement or improve the product, as well as the praise they have. You're looking for those three things: issues, ideas, and praise. But you're looking for them over time, because one of the things about connected products, again, is that not only is the development not fire and forget, but neither is the usage.

So it's that adoption component that you can't really capture in a traditional QA environment. You can't capture it through automated testing. You really need to let people use the product in their natural environments. And what that typically means is interacting with many other products, because again, the nature of a connected product is that it's reliant on products outside of its control to perform successfully.

Not only are you looking for how it interacts with those products, but you're looking for how it interacts with those products over time as they evolve, because the same iterations you're making, they're making too. And that could be creating all sorts of problems. So that continuous testing and adoption of features and products over time is really what separates this and is so critical in the connected space.
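To make that "issues, ideas, and praise" taxonomy concrete, here is a minimal Python sketch of how a test team might record such feedback and watch it trend over a test cycle. The field names and weekly bucketing are illustrative assumptions, not Centercode's actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class FeedbackType(Enum):
    ISSUE = "issue"    # something that didn't meet expectations
    IDEA = "idea"      # a suggested improvement or complement to the product
    PRAISE = "praise"  # something that went above and beyond expectations

@dataclass
class Feedback:
    tester_id: str
    kind: FeedbackType
    feature: str                  # which part of the product it concerns
    description: str
    submitted_at: datetime = field(default_factory=datetime.now)

def weekly_counts(items: list[Feedback], kind: FeedbackType) -> dict[str, int]:
    """Bucket one feedback type by ISO week, so trends show up across the test."""
    counts: dict[str, int] = {}
    for item in items:
        if item.kind is kind:
            week = item.submitted_at.strftime("%G-W%V")  # ISO year-week label
            counts[week] = counts.get(week, 0) + 1
    return counts
```

Tracking each category per week rather than in aggregate is what surfaces the "over time" dimension Luke emphasizes: a spike in issues after a firmware push, for instance, shows up immediately.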

- [Ryan] I imagine there's different kinds of testing throughout the development from early ideation all the way through launch and just continual growth of a product itself or just new versions of the product. 

- [Luke] Exactly. So we definitely think of it as, you know, when you're starting off in that kind of alpha-ish phase, and obviously there's testing that happens long before a customer is ever even involved, but for us it's typically getting the product into people's hands early to test the basics, to make sure that even in a feature-incomplete product, it's doing what it's supposed to be doing, and looking for that early feedback. You then have what you would consider the traditional beta test of four to eight weeks before launch: you're going to have people use a near-final product, hopefully it's feature complete, but it's probably still got some known issues, and you're going to look for more issues in those real experiences. And then from there, it's all about the maturity of the product, right? Again, that iterative component is products maturing over time, and every new release you're doing could technically brick the product. That's the risk of our space. That's the trade-off. So making sure that a small group of people is doing everything they can to ensure that's not going to be the case becomes the initiative from release onward.

- [Ryan] With user testing these connected products, obviously the products vary in the type of end user, the environment in which they're going to be used, and the amount of functionality and features they have. What are some of the biggest challenges that people need to be aware of as they approach testing these IoT devices?

- [Luke] One of the challenges of an unreleased IoT product is that you typically have only so many units, right? It's pre-production; they're expensive, they're hard to come by, everybody in the organization wants them. We've often talked about how in our space we don't have the luxury of big data. Post release, you can study all sorts of data, but prior to release, it's all about maximizing small data. So for us, it's about thinking about our audience in a couple of ways. We want to think about the profile of who they are: do they have the problem that this product solves? That's an entry point for everybody into that test. From that, it becomes about their level of experience, how savvy they are to the space, which again could impact their user experience. And then, equally important, their environment: understanding what other types of products they have available to them.

Those products may not be configured in their default states. They almost certainly aren't, actually, unlike the ones you might've already tested against. So it's about getting as much coverage both on the demographic side of who they are and the problems they have, but also on the technographic side of what types of products this is going to interact with, because again, it has to adapt as part of an existing ecosystem. So finding those people is really key. Once you release the product and you've already got customers and you're doing iterative releases, it gets a lot easier to at least have a pool available, and it gets a lot cheaper, because you can basically target your existing customers who are enthusiastic about your product or, again, whatever problem it solves.

So going in, it's very much about profiling to maximize limited resources. And then from there, it's about building an ongoing pool of people that are always ready and available and excited to test and help shape the future of a product. 
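As a rough illustration of that profiling step, the Python sketch below greedily assembles a small tester panel: everyone must have the problem the product solves (the demographic entry point), and together the panel should cover the ecosystems the product has to interact with (the technographic side). The data model and the greedy heuristic are assumptions for illustration, not Centercode's recruiting algorithm:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    has_target_problem: bool   # demographic entry point: do they have the problem?
    experience_level: int      # e.g. 1 (novice) to 5 (power user)
    environment: set[str]      # technographic side: products/ecosystems they own

def select_testers(pool: list[Candidate], required: set[str], max_testers: int = 30):
    """Pick a small panel that maximizes coverage of the required ecosystems."""
    eligible = [c for c in pool if c.has_target_problem]
    selected: list[Candidate] = []
    covered: set[str] = set()
    while eligible and len(selected) < max_testers:
        # Greedily take whoever covers the most still-missing ecosystems.
        best = max(eligible, key=lambda c: len((c.environment & required) - covered))
        eligible.remove(best)
        selected.append(best)
        covered |= best.environment & required
    return selected, required - covered  # the panel, plus any coverage gaps left
```

Returning the uncovered ecosystems alongside the panel mirrors the "maximizing small data" point: with a limited unit count, you want to know up front which environments your test simply won't exercise.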

- [Ryan] With an IoT device, we've already mentioned that there's the hardware and the software component, and obviously, through over-the-air updates, software is oftentimes easier to update when there's a bug or an issue. But what about the hardware side? I imagine there has to be a different plan and thought process that goes into how we're going to test, but also into when something is deemed ready for launch. Because if you push a product out and lots of units are being used by lots of people and there's an issue with the hardware, that's much more difficult to fix than a software bug, where you can just have them update to the latest version and it's probably patched and you're good to go.

- [Luke] Yeah. Very different problems and very different results, right? If it's a software update, then theoretically you can recover, but your brand reputation and whatnot are now hurt, whereas if it's a hardware issue, recalls and whatnot are pretty much the worst possible scenario.

So that beta phase is really designed around ensuring that the hardware is operating in every way that it needs to, whereas those post-release, ongoing tests, what we call delta tests, are very much about making sure the software is performing. You really need both to fully succeed, because obviously you need to make sure the functional hardware is going to work. That's what that more in-depth test at the beginning is for; it's focused on stressing the entire product. From there, you're really just focusing on the delta between releases: what is it that changed? And hopefully you've gotten through the majority of your hardware issues. It is incredibly common to iterate hardware as well, but in our space it's typically pretty quiet between major releases. You'll have five different versions of what, to the outside world, is a 1.0 product, versions that have evolved and improved: parts are getting cheaper, things are getting more efficient, batteries are getting better, and so on. Often the end users don't even know that's happening, but those are all still being tested prior to release. That would generally be the goal.
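A toy example of what "testing the delta" might mean in practice: diff two release manifests and focus the test cycle on what changed. The manifest format here is a made-up stand-in, not how Centercode or any particular vendor represents releases:

```python
def delta_test_plan(previous: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return features whose implementation changed, plus anything brand new --
    the 'delta' a post-release test cycle should concentrate on."""
    return [feature for feature, version in current.items()
            if previous.get(feature) != version]

# Hypothetical firmware bump: pairing and the app changed, multi-room is new.
v13 = {"pairing": "a1", "streaming": "b7", "app": "c2"}
v14 = {"pairing": "a2", "streaming": "b7", "app": "c3", "multi-room": "d1"}
print(delta_test_plan(v13, v14))  # ['pairing', 'app', 'multi-room']
```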

- [Ryan] Is there a certain difference in threshold, or I guess in the level of something being quote unquote finished, for hardware versus software, knowing that you can more easily fix software bugs with updates before or after launch than you can hardware? And I guess that ties into a question I've been curious about: how does a company get something to the point where they're ready to go live and feel like, okay, all the stuff we didn't know, we feel like we know now, as opposed to leaving things on the table that they may still not know they need to know about? Because once you invest the time and money in the hardware and it goes out, that's much harder to walk back than software.

- [Luke] We've looked at this problem forever, and a number of years ago we came up with basically three metrics that we use, the KPIs of our space, which were a big gap for a long time. We break it down to three things. Number one is what we call the health of the test, so it's a health score. What it's really looking at is two sides of a coin. One: are testers testing everything? Are they going through and actually using the product and testing all the features that you want tested, and do you have evidence of that? Two: are they then giving you actionable feedback on the things that either didn't meet their expectations or went above and beyond them?

And again, it's the issues, the ideas, and the praise. It all boils down to that. If a test has achieved both of those things, it has full, widespread coverage of the product, everything's being tested, and you're given clear instruction to fix the issues and address the shortcomings that were found, then you've got a high health score. That's an important starting point to know that, okay, we've sufficiently tested it. It has nothing to do with the results, nothing to do with what they found, but we have tested it. We have confidence in our test. The next score leans on that one, and it's what we call a success score. What that's doing is looking at the product through the eyes of the customer and saying, okay, on a scale of one to a hundred, how happy were they with the result? And if you go back to the things I said we were collecting, we can use those to figure it out.

We're rating and providing a score for every piece of feedback. Not all feedback is created equal, right? A security issue is typically much more important and severe than a cosmetic issue. We're looking at those things and saying, okay, if everything was praise and there were no issues and no room for improvement, which has never happened, but it's theoretically possible, you've got a very high success score. If you had no praise, everything was issues, everything was shortcomings, then you've got a very low score. The reality, and this is why the score is useful, is always somewhere in between. It's this outcome of, okay, we know there are some good things and some bad things; we know where we are. What that then gives you is a target point to say, okay, we got a 60, right? And our goal was an 80, and I'll touch on that goal in a second. The goal here is then to give you the instruction for how you get to that 80, to show you exactly where you need to go. And then the last core metric, and this is another really important one, is looking at the difference between what that score would have been had you just released the product as is and what that score is recognizing that you fixed bugs, implemented new ideas, whatever it happens to be. That shows you the impact of the testing effort. And all three of those scores become beautiful KPIs because they're things you can take simple tactics to improve: improving your engagement, improving your product itself, and improving how you respond to and address issues in that product.

So the goal is that as a company, you're typically doing releases over time. Again, if you're agile, you're doing releases biweekly, monthly, whatever makes sense for you. So that number becomes a touchstone. It becomes a benchmark that you start to get to know, and relative to your product, you can have targets that are relevant to your organization, right?

If you have endless money and you want to invest to be perfect, you can do that. You can spend a lot of money to get a really high score. If you understand that the market impact of this product is limited, it's an experiment, whatever it happens to be, you might have lower ambitions, but it provides those touchstones to get you through.
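Centercode hasn't published the formulas behind these scores, but a back-of-the-envelope Python sketch of the three metrics as Luke describes them might look like the following. The inputs, weightings, and 0-100 scaling are purely illustrative assumptions:

```python
def health_score(features_tested: int, features_total: int,
                 actionable_feedback: int, feedback_total: int) -> float:
    """Two sides of a coin: coverage (is everything being tested?) and
    actionability (does the feedback give you something to act on?)."""
    coverage = features_tested / max(features_total, 1)
    actionability = actionable_feedback / max(feedback_total, 1)
    return 100 * (coverage + actionability) / 2

def success_score(feedback: list[tuple[str, int]]) -> float:
    """0-100 through the customer's eyes: praise pulls the score up, issues
    pull it down, weighted by severity (a security bug outweighs a cosmetic
    one). All praise -> 100; all issues -> 0; reality lands in between."""
    total = sum(weight for _, weight in feedback)
    praise = sum(weight for kind, weight in feedback if kind == "praise")
    return 100 * praise / max(total, 1)

def impact_score(score_as_shipped: float, score_after_fixes: float) -> float:
    """The difference between the score had you released as is and the score
    after fixing issues and implementing ideas: the impact of the testing."""
    return score_after_fixes - score_as_shipped

# The 60-to-80 example from the conversation:
print(impact_score(score_as_shipped=60.0, score_after_fixes=80.0))  # 20.0
```

The appeal of framing the metrics this way is that each maps to a concrete lever: engagement drives the health score, product quality drives the success score, and responsiveness to feedback drives the impact score.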

- [Ryan] I guess a good way to wrap this conversation up is to talk about how people listening can do testing well. We've obviously seen lots of advancements in AI that I'm sure play into this now or will. What should people be thinking about when they're venturing down the path of getting a product ready for release? Or maybe they already have a product out there and they're like, hey, we did not thoroughly test this the way we probably should have; how can we approach that retroactively? So what advice would you have for people listening on how to do testing well and where this is all going?

- [Luke] So I'll give the slightly biased answer, or maybe more than slightly biased answer, and then I'll try to genericize it a little bit. Actually, I'll do those in reverse. I'll give you the generic one first. There's no bad time to start. The reality is every company matures at different rates and whatnot. So the idea is getting together a group, and it doesn't have to be huge, 20 people, 30 people, we actually don't recommend tests that are too large, but getting that group as soon as possible, giving them a channel to get you feedback, and communicating clearly with them about the types of feedback you're looking for. That's very important. From our world, the more biased answer is, we shifted our go-to-market strategy in the last year. We used to be pretty much enterprise facing; we only worked with the biggest of the big. We're in a different place now, where we've expanded our market to individual product managers who are having trouble sleeping at night. And there's a free version of our product that they can use to run a test without ever talking to us, without ever dealing with us. We then have your kind of typical swipe-a-credit-card tier for a little bit more functionality. And then as your program matures, we of course have those traditional enterprise offerings available. But what we've brought to market, for the functionality they get, again, for free to start and then in that paid version, is a lot, and it gets them pretty far. And again, they don't have to talk to a salesperson. They can just get going, and it's month to month. They can cancel at any time. It's very simple.

While it is very biased, obviously, because it's what we've worked so hard to build, it's a great solution that is incredibly inexpensive and available to anybody. Our goal as a company is actually not to make money there. It's to continue to gain awareness and allow them to show value within their organizations so that as they do mature and see this value, we can have a real commercial discussion six months from now, a year from now, whatever. I would also say, again on the free side, there are two things that we do to help everybody that anybody can take advantage of. One, we have a community called betabound.com. It is a community of beta testers looking to test products, and we make it available freely. If you want to post your opportunities there, we'll push testers to you at no cost. And that's just betabound.com. And then the last one: a big part of our strategy, because we're not marketing and sales people, we're very much a product and brand company, is that we produce an enormous amount of content. We just have a no-secret-sauce mentality. So our entire strategy for growth is around content marketing, and we produce a lot of valuable information. So go to centercode.com, read some information, and again, absolutely no cost. We're not going to hound you. And hopefully that will help you get your head wrapped around where you can improve here.

- [Ryan] Luke, thanks for taking some time. The last question I was going to ask you is how to follow up, but you already plugged that there, which is perfect. It's a really interesting space just to think about all these products we use, whether consumer or enterprise, and realize how much work and effort has to go into getting them to the state the end user actually has in their hands. Just like security conversations I've had in the past, it sounds like you can never start this process of testing too early, and the better you prepare, the better you can discover issues early on, get them fixed, and get something your consumers are going to love at the end of the day, while you continue to iterate, grow, and test throughout the entire life of the product. This was fantastic. Thanks for taking the time to shed light on this, and it was great to have you here.

- [Luke] Thank you, Ryan. I appreciate it.
