Priscilla McKinney: You know, if you have listened at all to this podcast, you know that I am in circles in market research all the time. So it’s no secret that I end up in conversations about AI and digital transformation success around the market research industry. So I brought one of my friends along who I think is doing something pretty innovative. I think you’re going to be certainly impressed and I hope that you could take some learnings, also some attitudes, some mindsets from Jeremy. So Jeremy Antoniuk, welcome to the show.
Jeremy Antoniuk: Thanks so much for having me. It’s great to be here.
Priscilla McKinney: It has been a real joy talking with you over the last couple of months because this industry is ripe for disruption. Let’s put it that way. There are some data quality issues that are out there, and there’s also just a terrible amount of inefficiencies. And then let’s layer that on to the fact that consumer sentiment is changing very quickly. So companies are left with a real need to understand what’s going on in the real world and to be able to interact with their consumer in a quicker way.
So I was so excited when you and I ended up talking about a new release you’ve done of Scalafai Studio. Tell us about the studio first. We don’t have time to talk about everything you’ve built at Scalafai, so I’m going to have to keep myself focused here. But tell me a little bit about the studio specific to the core promise it has for researchers and what this has to do with digital transformation success.
Jeremy Antoniuk: Yeah, absolutely. Scalafai Studio is really a culmination of my past 20 years of experience in market research, starting as a project manager back in 2003 and trying to learn all these different tools. Fast forward two decades and now I look at the ResTech landscape with over 300 different platforms. In my experience trying to hire and scale operations teams at Research Now, FocusVision, and Rep Data, just trying to train a project manager was an ordeal — go learn this system, then go learn this system, and then here’s another system.
And then all of these if-then statements — if you’re doing this, make sure you do this, otherwise something bad could really happen. This is just way too complicated. The better, faster, cheaper pressure that we’ve been under for the last two decades has never been higher than it is at this point. So the platform that we’ve built is really an attempt to give researchers the tools that I wish I had when I was running research operations. With the amazing generative AI technologies available, it can help — not replace, but help — researchers in so many ways, from drafting surveys to programming to helping clean the sample automatically, doing some analysis, doing some charts, even some presentation drafts, just to make that end-to-end research process so much simpler than it’s been historically.
Priscilla McKinney: Yeah, and it is complicated. You mentioned something to me when we were talking about what you called the zero learning curve, and it sounded almost too good to be true. This idea that the modern researcher would not have to touch each platform themselves, but could instead work in one studio that serves up all of those different access points — everything they need from one place. So tell me a little bit about that zero learning curve model you created.
Jeremy Antoniuk: Yeah, absolutely. That is something that’s really been interesting as we’re building the application. If you think historically, you’ve had services that do it for you or a DIY tool, and in a DIY you’re going through and pushing buttons and selecting tables and checkboxes, et cetera. In the world of AI assistance, you can still go through and push the buttons if you want, more in a traditional DIY way. Or you can just ask the AI to do things for you — like, hey, write a survey, we’re going to measure brand awareness and perception, give it the requirements, and things will just happen.
Sometimes those can be a very quick task. Sometimes those can actually go to an AI agent where it’s like, okay, maybe we’re doing a really deep data cleaning and this is going to take 20 minutes to go through this entire file. But either way, the learning curve of having to go through and push buttons and know which filter to select — it just largely dissipates. We kind of joke with our clients that if you can speak English, you can use the platform.
Priscilla McKinney: Yeah, that’s interesting because the way you just talked about it is you just said, hey studio, do this. I think we’re all getting kind of used to this — of course, Siri never listens to me, but beyond that, in our day-to-day lives we’re kind of getting used to this digital transformation. But now this is happening at work. Was there initial pushback on that model — that you really don’t have to go out to this crazy number of platforms available out there in ResTech, because the agent can do it for you? It does sound too good to be true, but let’s just say it is true — did people push back even with the good news?
Jeremy Antoniuk: I think the biggest pushback is that it sounds too good to be true. Or I would say the biggest one we get is, yeah, I’m sure that works for easy surveys, but we do more complex stuff. And you’re like, well, if by more complex you mean MaxDiff and TURF and conjoint and all these different things, then yes, we can support those — well, conjoint is coming out in another month or two, but the rest of those, yes, we can absolutely support. I think it’s just a bit of a recalibration.
We’ve been operating under pretty much the same technology — these survey platforms that are a decade if not two decades old — where things work a certain way. If you’re doing a 15-minute medium complexity survey, that may take X number of days and cost you Y thousands of dollars. And now it’s like, well, we can do that in a tenth of the time and a tenth of the cost. That sounds too good to be true. So the biggest thing is working with our clients and showing them a demo that is much more relevant to the work that they’re doing, instead of just a generic demo.
Priscilla McKinney: Now at the very top of the show, I mentioned one of the layers of complexity with market research and what’s going on in this interesting bubble where we live, which is data quality. You and I started talking about this and got really far down this rabbit hole. But you had mentioned that one of your customers currently in a pilot came back to you and said, my gosh — they were thinking of a particular study that they would not have had to refield if they had originally done it in Scalafai Studio. Is that just so common? What are you hearing? What was that story about, and why was Scalafai Studio giving such a better use case for this customer?
Jeremy Antoniuk: Yeah, part of it is just resetting some of the things that we’ve grown so accustomed to because we’ve been doing it for two decades. We field a survey, we go in and fill quotas, work really hard — somebody’s hitting refresh on the reporting at 2 a.m. to make sure we hit quota. And then somebody cleans the data and, not all that uncommonly, guess what? We need to go back and refield. That’s super disruptive for lots of reasons and very time consuming.
So we took a step back and asked, what if a complete — when it counts against quota — has already been cleaned? If a complete counts against quota, chances are pretty high that it’s really a good quality complete. To do that, we need to be doing a lot of the straight lining and the open-end gibberish and the bot detection in more real time. That may sound like a lot, but the reality is that the technology is 100% there. Even with generative AI, looking at the open ends and coding them and detecting syntax or profanity — all that stuff can be detected in real time and scored. That way, when you get to the point where you need a good data set for analysis, you’ve already gone through it. You’re not holding your breath and waiting to see what happens. You can move very quickly into weighting if you want or jump straight into the analysis. That’s ultimately the goal — not stop the press, export the file, switch gears into Excel, sort and pivot and delete rows until we get what we need.
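To make the idea concrete, here is a minimal sketch of real-time quality scoring of the kind Jeremy describes — straight-lining and gibberish open-end checks run per respondent before a complete counts against quota. This is purely illustrative; the function names, thresholds, and data shapes are assumptions, not Scalafai Studio’s actual implementation.

```python
# Illustrative sketch (not Scalafai Studio's actual code): score each
# respondent in real time so only clean completes count against quota.
def is_straightliner(grid_answers, threshold=0.9):
    """Flag grids where nearly every row received the same answer."""
    if not grid_answers:
        return False
    most_common = max(grid_answers.count(v) for v in set(grid_answers))
    return most_common / len(grid_answers) >= threshold

def looks_like_gibberish(open_end, min_vowel_ratio=0.2):
    """Crude open-end check: real words contain a reasonable share of vowels."""
    letters = [c for c in open_end.lower() if c.isalpha()]
    if len(letters) < 4:
        return True
    vowels = sum(c in "aeiou" for c in letters)
    return vowels / len(letters) < min_vowel_ratio

def passes_quality(respondent):
    """A complete only counts against quota if every check passes."""
    return (not is_straightliner(respondent["grid"])
            and not looks_like_gibberish(respondent["open_end"]))

# A straightliner with a keyboard-mash open end is rejected in real time.
respondent = {"grid": [3, 3, 3, 3, 3, 3, 3, 3], "open_end": "asdfgh qwerty"}
print(passes_quality(respondent))  # False
```

A production system would add bot detection, speeder checks, and generative-AI scoring of open ends, but the structural point is the same: the checks run at submission time, not after export.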
Priscilla McKinney: Ouch, that sounds pretty painful. So we know that everybody is using AI in our environment — certainly at this professional level, everybody’s using it, but not everybody is using it well. And I think the first thing that people come to AI for is that efficiency. I know that’s kind of the basic thing that Scalafai provides, but let’s talk about that for a minute because when people are listening to Digital Transformation Success podcast, they’re thinking, my gosh, my team needs to get more efficient. You have talked about specific labor reductions and you have certain targets for different types of activities that a normal market researcher does. So tell me a little bit about some of those benchmarks and what you think is really possible right out of the box with Scalafai Studio.
Jeremy Antoniuk: Yeah, absolutely. As a recovering market research operations executive, I know as much as anybody how important productivity metrics are going to be. It’s one thing to say it’s going to make you more productive. The next question is, well, great, how? And at some point we need to do a proper ROI analysis, so we need some real metrics. We’ve come up with a way where we can work with clients and take a survey that they fielded recently — typically that survey arrives as a Word document, the way it looked before programming — and we say, okay, take that survey, and they know they paid X dollars and it took Y number of days, maybe two to three days, to get that survey programmed.
If we’re talking about a 15-minute medium complexity survey, probably around 45 to 50 questions with a fair amount of logic and piping, our platform can literally pull it in and it takes a handful of minutes to do the initial programming, and then you’re moving very quickly into QA. We’re literally measuring it mathematically — this survey has, let’s say, 45 questions, and by the time you add in variables and logic and quotas, maybe 100 distinct requirements. We go through with a binary flag — did Scalafai Studio import that and meet that requirement, yes or no? This is not a hand grenade, this is not horseshoes — did it set the quota, did it set the variable with the exact logic, yes or no? And then we simply score that. Some of the early surveys that we’re doing, we’re already north of 90%.
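The binary scoring Jeremy describes can be sketched in a few lines. The requirement names below are hypothetical examples; the point is simply that each requirement gets a pass/fail flag with no partial credit, and the score is the fraction met.

```python
# Illustrative sketch (requirement names are hypothetical): score an AI
# survey import with a binary flag per requirement -- met exactly, or not.
requirements = [
    {"name": "Q5 skip logic (show only if Q3 == 'Yes')", "met": True},
    {"name": "Age quota: 50/50 under/over 35",           "met": True},
    {"name": "Pipe Q2 brand choice into Q7 stem",        "met": True},
    {"name": "Randomize Q10 answer options",             "met": False},
]

met = sum(r["met"] for r in requirements)
score = met / len(requirements)
print(f"{met}/{len(requirements)} requirements met ({score:.0%})")  # 3/4 (75%)
```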
Priscilla McKinney: Wow, okay, 90% efficiency on programming the survey. Now tell me — that sounds great, but then there’s a remaining human element effort that needs to be made. What does that effort look like? I just want to know, is this a mess when I’m done? Obviously you’re not going to say that probably, but I really want to know what is the quality of the work that I have to do when I step in as the human.
Jeremy Antoniuk: Yeah, so one of the big things that we are in the process of adding — if you say we’re going to aspire to be 100% perfect with the AI, let’s just assume for argument’s sake that it’s not. There’s going to be some exceptions, and I’ve seen some where there are just flat out conflicting instructions. One instruction says skip and the other one says show, and you’re like, well, which is it? We don’t want the AI to assume. So we’re going to create an exception list to say, okay, here’s a list of all the things that were needed, here’s where the AI thinks it did really well, and here’s where it wasn’t confident in making decisions.
Instead of having to look at every little nook and cranny to see if each one is right, you get to more of a management by exception. I brought the survey in, there are 45 questions, most of it’s done, here are five or so things that need my attention. Let me go through that, let me look at the results of some automated tests just to make sure there’s nothing missing, and then we move on and get into the fun stuff of fielding.
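The management-by-exception idea above can be sketched as a confidence filter: the platform surfaces only the decisions the AI was not confident about, such as conflicting skip/show instructions. The item names and threshold here are invented for illustration.

```python
# Illustrative sketch: surface only the decisions the AI wasn't confident
# about, so review becomes management by exception (threshold hypothetical).
decisions = [
    {"item": "Q12 skip logic",     "confidence": 0.98},
    {"item": "Gender quota",       "confidence": 0.95},
    {"item": "Q7 show vs. skip",   "confidence": 0.40},  # conflicting instructions
    {"item": "Q20 pipe-in source", "confidence": 0.55},
]

NEEDS_REVIEW = 0.80  # below this, a human decides

exceptions = [d["item"] for d in decisions if d["confidence"] < NEEDS_REVIEW]
print(exceptions)  # ['Q7 show vs. skip', 'Q20 pipe-in source']
```

The researcher reviews two items instead of auditing every nook and cranny of a 45-question survey.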
Priscilla McKinney: Yeah, and you get into the rest of the project. But let’s talk about something kind of interesting. I know I wish we could go over every element of Scalafai Studio, but one thing I was really interested in talking with you about is this promise of pre-visualization — this idea that you can actually do some very good work on executive presentations with much less work involved. So tell me about that. What were you envisioning when you created that piece of Scalafai Studio?
Jeremy Antoniuk: Yeah, so that’s actually the very first thing that came up as we were meeting with prospective clients and were kind of in the ideation phase. We heard data quality, we heard programming, but presentations was definitely one that came up over and over and over again. And now having helped the engineering team in great detail and understanding how complicated Microsoft PowerPoint is underneath the hood, I get it. There are a couple of things that are fundamentally broken.
This notion of back to data cleaning — if you stop data collection, export the data, and then do manual cleaning, now you’re in manual land. Your chances of building a PowerPoint deck programmatically are very, very low unless you have a very bespoke tool to do that. So we said, okay, if we can get the data cleaned and ready, and if they’re weighting, great, then we can take that data and feed a presentation generator. Are we doing simple top lines? Are we doing an executive summary? Between some programmatic rules and templates — client-specific templates, where if I’m doing a research study for Company A, I know here are their brand colors, which could be very different from Company B — we just get all that loaded into the platform and then it’s simply merging the survey data into it. I’m probably making it sound easier than it is because PowerPoint is a very convoluted technology under the hood, but we’re already seeing some pretty huge gains. For a more advanced presentation, we’re thinking more like 50% to 60% efficiencies, and the realist in me says, wow, that’s not as good as 90%, but also wow, it’s 50% to 60%.
Priscilla McKinney: That’s 50%, and these are typically top-level people. At this point, we’re really digesting the insights and trying to think about what the executives really need to know, and am I telling the right story, and is this really coalescing? That human input is somebody at a very high level. So if you can save someone 50% of the day they were going to spend on this presentation, I think that is huge. So let me ask you this — this is maybe a little bit of a poking-the-bear question, but this is cutting edge. And as you mentioned, it’s not perfect, and everybody knows that. Our computers aren’t perfect and yet we’re using them all day long. But people are using this, and some people are thinking, okay, well our company needs to build it, not buy it. There’s a lot of going back and forth about this. What are you hearing about that? This is really important for people listening to Digital Transformation Success because they don’t know — they get into that little special snowflake conundrum. No, we’re just so much more complicated, we’re different from everyone else, we can’t have something out of the box. So do you build or buy? What are you hearing?
Jeremy Antoniuk: The build or buy debate that’s been around for decades with technology has never been more important in the world of AI. In the last eight weeks, I’ve literally had seven conversations with organizations that, after talking about Scalafai Studio, said they basically tried to build that, or tried to build some component of that, and ultimately threw in the towel. Because in the world of AI, you would love to think you’re going to hook this up, it’s going to talk to the AI, you’re going to get your answer, and you can do these things. It’s actually pretty complicated under the hood, and it’s evolving very, very fast.
There’s a reason a company like Southwest doesn’t manufacture their own tires — because there’s some specialty in there. I think it’s very much the same thing with technology, but especially with AI technology. The part where I think Scalafai is finding an interesting niche is that I’ve seen in the industry a bunch of engineers who are very good with technology but have zero subject matter expertise when it comes to research. They just don’t understand why some things are so important. And on the flip side, you’ve got the operations side — workflows, and how do we actually get this done?
Priscilla McKinney: Right, or even to your point, the operations. You’ve spent years in operations. So there’s the technology and then there’s the research and the rigor and what needs to be done, but there’s also just the operation of how do we get this done.
Jeremy Antoniuk: Yeah, exactly. For me, probably the biggest flattery that I’ve gotten — and I’ve heard this with some other platforms that I’ve built over the years and I’m hearing again with Scalafai Studio — is, I can tell whoever built this has actually run a project before. And that gives me chills. Because while it may have been a few years since I ran a project end to end, building software that truly helps a user get through their day faster and with less pain — to me, that’s the ultimate measure of success.
Priscilla McKinney: I love that. Okay, so you’re still building it. I kind of hear in the language you’re using that there’s still feedback you’re wanting from people and you’re listening quite a bit to your audience. So you were in a pilot phase for a while — are you still in pilot phase, and if you are, who are you looking for? Who would be a good fit for you?
Jeremy Antoniuk: Yeah, this is the nature of software, so we’re going to be building indefinitely — it will never be quote unquote finished. But yes, we are piloting. We have 12 pilots that are either underway or launching soon, so it’s a really exciting time. We ended up coming up with a really interesting solution for these pilots where, in talking with our prospective clients, they’d say, well, I love the idea, it sounds great, but I’m just not comfortable using this new tool on a live client project. And I get it — I’ve been in that hot seat before and I would probably say the same thing.
So what we came up with, which is really starting to resonate, is what we call a retro project. We take a project that they’ve already fielded and do a retrospective delivery. They give us the Word document and then we go through and say, okay, let’s import that and see if we can get to the 90% or 100% mark of automating the programming. We’re not doing sample, obviously, but we can take the data they’ve collected and import that into our platform — because we built it to be very modular, knowing some clients have Decipher, Qualtrics, Confirmit, et cetera. I talked to a client the other day who said, hey, I love what you’re doing, but we just signed our big contract three weeks ago. So we’ve made it modular where even if you don’t use it for the survey programming, you can still pull in the data and run our quality checks on it, which gives you the foundation to get into the deliverables. These retro projects work well with clients who are interested in innovation, have a little bit of patience to help us work through some minor defects, and ideally have some interest in giving us feedback on how to make it even better. Either way, we’re able to do this without any risk to live client projects at all.
Priscilla McKinney: Yeah, I can see the value there. So let me end on this. I kind of want your perspective on what you think Scalafai Studio is doing the very best, because you mentioned that you started from this data visualization piece but then really the promise of Scalafai Studio is more about end-to-end efficiency and operational management of projects. I’m curious — in going from zig to zag, what is the thing you’re most proud of? Where are you hanging your hat and thinking, okay, we’ve really nailed this, or we’re really different in this way?
Jeremy Antoniuk: Gosh, that’s a dangerous thing when you’re building software — I give myself about 10 seconds to pat myself and my team on the back, and then we’ve got to move on to the next thing. But actually just this past Friday, we had a call with a major auto manufacturer. They’re looking to streamline — they do about half their projects internally and half they outsource to agencies, and they’re under the better, faster, cheaper pressure just like everybody else. So we talked about doing one of these retro projects and we also mentioned that as part of an extension of these presentations, we can do a really cool infographic because we can take that survey data, pull it in, put the brand guidelines, the logo, the survey content in there, and about 30 seconds later you get this really incredible infographic.
They had expressed some interest in that, and Friday afternoon I thought, you know what, I want to create one of these for them after the call. But to do that I needed the data, and to do that I needed the survey. So I went into Scalafai Studio and gave it some instructions — let’s write the survey, here are a couple of tweaks, great, let’s program it, done. Now let’s go through, and for that one I just ran some synthetic test data through it to get some placeholder results. Then we got into the analysis and created the infographic. That whole process took about 55 minutes, all in. And that’s where I was like, okay, this is starting to get kind of exciting.
Priscilla McKinney: Yeah, we’ve got something here. Well, you need to connect with Jeremy Antoniuk — find him on LinkedIn and ask him all the questions you want to ask him. I really appreciate you coming on and sharing your expertise. We’re moving from the wild, wild west, and it’s going to get settled here at some point. It’s going to be very interesting to see who settles out here in the wild, wild west of AI and market research.
Jeremy Antoniuk: Indeed, I’m looking forward to it.