From Pixie Dust to Magic Machines: Evaluating AI in Research Tools


There’s no shortage of tools available to product delivery teams for gathering, storing, socializing, and leveraging customer insights to inform product strategy. As Jake Burghardt, a respected thought leader in Research Operations (ReOps), said in a recent LinkedIn post, “The market of apps has become dizzying…” Artificial intelligence (AI) is a key driver of this boom.

A new tool seems to pop up every week or so. While AI has been around for decades, it wasn’t until OpenAI released ChatGPT in late 2022 that the technology became usable for the masses, demonstrating its value and potential. That release unleashed the wave of AI tools we’re all riding now, and sparked plenty of debates along the way.

Perhaps the biggest debate is whether AI will take our jobs. At a keynote during San Diego Startup Week in October 2023, a speaker put it this way: “AI isn’t going to take your job; people who know how to use AI will take your job.” In other words, shunning AI is a career killer.

Integrating AI

While there are ethical implications to consider when using AI, the technology is not going away. That’s why we at Rutabaga have been mindful about how we integrate AI into our platform. Our approach resonates with prospective customers; as one person said during a platform demo, “LLMs are the future of our field and I haven’t seen anyone design this in a way that supports discoverability like what you’re doing.” In his recent appearance on Lenny’s podcast, Judd Antin even named prompt engineering as one of the five tools great researchers have at their disposal.


AI is changing how product delivery teams do research, and it starts with the tools people use. How are tools using AI? We’ve been keeping a close eye on exactly that, with a special focus on researchers, because the first application we’ll launch on our platform supports qualitative data analysis. We’ve found that tools tend to fall into three camps in their use of AI. Like Goldilocks’ three bowls of porridge, some run too hot, some too cold, and some just right:

Magic AI Machine

The tools in this “too hot” camp risk alienating their audience because they rely too heavily on AI, using the technology to replace humans. Given AI’s propensity for hallucination and bias, their outputs are difficult to trust. Some tools have gone so far as to replace the researcher and/or the research participant with an AI chat; soon, these tools may just be AIs chatting with themselves. As long as latent needs exist, AI will not be able to do all the work for you. These tools may help you quickly find the low-hanging fruit, but the deeper human insights will likely be left in the transcripts… at least when human participants are used.

Pixie Dust

The “too cold” tools in this camp tend to predate the AI boom we’re seeing today. They’re releasing AI features to stay relevant and keep up with the changing landscape. However, their AI capabilities are more of an afterthought, a sprinkling of pixie dust on existing workflows that provides little to marginal value to the end user or the business.


Just Right

Companies building new tools have a distinct advantage over existing ones: the ability to meaningfully bake AI into workflows from the get-go. Instead of looking at a tool and wondering, “How might we add AI here?” we can ask, “How might we make AI a fundamental part of our tool?” This camp is by far the smallest of the three because it’s where the hard work is.

The possibilities of AI are massive, yet the reality of what AI can accomplish today, in the context of nuanced data analysis, requires human input and oversight in prompt engineering. The challenge is to figure out how AI can serve as an assistant, in truly meaningful and trusted ways, handling the tedious parts of the job so researchers can instead focus on delivering value. Rutabaga fits in this camp, and it has been a fun problem to explore.
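To make the “assistant, not replacement” idea concrete, here is a minimal, hypothetical sketch (not Rutabaga’s actual implementation) of how a tool might structure LLM-assisted qualitative coding while keeping the researcher in the loop: the model is only asked to propose tags from a researcher-defined codebook, and every suggestion is meant to go to a human for review. The function name, codebook, and prompt wording are all illustrative assumptions.

```python
# Hypothetical sketch: LLM-assisted qualitative coding with a human in the loop.
# The model is only asked to PROPOSE codes from a researcher-defined codebook;
# nothing is applied until a researcher accepts or rejects each suggestion.

def build_coding_prompt(excerpt: str, codebook: dict) -> str:
    """Construct a constrained prompt so the model can't invent new codes."""
    code_lines = "\n".join(
        f"- {code}: {definition}" for code, definition in codebook.items()
    )
    return (
        "You are assisting a researcher with qualitative coding.\n"
        "Suggest at most two codes from this codebook, or 'none':\n"
        f"{code_lines}\n\n"
        f"Interview excerpt:\n\"{excerpt}\"\n\n"
        "Respond with code names only, for the researcher to review."
    )

# Example codebook a researcher might define up front.
codebook = {
    "onboarding-friction": "Confusion or effort during first use",
    "trust": "Statements about confidence in the product or its data",
}

prompt = build_coding_prompt(
    "I wasn't sure where to start after signing up.", codebook
)
print(prompt)
```

In a real pipeline, this prompt would be sent to whatever model the platform uses, and the returned suggestions would land in a review queue rather than being applied automatically. That review step, not the model call, is what keeps a tool in the “assistant” camp.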

With the above framework for evaluating tools, teams may have an easier time navigating their choices. We firmly believe tools like Rutabaga will provide the most value, because they take the best AI has to offer and deploy it as an assistant while recognizing the technology’s limitations. Your organization’s culture and appetite for using AI in the first place are also critical considerations, of course. But adding the right level of AI to your toolkit for gathering, storing, socializing, and leveraging customer insights will change the game for your organization.
