
Deep Research Is the Most Underrated AI Feature

From streaming service comparisons to interactive infographics, weekend three of the AI Resolution closed the trust gap between “AI can research” and “I trust this for decisions.”

January 18, 2026 · 6 min read · Michael Schilling
[Image: A translucent magnifying glass floating above layered documents, transforming them into structured data nodes with an emerald light beam]

Everyone talks about AI for code generation and content creation. Those are visible, impressive, and easy to demo. But after three weekends of systematic AI testing, I am convinced that deep research is the feature most people are sleeping on.

Weekend three of the AI Resolution was about closing a specific gap: the distance between “AI can do research” and “I trust this output enough to make real decisions based on it.” That gap is smaller than you think.

The streaming service problem

Let me start with a real problem. My family needed to rationalize our streaming subscriptions. We had accumulated services the way most households do — signing up for one show, forgetting to cancel, layering on another. I wanted to figure out the optimal combination for our household in the Netherlands.

The constraints: I am a massive Marvel fan (Agents of S.H.I.E.L.D. specifically — the show that inspired The Hub’s name), we have kids, and I did not want to spend more than necessary. This is the kind of decision that normally takes an evening of googling, cross-referencing pricing pages, and making a spreadsheet.

I gave the task to Perplexity.

Within minutes, I had a structured comparison of every streaming service available in the Netherlands, with current pricing, content libraries relevant to my criteria, and, crucially, citations I could verify. Perplexity cites its sources inline, which meant I could spot-check each claim against the actual service pages.

The findings: Disney+ is the only service in the Netherlands that carries Agents of S.H.I.E.L.D., which alone makes it a year-round subscription. Netflix has the strongest kids' catalog, but we do not need it continuously; subscribing only for the vacation months makes more sense. And several telecom providers bundle streaming services with internet packages at a discount worth checking.

The result was a “hopping strategy”: Disney+ year-round, Netflix for vacation months when the kids are home, and a note to check our telecom bundle at the next renewal. Annual cost: roughly 260 to 325 euros, down from what we were paying for always-on subscriptions to everything.
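For anyone who wants to reproduce the arithmetic, here is a minimal sketch of the hopping-strategy cost model. The prices and month counts below are hypothetical placeholders, not actual 2026 Dutch rates; the point is the shape of the calculation, which happens to land near the low end of the range above.

```python
# Hopping-strategy cost model. All prices are hypothetical
# placeholders; plug in current rates at comparison time.

monthly_price_eur = {
    "Disney+": 13.99,  # hypothetical; year-round for Agents of S.H.I.E.L.D.
    "Netflix": 18.99,  # hypothetical; vacation months only
}

months_subscribed = {
    "Disney+": 12,
    "Netflix": 5,
}

annual_cost = sum(
    monthly_price_eur[svc] * months_subscribed[svc]
    for svc in monthly_price_eur
)
print(f"Estimated annual cost: {annual_cost:.2f} EUR")
# -> Estimated annual cost: 262.83 EUR
```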

This is not a groundbreaking research task. But it is a real one, and the speed and quality of the output genuinely surprised me. What would have been an evening project became a twenty-minute exercise with verifiable results.

The Masked Singer experiment

The second research project was more playful but technically more interesting.

The Dutch version of The Masked Singer was airing, and I wanted to see if AI could help analyze the cryptic hints that each masked performer gives. These hints are deliberately vague — references to Dutch culture, wordplay, career milestones — and the fun is in decoding them.

I used Google AI Studio for this, and the result went far beyond what I expected.

Instead of a static analysis, I ended up building an interactive infographic with two AI-enhanced features:

The Hint-Kraak Machine (hint-cracking machine) — You input the cryptic hints from an episode, and it cross-references them against a database of Dutch celebrities, suggesting matches with confidence scores and reasoning. It understood Dutch cultural references, wordplay in Dutch, and could connect hints about career milestones to specific public figures; a toy sketch of the matching idea follows after these two features.

The Kostuum Ontwerper (costume designer) — A more creative feature that could generate costume design concepts based on the show’s aesthetic. This was mostly for fun, but it demonstrated something important about AI Studio: it enables interactive outputs, not just static text.
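To make the Hint-Kraak idea concrete, here is the toy sketch promised above: confidence-scored matching of hints against candidates. The names, traits, and scoring rule are illustrative inventions, not the actual AI Studio implementation, which leaned on a language model's grasp of Dutch wordplay rather than simple keyword overlap.

```python
# Toy hint-to-candidate matcher: rank candidates by the fraction
# of hints their known traits can explain. Candidates and traits
# are invented for illustration.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    traits: set[str]  # career milestones, cultural references, etc.


CANDIDATES = [
    Candidate("Zanger A", {"eurovision", "volendam", "duet"}),
    Candidate("Presentator B", {"quizshow", "amsterdam", "radio"}),
]


def match(hints: set[str]) -> list[tuple[str, float, set[str]]]:
    """Return (name, confidence, matched hints), best match first."""
    results = []
    for c in CANDIDATES:
        overlap = hints & c.traits
        confidence = len(overlap) / len(hints) if hints else 0.0
        results.append((c.name, confidence, overlap))
    return sorted(results, key=lambda r: r[1], reverse=True)


for name, conf, why in match({"eurovision", "duet", "den haag"}):
    print(f"{name}: {conf:.0%} (matched: {sorted(why)})")
```

The "reasoning" in the real tool is the interesting part; the overlap set returned here is a crude stand-in for the model's explanation of why a hint points at a particular public figure.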

The Masked Singer project was not about making real decisions. But it proved a capability I had not fully appreciated: AI research tools can produce interactive, explorable outputs — not just reports you read and forget.

The presentation showdown

While I was testing research tools, I also ran a side experiment: giving the same research input to two different AI presentation tools and comparing the results.

Manus produced what I would call the cinematic approach. Polished visuals, narrative flow, the kind of presentation you would put in front of an executive who wants to feel like they are watching a keynote. The information was accurate but wrapped in a layer of production value that sometimes obscured the actual data.

GenSpark took the analytical route. Data-focused, structured, heavy on comparisons and numbers. Less visually impressive, but if you needed to make a decision based on the presentation, GenSpark gave you the clearer foundation.

Neither was strictly better. They serve different audiences and different purposes. But the comparison reinforced something I keep learning: the same input produces wildly different outputs depending on the tool, and understanding those differences is part of building your AI toolkit.

Why deep research is underrated

Here is why I think research is the most underrated AI feature:

The trust problem is solvable. The biggest objection to AI research is “how do I know it is accurate?” Perplexity solves this with inline citations. You can verify every claim against the original source. This is not blind trust — it is trust with verification, which is exactly how we treat any research.

It handles locale-specific knowledge. The streaming service comparison required Netherlands-specific pricing, availability, and telecom bundle information. A year ago, AI tools would have given me US-centric answers. Perplexity handled Dutch-market specifics without prompting.

Interactive outputs change the game. Google AI Studio’s ability to produce interactive, explorable research outputs — not just text — opens up use cases that static research cannot touch. The Masked Singer project was a toy example, but imagine the same capability applied to competitive analysis, market research, or data exploration.

Speed changes the economics of decision-making. Some decisions are not worth spending an evening researching. But if the research takes twenty minutes instead of three hours, the calculus changes. You make better decisions more often because the cost of research drops below the threshold of “not worth the effort.”

[Image: Two glass platforms with a glowing emerald bridge of citation chains connecting a question mark to a checkmark — doubt to trust]

The gap is closed

At the start of weekend three, I had a gap between “AI can do research” and “I trust this for real decisions.” By the end, that gap was closed.

I am not saying AI research is perfect. It is not. You still need to verify, you still need critical thinking, and you still need domain knowledge to evaluate whether the output makes sense. But the baseline quality has crossed the threshold where the output is good enough to act on — with verification — rather than being a starting point you need to rebuild from scratch.

The tools that got me there: Perplexity for sourced, verifiable research with locale awareness. Google AI Studio for interactive and explorable research outputs. Manus and GenSpark for different flavors of research presentation.

If you are still using AI primarily for code generation and ignoring the research capabilities, you are leaving the most practical value on the table. Code generation saves you typing. Research saves you decisions.

Next up: building an actual application, a basketball scouting tool that would push my frontend skills and AI integration much further than research ever could.

#ai #tools #research