Research reduced from days to ~60 minutes
Reduced documentation research and drafting time through reusable AI workflows
Cut documentation research from 2–3 days to about 60 minutes by building 22 AI-assisted workflow commands. In a single 3-week sprint: 12 PRs authored, 8 reviewed, 6 tickets closed — same editorial quality, less time sitting in the research queue.
- Per research task: ~60 min (down from 2–3 days)
- Reusable AI commands built: 22
- PRs merged in 3 weeks: 12
The challenge
Every ticket started the same way. Read the description and comments, find the architecture docs, map what already existed, figure out who to loop in. For anything cross-product, that phase could run a week before a single word got written.
The writing was almost never the bottleneck. Context was — getting enough of it to write with confidence. Same grind every ticket, regardless of scope.
It got worse with complexity. More product areas meant more sources to track down: engineering notes, support data, changelog entries, docs scattered everywhere. Work through incomplete context and you end up writing the wrong thing, or framing it badly, or shipping something that comes back with three rounds of revisions.
The approach
The goal wasn't to replace judgment. It was to stop spending days just getting to the point where judgment could apply. It started with one command. Over five months, friction from real work grew that one into 22.
The /research command did the heavy lifting: pull the ticket and comments, find related architecture docs, map what already existed, surface stakeholders. Three sessions, a few minutes each — what used to take days took about an hour.
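The shape of that pipeline is simple enough to sketch. A minimal sketch in Python, where every function, name, and return value is hypothetical; the real command queries the ticket tracker, docs search, and an AI model rather than these stubs:

```python
# Hypothetical sketch of a /research-style pipeline. Every function and
# return value here is a stand-in: the real command calls the ticket
# tracker, docs search, and an AI model instead of these stubs.

def fetch_ticket(ticket_id: str) -> dict:
    """Stub: pull the ticket description and its comment thread."""
    return {
        "id": ticket_id,
        "title": "Example ticket title",
        "comments": ["needs cross-product review", "check support threads"],
    }

def find_related_docs(ticket: dict) -> list[str]:
    """Stub: search architecture docs and existing pages for related material."""
    return ["architecture/overview.md", "product-a/configuration.md"]

def surface_stakeholders(ticket: dict) -> list[str]:
    """Stub: identify owners of the affected product areas."""
    return ["docs lead", "product engineering"]

def build_research_brief(ticket_id: str) -> str:
    """Assemble everything a writer needs before drafting starts."""
    ticket = fetch_ticket(ticket_id)
    lines = [f"Research brief: {ticket['title']}"]
    lines += ["", "Existing coverage:"]
    lines += [f"  - {doc}" for doc in find_related_docs(ticket)]
    lines += ["", "Who to loop in:"]
    lines += [f"  - {who}" for who in surface_stakeholders(ticket)]
    lines += ["", "Ticket context:"]
    lines += [f"  - {note}" for note in ticket["comments"]]
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_research_brief("DOC-1234"))
```

The value isn't any single step; it's that the assembly happens unattended, so the writer starts from a brief instead of a blank queue.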
Other commands addressed specific friction points in the daily workflow:
- /draft: scaffold new pages with the correct content type, frontmatter, and component imports in ~5 minutes instead of ~30
- /review-doc: run systematic style guide checks that previously depended on reviewer availability
- /sprint-prep: pull from product feedback channels, support tickets, and community threads, cluster by theme, and rank by severity
- /escalation-gaps: surface support patterns across too many tickets to review manually, so the highest-pain documentation gaps could be identified systematically
- /check-images: automate visual audits that were previously skipped because there were too many images to review by hand (sketched after this list)
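To make the last of those concrete: the core of an image audit is mechanical, which is exactly why it kept getting skipped by hand and was easy to automate. A minimal sketch of that kind of check, assuming the docs are Markdown files on disk; the docs root, regex, and alt-text rule are illustrative assumptions, not the actual command:

```python
# Minimal sketch of a /check-images-style audit: walk a docs tree, find
# Markdown image references, and flag broken paths and empty alt text.
# The "docs" root and the two rules are illustrative assumptions.
import re
from pathlib import Path

IMAGE_REF = re.compile(r"!\[(?P<alt>[^\]]*)\]\((?P<path>[^)\s]+)[^)]*\)")

def audit_images(docs_root: Path) -> list[str]:
    problems = []
    for page in docs_root.rglob("*.md"):
        for match in IMAGE_REF.finditer(page.read_text(encoding="utf-8")):
            alt, src = match.group("alt"), match.group("path")
            if not src.startswith(("http://", "https://")):
                # Resolve relative image paths against the page's folder.
                if not (page.parent / src).exists():
                    problems.append(f"{page}: missing image {src}")
            if not alt.strip():
                problems.append(f"{page}: empty alt text for {src}")
    return problems

if __name__ == "__main__":
    for problem in audit_images(Path("docs")):
        print(problem)
```

Run against a docs tree, it prints one line per broken reference or missing alt text, which is what turns a previously skipped audit into a routine one.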
None of them were planned. Each one started as a fix for something that kept taking too long.
The outcomes
The numbers shifted fast. In three weeks: 12 PRs authored and merged, 8 reviewed, 6 tickets closed, 3 follow-ups filed, 7 merge requests for the tooling itself. Every PR still went through full human review.
| Task | Before | After |
|---|---|---|
| Researching a ticket | 2–3 days | ~60 min AI processing + review |
| Scaffolding a new page | ~30 min setup | ~5 min via /draft |
| Product rename across docs | 2–3 weeks (estimated) | 3 days for 695 files across 4 products |
| Image audit | Rarely done — too many to review | Routine via /check-images |
| Style guide review | Inconsistent, reviewer-dependent | Systematic via /review-doc |
A concrete example
One ticket asked for a comparison of an internal training course against hundreds of existing docs — the kind that used to sit in the queue for a week before research was even done. The /research command ran a few times over a couple of days, finding context, locating materials, mapping gaps. Total AI processing time: about 60 minutes.
The biggest gap was visible before writing even started: no page explaining why a customer would want the product. Configuration docs existed. The business case didn't. By the following morning the page was written, reviewed, and merged. Six existing pages got cross-links. Three follow-up tickets went in for the rest.
Days of research, one hour of processing. The writing still took a focused morning — that part didn't change.
Why it mattered
The bottleneck in documentation is rarely the writing. It's getting to the point where you know what to write. Once that's clear, the rest moves fast.
By compressing the research phase, the tooling freed up time for the part that actually needs a human — deciding what to write, how to frame it, whether something should exist at all. Product teams got faster turnaround. Support gaps went from "addressed when someone notices" to tracked and prioritised.
The tooling also spread. Commands built for one person's bottleneck got picked up by the team. After five months of daily use, 22 commands — not because they were designed that way, but because each new problem got a fix and the fix got reused.
One measure of that: a connectivity page comparing 11 deployment options, researched with the /research command, drew around 2,500 views in its first two months. A ticket that research-heavy would normally have sat in the backlog for weeks. Instead it shipped in a few days.