Client Case: Winc Academy

Winc Academy builds professional online training courses. The problem wasn't quality. It was scalability.
Every new course required external subject matter experts, days of research, and significant manual work before anyone could draft an outline. Expert availability was the bottleneck, and it meant courses struggled to keep up with how fast industries move.
We built an AI product to remove that bottleneck.
The workflow told us everything
We started by sitting with the team and watching how course developers actually work.
The process was structured but time-consuming. Define learning goals, spend days searching for sources, read industry reports, pull everything together into a structured outline. Then wait for an external expert to validate the content. The result was high quality, but heavily dependent on someone else's calendar and availability.
The developers didn't need a smarter search engine. They needed something that could do the research work and hand it back in a format they could immediately build on.
Specific, not generic
We fed existing Winc training materials into the system. The AI doesn't research topics generically. It researches them through the lens of how Winc structures courses. Their methodology. Their quality standards. Their way of defining learning objectives and the balance between theory and practice.
Upload a course outline and the AI reviews its learning goals and suggests new modules, each with reasoning. It searches for authoritative sources: industry reports, academic papers, trend analyses. Every claim comes with a citation. The design phase then produces a structured outline for each lesson, with step-by-step content creation. Ready to export. Ready for a course developer to refine rather than start from scratch.
Two design decisions that earned trust
The interface had to match how the team actually works. No onboarding friction. A dashboard, one button to review a course outline and start designing, and a short guided flow to collect the right inputs.
But what determined whether developers actually relied on it came down to two things.
Transparency: Every AI output includes source citations you can click, verify, and push back on. Trust in AI-generated research doesn't come from confidence scores. It comes from being able to check the work.
Control: The outline is presented as a set of suggestions the course developer can accept, adjust, or reject before anything moves forward. The AI handles the research and design. The course developer handles the judgement.
What this taught us
Winc didn't need a general-purpose tool. They needed one built around their methodology and their process. That specificity is what determines whether something gets used week to week or quietly shelved.
We also saw again how much a working prototype changes a conversation. Two days in, the discussion shifted from "what could AI do for us" to "how do we extend this." That shift is hard to achieve with a slide deck.

