Option B · PM Analysis
Conversational Evidence Request
Auditor asks in chat. Owner replies. AI ties everything to controls.
Thesis
Email and Slack are how evidence actually flows in real audits — not formal PBC lists. Replace the inbox with a structured chat layer where AI auto-tags every exchange to the right framework points. The interaction model is familiar; the structure is invisible. Adoption spreads virally as auditors push it across their client portfolios.
Target user
Control owner + external auditor
GRC + Audit firms
- Frequency: Daily during fieldwork
- Tools today: Email PDFs back and forth · Slack threads · shared drive folders
- Core pain: Auditor sends an email asking for X. Owner replies with attachments. Six weeks later, during the next audit, no one can find the thread. Chain of custody is a guess.
- Win state: Open the inbox, answer three chat-style requests in the morning. Each becomes audit-trail-ready evidence linked to controls automatically.
Business Model Canvas
The nine standard blocks, mapped to this option.
Customer Segments
Same as A, plus direct positioning to external audit firms (auditors as users, not just buyers)
Value Proposition
"Stop emailing PDFs. Every evidence conversation becomes audit-trail-ready, linked to controls, instantly searchable."
Channels
AuditBoard direct + auditor-led co-sell (auditors push their clients to use it) · freemium tier for auditors
Customer Relationships
Lighter implementation (2–4 weeks) · auditor-driven adoption is viral · self-serve onboarding
Revenue Streams
Per-seat for control owners · auditor seats free or low-cost (drives adoption + lock-in)
Key Resources
LLM inference budget · chat infrastructure · auditor relationships and trust
Key Activities
Train AI on control mapping · maintain conversation UX · auditor enablement and training
Key Partners
External audit firms (KPMG, Deloitte, BDO, mid-market) · LLM vendor (Anthropic / OpenAI) · GRC framework bodies
Cost Structure
LLM inference (variable, scales with usage) · engineering (med) · sales (med) · auditor partner enablement
Pros
- Familiar UX (chat) → fast adoption. Demos beautifully — instant "I get it" reaction.
- Asymmetric viral path: auditors push it on their portfolio of clients (one auditor → 10–50 customers).
- Smaller engineering footprint than A — primary cost is LLM and UX.
- Works for evidence types that have no integration (the "messy middle" — PDFs, screenshots, attestations).
- AI summary + auto control-linking is genuinely useful and shows AI value clearly.
Cons / risks
- Wrapper risk — anyone can build a chat UI in 6 weeks. Without the control graph (C) + integration layer (A) underneath, this is undifferentiated.
- LLM inference costs scale with conversation volume — pricing pressure as customers grow.
- Doesn't reduce the work; it only structures it. The owner still types every reply.
- AI mis-tagging = compliance risk. Auditor signs off on the wrong control = real liability.
- Auditor-side adoption is hard. Auditors have their own workpaper tools and workflows.
- Privacy concerns with AI reading evidence (PHI in healthcare, financials in fintech).
Build & time
Build complexity
Medium
Time to MVP
3–4 months
Time to "wow"
~6 weeks
Path to GA
- Build chat MVP with AI control-mapping (LLM-backed).
- Pilot with 3 audit firms — measure mapping accuracy + auditor satisfaction.
- Iterate on AI grounding + retrieval to reduce mis-tags.
- Add async handoffs (auditor leaves, returns; owner replies overnight).
- Layer on top of A or C as the UX surface (this is its real future).
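The control-mapping step above can be sketched as a retrieval baseline: score each incoming evidence message against every control description and auto-link only when confidence clears a threshold, routing low-confidence matches to a human (the "reduce mis-tags" step). A minimal sketch using bag-of-words cosine similarity — the control IDs, descriptions, and the `map_to_control` helper are all hypothetical; a production version would use an LLM or learned embeddings with retrieval grounding:

```python
import math
from collections import Counter

# Hypothetical control catalog; IDs and descriptions are illustrative,
# not drawn from any real framework.
CONTROLS = {
    "AC-2": "user account provisioning and deprovisioning access reviews",
    "CM-3": "change management approval and deployment records",
    "IR-4": "incident response tickets and postmortem documentation",
}

def _vector(text: str) -> Counter:
    # Bag-of-words term counts; a stand-in for learned embeddings.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_to_control(message: str, threshold: float = 0.1):
    """Link an evidence message to its closest control, or None if no
    match clears the threshold (low-confidence → human review queue)."""
    scores = {cid: _cosine(_vector(message), _vector(desc))
              for cid, desc in CONTROLS.items()}
    best = max(scores, key=scores.get)
    return (best, round(scores[best], 2)) if scores[best] >= threshold else None

print(map_to_control(
    "attached the quarterly access reviews for user account deprovisioning"))
```

The threshold is the lever the pilot phase would tune: raising it trades auto-link coverage for fewer mis-tags, which matters given the liability risk of an auditor signing off against the wrong control.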
Fit assessment
★★★★★
Tactically attractive (fast demo, viral path, easy build). Strategically weak as a standalone — wrapper territory. Best treated as a UX layer on top of A or C, not as the product itself. Five-star demo, three-star moat.