# ADR-0007: No AI/LLM Features in MVP
Status: Accepted
Date: 2025-01-20 (Estimated)
Deciders: GetCimple Team
Tags: architecture, scope, ai, mvp
## Context
GetCimple has extensively documented AI-powered features (policy analysis, risk identification, report generation) in its specs. However, as a 3-person startup building an MVP, we must decide: ship AI features in the MVP, or defer them to post-MVP?
AI Features Considered:

- LLM-powered policy analysis and extraction
- AI-driven risk identification from compliance data
- Automated report generation with AI insights
- Question categorization and intelligent routing
Requirements:

- Launch MVP quickly to validate market
- Demonstrate Essential 8 compliance value
- Board reporting functionality
- Assessment questionnaire workflow
Constraints:

- 3-person team with limited AI/ML expertise
- Budget: API costs (Claude API, GPT-4) could be $500-2000/month
- Complexity: Prompt engineering, error handling, testing
- Risk: AI outputs require validation for compliance use case
- Time: AI features add 4-8 weeks to MVP development
## Options Considered
### Option A: No AI in MVP (Rule-Based Logic Only)
Description: Build MVP with TypeScript/SQL business logic only. Defer all AI features to post-MVP.
Pros:

- ✅ Faster MVP: 4-8 weeks faster development (no AI integration)
- ✅ Deterministic: Rule-based logic is predictable, testable, auditable
- ✅ Lower cost: No AI API costs during MVP validation
- ✅ Simpler testing: Unit tests straightforward (no LLM unpredictability)
- ✅ Compliance clarity: Rule-based outputs easier to audit and explain
- ✅ Focus: Team focuses on core compliance workflow, not AI complexity
- ✅ Risk reduction: Avoid AI hallucination risks in compliance context
Cons:

- ❌ Less "innovative" (no AI differentiation in MVP)
- ❌ More manual effort for users (policy analysis, report writing)
- ❌ Defers key differentiation to post-MVP
- ❌ Might need to re-architect for AI later
Estimated Effort: MVP launch in 8-12 weeks
### Option B: AI Features in MVP
Description: Include LLM-powered policy analysis, risk identification, and report generation in MVP.
Pros:

- ✅ Differentiation: AI features make GetCimple stand out
- ✅ Better UX: Automated policy analysis saves user time
- ✅ Competitive advantage: Few compliance tools have good AI integration
Cons:

- ❌ Slower MVP: 4-8 weeks additional development
- ❌ Higher risk: AI outputs need validation (compliance-critical)
- ❌ Cost: $500-2000/month in AI API costs
- ❌ Complexity: Prompt engineering, error handling, testing
- ❌ Team skill gap: Limited AI/ML expertise
- ❌ Unpredictable: LLM responses vary, harder to test
- ❌ Compliance risk: AI hallucinations are dangerous in a compliance context
Estimated Effort: MVP launch in 12-20 weeks
### Option C: Hybrid (One Simple AI Feature)
Description: Include one low-risk AI feature (e.g., AI-assisted report drafting) in MVP.
Pros:

- ✅ Some AI differentiation
- ✅ Moderate development timeline
Cons:

- ❌ Still adds 2-4 weeks to MVP
- ❌ Doesn't significantly differentiate (half-measure)
- ❌ API costs still apply
Estimated Effort: MVP launch in 10-16 weeks
## Decision
We chose: Option A - No AI in MVP
Rationale:

1. Speed to market: 4-8 weeks faster launch lets us validate product-market fit sooner
2. Compliance safety: Rule-based logic is deterministic, auditable, explainable to boards
3. Cost control: Avoid AI API costs until we have revenue and usage data
4. Focus: 3-person team can ship core compliance workflow without AI complexity
5. Risk mitigation: Defer AI uncertainty until the product is validated and the team has AI expertise
6. User validation: Test if customers want AI features before building them
7. Architecture flexibility: Can add AI features incrementally post-MVP without re-architecture
Key Trade-offs Accepted:

- We're accepting less differentiation in MVP to ship faster and reduce risk
- We're accepting more manual user work (policy analysis) to ensure accuracy
- We're betting that core compliance workflow is valuable without AI
- We're deferring "cool factor" to focus on "works reliably"
## Consequences
### Positive
- ✅ Faster MVP: Launch 4-8 weeks sooner, validate market fit earlier
- ✅ Lower risk: No AI hallucination concerns in compliance-critical features
- ✅ Predictable: Rule-based logic is deterministic, easier to test and debug
- ✅ Lower cost: Save $500-2000/month in AI API costs during MVP
- ✅ Simpler codebase: TypeScript/SQL business logic easier to maintain
- ✅ Compliance-safe: Deterministic outputs easier to audit and explain to boards
- ✅ Team focus: Can concentrate on nailing core workflow, not AI edge cases
- ✅ User feedback first: Learn what AI features users actually want
### Negative
- ⚠️ Manual effort: Users must analyze policies manually (vs AI automation)
- ⚠️ Less differentiation: MVP won't have "AI-powered" marketing angle
- ⚠️ Competitive gap: Competitors with AI might seem more advanced
- ⚠️ Deferred value: AI features delayed to post-MVP (might be table stakes by then)
### Risks
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| Customers expect AI features in compliance tools | MEDIUM | MEDIUM | Position as "accurate and auditable" vs "AI-powered but uncertain"; add AI post-MVP based on feedback |
| Competitors release AI features first | MEDIUM | LOW | Fast follower strategy; we can add AI in 4-6 weeks post-MVP; focus on workflow quality |
| Re-architecture needed for AI later | LOW | MEDIUM | Design with extensibility in mind; AI can be added as optional enhancement layer |
| Market moves faster toward AI than expected | LOW | MEDIUM | Monitor competitor landscape; have AI features in backlog ready to prioritize |
## Compliance Note
ACSC Essential 8 Impact:

- Not directly applicable: whether logic is AI-driven or rule-based is an implementation detail for Essential 8 compliance
Australian Data Residency:

- Benefit of deferral: Avoids sending compliance data to AI APIs (Claude, GPT-4), which may process it outside Australia
- Post-MVP consideration: When adding AI, ensure Australian API regions or on-premise models
Audit Trail:

- Advantage: Rule-based logic provides a clear audit trail (input → deterministic output)
- AI complexity: LLM outputs are harder to audit ("why did the AI suggest this risk?")
## Implementation Notes
MVP Approach (Rule-Based):
Policy Analysis:

- Manual upload and categorization by users
- Template-based policy structure
- Predefined compliance mappings (see the sketch below)
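To make "predefined compliance mappings" concrete, here is a minimal TypeScript sketch. The category names, labels, and control identifiers are invented for illustration, not the actual GetCimple schema.

```typescript
// Hypothetical shape for predefined compliance mappings: each policy
// category declares which Essential 8 controls it provides evidence for.
// Category names and control IDs below are illustrative only.
type Essential8Control =
  | "restrict-admin-privileges"
  | "regular-backups"
  | "patch-applications"
  | "multi-factor-authentication";

interface PolicyCategory {
  id: string;
  label: string;
  mappedControls: Essential8Control[];
}

const POLICY_CATEGORIES: PolicyCategory[] = [
  {
    id: "access-management",
    label: "Access Management Policy",
    mappedControls: ["restrict-admin-privileges", "multi-factor-authentication"],
  },
  {
    id: "backup-recovery",
    label: "Backup & Recovery Policy",
    mappedControls: ["regular-backups"],
  },
];
```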
Risk Identification:

- Checkbox questionnaires based on Essential 8 controls
- Rule-based maturity level calculation (see the sketch below)
- Predefined risk templates
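A rule-based maturity calculation could be as small as the following sketch, assuming cumulative Essential 8 maturity levels and boolean questionnaire answers keyed by question ID (both assumptions for this example):

```typescript
// Rule-based maturity calculation sketch: a control sits at the highest
// maturity level whose required questions (including all lower levels)
// are all answered "yes". Deterministic and trivially unit-testable.
interface MaturityQuestion {
  id: string;
  level: 1 | 2 | 3; // Essential 8 maturity levels are cumulative
}

function calculateMaturityLevel(
  questions: MaturityQuestion[],
  answers: Record<string, boolean>
): 0 | 1 | 2 | 3 {
  for (const level of [3, 2, 1] as const) {
    const required = questions.filter((q) => q.level <= level);
    if (required.length > 0 && required.every((q) => answers[q.id] === true)) {
      return level;
    }
  }
  return 0; // no level fully satisfied
}
```

Because the result is a pure function of the answers, the same inputs always yield the same maturity level, which is what makes it auditable and explainable to boards.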
Report Generation:

- Template-based reports with variable substitution (see the sketch below)
- User fills in narrative sections
- Automated compliance status aggregation
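Variable substitution for template-based reports can stay very small. The sketch below assumes a `{{placeholder}}` syntax; the template text and variable names are illustrative.

```typescript
// Minimal template substitution: replace {{placeholders}} with computed
// compliance values, leaving unknown placeholders intact so gaps stay
// visible to the user rather than being silently blanked.
function renderReportSection(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in variables ? variables[key] : match
  );
}

// Example usage with invented values:
const summary = renderReportSection(
  "As at {{reportDate}}, {{orgName}} meets {{compliantControls}} of 8 Essential 8 controls.",
  { reportDate: "2025-03-31", orgName: "Acme Pty Ltd", compliantControls: "5" }
);
```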
Question Flow:

- Hardcoded decision trees (see the sketch below)
- Category-based routing
- Simple TypeScript logic
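A hardcoded decision tree for question routing might look like this sketch; the node IDs, prompts, and branches are invented to show the shape, not the real flow.

```typescript
// Hardcoded decision-tree sketch for the assessment question flow.
// Each node maps an answer to the next node ID (null ends the branch).
interface QuestionNode {
  prompt: string;
  next: Record<string, string | null>;
}

const QUESTION_TREE: Record<string, QuestionNode> = {
  "backups-exist": {
    prompt: "Do you perform regular backups of important data?",
    next: { yes: "backups-tested", no: null },
  },
  "backups-tested": {
    prompt: "Are backup restorations tested at least annually?",
    next: { yes: null, no: null },
  },
};

function nextQuestion(currentId: string, answer: string): QuestionNode | null {
  const nextId = QUESTION_TREE[currentId]?.next[answer] ?? null;
  return nextId !== null ? QUESTION_TREE[nextId] ?? null : null;
}
```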
Post-MVP AI Integration Plan:

- Add AI as an enhancement layer over the rule-based foundation
- AI provides suggestions, rule-based logic provides guarantees
- Users can choose AI-assisted or manual workflow
- All AI outputs shown with confidence scores and sources
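One possible typing for that enhancement layer keeps the rule-based path as the guaranteed fallback; all names below are assumptions for this sketch, not an existing API.

```typescript
// Sketch: AI output is a suggestion wrapping the same result type the
// rule-based path produces, so deterministic logic remains the guarantee.
type AssessmentInput = Record<string, unknown>; // placeholder shape
interface RiskResult {
  risks: string[];
}

interface AiSuggestion<T> {
  value: T;
  confidence: number; // 0..1, surfaced to the user alongside sources
  sources: string[];
}

async function assessRisks(
  input: AssessmentInput,
  aiDraft: (input: AssessmentInput) => Promise<AiSuggestion<RiskResult>>,
  ruleBased: (input: AssessmentInput) => RiskResult
): Promise<{ result: RiskResult; aiAssisted: boolean }> {
  try {
    const suggestion = await aiDraft(input);
    if (suggestion.confidence >= 0.8) {
      // In the real flow the user would approve before acceptance.
      return { result: suggestion.value, aiAssisted: true };
    }
  } catch {
    // AI unavailable or errored: fall through to rule-based logic.
  }
  return { result: ruleBased(input), aiAssisted: false };
}
```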
Integration Points:

- None currently (AI deferred)
- Future: Anthropic Claude API, LangGraph orchestration
Monitoring:

- N/A for MVP (no AI to monitor)
- Post-MVP: Track AI API costs, response times, accuracy metrics
Documentation Updates Needed:

- ✅ Tech stack clarifies "no AI in MVP"
- ⚠️ AI specs remain in /specs/post-mvp/ai/ for future reference
- ✅ Implementation guide focuses on rule-based logic
## Revisit
Revisit By: 2025-07-01 or after 50 active customers (whichever comes first)
Blast Radius: MEDIUM - AI features are additive, not fundamental architecture changes
Conditions for Revisit:

- Customer feedback explicitly requests AI features (>30% of users ask)
- Competitors release AI features that become table stakes
- Team gains AI/ML expertise (hire or upskill)
- Revenue comfortably supports AI API costs ($500-2000/month)
- Australian AI API providers become available (Claude, GPT-4 Sydney regions)
Next Review: 2025-05-01 (post-MVP customer feedback analysis)
## Post-MVP AI Strategy
When adding AI features:
- Start small: One AI feature at a time (e.g., AI-assisted report drafting)
- User control: AI suggestions, user approval (never fully automated)
- Confidence scores: Show AI certainty, let users override
- Audit trail: Log all AI inputs/outputs for compliance
- Fallback: Rule-based logic always available if AI fails
- Testing: Extensive prompt testing, adversarial inputs, hallucination detection
- Cost monitoring: Track per-user AI API costs, optimize prompts
## References
- Tech Stack MVP - Processing & Automation
- AI Feature Specs - Future reference
- LLM Query System Spec
- ADR-0001: Supabase Backend
## Version History
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-01-20 | Claude | Initial ADR capturing scope decision for MVP |