
ADR-0007: No AI/LLM Features in MVP

Status: Accepted
Date: 2025-01-20 (estimated)
Deciders: GetCimple Team
Tags: architecture, scope, ai, mvp

Context

GetCimple's specs extensively document AI-powered features (policy analysis, risk identification, report generation). However, as a 3-person startup building an MVP, we must decide: include AI features in the MVP, or defer them to post-MVP?

AI Features Considered:

  • LLM-powered policy analysis and extraction
  • AI-driven risk identification from compliance data
  • Automated report generation with AI insights
  • Question categorization and intelligent routing

Requirements:

  • Launch MVP quickly to validate the market
  • Demonstrate Essential 8 compliance value
  • Board reporting functionality
  • Assessment questionnaire workflow

Constraints:

  • 3-person team with limited AI/ML expertise
  • Budget: API costs (Claude API, GPT-4) could be $500-2000/month
  • Complexity: Prompt engineering, error handling, testing
  • Risk: AI outputs require validation for the compliance use case
  • Time: AI features add 4-8 weeks to MVP development

Options Considered

Option A: No AI in MVP (Rule-Based Logic Only)

Description: Build MVP with TypeScript/SQL business logic only. Defer all AI features to post-MVP.

Pros:

  • ✅ Faster MVP: 4-8 weeks faster development (no AI integration)
  • ✅ Deterministic: Rule-based logic is predictable, testable, auditable
  • ✅ Lower cost: No AI API costs during MVP validation
  • ✅ Simpler testing: Unit tests are straightforward (no LLM unpredictability)
  • ✅ Compliance clarity: Rule-based outputs are easier to audit and explain
  • ✅ Focus: Team focuses on the core compliance workflow, not AI complexity
  • ✅ Risk reduction: Avoids AI hallucination risks in a compliance context

Cons:

  • ❌ Less "innovative" (no AI differentiation in MVP)
  • ❌ More manual effort for users (policy analysis, report writing)
  • ❌ Defers key differentiation to post-MVP
  • ❌ Might need to re-architect for AI later

Estimated Effort: MVP launch in 8-12 weeks


Option B: AI Features in MVP

Description: Include LLM-powered policy analysis, risk identification, and report generation in MVP.

Pros:

  • ✅ Differentiation: AI features make GetCimple stand out
  • ✅ Better UX: Automated policy analysis saves user time
  • ✅ Competitive advantage: Few compliance tools have good AI integration

Cons:

  • ❌ Slower MVP: 4-8 weeks additional development
  • ❌ Higher risk: AI outputs need validation (compliance-critical)
  • ❌ Cost: $500-2000/month in AI API costs
  • ❌ Complexity: Prompt engineering, error handling, testing
  • ❌ Team skill gap: Limited AI/ML expertise
  • ❌ Unpredictable: LLM responses vary, making them harder to test
  • ❌ Compliance risk: AI hallucinations are dangerous in a compliance context

Estimated Effort: MVP launch in 12-20 weeks


Option C: Hybrid (One Simple AI Feature)

Description: Include one low-risk AI feature (e.g., AI-assisted report drafting) in MVP.

Pros:

  • ✅ Some AI differentiation
  • ✅ Moderate development timeline

Cons:

  • ❌ Still adds 2-4 weeks to MVP
  • ❌ Doesn't significantly differentiate (half-measure)
  • ❌ API costs still apply

Estimated Effort: MVP launch in 10-16 weeks


Decision

We chose: Option A - No AI in MVP

Rationale:

  1. Speed to market: 4-8 weeks faster launch lets us validate product-market fit sooner
  2. Compliance safety: Rule-based logic is deterministic, auditable, and explainable to boards
  3. Cost control: Avoid AI API costs until we have revenue and usage data
  4. Focus: A 3-person team can ship the core compliance workflow without AI complexity
  5. Risk mitigation: Defer AI uncertainty until the product is validated and the team has AI expertise
  6. User validation: Test whether customers want AI features before building them
  7. Architecture flexibility: AI features can be added incrementally post-MVP without re-architecture

Key Trade-offs Accepted:

  • We're accepting less differentiation in MVP to ship faster and reduce risk
  • We're accepting more manual user work (policy analysis) to ensure accuracy
  • We're betting that the core compliance workflow is valuable without AI
  • We're deferring "cool factor" to focus on "works reliably"

Consequences

Positive

  • ✅ Faster MVP: Launch 4-8 weeks sooner, validate market fit earlier
  • ✅ Lower risk: No AI hallucination concerns in compliance-critical features
  • ✅ Predictable: Rule-based logic is deterministic, easier to test and debug
  • ✅ Lower cost: Save $500-2000/month in AI API costs during MVP
  • ✅ Simpler codebase: TypeScript/SQL business logic easier to maintain
  • ✅ Compliance-safe: Deterministic outputs easier to audit and explain to boards
  • ✅ Team focus: Can concentrate on nailing core workflow, not AI edge cases
  • ✅ User feedback first: Learn what AI features users actually want

Negative

  • ⚠️ Manual effort: Users must analyze policies manually (vs AI automation)
  • ⚠️ Less differentiation: MVP won't have "AI-powered" marketing angle
  • ⚠️ Competitive gap: Competitors with AI might seem more advanced
  • ⚠️ Deferred value: AI features delayed to post-MVP (might be table stakes by then)

Risks

| Risk | Likelihood | Impact | Mitigation |
| --- | --- | --- | --- |
| Customers expect AI features in compliance tools | MEDIUM | MEDIUM | Position as "accurate and auditable" vs "AI-powered but uncertain"; add AI post-MVP based on feedback |
| Competitors release AI features first | MEDIUM | LOW | Fast-follower strategy; we can add AI in 4-6 weeks post-MVP; focus on workflow quality |
| Re-architecture needed for AI later | LOW | MEDIUM | Design with extensibility in mind; AI can be added as an optional enhancement layer |
| Market moves faster toward AI than expected | LOW | MEDIUM | Monitor the competitor landscape; have AI features in the backlog ready to prioritize |

Compliance Note

ACSC Essential 8 Impact:

  • Not directly applicable: AI vs non-AI is an implementation detail for Essential 8 compliance

Australian Data Residency:

  • Benefit of deferral: Avoids sending compliance data to AI APIs (Claude, GPT-4) that may process it outside Australia
  • Post-MVP consideration: When adding AI, ensure Australian API regions or on-premise models

Audit Trail:

  • Advantage: Rule-based logic provides a clear audit trail (input → deterministic output)
  • AI complexity: LLM outputs are harder to audit ("why did the AI suggest this risk?")

Implementation Notes

MVP Approach (Rule-Based):

Policy Analysis:

  • Manual upload and categorization by users
  • Template-based policy structure
  • Predefined compliance mappings
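As a rough illustration of what predefined compliance mappings could look like in the TypeScript layer, here is a minimal sketch; the category names and control IDs are hypothetical placeholders, not GetCimple's actual schema:

```typescript
// Illustrative: predefined mapping from policy categories to Essential 8
// control identifiers. Category keys and control IDs are invented examples.
type Essential8Control =
  | "application-control"
  | "patch-applications"
  | "restrict-admin-privileges"
  | "regular-backups";

const POLICY_CONTROL_MAP: Record<string, Essential8Control[]> = {
  "access-management": ["restrict-admin-privileges"],
  "backup-policy": ["regular-backups"],
  "software-management": ["application-control", "patch-applications"],
};

// Deterministic lookup: which controls does an uploaded policy map to?
function controlsForCategory(category: string): Essential8Control[] {
  return POLICY_CONTROL_MAP[category] ?? [];
}
```

Because the mapping is a static table, the same upload always yields the same controls, which is exactly the auditability property the decision relies on.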

Risk Identification:

  • Checkbox questionnaires based on Essential 8 controls
  • Rule-based maturity level calculation
  • Predefined risk templates
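The rule-based maturity calculation could be as simple as the following sketch; the thresholds and 0-3 level scheme are illustrative placeholders, not the ACSC's official maturity model logic:

```typescript
// Sketch: rule-based maturity calculation for one Essential 8 control,
// derived from checkbox questionnaire answers. Thresholds are illustrative.
interface Answer {
  controlId: string;    // which Essential 8 requirement this answer covers
  implemented: boolean; // checkbox state from the questionnaire
}

// Map the fraction of implemented requirements onto levels 0-3.
function maturityLevel(answers: Answer[]): number {
  if (answers.length === 0) return 0;
  const done = answers.filter((a) => a.implemented).length;
  const ratio = done / answers.length;
  if (ratio === 1) return 3;
  if (ratio >= 0.75) return 2;
  if (ratio >= 0.5) return 1;
  return 0;
}
```

The same answers always produce the same level, so the result can be explained to a board line by line.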

Report Generation:

  • Template-based reports with variable substitution
  • User fills in narrative sections
  • Automated compliance status aggregation
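Template-based variable substitution can be sketched in a few lines; the `{{placeholder}}` syntax and function name are assumptions for illustration, not an existing GetCimple API:

```typescript
// Minimal template substitution: {{placeholders}} are replaced from a
// context object; unknown placeholders are left intact so gaps stay visible
// for the user to fill in manually.
function renderTemplate(
  template: string,
  context: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    key in context ? context[key] : match
  );
}

const report = renderTemplate(
  "Compliance status for {{quarter}}: {{status}}.",
  { quarter: "Q1 2025", status: "2 of 8 controls at Maturity Level 2" }
);
```

Leaving unknown placeholders untouched (rather than substituting an empty string) makes missing narrative sections obvious during review.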

Question Flow:

  • Hardcoded decision trees
  • Category-based routing
  • Simple TypeScript logic
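A hardcoded decision tree for the question flow might look like this sketch; the node IDs and question text are invented examples:

```typescript
// Sketch: hardcoded decision tree for questionnaire routing.
// Each node names its follow-up question per answer; leaves omit both.
interface TreeNode {
  question: string;
  yes?: string; // next node id if answered yes
  no?: string;  // next node id if answered no
}

const QUESTION_TREE: Record<string, TreeNode> = {
  start: {
    question: "Do you maintain offline backups?",
    yes: "backupFreq",
    no: "backupRisk",
  },
  backupFreq: { question: "Are backups tested quarterly?" },
  backupRisk: { question: "Is a backup policy planned this quarter?" },
};

// Returns the next node id, or undefined when the flow ends.
function nextQuestion(nodeId: string, answeredYes: boolean): string | undefined {
  const node = QUESTION_TREE[nodeId];
  return answeredYes ? node?.yes : node?.no;
}
```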

Post-MVP AI Integration Plan:

  • Add AI as an enhancement layer over the rule-based foundation
  • AI provides suggestions; rule-based logic provides guarantees
  • Users can choose an AI-assisted or manual workflow
  • All AI outputs shown with confidence scores and sources
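The enhancement-layer pattern above could be sketched like this; the types and suggester signature are assumptions for illustration, and the point is only that the rule-based path always runs and is returned when AI is absent or fails:

```typescript
// Sketch: AI as an optional enhancement layer over rule-based logic.
// Rule-based output is the guaranteed baseline; AI adds suggestions.
interface RiskSuggestion {
  risk: string;
  confidence: number; // 0..1, surfaced to the user
  source: "rules" | "ai";
}

type AiSuggester = (input: string) => Promise<RiskSuggestion[]>;

async function identifyRisks(
  input: string,
  rules: (input: string) => RiskSuggestion[],
  ai?: AiSuggester
): Promise<RiskSuggestion[]> {
  const baseline = rules(input); // deterministic guarantee, always computed
  if (!ai) return baseline;      // manual / no-AI workflow
  try {
    const suggestions = await ai(input); // optional enhancement
    return [...baseline, ...suggestions];
  } catch {
    return baseline; // fallback: rules remain available if AI fails
  }
}
```

Structuring the call this way is what makes the blast radius "additive": deleting the `ai` argument leaves the MVP behavior unchanged.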

Integration Points:

  • None currently (AI deferred)
  • Future: Anthropic Claude API, LangGraph orchestration

Monitoring:

  • N/A for MVP (no AI to monitor)
  • Post-MVP: Track AI API costs, response times, accuracy metrics

Documentation Updates Needed:

  • ✅ Tech stack clarifies "no AI in MVP"
  • ⚠️ AI specs remain in /specs/post-mvp/ai/ for future reference
  • ✅ Implementation guide focuses on rule-based logic

Revisit

Revisit By: 2025-07-01 or after 50 active customers (whichever comes first)
Blast Radius: MEDIUM - AI features are additive, not fundamental architecture changes

Conditions for Revisit:

  • Customer feedback explicitly requests AI features (>30% of users ask)
  • Competitors release AI features that become table stakes
  • Team gains AI/ML expertise (hire or upskill)
  • Revenue supports AI API costs ($500-2000/month comfortable)
  • Australian AI API providers available (Claude, GPT-4 Sydney regions)

Next Review: 2025-05-01 (post-MVP customer feedback analysis)


Post-MVP AI Strategy

When adding AI features:

  1. Start small: One AI feature at a time (e.g., AI-assisted report drafting)
  2. User control: AI suggestions, user approval (never fully automated)
  3. Confidence scores: Show AI certainty, let users override
  4. Audit trail: Log all AI inputs/outputs for compliance
  5. Fallback: Rule-based logic always available if AI fails
  6. Testing: Extensive prompt testing, adversarial inputs, hallucination detection
  7. Cost monitoring: Track per-user AI API costs, optimize prompts
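The audit-trail point above (logging all AI inputs/outputs for compliance) could start as a structured record per AI call; the field names in this sketch are hypothetical:

```typescript
// Sketch: one audit-trail record per AI call, so "why did the AI suggest
// this?" is answerable later. All field names are illustrative.
interface AiAuditEntry {
  timestamp: string;  // ISO-8601, set at record time
  userId: string;
  feature: string;    // e.g. "report-drafting"
  prompt: string;
  response: string;
  confidence: number; // score shown to the user
  accepted: boolean;  // did the user approve the suggestion?
}

const auditLog: AiAuditEntry[] = [];

function recordAiCall(entry: Omit<AiAuditEntry, "timestamp">): AiAuditEntry {
  const full: AiAuditEntry = { ...entry, timestamp: new Date().toISOString() };
  auditLog.push(full); // in production this would persist to the database
  return full;
}
```

Capturing the `accepted` flag also gives the usage data point 7 needs for per-user cost and value tracking.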

Version History

| Version | Date | Author | Changes |
| --- | --- | --- | --- |
| 1.0 | 2025-10-20 | Claude | Initial ADR capturing scope decision for MVP |