
🔌 E8 Integration Testing Requirements

Overview

This document defines integration testing requirements for the E8 Assessment Framework's connections with other GetCimple systems. Each integration point must be validated before production deployment.

1. Unified Question Bank (UQB) Integration

Integration Requirements

Data Structure Compatibility

// Test: E8Question extends UnifiedQuestion
const REQUIRED_FIELDS = ['id', 'text', 'type', 'domain', 'metadata'] as const
const E8_SPECIFIC_FIELDS = ['e8Id', 'boardTranslation', 'maturityImpact'] as const

interface IntegrationTest_UQB_Structure {
  // Confirm E8Question carries every UnifiedQuestion property plus the E8 extras;
  // returns true only when all fields above are present
  validate_inheritance: (question: Record<string, unknown>) => boolean
}

Question ID Scheme

  • Format: E8_STRATEGY_NNN (e.g., E8_MFA_001)
  • Uniqueness: No ID collisions with existing UQB questions
  • Persistence: IDs remain stable across updates
  • Traceability: Each ID maps to specific ACSC control
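The ID-scheme rules above can be checked mechanically. A minimal sketch, assuming question IDs arrive as plain lists (the helper name and sample IDs are illustrative, not part of the UQB API):

```python
import re

# Expected format: E8_<STRATEGY>_<NNN>, e.g. E8_MFA_001
E8_ID_PATTERN = re.compile(r"^E8_[A-Z]+_\d{3}$")

def validate_question_ids(e8_ids, existing_uqb_ids):
    """Check ID format, collisions against existing UQB IDs, and duplicates."""
    bad_format = [i for i in e8_ids if not E8_ID_PATTERN.match(i)]
    collisions = set(e8_ids) & set(existing_uqb_ids)
    has_duplicates = len(e8_ids) != len(set(e8_ids))
    return {"bad_format": bad_format,
            "collisions": sorted(collisions),
            "has_duplicates": has_duplicates}

result = validate_question_ids(
    ["E8_MFA_001", "E8_PATCH_001", "E8_mfa_002"],   # lowercase ID is malformed
    ["UQB_GEN_001", "E8_PATCH_001"],                # pre-existing UQB IDs
)
```

A collision report like this would fail Test 1 below before registration is attempted.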

Test Cases

Test 1: Question Registration

  • Register all 40 E8 questions in UQB
  • Verify no conflicts or duplicates
  • Confirm metadata properly stored
  • Expected: 100% successful registration

Test 2: Question Retrieval

  • Query E8 questions by strategy
  • Filter by role assignment
  • Sort by completion status
  • Expected: < 100ms response time

Test 3: Answer Storage

  • Store answers with confidence scores
  • Link evidence to answers
  • Track answer history
  • Expected: Full audit trail maintained

Integration Points

| Component         | Integration Type | Test Priority |
|-------------------|------------------|---------------|
| Question Store    | Direct extension | CRITICAL      |
| Answer Repository | Shared database  | CRITICAL      |
| Metadata Engine   | Tagged queries   | HIGH          |
| Export Service    | Format mapping   | MEDIUM        |

2. Policy Template Integration

Crossover Mapping Requirements

Policy Section Identification

policy_mappings:
  application_control:
    source: "policy_templates/it_security.md"
    sections: ["4.1", "4.2", "4.3"]
    extraction_rules:
      - pattern: "application whitelist"
      - pattern: "approved software"
      - pattern: "executable control"
    confidence: 0.9

  patching:
    source: "policy_templates/patch_management.md"
    sections: ["2.1", "2.2", "3.0"]
    extraction_rules:
      - pattern: 'patch.*within (\d+) (hours|days)'
      - pattern: "critical updates"
      - pattern: "vulnerability management"
    confidence: 0.85
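A minimal sketch of how the extraction_rules above could be applied with `re.search`; the helper name and sample policy sentence are illustrative:

```python
import re

# Illustrative subset of the patching extraction_rules above
PATCHING_PATTERNS = [
    r"patch.*within (\d+) (hours|days)",
    r"critical updates",
    r"vulnerability management",
]

def extract_matches(policy_text, patterns):
    """Return each pattern that fires, plus any captured timeframe values."""
    hits = []
    for pat in patterns:
        m = re.search(pat, policy_text, re.IGNORECASE)
        if m:
            hits.append({"pattern": pat, "groups": m.groups()})
    return hits

sample = "All critical updates must be patched within 48 hours of release."
hits = extract_matches(sample, PATCHING_PATTERNS)
```

The captured `(\d+) (hours|days)` groups are what let a patching cadence be lifted directly into an E8 answer.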

Test Cases

Test 1: Policy Parsing

  • Input: IT Security Policy (45 pages)
  • Process: Extract relevant sections
  • Output: Mapped to 18 E8 questions
  • Success: 45% coverage achieved (18 of 40 questions)

Test 2: Confidence Scoring

  • Fresh policy (< 3 months): 0.9 confidence
  • Recent policy (3-6 months): 0.7 confidence
  • Older policy (6-12 months): 0.5 confidence
  • Outdated (> 12 months): 0.3 confidence
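The freshness tiers above reduce to a small lookup; a sketch, with boundary handling (whether month 6 counts as "recent" or "older") chosen here as an assumption:

```python
def policy_confidence(age_months):
    """Map policy age to pre-fill confidence, per the tiers above.
    Boundaries are inclusive on the older side (6 months -> 0.7)."""
    if age_months < 3:
        return 0.9   # fresh
    if age_months <= 6:
        return 0.7   # recent
    if age_months <= 12:
        return 0.5   # older
    return 0.3       # outdated

scores = [policy_confidence(m) for m in (1, 4, 9, 18)]
```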

Test 3: Conflict Resolution

  • Scenario: Policy says "monthly patches", procedure says "weekly"
  • Resolution: Use most restrictive/recent
  • Audit: Flag conflict for review
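The "most restrictive wins" rule can be sketched as a cadence comparison; the cadence table and function name are assumptions for illustration:

```python
# Days between cycles for common patch cadences (illustrative)
CADENCE_DAYS = {"daily": 1, "weekly": 7, "fortnightly": 14, "monthly": 30}

def resolve_patch_conflict(policy_value, procedure_value):
    """Pick the most restrictive (shortest) cadence and flag the conflict for audit."""
    if policy_value == procedure_value:
        return {"resolved": policy_value, "conflict": False}
    winner = min(policy_value, procedure_value, key=CADENCE_DAYS.__getitem__)
    return {"resolved": winner, "conflict": True}

# Scenario from Test 3: policy says monthly, procedure says weekly
outcome = resolve_patch_conflict("monthly", "weekly")
```

The `conflict` flag is what drives the audit-review requirement above.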

Data Extraction Rules

| Policy Type       | Questions Covered | Extraction Method | Confidence |
|-------------------|-------------------|-------------------|------------|
| IT Security       | 15-20             | Keyword + section | 0.85       |
| Access Control    | 8-10              | Role mapping      | 0.90       |
| Backup Policy     | 3-4               | Direct values     | 0.95       |
| Incident Response | 5-7               | Process mining    | 0.75       |

3. Insurance Questionnaire Integration

Questionnaire Mapping

Standard Insurance Providers

const insurerMappings = {
  chubb: {
    question_map: {
      MFA_ENABLED: 'E8_MFA_001',
      PATCH_FREQUENCY: 'E8_PATCH_001',
      BACKUP_TESTING: 'E8_BACKUP_003',
    },
    confidence: 0.85,
  },
  aig: {
    question_map: {
      MULTI_FACTOR: 'E8_MFA_001',
      UPDATE_CYCLE: 'E8_PATCH_002',
      RECOVERY_TEST: 'E8_BACKUP_003',
    },
    confidence: 0.8,
  },
}

Test Cases

Test 1: Provider Coverage

  • Validate mappings for top 5 cyber insurers
  • Each insurer should cover 25-30% of E8 questions
  • Combined (deduplicated) coverage should reach the 30% pre-fill target
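Coverage is just distinct mapped questions over the 40-question bank. A sketch using three-entry excerpts of the chubb/aig maps above (real maps would carry 9-13 entries, per the validation table below):

```python
TOTAL_E8_QUESTIONS = 40

def coverage_percent(question_map):
    """Per-insurer coverage: distinct mapped E8 questions over the full bank."""
    return 100 * len(set(question_map.values())) / TOTAL_E8_QUESTIONS

def combined_coverage(*question_maps):
    """Deduplicated coverage across several insurers' mappings."""
    covered = set()
    for m in question_maps:
        covered |= set(m.values())
    return 100 * len(covered) / TOTAL_E8_QUESTIONS

chubb = {"MFA_ENABLED": "E8_MFA_001", "PATCH_FREQUENCY": "E8_PATCH_001",
         "BACKUP_TESTING": "E8_BACKUP_003"}
aig = {"MULTI_FACTOR": "E8_MFA_001", "UPDATE_CYCLE": "E8_PATCH_002",
       "RECOVERY_TEST": "E8_BACKUP_003"}
```

Note that combined coverage grows only by the non-overlapping questions, which is why five insurers each at ~25-30% still need dedup-aware accounting.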

Test 2: Answer Format Conversion

  • Boolean → Yes/No/Partial
  • Dates → Compliance timeframes
  • Percentages → Maturity levels
  • Text → Structured options
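Two of the conversions above can be sketched directly; the percentage bands reuse the maturity thresholds from section 6, and the function names are illustrative:

```python
def bool_to_e8(value):
    """Boolean insurer answer -> the E8 Yes/No vocabulary."""
    return "Yes" if value else "No"

def percent_to_ml(pct):
    """Percentage answer -> maturity level, per the ACSC-aligned bands in section 6."""
    if pct >= 95:
        return "ML3"
    if pct >= 70:
        return "ML2"
    if pct >= 30:
        return "ML1"
    return "ML0"
```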

Test 3: Freshness Validation

  • Answers < 3 months: Use directly
  • Answers 3-6 months: Flag for review
  • Answers > 6 months: Require revalidation
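The freshness tiers above map to three handling actions; a sketch, with the boundary month assigned to the stricter tier as an assumption:

```python
def freshness_action(age_months):
    """Decide how an imported insurer answer is treated, per the tiers above."""
    if age_months < 3:
        return "use"               # use directly
    if age_months <= 6:
        return "flag_for_review"   # usable, but reviewed
    return "revalidate"            # too old to trust

actions = [freshness_action(m) for m in (1, 5, 9)]
```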

Integration Validation

| Insurer   | Questions Mapped | Coverage % | Confidence |
|-----------|------------------|------------|------------|
| Chubb     | 12               | 30%        | 0.85       |
| AIG       | 11               | 27.5%      | 0.80       |
| Zurich    | 10               | 25%        | 0.82       |
| Liberty   | 9                | 22.5%      | 0.78       |
| Berkshire | 13               | 32.5%      | 0.88       |

4. Prior Assessment Integration

Assessment Type Mappings

Framework Crosswalks

framework_mappings:
  iso_27001:
    controls:
      A.12.2: E8_PATCH_001 # Malware protection
      A.9.4: E8_MFA_001 # Access control
      A.12.3: E8_BACKUP_001 # Backup
    coverage: 20%
    confidence: 0.75

  nist_csf:
    controls:
      PR.AC-7: E8_ADMIN_001 # Admin privileges
      PR.IP-1: E8_PATCH_002 # Configuration
      PR.PT-3: E8_APP_001 # Least functionality
    coverage: 25%
    confidence: 0.80

  pci_dss:
    requirements:
      8.3: E8_MFA_002 # MFA requirement
      6.2: E8_PATCH_003 # Security patches
      2.3: E8_ADMIN_002 # Encrypted admin
    coverage: 15%
    confidence: 0.90

Test Cases

Test 1: Multi-Framework Merge

  • Input: ISO 27001 + NIST results
  • Process: Merge without duplication
  • Output: Combined 35% coverage
  • Conflicts: Higher confidence wins
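A minimal sketch of the merge rule, assuming imported answers are keyed by E8 question ID with a confidence and source attached (the record shape is an assumption, not the repository schema):

```python
def merge_assessments(*answer_sets):
    """Merge imported answers by E8 ID; on conflict, higher confidence wins."""
    merged = {}
    for answers in answer_sets:
        for e8_id, entry in answers.items():
            if e8_id not in merged or entry["confidence"] > merged[e8_id]["confidence"]:
                merged[e8_id] = entry
    return merged

iso = {"E8_PATCH_001": {"value": "Yes", "confidence": 0.75, "source": "iso_27001"}}
nist = {"E8_PATCH_001": {"value": "Partial", "confidence": 0.80, "source": "nist_csf"},
        "E8_ADMIN_001": {"value": "Yes", "confidence": 0.80, "source": "nist_csf"}}
merged = merge_assessments(iso, nist)
```

Here the NIST answer for E8_PATCH_001 wins on confidence (0.80 > 0.75), and no question is duplicated.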

Test 2: Age Degradation

  • Fresh (< 3 months): 100% confidence
  • Recent (3-6 months): 75% confidence
  • Aging (6-12 months): 50% confidence
  • Old (> 12 months): Review required

Test 3: Evidence Linking

  • Each imported answer links to source
  • Original evidence remains accessible
  • Expiry dates properly tracked
  • Update triggers identified

Crosswalk Validation

| Framework | E8 Coverage | Typical Confidence | Update Frequency |
|-----------|-------------|--------------------|------------------|
| ISO 27001 | 20%         | 0.75               | Annual           |
| NIST CSF  | 25%         | 0.80               | Annual           |
| PCI DSS   | 15%         | 0.90               | Annual           |
| SOC 2     | 18%         | 0.70               | Semi-annual      |
| HIPAA     | 10%         | 0.65               | Annual           |

5. Role-Based Routing Integration

Routing Engine Tests

Role Assignment Logic

interface RoutingTest {
  test_board_routing: () => {
    input: E8Question[]
    expected_board: 5
    expected_it: 35
    tolerance: 1 // ±1 question acceptable
  }

  test_delegation_chain: () => {
    // Chain: board → executive → it_manager → admin
    each_level_can_delegate_down: true
    escalation_up_allowed: true
  }

  test_visibility_rules: () => {
    board_sees: 'all_questions_readonly'
    it_sees: 'assigned_and_delegated'
    visibility_overrides: 'configurable'
  }
}

Test Cases

Test 1: Default Routing

  • Small org (no board): All to owner/IT
  • Medium org (board): 5/35 split
  • Large org (complex): Multi-level delegation
  • Success: 95% correct routing
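The default routing cases above can be sketched as a simple split; the `audience` field and function name are assumptions for illustration, not the routing engine's schema:

```python
def route_questions(questions, has_board):
    """Split E8 questions between board and IT per the default rules above."""
    if not has_board:
        # Small org: everything goes to the owner/IT role
        return {"board": [], "it": list(questions)}
    board = [q for q in questions if q["audience"] == "board"]
    it = [q for q in questions if q["audience"] != "board"]
    return {"board": board, "it": it}

# 40 questions, 5 tagged for the board (the medium-org 5/35 split)
questions = [{"id": f"Q{i:02d}", "audience": "board" if i < 5 else "it"}
             for i in range(40)]
routed = route_questions(questions, has_board=True)
```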

Test 2: Re-routing

  • Board delegates to CFO
  • CFO delegates to IT
  • IT escalates back to CFO
  • Audit trail maintained

Test 3: Batch Assignment

  • Group similar questions
  • Assign to same role
  • Maintain question order
  • Preserve dependencies

Routing Accuracy Metrics

| Org Type | Board Questions | IT Questions | Delegation Depth | Accuracy |
|----------|-----------------|--------------|------------------|----------|
| Small    | 0               | 40           | 1 level          | 100%     |
| Medium   | 5               | 35           | 2 levels         | 95%      |
| Large    | 3               | 37           | 3 levels         | 92%      |
| Complex  | 5               | 35           | 4 levels         | 90%      |

6. Maturity Calculation Integration

Calculation Engine Tests

Scoring Algorithm

def test_maturity_calculation():
    # Test ML0 determination
    assert calculate_ml(0.0) == "ML0"
    assert calculate_ml(0.29) == "ML0"

    # Test ML1 threshold
    assert calculate_ml(0.30) == "ML1"
    assert calculate_ml(0.69) == "ML1"

    # Test ML2 threshold
    assert calculate_ml(0.70) == "ML2"
    assert calculate_ml(0.94) == "ML2"

    # Test ML3 threshold
    assert calculate_ml(0.95) == "ML3"
    assert calculate_ml(1.0) == "ML3"

    # Test strategy weighting: critical strategies carry extra weight,
    # so strong scores on them lift the weighted score above a simple average
    critical = ["app_control", "patch_apps", "admin_priv"]
    assert weighted_score(critical) > simple_average(critical)
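The test above assumes a `calculate_ml` implementation; a minimal sketch consistent with the thresholds it asserts (and with the validation matrix below):

```python
def calculate_ml(score):
    """Implementation score (0.0-1.0) -> maturity level, per the asserted thresholds."""
    if score >= 0.95:
        return "ML3"
    if score >= 0.70:
        return "ML2"
    if score >= 0.30:
        return "ML1"
    return "ML0"
```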

Test Cases

Test 1: Individual Strategy Scoring

  • Each strategy scored independently
  • 8 separate ML ratings
  • Overall uses weighted average
  • Critical controls weighted higher

Test 2: Partial Implementation

  • 75% of ML2 = "ML1+"
  • 90% of ML3 = "ML2+"
  • Granular progress tracking
  • Clear advancement path

Test 3: Benchmark Comparison

  • Industry average calculation
  • Percentile ranking
  • Peer comparison
  • Regulatory minimum check

Maturity Validation Matrix

| Implementation % | Expected Level | ACSC Alignment | Board Display           |
|------------------|----------------|----------------|-------------------------|
| 0-29%            | ML0            | ✅ Correct     | "Not Implemented"       |
| 30-69%           | ML1            | ✅ Correct     | "Partially Implemented" |
| 70-94%           | ML2            | ✅ Correct     | "Mostly Implemented"    |
| 95-100%          | ML3            | ✅ Correct     | "Fully Implemented"     |

Integration Test Summary

Critical Path Tests

  1. ✅ UQB question registration and retrieval
  2. ✅ Policy extraction achieving 45% coverage
  3. ✅ Insurance mapping reaching 30% coverage
  4. ✅ Role routing 95% accuracy
  5. ✅ Maturity calculation ACSC alignment

Performance Requirements

  • Question loading: < 100ms
  • Pre-fill processing: < 5 seconds
  • Routing calculation: < 1 second
  • Maturity calculation: < 500ms
  • Report generation: < 3 seconds

Data Integrity Requirements

  • No data loss during integration
  • Full audit trail maintained
  • Confidence scores preserved
  • Evidence links retained
  • Version history tracked

Next Steps

  1. Execute integration tests in sequence
  2. Document any failures or gaps
  3. Update integration code as needed
  4. Retest until all pass
  5. Sign off for UI implementation