# AI & Decision Support Use Cases
Advanced AI use cases for decision support, quality assurance, and intelligent automation.
## Use Case: US-AI-004 - AI Draft Self-Review
| Field | Description |
|---|---|
| ID | US-AI-004 |
| Title | AI Draft Self-Review (Quality Assurance) |
| Priority | P2 (Phase 1) |
| User Story | As a system, I want to validate my own draft replies before showing them to staff so that only high-quality drafts reach human reviewers. |
| Input | Generated email draft + original customer enquiry. |
| Logic | 1. AI Proofreader Node: LLM reviews draft for accuracy, tone, professionalism 2. Scoring: Returns `sendable: true/false` + feedback 3. Conditional: If false, loop back to draft generation with feedback 4. Max Retries: After 3 attempts, escalate to US-KBN-013 (Manual Queue) |
| Output | Validated draft with confidence score OR escalation flag. |
| Technical | LangGraph conditional edge based on the `sendable` state (see the sketch below). |
**Industry Best Practice:** Production email automation systems use AI self-review so that only high-quality drafts reach human reviewers.
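The review loop maps naturally onto LangGraph's `add_conditional_edges`. A minimal sketch, assuming a state with the `sendable`, `feedback`, and `trials` fields from the table above; the node bodies are placeholders standing in for the real LLM calls:

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph


class ReviewState(TypedDict):
    enquiry: str    # original customer enquiry
    draft: str      # current draft reply
    feedback: str   # proofreader feedback fed back into generation
    sendable: bool  # proofreader verdict
    trials: int     # retry counter (shared with US-AI-005)


def generate_draft(state: ReviewState) -> dict:
    # Placeholder: the real node calls the drafting LLM, conditioned on
    # any proofreader feedback from the previous attempt.
    return {"draft": f"Reply to: {state['enquiry']}", "trials": state["trials"] + 1}


def proofread(state: ReviewState) -> dict:
    # Placeholder: the real node asks an LLM to review the draft for
    # accuracy, tone, and professionalism, returning a verdict + feedback.
    return {"sendable": True, "feedback": ""}


def route_after_review(state: ReviewState) -> str:
    if state["sendable"]:
        return "accept"
    if state["trials"] >= 3:
        return "escalate"   # hand off to the manual queue (US-KBN-013)
    return "retry"          # loop back to generation with feedback


graph = StateGraph(ReviewState)
graph.add_node("generate_draft", generate_draft)
graph.add_node("proofread", proofread)
graph.set_entry_point("generate_draft")
graph.add_edge("generate_draft", "proofread")
graph.add_conditional_edges(
    "proofread",
    route_after_review,
    {"accept": END, "retry": "generate_draft", "escalate": END},
)
app = graph.compile()
```

The `trials` counter is shared with US-AI-005 below, so a single escalation path covers both quality-loop exhaustion and generic retry exhaustion.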
## Use Case: US-AI-005 - Retry Logic & Fallback
| Field | Description |
|---|---|
| ID | US-AI-005 |
| Title | Retry Logic & Fallback to Human |
| Priority | P2 (Phase 1) |
| User Story | As a system, I want to retry failed AI operations with a max limit so that I don't get stuck in infinite loops. |
| Input | Failed AI operation (draft generation, classification, etc.). |
| Logic | 1. Counter: Track retry attempts in state (`trials`) 2. Max Attempts: Default = 3 3. Retry: Re-run AI node with improved prompt/context 4. Fallback: After 3 failures → route to US-KBN-013 (Manual Queue) |
| Output | Success OR human escalation flag. |
| Technical | State variable `trials: int` in LangGraph, conditional check (see the sketch below). |
**Prevents:** Infinite loops when the AI lacks the information it needs.
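Stripped of the graph wiring, the fallback reduces to a single routing function over the state. A minimal sketch; the `succeeded` flag and the route labels are illustrative assumptions, while `trials` and the limit of 3 come from the table above:

```python
MAX_ATTEMPTS = 3  # default retry limit from the use case


def route_after_failure(state: dict) -> str:
    """Conditional-edge function: continue, retry, or escalate to a human."""
    if state.get("succeeded"):
        return "continue"          # operation recovered, proceed normally
    if state.get("trials", 0) >= MAX_ATTEMPTS:
        return "manual_queue"      # US-KBN-013: human takes over
    return "retry"                 # re-run the node with improved prompt/context
```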
## Use Case: US-AI-006 - Draft Approval Analytics
| Field | Description |
|---|---|
| ID | US-AI-006 |
| Title | Draft Approval Analytics Dashboard |
| Priority | P2 (Phase 1) |
| User Story | As an owner, I want to see which AI drafts are approved vs edited vs rejected so that I can measure AI accuracy and identify training needs. |
| Input | Activity Stream events (DraftApproved, DraftEdited, DraftRejected). |
| Logic | 1. Query: Aggregate approval events by staff, product, customer 2. Metrics: Approval rate, avg edit count, rejection reasons 3. Visualization: Bar charts, trends over time |
| Output | Dashboard showing AI draft performance. |
| Technical | SQL queries on Activity Stream + React chart library (see the example query below). |
**Business Value:** Identify which products and customers need better AI training.
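A sketch of the aggregation behind the approval-rate metric; the `activity_stream` table and its column names are assumptions for illustration, not the project's actual schema:

```python
# Illustrative only: `activity_stream`, `event_type`, and `staff_id` are
# assumed names. Shows the kind of aggregation the dashboard would run to
# compute per-staff approval/edit/rejection counts.
import sqlite3

APPROVAL_METRICS_SQL = """
SELECT
    staff_id,
    COUNT(*)                                            AS total_drafts,
    SUM(event_type = 'DraftApproved') * 1.0 / COUNT(*)  AS approval_rate,
    SUM(event_type = 'DraftEdited')                     AS edited,
    SUM(event_type = 'DraftRejected')                   AS rejected
FROM activity_stream
WHERE event_type IN ('DraftApproved', 'DraftEdited', 'DraftRejected')
GROUP BY staff_id
ORDER BY approval_rate DESC;
"""


def approval_metrics(conn: sqlite3.Connection) -> list[tuple]:
    """Return per-staff draft outcome counts for the dashboard."""
    return conn.execute(APPROVAL_METRICS_SQL).fetchall()
```

The React side would simply render these rows as charts; keeping the aggregation in SQL keeps dashboard queries cheap as the Activity Stream grows.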