Question Router is an ai-agents Claude Skill built by ai-analyst-lab. Best for: data analysts and AI pipeline managers who want to automatically triage incoming questions and apply proportional analysis depth, avoiding overkill on simple lookups and ensuring deep investigations get full resources.
Classify analytical questions into 5 complexity levels (L1-L5) and route to appropriate response paths with specific time estimates.
Classify incoming user questions into complexity levels (L1-L5) and route them to the appropriate response path. This replaces the old "skip-step" logic with a structured classification that adapts the workflow depth to the question's actual needs.
Pattern: User wants a specific number or fact from the data. Examples:
Response path: Query the data directly. Return the answer with source citation (table, column, filter). No agents needed.
Time: ~30 seconds
Pattern: User wants to compare two things or see a breakdown. Examples:
Response path: Query + quick chart. Use chart_helpers directly.
Apply Visualization Patterns skill. No full pipeline.
Time: ~2 minutes
Pattern: User has a specific analytical question requiring multiple steps. Examples:
Response path: Subset of the pipeline — Frame → Explore → Analyze → Validate → Present findings. Skip storyboard/deck unless requested. Use 3-5 agents.
Time: ~10 minutes
Pattern: User needs root cause analysis, opportunity sizing, or experiment design. Examples:
Response path: Full pipeline minus deck. Frame → Hypothesize → Explore → Analyze → Root Cause → Validate → Size → Present findings. Use 6-10 agents.
Time: ~20 minutes
Pattern: User wants a complete analysis with a polished slide deck. Examples:
/run-pipeline
Response path: Complete 18-step pipeline. All agents, full storyboard, charts, narrative, and Marp deck.
Time: ~30-45 minutes
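The five levels above can be collected into a small lookup table. A minimal sketch; the structure and field names here are illustrative, not the skill's actual configuration:

```python
# Level -> response path summary; values paraphrase the level descriptions above.
# "agents" is a (min, max) range where the docs give one, 0 where none are needed.
ROUTES = {
    "L1": {"path": "direct query with source citation", "agents": 0, "est_minutes": 0.5},
    "L2": {"path": "query plus quick chart (chart_helpers)", "agents": 0, "est_minutes": 2},
    "L3": {"path": "Frame → Explore → Analyze → Validate → Present", "agents": (3, 5), "est_minutes": 10},
    "L4": {"path": "full pipeline minus deck", "agents": (6, 10), "est_minutes": 20},
    "L5": {"path": "complete 18-step pipeline with Marp deck", "agents": "all", "est_minutes": (30, 45)},
}
```

A table like this keeps the time estimates and agent counts in one place, so the briefing message for L3+ can be generated from it rather than hard-coded.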
Enrichment steps — never block routing. If any sub-step fails, skip it silently.
Feedback check — The Feedback Capture skill runs BEFORE this router. By the time a message reaches here, corrections/learnings are already captured. If the message was purely feedback (no analytical question), it was handled upstream — skip routing.
Entity disambiguation — If the entity index is loaded (from bootstrap), call resolve_entity(query_text, entity_index) from helpers/entity_resolver.py. If there are matches, use format_disambiguation(matches) and set {{RESOLVED_ENTITIES}} for downstream agents. Otherwise, leave {{RESOLVED_ENTITIES}} empty.
Corrections check — Read .knowledge/corrections/index.yaml. If total_corrections > 0 for the active dataset, set {{CORRECTION_COUNT}} so analysis agents check the correction log before writing SQL (e.g., known join pitfalls, filter requirements). If total_corrections is 0, set {{CORRECTION_COUNT}} to 0.
Archaeology note — The Query Archaeology skill provides SQL pattern context (prior queries, reusable CTEs) to analysis agents when available. No action needed here — just acknowledge it flows downstream automatically.
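The pre-flight enrichment can be sketched as one function. In this sketch, `resolve_entity` and `format_disambiguation` are the helpers from the skill's `helpers/entity_resolver.py` (referenced, not defined, here); the wrapper logic and the PyYAML usage are assumptions:

```python
def run_preflight(query_text, entity_index,
                  corrections_path=".knowledge/corrections/index.yaml"):
    """Enrichment only: each sub-step is wrapped so a failure never blocks routing."""
    context = {"RESOLVED_ENTITIES": "", "CORRECTION_COUNT": 0}

    # Entity disambiguation: only runs if bootstrap loaded an entity index.
    # In the real skill, resolve_entity and format_disambiguation come from
    # helpers/entity_resolver.py.
    try:
        if entity_index:
            matches = resolve_entity(query_text, entity_index)
            if matches:
                context["RESOLVED_ENTITIES"] = format_disambiguation(matches)
    except Exception:
        pass  # skip silently, per the enrichment rule above

    # Corrections check: surface known SQL pitfalls to analysis agents.
    try:
        import yaml  # PyYAML; treated as optional so a missing dep also skips silently
        with open(corrections_path) as f:
            index = yaml.safe_load(f) or {}
        context["CORRECTION_COUNT"] = index.get("total_corrections", 0)
    except Exception:
        pass  # a missing or unreadable index counts as zero corrections

    return context
```

The blanket `except Exception: pass` is deliberate here: these steps enrich context but must never block routing, so any failure degrades to the empty defaults.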
After pre-flight completes, proceed to Step 1.
Extract signals from the question and score each complexity level:
| Signal | L1 | L2 | L3 | L4 | L5 |
|--------|----|----|----|----|-----|
| Asks for a single number | +3 | | | | |
| Uses "compare" or "by {dimension}" | | +3 | | | |
| Uses "why", "investigate", "root cause" | | | | +3 | |
| Uses "analyze", "what's happening with" | | | +3 | | |
| Mentions "deck", "presentation", "slides" | | | | | +3 |
| Uses /run-pipeline | | | | | +5 |
| Mentions sizing, opportunity, impact | | | | +2 | |
| Mentions experiment, A/B test | | | | +2 | |
| Question has multiple sub-questions | | | +2 | +1 | |
| "Quick" or "just" qualifier | +2 | +1 | | | |
Assign the level with the highest score. Ties break toward the lower level (prefer faster response).
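The scoring and tie-break rules can be sketched as follows. The weights mirror the table above; the keyword regexes standing in for each signal are assumptions, not the skill's actual detection logic:

```python
import re

# (signal pattern, {level: points}); weights follow the scoring table above.
# The phrasing patterns themselves are illustrative assumptions.
SIGNALS = [
    (r"\b(how many|what is the|count of)\b", {1: 3}),   # asks for a single number
    (r"\b(compare|by \w+)\b", {2: 3}),
    (r"\b(analyze|what's happening with)\b", {3: 3}),
    (r"\b(why|investigate|root cause)\b", {4: 3}),
    (r"\b(deck|presentation|slides)\b", {5: 3}),
    (r"/run-pipeline", {5: 5}),
    (r"\b(sizing|opportunity|impact)\b", {4: 2}),
    (r"\b(experiment|a/b test)\b", {4: 2}),
    (r"\b(quick|just)\b", {1: 2, 2: 1}),
]

def classify(question: str) -> int:
    """Score each level L1-L5; ties break toward the lower (faster) level."""
    scores = {level: 0 for level in range(1, 6)}
    q = question.lower()
    for pattern, weights in SIGNALS:
        if re.search(pattern, q):
            for level, points in weights.items():
                scores[level] += points
    # Sort key: highest score wins; for equal scores, -level prefers the lower level.
    return max(scores, key=lambda lvl: (scores[lvl], -lvl))
```

Note that a question with no matching signals scores zero everywhere and falls through to L1, which is consistent with the tie-break rule of preferring the faster response.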
If .knowledge/user/profile.md exists, read the user's preferences:
For L1-L2: Execute immediately. No confirmation needed.
For L3-L5: Brief the user on the plan:
I'd classify this as a **[Level] — [Label]**. Here's my plan:
1. [Step summary]
2. [Step summary]
...
Estimated time: ~[X] minutes. Want me to proceed, or adjust the scope?
The user can:
When routed to L3+, the Question Router hands off to the appropriate agents by setting the entry point in the Default Workflow:
| Level | Entry Point | Exit Point |
|-------|-------------|------------|
| L3 | Step 1 (Frame) | Step 7 (Validate) — present findings inline |
| L4 | Step 1 (Frame) | Step 8 (Size) — present findings inline |
| L5 | Step 1 (Frame) | Step 18 (Close the Loop) — full deck |
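One way to encode the handoff table (the step numbers come from the table above; the function and variable names are illustrative):

```python
# level -> (entry step, exit step) in the Default Workflow
ENTRY_EXIT = {
    3: (1, 7),   # Frame through Validate, findings presented inline
    4: (1, 8),   # Frame through Size, findings presented inline
    5: (1, 18),  # Frame through Close the Loop, full deck
}

def pipeline_steps(level: int) -> list[int]:
    """Return the inclusive range of workflow steps to run for L3+."""
    entry, exit_step = ENTRY_EXIT[level]
    return list(range(entry, exit_step + 1))
```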
Before classifying, check whether the question references a dataset other than the currently active one.
Read .knowledge/datasets/ to get all known dataset IDs and display names. If the question references a different dataset, ask the user to confirm switching (/switch-dataset {id}). This prevents accidentally running analysis on the wrong dataset.
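A sketch of this guard, assuming dataset IDs can be listed from entries under `.knowledge/datasets/` and matched by substring (both assumptions about how the skill stores and matches them):

```python
from pathlib import Path

def known_dataset_ids(root: str = ".knowledge/datasets") -> list[str]:
    """List dataset IDs from the knowledge directory (empty if it doesn't exist)."""
    path = Path(root)
    return [p.stem for p in path.iterdir()] if path.is_dir() else []

def referenced_foreign_dataset(question: str, active_id: str, known_ids: list[str]):
    """Return a non-active dataset ID the question mentions, else None."""
    q = question.lower()
    for dataset_id in known_ids:
        if dataset_id != active_id and dataset_id.lower() in q:
            return dataset_id  # caller should prompt: /switch-dataset {id}
    return None
```

If this returns a dataset ID, the router pauses to confirm the switch before classifying; if it returns `None`, classification proceeds against the active dataset.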
After delivering results at any level, offer 2-3 relevant next actions based on what was just completed. Match suggestions to the level and findings.
After L1/L2 results:
After L3 findings:
After L4 investigation:
After L5 deck delivery:
Suggestions can include follow-up commands such as /archive and /export. Always tailor suggestions to the actual findings — reference specific metrics, segments, or anomalies discovered. Generic suggestions ("want to know more?") are not helpful.
Install with /plugin install question-router@ai-analyst-lab. Requires Claude Code CLI.