Query Execution
Execute queries across multiple LLM providers to collect citation data and analyze which domains get referenced in AI-generated responses.
Overview
The Execute page is where you run queries against LLM providers (Claude, GPT-4, Gemini, Perplexity) and collect their responses with extracted citations.
Purpose:
- Test how LLMs respond to consumer queries
- Extract citation patterns
- Compare provider responses
- Build citation datasets for analysis
Accessing Execute Page
Navigate to the Execute page:
Dashboard → Execute
URL: /dashboard/execute
Or load from Library:
Library → Query Set → Execute button
URL: /dashboard/execute?setId=abc-123
API Key Status
Top section shows which LLM providers are configured.
Status Indicators
✅ Configured:
- API key detected in environment
- Provider ready to use
- Selectable for execution
⚠️ Not Configured:
- API key missing
- Provider unavailable
- Cannot select for execution
Example Display
API Key Status
Claude (Anthropic) ✅ Configured
GPT-4 (OpenAI) ✅ Configured
Gemini (Google AI) ⚠️ Not Configured
Perplexity ✅ Configured
Adding Missing Providers
If provider shows ⚠️ Not Configured:
- Get API key from provider (see Configuration Guide)
- Add to `.env.local`: `ANTHROPIC_API_KEY=sk-ant-...`
- Restart dev server: `Ctrl+C`, then `npm run dev`
- Refresh Execute page
- Status should change to ✅
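The status indicators above can be derived by checking which environment variables are set. A minimal sketch follows; `ANTHROPIC_API_KEY` appears in this guide, but the other three variable names are assumptions and may differ from your actual configuration.

```typescript
// Map each provider label to the env var that holds its API key.
// Only ANTHROPIC_API_KEY is confirmed by this guide; the rest are assumed names.
const PROVIDER_ENV_KEYS: Record<string, string> = {
  "Claude (Anthropic)": "ANTHROPIC_API_KEY",
  "GPT-4 (OpenAI)": "OPENAI_API_KEY",        // assumed name
  "Gemini (Google AI)": "GOOGLE_AI_API_KEY", // assumed name
  "Perplexity": "PERPLEXITY_API_KEY",        // assumed name
};

function providerStatus(
  env: Record<string, string | undefined>,
): Record<string, string> {
  const status: Record<string, string> = {};
  for (const [provider, key] of Object.entries(PROVIDER_ENV_KEYS)) {
    // A key counts as configured only if it is present and non-empty
    status[provider] = env[key]?.trim() ? "Configured" : "Not Configured";
  }
  return status;
}
```

In the app this would be called with `process.env`; passing the env as a parameter keeps the sketch testable.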
Execution Modes
Choose between single query or batch execution.
Single Query Mode
When to Use:
- Testing individual queries
- Quick citation checks
- Exploring LLM responses
- Learning how providers respond
Limitations:
- One query at a time
- Manual input required
- No systematic coverage
Batch Query Mode
When to Use:
- Systematic testing (20-100 queries)
- Loading from library
- Building citation datasets
- Comprehensive analysis
Benefits:
- Parallel execution (fast)
- Consistent testing
- Statistical significance
- Automated workflow
Switching Modes
Click mode buttons:
[Single Query] [Batch Queries]
Note: Mode locked to Batch when loading from library (?setId=...)
Single Query Execution
Input Query
Field: Text input box
Placeholder: e.g., How do I fix a leaking faucet?
Best Practices:
- Use natural, conversational language
- Focus on problem-solving queries
- Ask complete questions
- 5-20 words optimal
Examples:
✅ "What are the best running shoes for beginners?"
✅ "How do I unclog a bathroom sink drain?"
✅ "Should I hire a plumber or DIY my toilet repair?"
❌ "running shoes" (too vague)
❌ "plumbing" (not a question)
Select Providers
Checkboxes:
- ☑️ Claude (Anthropic)
- ☐ GPT-4 (OpenAI)
- ☐ Gemini (Google AI)
- ☐ Perplexity
Requirements: Select at least one provider
Recommendations:
- Single provider: Faster, cheaper, test one at a time
- Multiple providers: Compare responses, identify patterns
- All providers: Comprehensive analysis (highest cost)
Cost Impact:
- 1 provider: ~$0.003 per query
- 2 providers: ~$0.006 per query
- 4 providers: ~$0.012 per query
Execute
Click Execute Query button.
Process:
- Query sent to selected providers
- Real-time progress shown
- Responses returned with citations
- Results displayed below
Duration: 2-5 seconds per provider
Review Results
Results displayed in expandable sections:
✅ Execution Complete
Completed: 1/1
Total Responses: 2 (Claude, GPT-4)
Total Citations: 8
Total Cost: $0.006
Execution Time: 3.2 seconds
[View Analytics →]
Claude Response (4 citations)
━━━━━━━━━━━━━━━━━━━━━━━━━
For beginners, I recommend starting with neutral running shoes...
[Citations: runnersworld.com, rei.com, ...]
GPT-4 Response (4 citations)
━━━━━━━━━━━━━━━━━━━━━━━━━
The best running shoes for beginners depend on your foot type...
[Citations: fleetfeet.com, nike.com, ...]
Information Shown:
- Response text (truncated if long)
- Citation count per provider
- Extracted URLs with domains
- Cost per response
- Latency per provider
Batch Query Execution
Input Queries
Field: Multi-line text area
Format: One query per line
Example:
What are the best running shoes for beginners?
How do I fix a leaking faucet?
Should I hire a plumber or DIY?
What tools do I need for hanging drywall?
Requirements:
- Minimum: 2 queries
- Maximum: 100 queries
- One per line
- No empty lines
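The requirements above amount to a simple validation step. Here is a minimal sketch (the actual page's validation code is not shown in this guide and may behave differently, e.g. around blank lines):

```typescript
// Parse batch input per the rules above: one query per line,
// blank lines dropped, and a final count of 2-100 queries.
function parseBatchInput(raw: string): string[] {
  const queries = raw
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0); // enforce the "no empty lines" rule
  if (queries.length < 2 || queries.length > 100) {
    throw new Error(`Batch needs 2-100 queries, got ${queries.length}`);
  }
  return queries;
}
```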
Loading from Library
When accessing via ?setId=...:
Automatic:
- Queries pre-loaded in text area
- Mode locked to Batch
- Query count displayed in banner
- Query set name shown
Banner Display:
📘 Query Set Loaded: Home Improvement - Plumbing DIY
20 queries ready for execution
[Back to Library]
Editing:
- Can edit queries before executing
- Can add/remove queries
- Changes don't affect saved set
Select Providers
Same as single query mode:
Strategy 1: One provider for all queries
- Fastest execution
- Lowest cost
- Good for initial testing
Strategy 2: Multiple providers for comparison
- Compare citation patterns
- Higher confidence in findings
- 2-4x cost increase
Example:
- 20 queries × 1 provider = 20 requests (~$0.06)
- 20 queries × 4 providers = 80 requests (~$0.24)
Execute Batch
Click Execute Batch Queries button.
Process:
- Parse queries (split by newline)
- Validate count (2-100)
- Execute in parallel batches
- Update progress in real-time
- Collect all responses
- Extract citations from each
- Save to database
- Display summary
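The "execute in parallel batches" step can be sketched as chunked `Promise.all` calls. This is an illustration only: the real page calls the provider APIs, while here `runQuery` is a stand-in you supply, and the concurrency of 5 is an assumption, not a documented value.

```typescript
// Run queries in parallel chunks: each chunk executes concurrently,
// chunks run sequentially to limit load on provider APIs.
async function executeInBatches<T>(
  queries: string[],
  runQuery: (q: string) => Promise<T>,
  concurrency = 5, // assumed chunk size, not documented
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < queries.length; i += concurrency) {
    const chunk = queries.slice(i, i + concurrency);
    results.push(...(await Promise.all(chunk.map(runQuery))));
  }
  return results;
}
```

Results come back in input order, which makes it easy to pair each response with its query for citation extraction.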
Duration:
- 20 queries, 1 provider: ~30-45 seconds
- 20 queries, 4 providers: ~60-90 seconds
- 100 queries, 1 provider: ~2-3 minutes
Real-Time Progress
Progress indicator shows:
Executing Queries... 7/20
Query 7: "What tools do I need for replacing a toilet?"
Provider: Claude ✓ | GPT-4 ⏳ | Gemini - | Perplexity -
✓ Complete ⏳ In Progress - Not Selected
Updates:
- Current query number
- Query text being processed
- Provider-level progress
- Completion status
Batch Results Summary
After completion:
✅ Batch Execution Complete
Completed Queries: 20/20
Total Responses: 80 (20 queries × 4 providers)
Total Citations: 312
Total Cost: $0.24
Execution Time: 87.3 seconds
Average Citations per Response: 3.9
Top Cited Domains:
1. homedepot.com - 23 citations
2. lowes.com - 19 citations
3. thisoldhouse.com - 17 citations
4. youtube.com - 14 citations
5. familyhandyman.com - 12 citations
[View Full Analytics →]
Actions:
- Click domain name → Domain detail page
- Click View Full Analytics → Analytics dashboard
- Stay on page to execute more queries
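The "Top Cited Domains" ranking in the summary is, at heart, a frequency count over the extracted citation URLs, grouped by hostname. A minimal sketch:

```typescript
// Count citations per domain and return the top N, most-cited first.
function topDomains(urls: string[], limit = 5): [string, number][] {
  const counts = new Map<string, number>();
  for (const u of urls) {
    try {
      // Normalize: strip a leading "www." so www.lowes.com and lowes.com merge
      const host = new URL(u).hostname.replace(/^www\./, "");
      counts.set(host, (counts.get(host) ?? 0) + 1);
    } catch {
      // Skip strings that are not valid URLs
    }
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, limit);
}
```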
Cost Estimation
Live Cost Calculator
Before executing, see estimated cost:
Cost Estimate
Selected Providers: 2 (Claude, GPT-4)
Query Count: 20
Estimated Cost: ~$0.26
Claude: ~$0.003 per query = $0.06
GPT-4: ~$0.010 per query = $0.20
Total: $0.26
Calculation:
- Provider base cost × query count
- Shown before execution
- Updated when changing provider selection
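The calculation above (provider base cost × query count, summed over the selection) can be sketched as follows. The Claude and GPT-4 rates come from this guide; the Gemini and Perplexity figures are placeholders, not documented rates.

```typescript
// Approximate per-query rates. Claude (~$0.003) and GPT-4 (~$0.010) are
// from this guide; gemini and perplexity values are assumptions.
const PER_QUERY_COST: Record<string, number> = {
  claude: 0.003,
  gpt4: 0.01,
  gemini: 0.0,       // free-tier assumption
  perplexity: 0.003, // placeholder
};

function estimateCost(queryCount: number, providers: string[]): number {
  // Sum the selected providers' per-query rates, then scale by query count
  const perQuery = providers.reduce(
    (sum, p) => sum + (PER_QUERY_COST[p] ?? 0),
    0,
  );
  return Math.round(perQuery * queryCount * 1000) / 1000;
}
```

For example, 20 queries with Claude and GPT-4 works out to 20 × ($0.003 + $0.010) = $0.26, matching the estimate shown above.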
Actual Cost
After execution, actual cost shown:
Total Cost: $0.24 (saved $0.02 vs estimate)
Variations:
- Shorter responses = lower cost
- Longer responses = higher cost
- Usually within 10% of estimate
Provider Selection Strategy
By Use Case
Exploratory Research:
- Use: Gemini (free tier) or Perplexity (cheapest)
- Why: Low cost, good enough for discovery
Production Analysis:
- Use: Claude or GPT-4
- Why: Higher citation quality, better reasoning
Competitive Analysis:
- Use: All providers
- Why: See which LLMs cite competitors
Content Validation:
- Use: Claude + GPT-4
- Why: Market leaders, representative sample
By Budget
$0.01 per query:
- Gemini only (free)
- Or Perplexity only
$0.05 per query:
- Claude only
- Or Perplexity + GPT-4
$0.10+ per query:
- All 4 providers
- Comprehensive analysis
By Speed
Fastest (30-45 sec for 20 queries):
- 1 provider only
Medium (60-90 sec):
- 2 providers
Slowest (2-3 min):
- All 4 providers
Execution Best Practices
Query Quality
✅ Do:
- Use problem-focused queries
- Ask complete questions
- Use natural language
- Focus on consumer intent
❌ Don't:
- Use brand-only queries
- Ask for specific URLs
- Use marketing language
- Test non-questions
Batch Sizing
Recommended Batches:
- 20 queries: Quick testing, single focus
- 50 queries: Standard analysis
- 100 queries: Comprehensive research
Avoid:
- 1-5 queries: Too small for patterns
- 150+ queries: Split into multiple batches
Provider Coverage
Initial Test:
- Run 20 queries with Claude only
- Review results
- Decide if need more providers
Full Analysis:
- Run same 20 queries with all providers
- Compare citation patterns
- Identify provider differences
Timing
Best Times to Execute:
- ✅ Off-peak hours (lower latency)
- ✅ After generating new queries
- ✅ Weekly routine (Mondays)
Avoid:
- ❌ Peak business hours (slower)
- ❌ Immediately before meetings (might timeout)
- ❌ During provider maintenance
Handling Errors
Single Query Errors
Provider Timeout:
- Error: "Request timed out"
- Solution: Retry with same query
- Usually resolves on second attempt
Invalid API Key:
- Error: "Authentication failed"
- Solution: Check `.env.local`, restart server
- Verify key on provider dashboard
Rate Limit:
- Error: "Rate limit exceeded"
- Solution: Wait 1 minute, retry
- Or switch to different provider
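The "wait, then retry" advice for timeouts and rate limits is commonly implemented as a retry wrapper with exponential backoff. A hedged sketch (not the app's actual error handling):

```typescript
// Retry a failing async call with exponential backoff: wait 1s, 2s, 4s...
// between attempts, rethrowing the last error if all attempts fail.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```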
Batch Execution Errors
Partial Success:
- Some queries succeed, some fail
- Results saved for successful ones
- Error summary shown for failures
- Retry failed queries individually
Complete Failure:
- No queries executed
- Check API keys
- Check network connection
- Review Vercel logs (if deployed)
Example Error Display:
⚠️ Batch Partially Complete
Completed: 17/20 queries
Failed: 3 queries
Failed queries:
- Query 5: "..." (Claude timeout)
- Query 12: "..." (GPT-4 rate limit)
- Query 18: "..." (Network error)
[Retry Failed Queries]
After Execution
View Analytics
Click View Analytics or View Full Analytics button.
Redirects to: /dashboard/analytics
What You'll See:
- Updated total query count
- New citations added
- Top domains refreshed
- Provider metrics updated
Execute More Queries
From Same Page:
- Clear query input (or load new set)
- Change provider selection if desired
- Execute again
From Library:
- Return to Library
- Load different query set
- Execute new set
Compare Providers
After executing same queries with different providers:
- Go to Analytics
- Click Providers tab
- View side-by-side comparison
- See which provider cites which domains
Advanced Execution Techniques
A/B Testing Queries
Scenario: Test two query phrasings
Setup:
Batch 1:
- "What are the best running shoes?"
- "Which running shoes should I buy?"
- "Top running shoes for beginners?"
Batch 2:
- "How do I choose running shoes?"
- "What makes good running shoes?"
- "Running shoe buying guide?"
Execute: Both batches with same provider
Compare: Citation patterns differ by phrasing
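One simple way to quantify how much the citation patterns differ between the two batches is Jaccard similarity over the sets of cited domains (1 = identical sets, 0 = no overlap). This metric is an illustration, not a feature of the Analytics page:

```typescript
// Jaccard similarity between two batches' cited domains:
// |intersection| / |union|, treating each domain once per batch.
function citationOverlap(domainsA: string[], domainsB: string[]): number {
  const a = new Set(domainsA);
  const b = new Set(domainsB);
  const intersection = [...a].filter((d) => b.has(d)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : intersection / union;
}
```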
Progressive Provider Testing
Week 1: Execute 50 queries with Claude
Week 2: Same 50 queries with GPT-4
Week 3: Same 50 queries with Gemini
Week 4: Same 50 queries with Perplexity
Analysis: Track citation consistency across providers
Filtered Execution from Library
From Library Detail Page:
- Filter queries (e.g., Persona: Consumer only)
- Click "Execute Filtered Queries"
- Only filtered subset executes
- Compare to other persona later
Troubleshooting
Execution Hangs
Symptom: Progress stuck at "Executing..."
Causes:
- Network timeout
- Provider API down
- Rate limiting
Solutions:
- Wait 2 minutes
- Refresh page
- Retry with fewer queries
- Try different provider
No Citations Extracted
Symptom: "Total Citations: 0" after execution
Causes:
- Provider didn't include URLs in response
- Citation extraction regex failed
- Query type not conducive to citations
Solutions:
- Try different provider (Claude best for citations)
- Use problem-focused queries
- Review response text manually
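When reviewing response text manually, a quick URL-extraction pass can show whether the response contained links at all. The app's actual extraction regex is not shown in this guide; this is one plausible sketch:

```typescript
// Pull http(s) URLs out of plain text, trimming trailing punctuation
// that often clings to URLs in prose (e.g. a sentence-ending period).
function extractUrls(text: string): string[] {
  const pattern = /https?:\/\/[^\s)\]}>,"']+/g;
  return (text.match(pattern) ?? []).map((u) => u.replace(/[.,;:]+$/, ""));
}
```

If this finds URLs that the execution reported as "0 citations", the extraction step is the likely culprit; if it finds none, the provider simply did not cite sources for that query.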
High Cost
Symptom: Cost higher than expected
Causes:
- Long responses (verbose answers)
- Multiple providers selected
- High query count
Solutions:
- Use cheaper providers (Perplexity, Gemini)
- Execute fewer queries at once
- Review provider pricing docs
Execution Metrics
After each execution, track:
Efficiency:
- Queries per minute
- Cost per query
- Citations per query
Quality:
- Citation relevance
- Response usefulness
- Provider differences
Coverage:
- Unique domains cited
- Query types tested
- Persona distribution
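The efficiency metrics above fall out directly from the batch summary numbers. A small sketch, using field names invented for this example:

```typescript
// Hypothetical summary shape; field names are illustrative, not the app's schema.
interface BatchSummary {
  queries: number;
  citations: number;
  costUsd: number;
  seconds: number;
}

function efficiencyMetrics(s: BatchSummary) {
  return {
    queriesPerMinute: (s.queries / s.seconds) * 60,
    costPerQuery: s.costUsd / s.queries,
    citationsPerQuery: s.citations / s.queries,
  };
}
```

Plugging in the example batch above (20 queries, 312 citations, $0.24, 87.3 s) gives roughly 13.7 queries/minute and $0.012 per query.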
Next Steps
After executing queries:
- Analytics → Analyze citation patterns
- Library → Save and organize queries
- Query Generation → Generate more queries
Related Guides
- Quick Start - First execution walkthrough
- Configuration - Provider API setup
- Concepts: LLM Providers - Provider comparison