MCP Timeouts and Slow Responses: How to Stabilize Your Setup
Experiencing timeouts or slow MCP tool calls? This performance troubleshooting guide helps you diagnose timeout causes, optimize long-running operations, implement retry patterns, and reduce response times through pagination and query optimization.
Understanding Timeouts and Slow Responses
Timeouts occur when a tool call takes longer than the client's timeout threshold, so the call fails. Slow responses eventually succeed, but take an unusually long time, which points to an underlying performance problem.
Common Causes
- Large result sets (too many tasks/projects returned)
- Complex queries without pagination
- Too many parallel tool calls
- Network latency or connectivity issues
- Server-side processing delays
- Client timeout settings too low
Step 1: Diagnose the Issue
Identify whether the problem is timeouts (calls fail) or slow responses (calls succeed but take time):
Timeout vs Slow Response
Timeout Symptoms
- Tool call fails with timeout error
- No response received within timeout period
- Connection drops during long operations
Slow Response Symptoms
- Tool call succeeds but takes 10+ seconds
- Response eventually arrives but very slowly
- UI becomes unresponsive during calls
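If your client's logs don't make the distinction obvious, a small timing wrapper can. The sketch below is TypeScript and assumes a generic callTool(name, args) helper that forwards a request to the MCP server; the helper name and error shape are assumptions for illustration, not part of the Corcava API.
// Assumed helper: forwards a tool call to the MCP server (not a real SDK function).
declare function callTool(name: string, args?: Record<string, unknown>): Promise<unknown>;

// Time a single call and report whether it failed (likely a timeout)
// or merely succeeded slowly (a slow response).
async function diagnoseCall(name: string, args?: Record<string, unknown>): Promise<void> {
  const start = Date.now();
  try {
    await callTool(name, args);
    const seconds = (Date.now() - start) / 1000;
    console.log(`${name} succeeded in ${seconds.toFixed(1)}s${seconds > 10 ? " (slow response)" : ""}`);
  } catch (err) {
    const seconds = (Date.now() - start) / 1000;
    console.error(`${name} failed after ${seconds.toFixed(1)}s (possible timeout)`, err);
  }
}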
Step 2: Use Pagination
Large result sets are a common cause of timeouts. Use pagination to limit the number of results:
Pagination Best Practices
Use Limit Parameter
// Instead of getting all tasks
list_tasks() // May return 1000+ tasks, causing a timeout
// Use pagination
list_tasks(limit: 50, offset: 0) // Get first 50
list_tasks(limit: 50, offset: 50) // Get next 50
Recommended Limits
- list_tasks: Use limit: 50-100 (default is often 50)
- list_task_comments: Use limit: 25-50
- list_projects: Usually small, but limit: 20 if needed
- list_boards: Usually small, limit rarely needed
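If you are calling the server from a script rather than through an AI assistant, the same idea becomes a paging loop. A minimal sketch, assuming the generic callTool helper from Step 1 and a tasks array in the list_tasks result (the result shape is an assumption):
// Assumed helper and result shape; adjust to match the actual tool output.
declare function callTool(name: string, args: Record<string, unknown>): Promise<{ tasks: unknown[] }>;

// Fetch tasks one page at a time instead of issuing a single unbounded request.
async function fetchTasksInPages(pageSize = 50): Promise<unknown[]> {
  const all: unknown[] = [];
  let offset = 0;
  while (true) {
    const page = await callTool("list_tasks", { limit: pageSize, offset });
    all.push(...page.tasks);
    if (page.tasks.length < pageSize) break; // a short page means we reached the end
    offset += pageSize;
  }
  return all;
}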
Prompt Patterns for Pagination
Good: Paginated Request
"List my tasks, but only show the first 50. If I need more, I'll ask for the next batch."
This encourages the AI to use limit: 50
Bad: Unbounded Request
"Show me all my tasks"
This may try to fetch all tasks at once, causing a timeout
Step 3: Reduce Parallel Tool Calls
Making too many tool calls simultaneously can overwhelm the connection and cause timeouts:
Sequential vs Parallel Calls
Parallel Call Issues
Some AI assistants try to call multiple tools in parallel, which can:
- Overwhelm the connection
- Exceed rate limits
- Cause timeouts when multiple calls compete
Prompt Patterns to Reduce Parallel Calls
Good: Sequential Approach
"First, list my projects. Then, for each project, list the tasks one at a time."
This encourages sequential, not parallel, calls
Bad: Parallel Approach
"Get all my projects and all their tasks at once"
This may trigger parallel calls, causing timeouts
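In script form, the difference comes down to awaiting calls one at a time versus dispatching them all at once. A sketch, again assuming the generic callTool helper:
// Assumed helper; result types are left loose for illustration.
declare function callTool(name: string, args?: Record<string, unknown>): Promise<unknown>;

// Sequential: one request in flight at a time; gentle on the connection.
async function listTasksSequentially(projectIds: string[]): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const projectId of projectIds) {
    results.push(await callTool("list_tasks", { project_id: projectId, limit: 50 }));
  }
  return results;
}

// Parallel: every request fired at once; faster when it works, but more likely
// to exceed rate limits or time out when many calls compete for the connection.
async function listTasksInParallel(projectIds: string[]): Promise<unknown[]> {
  return Promise.all(
    projectIds.map((projectId) => callTool("list_tasks", { project_id: projectId, limit: 50 })),
  );
}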
Step 4: Optimize Queries
Use filters to narrow down results before fetching:
Use Filters Effectively
Filtered Query (Faster)
// Get only in-progress tasks due on a specific date
list_tasks(due_date: "2026-06-02", status: "in_progress", limit: 50)
// Instead of getting all tasks and filtering client-side
list_tasks() // Returns everything, then filters (slow)
Available Filters
- project_id: Filter by specific project
- board_id: Filter by specific board
- status: Filter by status (open, in_progress, done, blocked)
- due_date: Filter by due date
- keyword: Search by keyword (server-side filtering)
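The same contrast in script form: pass the filter to the server instead of fetching everything and filtering locally. The callTool helper and the result shape (a tasks array whose items have a status field) are assumptions for illustration:
// Assumed helper and result shape.
declare function callTool(name: string, args: Record<string, unknown>): Promise<{ tasks: { status: string }[] }>;

// Server-side filtering: only matching tasks cross the wire (fast).
async function inProgressTasks(projectId: string) {
  const page = await callTool("list_tasks", { project_id: projectId, status: "in_progress", limit: 50 });
  return page.tasks;
}

// Client-side filtering: everything is fetched, then most of it is thrown away (slow).
async function inProgressTasksSlow(projectId: string) {
  const page = await callTool("list_tasks", { project_id: projectId });
  return page.tasks.filter((task) => task.status === "in_progress");
}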
Step 5: Implement Retry Patterns
For transient timeouts, implement retry logic with exponential backoff (a code sketch follows the lists below):
Retry Strategy
Exponential Backoff
- First retry: Wait 1 second, then retry
- Second retry: Wait 2 seconds, then retry
- Third retry: Wait 4 seconds, then retry
- Maximum retries: 3 (four attempts including the original call)
When to Retry
Retry for:
- Timeout errors (connection may be temporarily slow)
- Network errors (transient connectivity issues)
- 5xx server errors (temporary server issues)
Don't retry for:
- 401 Unauthorized (auth issue, won't fix with retry)
- 403 Forbidden (permission issue, won't fix with retry)
- 400 Bad Request (invalid input, won't fix with retry)
- 404 Not Found (resource doesn't exist)
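Putting the backoff schedule and the retry/don't-retry rules together, a retry wrapper might look like the sketch below. It assumes the generic callTool helper and that errors expose an HTTP-style status code; both are assumptions, so adapt the error check to whatever your client actually surfaces.
// Assumed helper and error shape; the real client may report errors differently.
declare function callTool(name: string, args?: Record<string, unknown>): Promise<unknown>;

// Retry transient failures with exponential backoff: wait 1s, 2s, then 4s.
async function callWithRetry(
  name: string,
  args?: Record<string, unknown>,
  maxRetries = 3,
): Promise<unknown> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await callTool(name, args);
    } catch (err) {
      const status = (err as { status?: number }).status;
      // Retry timeouts/network errors (no status) and 5xx; never retry 400/401/403/404.
      const retryable = status === undefined || status >= 500;
      if (!retryable || attempt >= maxRetries) throw err;
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}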
Prompt Patterns for Retries
Explicit Retry Instruction
"If a tool call times out, wait 2 seconds and retry once. If it still fails, report the error."
This guides the AI to implement retry logic
Step 6: Optimize Large Operations
For operations that process many items, break them into smaller batches:
Batch Processing Pattern
Batch Processing
- List items in batches (limit: 50)
- Process each batch separately
- Wait for batch to complete before next
- Report progress after each batch
Example: Batch Update Pattern
Good: Batched Updates
"Update task titles in batches of 10:
1. Get first 10 tasks
2. Update each one
3. Wait for all 10 to complete
4. Then get next 10 tasks
5. Repeat until done"
This prevents overwhelming the connection
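Scripted directly, the same pattern is a loop over fixed-size pages that finishes one batch before requesting the next. The sketch below assumes the callTool helper, an update_task tool, and task_id/title parameters; all of those names are illustrative assumptions, not confirmed parts of the API.
// Assumed helper; tool and parameter names are illustrative, not confirmed.
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

// Update tasks in batches of 10, waiting for each batch before fetching the next.
async function retitleTasksInBatches(prefix: string, batchSize = 10): Promise<void> {
  let offset = 0;
  while (true) {
    const page = await callTool("list_tasks", { limit: batchSize, offset });
    if (page.tasks.length === 0) break;
    for (const task of page.tasks) {
      await callTool("update_task", { task_id: task.id, title: `${prefix} ${task.title}` });
    }
    console.log(`Processed ${offset + page.tasks.length} tasks so far`); // report progress per batch
    if (page.tasks.length < batchSize) break;
    offset += batchSize;
  }
}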
Step 7: Check Network and Server Health
Sometimes the issue is network-related or server-side:
Network Diagnostics
Test connection speed:
# Test endpoint response time
time curl -H "Authorization: Bearer YOUR_API_KEY" \
https://app.corcava.com/mcp
If this takes more than 2-3 seconds, there may be network issues.
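If you prefer a scripted check, the same measurement can be taken with Node 18+'s built-in fetch; the endpoint URL and header mirror the curl example above:
// Measure round-trip time to the MCP endpoint; only the timing matters here.
async function timeEndpoint(apiKey: string): Promise<void> {
  const start = Date.now();
  const response = await fetch("https://app.corcava.com/mcp", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  console.log(`Status ${response.status} in ${((Date.now() - start) / 1000).toFixed(1)}s`);
}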
Server Health Check
Server-Side Delays
If the server is experiencing high load:
- All operations may be slower than usual
- Timeouts may occur even with small queries
- Check Corcava status page or support channels
- Wait and retry during off-peak hours
Quick Optimization Checklist
Before Reporting Performance Issues
- ✅ Used pagination (limit parameter) for list operations
- ✅ Used filters to narrow results before fetching
- ✅ Avoided parallel tool calls (made them sequential instead)
- ✅ Processed large operations in batches
- ✅ Tested network connection speed
- ✅ Verified the server is not experiencing issues
- ✅ Implemented retry logic for transient failures
Performance Best Practices
1. Always Use Pagination
Never request all items at once. Always use limit and offset for list operations.
2. Filter Before Fetching
Use server-side filters (project_id, status, due_date) rather than fetching everything and filtering client-side.
3. Sequential Over Parallel
Make tool calls sequentially when possible. Only use parallel calls for independent, small operations.
4. Batch Large Operations
Break large operations (e.g., updating 100 tasks) into batches of 10-20 items.
5. Cache When Appropriate
If you need the same data multiple times, ask the AI to remember it rather than re-fetching.
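For scripted workflows, a small in-memory cache gives the same effect: data that rarely changes (project or board lists, for instance) is fetched once and reused. A sketch, with the callTool helper assumed as before:
// Assumed helper; cache keys are built from the tool name and arguments.
declare function callTool(name: string, args?: Record<string, unknown>): Promise<unknown>;

const cache = new Map<string, unknown>();

// Return a cached result when the same call has been made before.
async function cachedCall(name: string, args?: Record<string, unknown>): Promise<unknown> {
  const key = `${name}:${JSON.stringify(args ?? {})}`;
  if (!cache.has(key)) {
    cache.set(key, await callTool(name, args));
  }
  return cache.get(key);
}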
Related Troubleshooting
- Connection Failed - Diagnose network issues that cause timeouts
- 429 Rate Limited - Fix rate limiting that can cause timeouts
- Pagination Guide - Learn pagination patterns for MCP tools
- Troubleshooting Index - Browse all troubleshooting guides
