What is this?
The Sentry MCP Server, packaged as @modelcontextprotocol/server-sentry, brings Sentry’s error tracking and performance monitoring directly into AI-driven workflows. It exposes Sentry issues, events, and telemetry through standardized MCP resources and tools, so AI assistants can automatically fetch critical errors, analyze performance trends, and propose remediation steps. This integration makes observability a first-class citizen in conversational and code-centric AI processes.
By bridging development teams, incident responders, and AI agents, the server accelerates incident analysis, enhances collaboration, and provides proactive performance guidance. Instead of manually sifting through dashboards, you can query issues by status, correlate exceptions with releases, detect anomalies, and generate AI-driven summaries—all via standard JSON-RPC calls conforming to the Model Context Protocol.
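Concretely, a tool invocation travels as a JSON-RPC 2.0 `tools/call` request. A sketch of what such a request might look like on the wire (the tool name and arguments here are illustrative, drawn from the example later in this document):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "mcp.tools.sentry.AnalyzeErrorPatterns",
    "arguments": {
      "issueIds": ["1234567890"],
      "windowSize": "24h"
    }
  }
}
```

In practice your MCP client library builds this envelope for you; you rarely construct it by hand.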
Quick Start
Install the server using npm:
npm install @modelcontextprotocol/server-sentry
Then add it to your MCP client configuration:
{
  "mcpServers": {
    "sentry-mcp-server": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sentry"],
      "env": {
        "API_KEY": "your-api-key-here"
      }
    }
  }
}
Key Features
Comprehensive Issue Access: Query, filter, and fetch full details for Sentry issues across projects and environments.
Performance Telemetry Retrieval: Retrieve transaction throughput, latency percentiles, failure rates, and custom span aggregations via MCP resources.
AI-Driven Error Analysis Tools: Use built-in MCP tools like AnalyzeErrorPatterns and SuggestRemediation to cluster exceptions and propose fixes automatically.
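To give a feel for the "latency percentiles" mentioned above, here is a small, self-contained sketch of how a percentile is computed from raw transaction durations (a helper of my own for illustration — the server computes these aggregates for you):

```javascript
// Illustrative helper: compute a latency percentile from transaction
// durations (in ms) using the nearest-rank method.
function latencyPercentile(durationsMs, p) {
  const sorted = [...durationsMs].sort((a, b) => a - b);
  // Nearest rank: the smallest value with at least p% of samples at or below it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(sorted.length, Math.max(1, rank)) - 1];
}

const durations = [120, 95, 310, 250, 180, 90, 400, 150];
console.log(latencyPercentile(durations, 50)); // 150
console.log(latencyPercentile(durations, 95)); // 400
```

The server exposes equivalent aggregates as MCP resources, so the assistant never needs to pull raw event data to answer "what is our p95 latency?"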
Example Usage
Here’s how an AI assistant can analyze recurring error patterns across multiple issues:
// Ask the server to cluster related exceptions from two issues
const result = await client.callTool({
  name: "mcp.tools.sentry.AnalyzeErrorPatterns",
  arguments: {
    issueIds: ["1234567890", "2345678901"],
    windowSize: "24h",
    sensitivity: "medium"
  }
});
This call clusters related exceptions over the last 24 hours, helping you identify common root causes and prioritize fixes based on historical patterns.
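To make the clustering idea concrete, here is a rough, purely illustrative sketch of the kind of grouping such a tool might perform — bucketing events by a normalized error fingerprint. The fingerprinting scheme below is my own simplification, not the server's documented algorithm:

```javascript
// Illustrative only: group error events by a crude fingerprint built from
// the exception type plus the message with volatile numbers stripped out.
function fingerprint(event) {
  const normalizedMessage = event.message.replace(/\d+/g, "<n>");
  return `${event.type}:${normalizedMessage}`;
}

function clusterEvents(events) {
  const clusters = new Map();
  for (const event of events) {
    const key = fingerprint(event);
    if (!clusters.has(key)) clusters.set(key, []);
    clusters.get(key).push(event);
  }
  return clusters;
}

const events = [
  { type: "TypeError", message: "Cannot read property 'id' of undefined" },
  { type: "TypeError", message: "Cannot read property 'id' of undefined" },
  { type: "HTTPError", message: "Request failed with status 503" },
  { type: "HTTPError", message: "Request failed with status 500" },
];
const clusters = clusterEvents(events);
console.log(clusters.size); // 2 — the two HTTP errors collapse into one cluster
```

Note how the 503 and 500 events land in the same cluster once the status code is normalized away — the same intuition behind treating "recurring" errors as one root cause.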
Configuration
The server accepts the following environment variables:
API_KEY – Your Sentry authentication token (the same value you would otherwise supply as SENTRY_AUTH_TOKEN). It must carry the project:read scope to access issue and event data.
REDIS_URL (optional) – URL of a Redis instance used for response caching and rate-limit buffering.
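For a shell-based launch, the environment might be set up like this (the token and Redis values below are placeholders):

```shell
export API_KEY="your-sentry-auth-token"     # Sentry auth token with project:read
export REDIS_URL="redis://localhost:6379"   # optional caching backend
npx -y @modelcontextprotocol/server-sentry
```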
Available Tools/Resources
mcp.resources.sentry.ListIssues: List and filter Sentry issues programmatically.
mcp.tools.sentry.AnalyzeErrorPatterns: Analyze recurring error patterns across multiple issues.
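As a sketch of how a ListIssues response might be consumed once fetched, here is a simple triage-queue builder. The response shape shown is an assumption for illustration, not the documented schema:

```javascript
// Hypothetical ListIssues response: keep unresolved issues, sort them by
// event count, and emit issue IDs as a prioritized triage queue.
const issues = [
  { id: "1234567890", status: "unresolved", count: 42, title: "TypeError in checkout" },
  { id: "2345678901", status: "resolved",   count: 7,  title: "Timeout in search" },
  { id: "3456789012", status: "unresolved", count: 98, title: "503 from payments API" },
];

const triageQueue = issues
  .filter((issue) => issue.status === "unresolved")
  .sort((a, b) => b.count - a.count)
  .map((issue) => issue.id);

console.log(triageQueue); // ["3456789012", "1234567890"]
```

An assistant can run exactly this kind of post-processing on the resource output before deciding which issues to analyze further.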
Who Should Use This?
This server is perfect for:
Developers who want automated error triage without leaving their chat or IDE.
DevOps engineers monitoring performance regressions and detecting anomalies.
Incident response teams leveraging AI summaries to accelerate root-cause analysis.
Conclusion
Get started with the Sentry MCP Server to seamlessly integrate Sentry telemetry into your AI workflows, reduce MTTR, and improve software reliability. Install the package, configure your Sentry credentials, and unlock AI-powered observability today!
Check out the GitHub repository for more information and to contribute.