What is this?
The VMware Tanzu MCP Server is an enterprise-grade implementation of the Model Context Protocol (MCP) that integrates VMware Tanzu Application Platform with generative AI models and tooling. Packaged as @vmware/tanzu-mcp-server, it provides a standardized interface and runtime for building cloud-native, agentic AI applications at scale on Kubernetes. It simplifies connecting AI models, defining execution plans, and enforcing governance policies without reinventing integrations.
Under the hood, the MCP Server runs stateless microservices on Kubernetes, including an ingress gateway, core protocol handlers, model connectors, a policy engine, and a multicluster controller. It offers built-in support for Anthropic Claude and other AI providers, fine-grained security controls, rate limiting, audit logging, and seamless deployment across multicloud and hybrid environments. This combination of extensibility, compliance, and resilience makes the Tanzu MCP Server a solid choice for enterprise AI workflows.
Quick Start
Install the server using npm:
npm install @vmware/tanzu-mcp-server
Then add it to your MCP client configuration:
{
  "mcpServers": {
    "vmware-tanzu-mcp-server": {
      "command": "npx",
      "args": ["-y", "@vmware/tanzu-mcp-server"],
      "env": {
        "API_KEY": "your-api-key-here"
      }
    }
  }
}
Key Features
Feature 1: Standardized MCP Protocol Compliance and Extensibility – provides a reference-grade implementation with pluggable model connectors and policy hooks for consistent integrations.
Feature 2: Enterprise Governance & Security – includes Open Policy Agent-based compliance, rate limiting, audit logging, and role-based access controls for regulated industries.
Feature 3: Multicloud & Multicluster Management – deploy and synchronize MCP resources across Kubernetes clusters on any cloud or on-premises using a single control plane.
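Rate limiting, one of the governance controls listed above, can be pictured as a token bucket: each request spends a token, and tokens refill at a fixed rate. The sketch below is purely illustrative and assumes nothing about the server's internal implementation:

```javascript
// Minimal token-bucket rate limiter (illustrative sketch only; not the
// Tanzu MCP Server's actual implementation).
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;               // maximum burst size
    this.tokens = capacity;                 // start full
    this.refillPerSecond = refillPerSecond; // steady-state rate
    this.lastRefill = Date.now();
  }

  // Returns true if the request is admitted, false if rate-limited.
  tryAcquire() {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsed * this.refillPerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const bucket = new TokenBucket(2, 1); // burst of 2, refill 1 token/second
const admitted = [bucket.tryAcquire(), bucket.tryAcquire(), bucket.tryAcquire()];
console.log(admitted); // [ true, true, false ]
```

A production policy engine would layer per-tenant buckets and audit events on top of this basic admission decision.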
Example Usage
Suppose a step in your data pipeline needs a text completion from Claude. You can call the Claude connector directly through the MCP client and get a response in a few lines of code.
// Call the Claude connector through an MCP client.
const result = await client.callTool({
  name: "claude",
  arguments: {
    prompt: "Tell me a joke"
  }
});
This code invokes the Anthropic Claude model via the Tanzu MCP Server and returns a generated text response, such as a joke, in the result object.
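Per the MCP specification, a tool result carries a content array of typed items rather than a bare string. The sketch below shows one way to extract the text; the sample payload is illustrative, not an actual server response:

```javascript
// Illustrative tool result in the standard MCP shape (sample payload,
// not a real server response).
const toolResult = {
  content: [
    { type: "text", text: "Why did the container cross the cluster? To reach the other node." }
  ]
};

// Collect the text items from the response.
const text = toolResult.content
  .filter((item) => item.type === "text")
  .map((item) => item.text)
  .join("\n");

console.log(text);
```
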
Configuration
The server accepts the following environment variables:
API_KEY – the API key for your AI model provider (e.g., Anthropic, AWS, Azure) to authenticate model calls.
LOG_LEVEL (optional) – set the logging level (DEBUG, INFO, WARN, ERROR) for troubleshooting and audit.
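In a Node.js process these variables are read from the environment; the defaulting and warning behavior shown below are assumptions for illustration, not documented server behavior:

```javascript
// Read the documented environment variables (defaults are assumptions).
const config = {
  apiKey: process.env.API_KEY ?? "",
  logLevel: process.env.LOG_LEVEL ?? "INFO", // DEBUG | INFO | WARN | ERROR
};

if (!config.apiKey) {
  console.warn("API_KEY is not set; model calls will fail to authenticate.");
}
console.log(`log level: ${config.logLevel}`);
```
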
Available Tools/Resources
socrata: run SoQL queries against Socrata open-data datasets.
http: perform external HTTP calls to REST APIs and web services.
transform: execute custom JavaScript scripts to shape and post-process data.
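Each of these tools is invoked through the same client.callTool interface shown earlier. The argument names below (query, url, method, script) are assumptions for illustration; the server's tool schemas (e.g. from a tools/list request) define the real contract:

```javascript
// Hypothetical invocations for the three tools (argument names assumed,
// not taken from the server's published schemas).
const calls = [
  { name: "socrata", arguments: { query: "SELECT name, value LIMIT 10" } },
  { name: "http", arguments: { url: "https://example.com/api", method: "GET" } },
  { name: "transform", arguments: { script: "data.map((d) => d.value)" } },
];

// Each entry would be dispatched as: await client.callTool(calls[i])
for (const call of calls) {
  console.log(`${call.name}: ${Object.keys(call.arguments).join(", ")}`);
}
```
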
Who Should Use This?
This server is perfect for:
Use case 1: Enterprise developers building regulated AI workflows with governance and audit requirements.
Use case 2: Teams needing to deploy AI services consistently across multicloud and hybrid clusters.
Use case 3: Organizations that require seamless integration of AI models, data connectors, and security policies.
Conclusion
The VMware Tanzu MCP Server streamlines enterprise AI development by combining a standardized MCP interface, model governance, and multicluster orchestration in a single platform. Whether you’re building chatbots, data pipelines, or agentic applications, the Tanzu MCP Server provides the tools and policies you need to stay secure and compliant at scale. Give it a try in your development cluster and explore the extensibility through custom connectors and policies.
Check out the GitHub repository for more information and to contribute.