Quickstart: Virtual MCP Server

In this tutorial, you'll learn how to deploy Virtual MCP Server to aggregate multiple MCP servers into a single endpoint. By the end, you'll have a working deployment that combines tools from multiple backends.

What you'll learn

  • How to create an MCPGroup to organize backend servers
  • How to deploy multiple MCPServers in a group
  • How to create a VirtualMCPServer that aggregates them
  • How tool conflict resolution works
  • How to connect your AI client to the aggregated endpoint

Prerequisites

Before starting this tutorial, make sure you have:

  • A Kubernetes cluster with the ToolHive operator installed (see Quickstart: Kubernetes Operator)
  • kubectl configured to communicate with your cluster
  • An MCP client (Visual Studio Code with Copilot is used in this tutorial)

Step 1: Create an MCPGroup

First, create an MCPGroup to organize your backend MCP servers:

apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPGroup
metadata:
  name: demo-tools
  namespace: toolhive-system
spec:
  description: Demo group for Virtual MCP aggregation

Apply the resource:

kubectl apply -f mcpgroup.yaml

Verify the group was created:

kubectl get mcpgroups -n toolhive-system

Step 2: Deploy backend MCPServers

Deploy two MCP servers that will be aggregated. Both reference the group:

apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: fetch
  namespace: toolhive-system
spec:
  image: ghcr.io/stackloklabs/gofetch/server
  transport: streamable-http
  proxyPort: 8080
  mcpPort: 8080
  groupRef: demo-tools
  resources:
    limits:
      cpu: '100m'
      memory: '128Mi'
    requests:
      cpu: '50m'
      memory: '64Mi'
---
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: osv
  namespace: toolhive-system
spec:
  image: ghcr.io/stackloklabs/osv-mcp/server
  transport: streamable-http
  proxyPort: 8080
  mcpPort: 8080
  groupRef: demo-tools
  resources:
    limits:
      cpu: '100m'
      memory: '128Mi'
    requests:
      cpu: '50m'
      memory: '64Mi'

Apply the resources:

kubectl apply -f mcpservers.yaml

Wait for both servers to be running:

kubectl get mcpservers -n toolhive-system -w

You should see both servers with Running status before continuing.
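If you prefer to block until the servers are ready instead of watching interactively, `kubectl wait` can poll the status for you. This is a sketch that assumes the MCPServer status exposes a `phase` field set to `Running`; verify the actual field name with `kubectl explain mcpserver.status` on your cluster:

```shell
# Block until each MCPServer reports phase=Running (assumes .status.phase exists)
kubectl wait mcpserver/fetch mcpserver/osv \
  -n toolhive-system \
  --for=jsonpath='{.status.phase}'=Running \
  --timeout=120s
```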

Step 3: Create a VirtualMCPServer

Create a VirtualMCPServer that aggregates both backends:

apiVersion: toolhive.stacklok.dev/v1alpha1
kind: VirtualMCPServer
metadata:
  name: demo-vmcp
  namespace: toolhive-system
spec:
  # Reference the MCPGroup containing the fetch and osv servers
  groupRef:
    name: demo-tools

  # No incoming auth for development (anonymous access)
  incomingAuth:
    type: anonymous

  # Auto-discover auth config from backend MCPServers
  outgoingAuth:
    source: inline
    # With no default specified, backends fall back to anonymous auth

  # Tool aggregation with prefix strategy to avoid naming conflicts
  aggregation:
    conflictResolution: prefix
    conflictResolutionConfig:
      prefixFormat: '{workload}_'

  # Expose as ClusterIP (internal access only)
  serviceType: ClusterIP

Apply the resource:

kubectl apply -f virtualmcpserver.yaml

Check the status:

kubectl get virtualmcpservers -n toolhive-system

You should see output similar to:

NAME        STATUS   URL                                                            AGE
demo-vmcp   Ready    http://vmcp-demo-vmcp.toolhive-system.svc.cluster.local:4483   30s

What's happening?

The operator discovered both MCPServers in the group and configured Virtual MCP to aggregate their tools. With the prefix conflict resolution strategy, all tools are prefixed with the backend name.
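As a rough illustration of what the `{workload}_` prefix format does (the tool name below is hypothetical, not taken from the backends' real tool lists):

```shell
# Sketch: how '{workload}_' rewrites a hypothetical tool name for each backend
for backend in fetch osv; do
  printf '%s_%s\n' "$backend" "some_tool"
done
```

This is why two backends can expose identically named tools without colliding: each copy keeps a distinct prefixed name after aggregation.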

Step 4: Verify the aggregation

Check the discovered backends:

kubectl describe virtualmcpserver demo-vmcp -n toolhive-system

Look for the Discovered Backends section in the status, which should show both backends.

Step 5: Connect your client

Port-forward to access Virtual MCP locally:

kubectl port-forward service/vmcp-demo-vmcp -n toolhive-system 4483:4483

Test the health endpoint:

curl http://localhost:4483/health

You should see {"status":"ok"}.
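If you want to inspect the aggregated tool list without an AI client, you can speak MCP directly over HTTP. This is a minimal sketch that assumes the streamable HTTP endpoint is served at `/mcp`; per the MCP streamable HTTP transport, a session normally starts with an `initialize` request, and the server may require the returned `Mcp-Session-Id` header on follow-up calls:

```shell
# Initialize an MCP session (note any Mcp-Session-Id response header)
curl -si -X POST http://localhost:4483/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-test","version":"0.0.1"}}}'

# List the aggregated tools (add -H 'Mcp-Session-Id: <id>' if one was returned)
curl -s -X POST http://localhost:4483/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
```

The response should include tools from both backends, each carrying its workload prefix.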

Step 6: Test the aggregated tools

Try asking your AI assistant questions that exercise the aggregated tools: for example, ask it to fetch the contents of a web page (handled by the fetch backend) or to look up known vulnerabilities for a package (handled by the osv backend). Both backends respond through the same Virtual MCP endpoint.
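You can also invoke a tool from the command line with a `tools/call` request. This is a sketch only: the tool name `fetch_fetch` and its `url` argument are hypothetical, so substitute a name and argument schema from your actual `tools/list` response:

```shell
# Call one aggregated tool by its prefixed name (name/arguments are hypothetical)
curl -s -X POST http://localhost:4483/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"fetch_fetch","arguments":{"url":"https://example.com"}}}'
```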

Step 7: Clean up

Delete the resources when you're done:

kubectl delete virtualmcpserver demo-vmcp -n toolhive-system
kubectl delete mcpserver fetch osv -n toolhive-system
kubectl delete mcpgroup demo-tools -n toolhive-system

What's next?

Congratulations! You've successfully deployed Virtual MCP Server and aggregated multiple backends into a single endpoint.

Troubleshooting

VirtualMCPServer stuck in Pending

Check that the MCPGroup exists and backend MCPServers are running:

kubectl get mcpgroups,mcpservers -n toolhive-system

Check the operator logs:

kubectl logs -n toolhive-system -l app.kubernetes.io/name=toolhive-operator

Only some tools appearing

Verify both backends are discovered:

kubectl get virtualmcpserver demo-vmcp -n toolhive-system -o jsonpath='{.status.discoveredBackends[*].name}'

Check backend health in the status:

kubectl describe virtualmcpserver demo-vmcp -n toolhive-system