# MCP Server

## Overview
The Central Monitoring MCP (Model Context Protocol) Server allows you to access Central Monitoring data directly from AI assistants like Claude, Copilot, or any MCP-compatible chatbot. Instead of writing API calls manually, you can ask questions in natural language and your LLM will query logs, workloads, tickets, jobs, and billing data on your behalf.
The MCP server wraps the Central Monitoring API endpoints and makes them accessible to AI tools.
To request access to the Central Monitoring MCP server, please reach out to the CM team at cmsupport@accenture.com.
## Available Tools
Tools are actions the AI can call to query Central Monitoring data. They wrap the API layer and provide the same aggregation and scroll functionality.
### When to Use Scroll vs Aggregation
- **Scroll tools** (`search_logs`, `search_workloads`, `search_tickets`): Use when you need to find or inspect specific documents. Best for targeted filtering, viewing individual records, or pattern matching across a small result set.
  - Example: "Show me the last 10 ERROR logs for client X"
- **Aggregation tools** (`aggregate_logs`, `aggregate_workloads`, `aggregate_tickets`): Use when you need to analyse large volumes of data for trends, counts, or summaries. These return statistics rather than raw documents, making them far more token-efficient.
  - Example: "How many failed workloads per domain over the last 7 days?"
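To make the trade-off concrete, here is a minimal sketch of the kind of OpenSearch aggregation body an LLM might pass to an aggregation tool for the "failed workloads per domain" question. The field names (`status`, `domain`, `@timestamp`) and the status value are illustrative assumptions, not the real CM mappings; the `schema://workloads` resource is where the actual field names come from.

```python
def failed_workloads_per_domain(days: int = 7) -> dict:
    """Build an OpenSearch aggregation body: failed workloads grouped by domain.

    Field names here are hypothetical -- check schema://workloads for the
    real mappings before constructing a query like this.
    """
    return {
        "size": 0,  # aggregations only: no raw documents in the response
        "query": {
            "bool": {
                "filter": [
                    {"term": {"status": "FAILED"}},
                    {"range": {"@timestamp": {"gte": f"now-{days}d/d"}}},
                ]
            }
        },
        "aggs": {
            # bucket the matching workloads by domain
            "per_domain": {"terms": {"field": "domain", "size": 50}}
        },
    }
```

Setting `"size": 0` is what makes aggregations token-efficient: the server returns only the bucket counts, never the underlying documents.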
The following tools are available:
| Tool | Description |
|---|---|
| `search_logs` | Paginate through logs with cursor-based scrolling |
| `search_workloads` | Paginate through workloads with cursor-based scrolling |
| `search_tickets` | Paginate through tickets with cursor-based scrolling |
| `search_jobs` | Paginate through jobs with cursor-based scrolling |
| `aggregate_logs` | Run OpenSearch aggregations on log data (counts, averages, groupings, etc.) |
| `aggregate_workloads` | Run OpenSearch aggregations on workload data |
| `aggregate_tickets` | Run OpenSearch aggregations on ticket data |
| `aggregate_jobs` | Run OpenSearch aggregations on job data |
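You normally never construct these calls yourself: the AI assistant does it for you. For reference, though, MCP tool invocations travel as JSON-RPC 2.0 `tools/call` requests. The sketch below shows the message shape; the `arguments` payload is a hypothetical example, since the exact parameters each CM tool accepts are defined by the server's own tool schema.

```python
import json

# JSON-RPC 2.0 envelope an MCP client sends to invoke a tool.
# The "arguments" content is illustrative, not the CM server's real schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_logs",
        "arguments": {
            "query": {"term": {"level": "ERROR"}},  # hypothetical filter
            "size": 10,
        },
    },
}
print(json.dumps(request, indent=2))
```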
## Available Resources
Resources provide schema information that helps LLMs understand the structure of your data. The LLM reads these automatically to know which fields exist and how to build filters.
| Resource | Description |
|---|---|
| `schema://logs` | Schema and example documents for log entries (ATR, Quasar, EventOps) |
| `schema://workloads` | Schema and example documents for workload entries |
| `schema://tickets` | Schema and example documents for ticket entries |
| `schema://jobs` | Schema and example documents for job entries |
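Resources are fetched over the same JSON-RPC channel, via the MCP `resources/read` method. A minimal sketch of the request for the logs schema:

```python
# JSON-RPC 2.0 message an MCP client sends to read a resource by URI.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "schema://logs"},
}
```

The response carries the schema content, which the LLM reads before building filters against the log fields.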
### When to Use Tools vs Resources
- **Resources** allow servers to share data that provides context to language models, such as files, database schemas, or application-specific information. The LLM reads them to understand the structure of the data it is querying. CM MCP resources provide the field mappings for each data type, so the LLM can build effective filters and aggregations.
- **Tools** let the LLM interact with external systems, such as the CM API routes. When asked a question like "show me failed workloads from the last week", the LLM constructs a query and calls the appropriate tool (such as `aggregate_workloads`) to fetch the data.
**TL;DR:** resources teach the LLM what the data looks like; tools fetch the data.