
MCP Integrations

The Model Context Protocol (MCP) defines a standard interface between an LLM and external data, so MCP servers can be integrated into any MCP-enabled LLM client. This includes most AI chatbots, such as ChatGPT, Claude, Kiro, and Copilot, and it also enables MCP servers to be used within agentic workflows.

For an overview of the CM MCP server, please read about MCP Servers before integrating with any LLM.

info

To use the CM MCP server, you need CM API credentials. For details on acquiring credentials, refer to API Access.

MCP server requests use your API credentials and count towards your daily API call allowance.


Prerequisites

  • Python 3.10+
  • An MCP-compatible LLM client (Claude Desktop, Kiro, VS Code with Copilot, etc.)
  • CM API credentials (Client ID and Secret)
  • CM MCP URL and Token URL (email cmsupport@accenture.com for access)
  • The following Python packages:
pip3 install mcp httpx boto3 requests
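
To confirm the packages installed correctly, you can try importing them from the same Python interpreter your client will use (an optional sanity check):

python3 -c "import mcp, httpx, boto3, requests; print('MCP proxy dependencies OK')"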

How It Works

Most MCP client configurations cannot handle authentication natively. The CM MCP server requires Cognito authentication before it can accept requests, so a proxy script is used to bridge the gap.

The proxy script runs locally on your machine and:

  1. Authenticates with AWS Cognito using client credentials (client ID + client secret)
  2. Forwards MCP requests to the CM runtime with the obtained token
  3. Returns responses back to your LLM client

LLM Client  ←stdio→  Proxy Script  ←https→  CM MCP Server
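
For illustration, step 1 is a standard OAuth2 client-credentials exchange against the Cognito token endpoint. The sketch below shows roughly what the proxy does, assuming it reads the environment variables configured in Step 2; the actual cm_mcp_server.py may differ in detail.

import os
import requests

def fetch_access_token() -> str:
    """Exchange CM API credentials for a Cognito access token (client-credentials grant)."""
    resp = requests.post(
        os.environ["TOKEN_URL"],
        data={
            "grant_type": "client_credentials",
            "client_id": os.environ["CLIENT_ID"],
            "client_secret": os.environ["CLIENT_SECRET"],
        },
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# The proxy then forwards each MCP request it receives over stdio to MCP_URL,
# attaching the token, e.g. headers={"Authorization": f"Bearer {fetch_access_token()}"}.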

Setup

Step 1: Download the Proxy Script

Download the cm_mcp_server.py proxy script and save it to a local directory: ⬇ Download cm_mcp_server.py

We recommend saving the script to a dedicated directory such as ~/.mcp in your home folder. Create this directory if it does not already exist:

mkdir -p ~/.mcp
# Copy or download cm_mcp_server.py into ~/.mcp/

Step 2: Configure Your LLM Client

Add the following MCP server configuration to your LLM client. The configuration format is the same across most clients — only the config file location differs.

Remember: Replace the placeholder values:

  • Provided by user:
    • Python Path
    • Path to proxy script
  • Provided by CM:
    • CM API credentials
    • MCP URL
    • Token URL
{
  "mcpServers": {
    "central-monitoring": {
      "command": "path/to/python3",
      "args": [
        "/path/to/.mcp/cm_mcp_server.py"
      ],
      "env": {
        "CLIENT_ID": "<your-client-id>",
        "CLIENT_SECRET": "<your-client-secret>",
        "MCP_URL": "<cm-mcp-url>",
        "TOKEN_URL": "<cm-token-url>"
      }
    }
  }
}

Client-Specific Config Locations

Claude Desktop

Edit the config file at:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Alternatively, go to Settings → Developer → Edit config.

Paste the JSON configuration above into this file. Restart Claude Desktop after saving.

Kiro

Workspace: Create a file called mcp.json inside the .kiro folder in your project. If the .kiro folder doesn't exist, create it first. The MCP server will only be available in this workspace.

Global (all projects): Create or edit the config file at:

  • macOS: ~/.kiro/settings/mcp.json
  • Windows: %APPDATA%\kiro\settings\mcp.json

Paste the JSON configuration above into this file. Kiro will automatically detect the change and connect.

VS Code (Copilot / Amazon Q / Other Extensions)

Workspace: Create a file called mcp.json inside the .vscode folder in your project. If the .vscode folder doesn't exist, create it first. The MCP server will only be available in this workspace.

Global (all projects): Create or edit the config file at:

  • macOS: ~/Library/Application Support/Code/User/settings.json
  • Windows: %APPDATA%\Code\User\settings.json

Paste the JSON configuration above into whichever location you prefer.

The exact setup may vary depending on which AI extension you're using — check your extension's documentation if the above doesn't work.


Step 3: Verify the Connection

Once configured, open your LLM client and try a simple query such as:

"Show me the last 10 ERROR logs for the past hour for EC2 applications"

If the MCP server is connected correctly, the LLM will call the appropriate CM tool and return results.

Remember to add MCP resources as needed; they give the LLM context about the fields available for filtering and aggregation.



Troubleshooting

Authentication Errors

If you see authentication errors, verify that your CLIENT_ID and CLIENT_SECRET are correct. You can test authentication independently:

python3 -c "
import requests

resp = requests.post(
    'https://cm-api-core-prod.auth.ap-southeast-2.amazoncognito.com/oauth2/token',
    data={'grant_type': 'client_credentials', 'client_id': '<your-client-id>', 'client_secret': '<your-client-secret>'},
    headers={'Content-Type': 'application/x-www-form-urlencoded'}
)
print('Auth successful' if 'access_token' in resp.json() else f'Auth failed: {resp.text}')
"

Python Not Found

Ensure Python 3 is installed and that the full path to your Python binary is set in the command field. To locate your Python path, run:

which python3

Copy this path into the command field of the configuration file from Step 2. On Windows, run where python instead to locate the binary.

Missing Dependencies

Install the required packages:

pip3 install mcp httpx boto3 requests

Server Not Appearing in LLM Client

  • Ensure the config file is saved in the correct location for your client and contains valid JSON (a quick check is shown below)
  • Restart the LLM client after making config changes
  • Check that the path to cm_mcp_server.py in the args field is correct and the file exists
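
A common cause of a missing server is malformed JSON in the config file (for example, a trailing comma). You can validate the file with Python's built-in json.tool before restarting the client (the path below is a placeholder; use your client's config location from Step 2):

python3 -m json.tool /path/to/your/mcp-config.json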

Final Notes

Warning

The MCP server can return raw logs and workflows, but retrieving hundreds of dense records will consume a large number of tokens. For high-volume queries, use the aggregation endpoints instead of search endpoints to get summarised results efficiently. See MCP Server for guidance on choosing the right endpoint.

Tips for Getting the Best Results

  • Use aggregation queries (e.g. "how many", "group by", "most common") instead of asking for raw documents when you need summaries
  • Use filtering for more targeted results, e.g. constrain time ranges ("the last 2 hours" is better than "recently") and specify application types and domain names
  • Limit result sizes — asking for "the last 10" or "the surrounding 10" is more efficient than "show me all"
  • Use resources when filtering or aggregating to give the LLM context about available fields

Sample Prompts

Here are some example prompts you can try once the MCP server is connected:

"How many tickets were raised this week, grouped by severity?"

"What are the 5 most common log messages in the past hour for Kubernetes applications?"

"How many critical tickets have been created in the last 7 days?"

"What fields are available in the log schema?"

"Show me the last 10 completed workflows from the past hour for Kubernetes applications"

"What are the most common workflows completed over the last day, grouped by application type?"

For more sample prompts, refer to MCP Sample Prompts.