A Model Context Protocol (MCP) server providing AI assistants with access to IMAS (Integrated Modelling & Analysis Suite) data structures through natural language search and optimized path indexing.
Select the setup method that matches your environment:
- HTTP (Hosted): Zero install. Connect to the public endpoint running the latest tagged MCP server from the ITER Organization.
- UV (Local): Install and run in your own Python environment for editable development.
- Docker: Run an isolated container with pre-built indexes.
- Slurm / HPC (STDIO): Launch inside a cluster allocation without opening network ports.
Choose hosted for instant access; choose a local option for customization or controlled resources.
Connect to the public ITER Organization hosted server—no local install.
1. Ctrl+Shift+P → "MCP: Add Server"
2. Select "HTTP Server"
3. Name: `imas`
4. URL: `https://imas-dd.iter.org/mcp`
Workspace .vscode/mcp.json (or inside "mcp" in user settings):
```json
{
  "servers": {
    "imas": { "type": "http", "url": "https://imas-dd.iter.org/mcp" }
  }
}
```

Pick the path for your OS:
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Linux: `~/.config/claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "imas-mcp-hosted": {
      "command": "npx",
      "args": ["mcp-remote", "https://imas-dd.iter.org/mcp"]
    }
  }
}
```
Install with uv:
```bash
# Standard installation (requires API key for embeddings)
uv tool install imas-mcp

# Install with local embedding support (includes sentence-transformers)
uv tool install "imas-mcp[transformers]"

# Add to a project env with transformers support
uv add "imas-mcp[transformers]"
```

The IMAS MCP server supports two modes for generating embeddings:
- API-based embeddings (default): uses remote embedding APIs via OpenRouter
  - Requires the `OPENAI_API_KEY` and `OPENAI_BASE_URL` environment variables
  - No local dependencies needed
  - Example model: `openai/text-embedding-3-small`
- Local embeddings: uses the sentence-transformers library
  - Install with the `[transformers]` extra: `pip install imas-mcp[transformers]`
  - Runs models locally without API calls
  - Example model: `all-MiniLM-L6-v2` (default)
Configuration:
```bash
# API-based (requires API key)
export OPENAI_API_KEY="your-api-key"
export OPENAI_BASE_URL="https://openrouter.ai/api/v1"
export IMAS_MCP_EMBEDDING_MODEL="openai/text-embedding-3-small"

# Local transformers (requires the [transformers] extra)
export IMAS_MCP_EMBEDDING_MODEL="all-MiniLM-L6-v2"
```

Error Handling:
If you attempt to use local embeddings without the `[transformers]` extra installed, you'll see:

```text
ImportError: sentence-transformers is required for local embedding models but is not installed.
```

To fix this, either:

1. Install with transformers support: `pip install imas-mcp[transformers]`
2. Set an API key to use remote embeddings:
   - Set the `OPENAI_API_KEY` environment variable
   - Set the `OPENAI_BASE_URL` environment variable (e.g., `https://openrouter.ai/api/v1`)
   - Set `IMAS_MCP_EMBEDDING_MODEL` to an API model (e.g., `openai/text-embedding-3-small`)
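The selection and fallback behavior described above can be sketched in a few lines. This is an illustrative sketch only; the function name and the exact checks are assumptions, not the server's actual code:

```python
import os

def select_embedding_backend(model_name: str) -> str:
    """Pick an embedding backend following the precedence described above.

    Provider-prefixed models (e.g. "openai/...") need API credentials; bare
    model names (e.g. "all-MiniLM-L6-v2") are served locally via
    sentence-transformers.
    """
    if "/" in model_name:  # provider-prefixed model -> remote API
        if not (os.getenv("OPENAI_API_KEY") and os.getenv("OPENAI_BASE_URL")):
            raise RuntimeError(
                "API model requested but OPENAI_API_KEY/OPENAI_BASE_URL are not set"
            )
        return "api"
    # Local model -> sentence-transformers must be importable.
    try:
        import sentence_transformers  # noqa: F401
    except ImportError as exc:
        raise ImportError(
            "sentence-transformers is required for local embedding models "
            "but is not installed."
        ) from exc
    return "local"
```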
VS Code:
```json
{
  "servers": {
    "imas-mcp-uv": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "--active", "imas-mcp", "--no-rich"]
    }
  }
}
```

Claude Desktop:
```json
{
  "mcpServers": {
    "imas-mcp-uv": {
      "command": "uv",
      "args": ["run", "--active", "imas-mcp", "--no-rich"]
    }
  }
}
```

Run locally in a container (pre-built indexes included):
```bash
docker run -d \
  --name imas-mcp \
  -p 8000:8000 \
  ghcr.io/iterorganization/imas-mcp:latest-streamable-http

# Optional: verify
docker ps --filter name=imas-mcp --format "table {{.Names}}\t{{.Status}}"
```

VS Code (.vscode/mcp.json):
```json
{
  "servers": {
    "imas-mcp-docker": { "type": "http", "url": "http://localhost:8000/mcp" }
  }
}
```

Claude Desktop:
```json
{
  "mcpServers": {
    "imas-mcp-docker": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:8000/mcp"]
    }
  }
}
```

Helper script: `scripts/imas_mcp_slurm_stdio.sh`
VS Code (.vscode/mcp.json, JSONC ok):
Launch behavior:
- If `SLURM_JOB_ID` is present → start inside the current allocation.
- Else request a node with `srun --pty`, then start the server (unbuffered stdio).
Resource tuning (export before client starts):
| Variable | Purpose | Default |
|---|---|---|
| `IMAS_MCP_SLURM_TIME` | Walltime | `08:00:00` |
| `IMAS_MCP_SLURM_CPUS` | CPUs per task | `1` |
| `IMAS_MCP_SLURM_MEM` | Memory (e.g. `4G`) | Slurm default |
| `IMAS_MCP_SLURM_PARTITION` | Partition | Cluster default |
| `IMAS_MCP_SLURM_ACCOUNT` | Account/project | User default |
| `IMAS_MCP_SLURM_EXTRA` | Extra raw `srun` flags | (empty) |
| `IMAS_MCP_USE_ENTRYPOINT` | Use `imas-mcp` entrypoint vs `python -m` | `0` |
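The launch behavior and tuning variables above combine into logic along these lines. This is a simplified, hypothetical sketch of what `scripts/imas_mcp_slurm_stdio.sh` plausibly does, not the actual script, and only a subset of the variables is shown:

```bash
# Hypothetical sketch of the Slurm STDIO launch logic (not the actual script).
launch_imas_mcp() {
  local cmd="python -u -m imas_mcp --transport stdio"
  if [ "${IMAS_MCP_USE_ENTRYPOINT:-0}" = "1" ]; then
    cmd="imas-mcp --transport stdio"
  fi

  if [ -n "${SLURM_JOB_ID:-}" ]; then
    # Already inside an allocation: start the server directly.
    $cmd
  else
    # Otherwise request a node, then run the server over the srun pseudo-TTY.
    srun --pty \
      --time "${IMAS_MCP_SLURM_TIME:-08:00:00}" \
      --cpus-per-task "${IMAS_MCP_SLURM_CPUS:-1}" \
      ${IMAS_MCP_SLURM_EXTRA:-} \
      $cmd
  fi
}
```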
Example:
```bash
export IMAS_MCP_SLURM_TIME=02:00:00
export IMAS_MCP_SLURM_CPUS=4
export IMAS_MCP_SLURM_MEM=8G
export IMAS_MCP_SLURM_PARTITION=compute
```

Direct CLI:

```bash
scripts/imas_mcp_slurm_stdio.sh --ids-filter "core_profiles equilibrium"
```

Why STDIO? It avoids opening network ports; all traffic rides the existing srun pseudo-TTY.
Once you have the IMAS MCP server configured, you can interact with it using natural language queries. Use the @imas prefix to direct queries to the IMAS server:
- Find data paths related to plasma temperature
- Search for electron density measurements
- What data is available for magnetic field analysis?
- Show me core plasma profiles
- Explain what equilibrium reconstruction means in plasma physics
- What is the relationship between pressure and magnetic fields?
- How do transport coefficients relate to plasma confinement?
- Describe the physics behind current drive mechanisms
- Analyze the structure of the core_profiles IDS
- What are the relationships between equilibrium and core_profiles?
- Show me identifier schemas for transport data
- Export bulk data for equilibrium, core_profiles, and transport IDS
- Find all paths containing temperature measurements across different IDS
- What physics domains are covered in the IMAS data dictionary?
- Show me measurement dependencies for fusion power calculations
- Explore cross-domain relationships between heating and confinement
- How do I access electron temperature profiles from IMAS data?
- What's the recommended workflow for equilibrium analysis?
- Show me the branching logic for diagnostic identifier schemas
- Export physics domain data for comprehensive transport analysis
The IMAS MCP server provides 8 specialized tools for different types of queries:
- Search: Natural language and structured search across IMAS data paths
- Explain: Physics concepts with IMAS context and domain expertise
- Overview: General information about IMAS structure and available data
- Analyze: Detailed structural analysis of specific IDS
- Explore: Relationship discovery between data paths and physics domains
- Identifiers: Exploration of enumerated options and branching logic
- Bulk Export: Comprehensive export of multiple IDS with relationships
- Domain Export: Physics domain-specific data with measurement dependencies
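All of these tools are invoked through the standard MCP JSON-RPC envelope. The sketch below shows the request shapes a client sends; the tool name and arguments are illustrative placeholders, not this server's exact schema:

```python
import json

def mcp_request(method: str, params: dict, req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 message of the kind MCP clients send."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

# Discover the available tools (Search, Explain, Overview, ...).
list_tools = mcp_request("tools/list", {})

# Invoke a tool by name; "search" and its arguments are illustrative.
call_search = mcp_request(
    "tools/call",
    {"name": "search", "arguments": {"query": "electron temperature profiles"}},
)
```

In practice an MCP client library handles session setup and message framing; the payload shapes above are what travel over HTTP or stdio.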
The server includes integrated search for documentation libraries with IMAS-Python as the default indexed library. This feature enables AI assistants to search across documentation sources using natural language queries.
- `search_docs`: search any indexed documentation library
  - Parameters: `query` (required), `library` (optional), `limit` (optional, 1-20), `version` (optional)
  - Supports multiple documentation libraries
  - Returns comprehensive version and library information
- `search_imas_python_docs`: search specifically in IMAS-Python documentation
  - Parameters: `query` (required), `limit` (optional), `version` (optional)
  - Automatically uses the IMAS-Python library
  - IMAS-specific search optimizations
- `list_docs`: list all available documentation libraries or get versions for a specific library
  - Parameters: `library` (optional)
  - When no library is specified: returns a list of all available libraries
  - When a library is specified: returns versions for that library
  - Shows all indexed versions and the latest
- `add-docs`: add new documentation libraries via the command line
  - Usage: `add-docs LIBRARY URL [OPTIONS]`
  - Requires an OpenRouter API key and embedding model configuration
  - Supports custom max-pages and max-depth settings
  - Includes an `--ignore-errors` flag (enabled by default) to handle problematic pages gracefully
  - See the examples below
```bash
# Search IMAS-Python documentation
search_imas_python_docs "equilibrium calculations"
search_imas_python_docs "IDS data structures" limit=5
search_imas_python_docs "magnetic field" version="2.0.1"

# Search any documentation library
search_docs "neural networks" library="numpy"
search_docs "data visualization" library="matplotlib"

# List all available libraries
list_docs

# Get versions for a specific library
list_docs "imas-python"

# Add new documentation using the CLI
add-docs udunits https://docs.unidata.ucar.edu/udunits/current/
add-docs pandas https://pandas.pydata.org/docs/ --version 2.0.1 --max-pages 500
add-docs imas-python https://imas-python.readthedocs.io/en/stable/ --no-ignore-errors
```
IMAS-Python documentation is automatically scraped during build.
```bash
docker-compose up --build
```

```bash
# 1. Start docs-mcp-server
python scripts/start_docs_server.py

# 2. In another terminal, start the IMAS-MCP server
python -m imas_mcp

# 3. Scrape IMAS-Python documentation (first time only)
python scripts/scrape_imas_docs.py
```

For documentation scraping capabilities, you'll need an OpenRouter API key:
For Local Development:
```bash
# Set up environment variables (create a .env file from env.example)
cp env.example .env
# Edit .env with your OpenRouter API key
```

For CI/CD (GitHub Actions):
- Go to your repository settings: Settings → Secrets and variables → Actions
- Add a new repository secret:
  - Name: `OPENAI_API_KEY`
  - Value: your OpenRouter API key
📖 Detailed Setup Guide: See .github/SECRETS_SETUP.md for complete instructions on configuring GitHub repository secrets and troubleshooting.
Build Behavior:
- With OPENAI_API_KEY: Full documentation scraping during build
- Without OPENAI_API_KEY: Documentation scraping is skipped, build continues
- The container works normally regardless of scraping status
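That conditional behavior amounts to a guard of this shape in the build. This is an illustrative sketch only; the actual Dockerfile step may differ, and the scraper invocation is commented out so the sketch stays self-contained:

```bash
# Sketch of the optional scraping step (illustrative, not the actual Dockerfile).
run_docs_scrape() {
  if [ -n "${OPENAI_API_KEY:-}" ]; then
    echo "OPENAI_API_KEY set: scraping documentation"
    # python scripts/scrape_imas_docs.py
  else
    echo "OPENAI_API_KEY not set: skipping documentation scraping"
  fi
  return 0  # the build continues either way
}
```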
Local Docker Build:
```bash
# Build with an API key
docker build --build-arg OPENAI_API_KEY=your_key_here .

# Build without an API key (scraping will be skipped)
docker build .
```

Use the `add-docs` CLI command to add new documentation:
```bash
# Add documentation libraries
add-docs udunits https://docs.unidata.ucar.edu/udunits/current/
add-docs numpy https://numpy.org/doc/stable/ --max-pages 500 --max-depth 3
```

Note: requires the `OPENAI_API_KEY` environment variable to be set (see API Key Configuration above).
If documentation search is unavailable:
- Check that docs-mcp-server is running: `curl http://localhost:6280/api/ping`
- Verify the environment: `echo $DOCS_SERVER_URL`
- Check logs for connection errors
- Follow the setup instructions in error messages
For local development and customization:
```bash
# Clone the repository
git clone https://github.com/iterorganization/imas-mcp.git
cd imas-mcp

# Install development dependencies (the search index build takes ~8 minutes the first time)
uv sync --all-extras
```

This project requires additional dependencies during the build process that are not part of the runtime dependencies:
- `imas-data-dictionary`: Git development package, required only during wheel building to parse the latest DD changes
- `rich`: used for enhanced console output during build processes
For runtime: The imas-data-dictionaries PyPI package is now a core dependency and provides access to stable DD versions (e.g., 4.0.0). This eliminates the need for the git package at runtime and ensures reproducible builds.
For developers: Build-time dependencies are included in the [build-system.requires] section for wheel building. The git package is only needed when building wheels with latest DD changes.
```bash
# Regular development - uses imas-data-dictionaries (PyPI)
uv sync --all-extras

# Set the DD version for building (defaults to 4.0.0)
export IMAS_DD_VERSION=4.0.0
uv run build-schemas
```

Location in configuration:
- Build-time dependencies: listed in `[build-system.requires]` in `pyproject.toml`
- Runtime dependencies: `imas-data-dictionaries>=4.0.0` in `[project.dependencies]`
Note: The IMAS_DD_VERSION environment variable controls which DD version is used for building schemas and embeddings. Docker containers have this set to 4.0.0 by default.
```bash
# Run tests
uv run pytest

# Run linting and formatting
uv run ruff check .
uv run ruff format .

# Build schema data structures from the IMAS data dictionary
uv run build-schemas

# Build the document store and semantic search embeddings
uv run build-embeddings

# Run the server locally (default: streamable-http on port 8000)
uv run --active imas-mcp --no-rich

# Run with stdio transport for MCP clients
uv run --active imas-mcp --no-rich --transport stdio
```

The project includes two separate build scripts for creating the required data structures:
`build-schemas` - creates schema data structures from the IMAS XML data dictionary:

- Transforms XML data into an optimized JSON format
- Creates catalog and relationship files
- Use `--ids-filter "core_profiles equilibrium"` to build specific IDS
- Use `--force` to rebuild even if files exist
`build-embeddings` - creates the document store and semantic search embeddings:

- Builds an in-memory document store from JSON data
- Generates sentence transformer embeddings for semantic search
- Caches embeddings for fast loading
- Use `--model-name "all-mpnet-base-v2"` for different models
- Use `--force` to rebuild the embeddings cache
- Use `--no-normalize` to disable embedding normalization
- Use `--half-precision` to reduce memory usage
- Use `--similarity-threshold 0.1` to set similarity score thresholds
Note: The build hook creates JSON data. Build embeddings separately using build-embeddings for better control and performance.
The repository includes a .vscode/mcp.json file with pre-configured development server options. Use the imas-local-stdio configuration for local development.
Add to your config file:
```json
{
  "mcpServers": {
    "imas-local-dev": {
      "command": "uv",
      "args": ["run", "--active", "imas-mcp", "--no-rich", "--auto-build"],
      "cwd": "/path/to/imas-mcp"
    }
  }
}
```

- Installation: during package installation, the index builds automatically when the module first imports
- Build Process: The system parses the IMAS data dictionary and creates comprehensive JSON files with structured data
- Embedding Generation: Creates semantic embeddings using sentence transformers for advanced search capabilities
- Serialization: the system stores indexes in organized subdirectories:
  - JSON data: `imas_mcp/resources/schemas/` (LLM-optimized structured data)
  - Embeddings cache: pre-computed sentence transformer embeddings for semantic search
- Import: When importing the module, the pre-built index and embeddings load in ~1 second
The IMAS MCP server now includes imas-data-dictionaries as a core dependency, providing stable DD version access (default: 4.0.0). The git development package (imas-data-dictionary) is used during wheel building when parsing latest DD changes.
- Runtime: `uv add imas-mcp` - includes all transports (stdio, sse, streamable-http)
- Full installation: `uv add imas-mcp` - recommended for all users
The system uses composable accessors to access IMAS Data Dictionary version and metadata:
- Environment variable: `IMAS_DD_VERSION` (highest priority) - set to specify the DD version (e.g., "4.0.0")
- Metadata file: JSON metadata stored alongside indexes
- Index name parsing: extracts the version from the index filename
- Package default: falls back to the `imas-data-dictionaries` package (4.0.0)
This design ensures the server can:
- Build indexes using the version specified by `IMAS_DD_VERSION`
- Run with pre-built indexes using version metadata
- Access stable DD versions through the `imas-data-dictionaries` PyPI package
- Index building: requires the `imas-data-dictionary` package to parse XML and create indexes
- Runtime search: only requires pre-built indexes and metadata; no IMAS package dependency
- Version access: uses a composable accessor pattern with multiple fallback strategies
The search system is the core component that provides fast, flexible search capabilities over the IMAS Data Dictionary. It combines efficient indexing with IMAS-specific data processing and semantic search to enable different search modes:
- Semantic Search (`SearchMode.SEMANTIC`):
  - AI-powered semantic understanding using sentence transformers
  - Natural language queries with physics context awareness
  - Finds conceptually related terms even without exact keyword matches
  - Best for exploratory research and concept discovery
- Lexical Search (`SearchMode.LEXICAL`):
  - Fast text-based search with exact keyword matching
  - Boolean operators (`AND`, `OR`, `NOT`)
  - Wildcards (`*` and `?` patterns)
  - Field-specific searches (e.g., `documentation:plasma ids:core_profiles`)
  - Fastest performance for known terminology
- Hybrid Search (`SearchMode.HYBRID`):
  - Combines semantic and lexical approaches
  - Provides both exact matches and conceptual relevance
  - Balanced performance and comprehensiveness
- Auto Search (`SearchMode.AUTO`):
  - Intelligent search-mode selection based on query characteristics
  - Automatically chooses the optimal search strategy
  - Adaptive performance optimization
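As an illustration of the kind of heuristic AUTO mode implies, a sketch follows; the server's actual selection logic is not documented here, so the thresholds and markers below are assumptions:

```python
def pick_search_mode(query: str) -> str:
    """Heuristic mode selection, mirroring the AUTO behavior described above."""
    lexical_markers = ("AND", "OR", "NOT", "*", "?", ":")
    if any(marker in query for marker in lexical_markers):
        return "lexical"   # boolean/wildcard/field syntax -> exact matching
    if len(query.split()) >= 4:
        return "semantic"  # longer natural-language queries -> embeddings
    return "hybrid"        # short keyword queries benefit from both
```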
- Search Mode Selection: Choose between semantic, lexical, hybrid, or auto modes
- Performance Caching: TTL-based caching system with hit rate monitoring
- Semantic Embeddings: Pre-computed sentence transformer embeddings for fast semantic search
- Physics Context: Domain-aware search with IMAS-specific terminology
- Advanced Query Parsing: Supports complex search expressions and field filtering
- Relevance Ranking: Results sorted by match quality and physics relevance
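The TTL-based caching with hit-rate monitoring mentioned above can be sketched minimally as follows (an illustration only, not the server's implementation):

```python
import time

class TTLCache:
    """Minimal TTL cache with hit-rate monitoring."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store: dict = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            self.hits += 1
            return entry[0]
        # Missing or expired: drop any stale entry and count a miss.
        self.misses += 1
        self._store.pop(key, None)
        return None

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    @property
    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```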
We plan to implement MCP resources to provide efficient access to pre-computed IMAS data:
- Static JSON IDS Data: Pre-computed IDS catalog and structure data served as MCP resources
- Physics Measurement Data: Domain-specific measurement data and relationships
- Usage Examples: Code examples and workflow patterns for common analysis tasks
- Documentation Resources: Interactive documentation and API references
- `ids://catalog`: complete IDS catalog with metadata
- `ids://structure/{ids_name}`: detailed structure for a specific IDS
- `ids://physics-domains`: physics domain mappings and relationships
- `examples://search-patterns`: common search patterns and workflows
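The planned URIs suggest a simple resolver. A hypothetical sketch of how such resources might dispatch (the handlers and return values are placeholders, since the resources are not implemented yet):

```python
def read_resource(uri: str) -> dict:
    """Resolve a planned resource URI to placeholder data."""
    if uri == "ids://catalog":
        return {"kind": "catalog"}
    if uri.startswith("ids://structure/"):
        # Templated URI: the trailing segment names the IDS.
        return {"kind": "structure", "ids": uri.removeprefix("ids://structure/")}
    if uri == "ids://physics-domains":
        return {"kind": "domains"}
    if uri == "examples://search-patterns":
        return {"kind": "examples"}
    raise KeyError(f"unknown resource: {uri}")
```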
Specialized prompts for physics analysis and workflow automation:
- Physics Analysis Prompts: Specialized prompts for plasma physics analysis tasks
- Code Generation Prompts: Generate Python analysis code for IMAS data
- Workflow Automation Prompts: Automate complex multi-step analysis workflows
- Data Validation Prompts: Create validation approaches for IMAS measurements
- `physics-explain`: generate comprehensive physics explanations
- `measurement-workflow`: create measurement analysis workflows
- `cross-ids-analysis`: analyze relationships between multiple IDS
- `imas-python-code`: generate Python code for data analysis
Continued optimization of search and tool performance:
- ✅ Search Mode Selection: Multiple search modes (semantic, lexical, hybrid, auto)
- ✅ Search Caching: TTL-based caching with hit rate monitoring for search operations
- ✅ Semantic Embeddings: Pre-computed sentence transformer embeddings
- ✅ ASV Benchmarking: Automated performance monitoring and regression detection
- Advanced Caching Strategy: Intelligent cache management for all MCP operations (beyond search)
- Performance Monitoring: Enhanced metrics tracking and analysis across all tools
- Multi-Format Export: Optimized export formats (raw, structured, enhanced)
- Selective AI Enhancement: Conditional AI enhancement based on request context
Comprehensive testing strategy for all MCP components:
- MCP Tool Testing: Complete test coverage using FastMCP 2 testing framework
- Resource Testing: Validation of all MCP resources and data integrity
- Prompt Testing: Automated testing of prompt templates and responses
- Performance Testing: Benchmarking and regression detection for all tools
The server is available as a pre-built Docker container with the index already built:
```bash
# Pull and run the latest container
docker run -d -p 8000:8000 ghcr.io/iterorganization/imas-mcp:latest

# Or use Docker Compose
docker-compose up -d
```

See DOCKER.md for detailed container usage, deployment options, and troubleshooting.
```json
{
  "servers": {
    "imas-slurm-stdio": {
      "type": "stdio",
      "command": "scripts/imas_mcp_slurm_stdio.sh"
    }
  }
}
```