An AI-powered coding assistant plugin for the Eclipse IDE based on a fork of Wojciech Gradkowski's AssistAI.
- Features
- System Requirements
- Installation
- Quick Start
- Initial Setup
- Main Interface
- Navigation & Shortcuts
- Code Analysis Features
- Interactive Code Blocks
- Git Integration
- Chat Area Context Menu
- Advanced Features
- Customisation
- Troubleshooting
- Security Considerations
- FAQ
- Contributing
- Credits
- License
- Multiple AI Provider Support: Works with OpenAI, OpenRouter, Ollama, and other OpenAI-compatible APIs
- Advanced Code Analysis: Comprehensive code review, explanation, debugging, and optimization capabilities
- Interactive Code Blocks: Copy, replace, and review code changes directly from AI responses
- Git Integration: Analyse diffs and generate commit messages from staged changes
- Customisable Prompts: Full template system with context-aware variables
- Reasoning Model Support: Special support for reasoning models with thinking blocks
- Rich Text Rendering: Code syntax highlighting and LaTeX math support
- Conversation Management: Import/export conversations, full undo/redo support
- Eclipse IDE: 2022-03 or later
- Java: 17 or later
- Operating System: Windows, macOS, or Linux
- Memory: Minimum 4GB RAM recommended for large conversations
- Network: Internet connection for cloud AI providers (not required for local providers)
- Download the latest release:
  - Go to the Releases page
  - Download the latest `eclipse.plugin.aiassistant_X.X.X.jar` file from the most recent release
- Install the plugin:
  - Copy the JAR file to your Eclipse `dropins` folder (usually `eclipse/dropins/`)
  - Restart Eclipse
  - The plugin will be automatically detected and installed
- Install Eclipse PDE (Plug-in Development Environment):
  - Go to Help → Eclipse Marketplace
  - Type "Eclipse PDE (Plug-in Development Environment)" in the Find textbox to search
  - Install "Eclipse PDE (Plug-in Development Environment) latest"
- Install EGit - Git Integration for Eclipse:
  - Go to Help → Eclipse Marketplace
  - Type "EGit - Git Integration for Eclipse" in the Find textbox to search
  - Install "EGit - Git Integration for Eclipse X.X.X"
- Import the project from GitHub:
  - Go to: File → Import → Git → Projects from Git
  - Select "Clone URI" and click Next
  - Enter https://github.com/jukofyork/aiassistant in the URI field (this should automatically fill the Host field as "github.com" and the Repository path field as "/jukofyork/aiassistant")
  - Leave the Protocol as "https" and the Port blank
  - Enter your GitHub username and a Personal Access Token (NOT your GitHub password; see https://github.com/settings/tokens)
  - Click Finish
- Build and install the plugin:
  - Go to: File → Export → Plug-in Development → Deployable plug-ins and fragments
  - Select "eclipse.plugin.aiassistant (X.X.X qualifier)"
  - Set the destination to "Install into host repository:" and leave the default ".../org.eclipse.pde.core/install/" location
  - Click Finish and restart the IDE when asked
- Install the plugin following the Installation instructions
- Open the AI Assistant view: Window → Show View → Other... → AI Assistant → AI Assistant
- Click the "Settings" button and configure your AI provider
- Right-click in any code editor and try "AI Assistant → Explain"
- Start coding with AI assistance!
- Go to Window → Show View → Other...
- Expand "AI Assistant" category
- Select "AI Assistant" and click Open
- Click the "Settings" button in the AI Assistant view
- Configure your AI provider settings:
- Model Name: The specific model to use (e.g., `llama3.1:8b`, `gpt-4`, `claude-3-sonnet`)
- API URL: The base URL for your AI provider
- API Key: Your authentication key (leave blank for local providers like Ollama)
- JSON Overrides: Additional API parameters in JSON or TOML format
The plugin supports multiple AI providers with pre-configured examples:
| Provider | Example URL | Notes |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | Supports all OpenAI models, including reasoning models |
| OpenRouter | https://openrouter.ai/api/v1 | Access to Anthropic's Claude and other models |
| Ollama | http://localhost:11434/v1 | Local model hosting |
| llama.cpp | http://localhost:8080/v1 | Local model hosting |
| TabbyAPI | http://localhost:5000/v1 | Local model hosting |
Use the bookmarked settings table to quickly switch between different AI providers and models:
- Bookmark: Save current settings for easy switching
- Populate: Automatically discover available models from your API provider
- Sort: Organise bookmarks alphabetically
- Clear: Remove all bookmarked settings
The main chat area displays conversations between you and the AI assistant. Messages are color-coded:
- User messages: Darker background
- Assistant responses: Lighter background (darker for collapsible thinking blocks)
- Code blocks: Black background with syntax highlighting based on the detected language
- Notifications: Red background for errors/warnings and blue background for status updates
The user input area provides a rich text editing experience with navigation and spell-checking capabilities:
The yellow arrow buttons on the right side of the input area allow you to navigate through your message history:
This feature helps you quickly access and reuse previous prompts without retyping them, and the history is saved between sessions.
The input area includes built-in spell checking that highlights misspelled words with red wavy underlines.
To correct a misspelled word:
- Left-click on the misspelled word to position your cursor
- Right-click and select the correct spelling from the suggested alternatives
The right-click context menu also provides standard text editing operations including Undo, Redo, Cut, Copy, Paste, and Select All:
- Stop: Cancel current AI request
- Clear: Clear conversation history
- Undo: Remove last interaction
- Redo: Restore undone interaction
- Import: Load conversation from JSON file
- Export: Save conversation as JSON or Markdown
- Settings: Open preferences dialog
| Shortcut | Action |
|---|---|
| Enter | Send message and request AI response |
| Shift+Enter | Insert newline |
| Ctrl+Enter | Send message without requesting an immediate response |
| Ctrl+Z | Undo text changes |
| Ctrl+Shift+Z | Redo text changes |
- Ctrl+Scroll: Navigate to top/bottom of conversation
- Shift+Scroll: Navigate between messages
- Shift+Hold: Highlight current message with a blue border whilst scrolling
Right-click in any code editor to access AI-powered analysis tools:
- Discuss: Open-ended discussion about code
- Explain: Detailed explanation of code functionality
- Code Review: Comprehensive code analysis and suggestions
- Best Practices: Check adherence to coding standards
- Robustify: Improve error handling and edge cases
- Debug: Identify potential bugs and issues
- Optimize: Performance improvement suggestions
- Refactor: Restructure code for better maintainability
- Write Comments: Generate documentation and comments
- Code Generation: Create boilerplate code
- Code Completion: Fill in missing implementations
When Eclipse detects compiler errors or warnings, additional options appear:
- Fix Errors: Address compilation errors
- Fix Warnings: Resolve compiler warnings
- Fix Errors and Warnings: Address both simultaneously
- Add To Message: Include code/file in your message
- Add As Context: Provide background information for AI analysis
- Add Staged Diff: Include Git changes in your request
AI responses containing code include interactive buttons:
- Copy Code: Copy to clipboard
- Replace Selection: Replace selected code in editor
- Review Changes: Open side-by-side comparison
- Apply Patch: Use Eclipse patch wizard (for diff blocks)
The Review Changes feature opens Eclipse's compare editor for selective code application:
For code blocks marked as `diff`, use Eclipse's built-in patch application:
If any hunks fail to match, you'll get a warning notification that can be pasted back to the AI to help diagnose the problem:
You can either partially apply the successful hunks or ask for a complete redo depending on your choice in the patch wizard.
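For example, an AI response might return a change as a fenced `diff` block like the following (a hypothetical file and change, shown only to illustrate the format the patch wizard expects):

```diff
--- a/src/Example.java
+++ b/src/Example.java
@@ -1,3 +1,3 @@
 public int divide(int a, int b) {
-    return a / b;
+    return b == 0 ? 0 : a / b;
 }
```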
Access Git-specific features through the Team menu:
- Add Staged Diff: Analyse current project-level staged changes to help review modifications
- Git Commit Comment: Automatically generate descriptive commit messages based on staged changes
The AI can also analyse file-level Git differences using the right-click context menu:
Right-click in the chat area for additional options:
- Copy: Copy selected text to clipboard
- Replace Selection: Replace editor selection with chat text
- Review Changes: Compare chat text with editor selection
- Paste To Message: Add clipboard content as message
- Paste As Context: Add clipboard content as context
The first three options are only enabled when a section of code is selected. These trigger the same action as the buttons, but for the selected section of code only:
Special support for reasoning models with collapsible thinking blocks:
When using reasoning models (such as `o3`, `deepseek-reasoner`, `claude-sonnet-4`, and similar models), the AI Assistant automatically detects and displays the model's internal reasoning process in collapsible "Thinking..." blocks. These blocks contain the step-by-step reasoning that the model performs before providing its final answer.
Key Features:
- Collapsible Display: Thinking blocks are shown as collapsible sections that can be expanded or collapsed
- Automatic Detection: The plugin automatically identifies reasoning content via simple `<think>` tags or from `reasoning`/`reasoning_content` sections in the returned response
- Clean Conversation History: Thinking blocks are automatically removed from messages before they're sent back to the AI model, ensuring the conversation history remains clean and focused on the actual dialogue
NOTE: Some providers such as OpenAI and Google do not return their reasoning tokens.
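The history-cleaning step can be sketched with a simple regular expression. This is a hypothetical helper for illustration only; the plugin's actual implementation may differ:

```java
import java.util.regex.Pattern;

// Sketch: strip <think>...</think> blocks from an assistant message
// before it is appended to the conversation history sent back to the model.
public class ThinkingBlockFilter {
    // DOTALL lets the pattern span newlines inside the thinking block;
    // the non-greedy .*? stops at the first closing tag.
    private static final Pattern THINK_BLOCK =
            Pattern.compile("<think>.*?</think>\\s*", Pattern.DOTALL);

    public static String stripThinking(String assistantMessage) {
        return THINK_BLOCK.matcher(assistantMessage).replaceAll("");
    }
}
```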
After each AI response, a detailed usage report is displayed in the notification area:
The usage report provides comprehensive information about the API request:
Model Information:
- Model: The exact model name used for the request
- Finish: Reason the response ended (`stop`, `length`, `content_filter`, etc.)
Token Usage:
- Prompt: Input tokens consumed with percentage of model's context window used
- Cached: Cached tokens from previous requests (shown in brackets when available)
- Response: Output tokens generated with percentage of model's output limit used
- Reasoning: Additional reasoning tokens (shown in brackets for reasoning models)
Cost Analysis:
- Charge: Total cost for the request with breakdown showing prompt cost + response cost
- Uses cents (¢) for amounts under $1.00 and dollars ($) for larger amounts
- Cost data may not be available for all models/providers
Availability Notes:
- Token percentages require model context limits to be known
- Reasoning tokens only appear for reasoning-capable models
- Cached tokens depend on provider support for prompt caching
- Cost information requires pricing data in the model database
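The cents-versus-dollars display rule can be illustrated with a small helper (a sketch for clarity, not the plugin's actual formatting code):

```java
import java.util.Locale;

// Sketch of the usage-report cost display rule:
// cents (¢) for amounts under $1.00, dollars ($) otherwise.
public class CostFormatter {
    public static String format(double dollars) {
        if (dollars < 1.00) {
            return String.format(Locale.US, "%.2f¢", dollars * 100.0);
        }
        return String.format(Locale.US, "$%.2f", dollars);
    }
}
```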
Render mathematical formulas and equations:
- Import/Export conversations as JSON
- Export conversations as Markdown
Customise all prompt templates through the preferences:
Templates use the StringTemplate syntax with these context variables:
| Variable | Description |
|---|---|
| `<taskname>` | Name of the current prompt task |
| `<usertext>` | User input from the chat interface |
| `<project>` | Current project name |
| `<filename>` | Active file name |
| `<language>` | Programming language of active file |
| `<tag>` | Markdown language tag for syntax highlighting |
| `<warnings>` | Compiler warnings for active file |
| `<errors>` | Compiler errors for active file |
| `<document>` | Full text of active document |
| `<clipboard>` | Current clipboard contents |
| `<selection>` | Currently selected text (or full document if none) |
| `<lines>` | Description of selected line numbers |
| `<documentation>` | Appropriate documentation format for language |
| `<file_diff>` | Git diff for current file |
| `<project_diff>` | Git diff for entire project |
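As an illustration, a custom template could combine several of these variables (a hypothetical template, not one of the plugin's shipped prompts):

```
Review the following <language> code from <filename> in the <project> project.
The selection covers <lines>:

<selection>

Address any of these compiler warnings that apply:
<warnings>
```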
Use the special `<<switch-roles>>` tag to create multi-turn conversations:
Analyze this code:
```java
<selection>
```
<<switch-roles>>
This creates alternating user and assistant messages for complex interactions.
Configure advanced API parameters using either JSON or TOML syntax:
JSON example:

```json
"temperature": 0.7, "max_tokens": 2000, "top_p": 0.9
```

TOML example:

```toml
temperature = 0.7
max_tokens = 2000
reasoning_effort = "high"
```
Customize interface settings:
- Font Sizes: Adjust chat and notification text size
- Timeouts: Configure connection and request timeouts
- Streaming: Enable/disable real-time response streaming
- Message Types: Choose between System and Developer messages
If the chat area appears blank or grey, set these environment variables before starting Eclipse:
Linux/macOS:

```shell
export WEBKIT_DISABLE_COMPOSITING_MODE=1
export WEBKIT_DISABLE_DMABUF_RENDERER=1
./eclipse
```

Windows:

```shell
SET WEBKIT_DISABLE_COMPOSITING_MODE=1
SET WEBKIT_DISABLE_DMABUF_RENDERER=1
start eclipse.exe
```
- Connection Errors: Verify API URL and key in settings
- Model Not Found: Ensure model name matches provider's available models
- Slow Responses: Increase request timeout in preferences
- Large Conversations: Use Clear button periodically for better performance
- Memory Issues: Restart Eclipse if conversations become very large
- SSL/TLS Errors: Check firewall and proxy settings
Important: This plugin sends your code to external AI services. Consider the following:
- Sensitive Code: Be cautious when analyzing proprietary or sensitive code
- Local Providers: Use Ollama or llama.cpp for complete privacy
- API Keys: Store API keys securely and rotate them regularly
- Network: Use HTTPS endpoints and consider VPN for additional security
Q: Can I use this plugin offline?
A: Yes, with local providers like Ollama or llama.cpp.
Q: How do I switch between different AI models quickly?
A: Use the bookmarked API settings feature to save and switch between configurations.
Q: Can I customise the prompts?
A: Yes, all prompts are fully customisable through the Prompt Templates preferences page.
The plugin follows Eclipse's standard architecture:
- Main View: `MainView` and `MainPresenter` handle UI and business logic
- Browser Integration: Custom browser functions for code interaction
- Network Layer: OpenAI-compatible API client with streaming support
- Prompt System: Template-based prompt generation with context injection
- New Prompt Types: Add to the `Prompts` enum and create a template file
- Browser Functions: Extend `DisableableBrowserFunction` for new interactions
- Context Variables: Modify the `Context` class to add new template variables
- Wojciech Gradkowski for the original AssistAI project that this plugin is based on
- The Highlight.js team for code block syntax highlighting
- The MathJax team for the mathematical notation rendering
- The LiteLLM team for the model_prices_and_context_window.json file this project uses to calculate the API charges and context windows for the usage reports
The MIT License is inherited from Wojciech Gradkowski's original AssistAI project - see LICENSE for details.