Use third-party and local models
By default, AI Assistant provides access to a set of cloud-based models from various AI providers, but you can also configure it to use locally hosted models or models provided by third parties. Supported providers include:
Anthropic – provides the Claude family of language models.
OpenAI – offers GPT, o-series, and other general-purpose AI models.
OpenAI-compatible endpoints – services that expose an API compatible with the OpenAI API, such as llama.cpp or LiteLLM.
Ollama – runs open-source models locally on your machine.
LM Studio – runs local language models and exposes them through an OpenAI-compatible API.
To use models from these providers, you first need to configure a connection. For cloud-based models, this involves entering an API key. For locally hosted models, you need to specify the URL where the model is running, so AI Assistant can connect to it.
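For a locally hosted or OpenAI-compatible server, the URL you enter is the base URL of the server's REST API, and Test Connection essentially verifies that this URL responds. You can run the same check yourself; a minimal sketch, assuming llama.cpp's llama-server on its usual default port 8080:

```python
# List the models an OpenAI-compatible server exposes -- roughly what
# Test Connection verifies. The URL is an assumption (llama.cpp's
# llama-server defaults to port 8080); adjust it to your setup.
import json
from urllib.request import Request, urlopen

BASE_URL = "http://localhost:8080/v1"  # assumed local endpoint

req = Request(f"{BASE_URL}/models", headers={"Accept": "application/json"})
with urlopen(req, timeout=5) as resp:
    data = json.load(resp)

# OpenAI-compatible servers return {"object": "list", "data": [...]}.
for model in data.get("data", []):
    print(model["id"])
```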
Access models from third-party AI providers
To access models from third-party providers such as OpenAI, Anthropic, or other OpenAI-compatible endpoints, AI Assistant requires an API key and, in some cases, an endpoint URL. Entering the key allows AI Assistant to authenticate with the provider and access its models.
To provide the API key:
Navigate to .

In the Third-party AI providers section, select the Provider.
Enter the API Key and click Test Connection to check whether the connection is established successfully.

If you are configuring an OpenAI-compatible provider, specify the URL of the provider's API endpoint in addition to the API Key. Also, indicate whether the model supports calling tools configured through the Model Context Protocol (MCP) by enabling or disabling the Tool calling setting (see the request sketch after these steps).

Click Apply to save changes.
Once the connection is established, models from the configured provider become available for use in AI Chat.
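Under the hood, the API key is sent as a bearer token with every request the IDE makes to the provider. A minimal sketch of such a request against the OpenAI chat completions endpoint, assuming an example model name and a key stored in the OPENAI_API_KEY environment variable; the tools field illustrates the kind of request the Tool calling setting governs:

```python
# Sketch of an authenticated request to an OpenAI-compatible chat
# endpoint. The model name is an example; the key is read from an
# environment variable you set yourself. The "tools" field is what the
# Tool calling setting concerns: it is only useful if the endpoint's
# models support OpenAI-style tool calls.
import json
import os
from urllib.request import Request, urlopen

payload = {
    "model": "gpt-4o-mini",  # example model name
    "messages": [{"role": "user", "content": "What time is it?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_time",  # hypothetical tool, for illustration
            "description": "Return the current time.",
            "parameters": {"type": "object", "properties": {}},
        },
    }],
}
req = Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
)
with urlopen(req, timeout=30) as resp:
    reply = json.load(resp)
print(reply["choices"][0])  # a plain message or a tool-call request
```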

Connect local models
Providers like Ollama and LM Studio run models on your computer. Connecting to them in AI Assistant allows you to use these models directly from your local setup.
Navigate to .
In the Third-party AI providers section, select the Provider.
Specify the URL where the server can be accessed and click Test Connection to check whether the connection is established successfully (see the reachability sketch after these steps).

Click Apply to save changes.
Once the connection is established, local models become available for use in AI Chat.
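If Test Connection fails, it can help to confirm outside the IDE that the server is actually running. A minimal sketch, assuming Ollama on its usual default port 11434 (LM Studio typically listens on port 1234 instead):

```python
# Check that a local Ollama server is reachable and list the models it
# has installed. http://localhost:11434 is Ollama's usual default
# address; change it if your server is configured differently.
import json
from urllib.request import urlopen

with urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    tags = json.load(resp)

for model in tags.get("models", []):
    print(model["name"])
```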
Locally hosted models can also be assigned to AI Assistant features.
Assign models to AI Assistant features
Local models and models accessed through an OpenAI-compatible endpoint can be assigned to AI Assistant features such as code completion, in-editor code generation, and commit message generation, among others.
To assign models to be used in AI features:
Go to .
In the Models Assignment section, specify the models that you want to use for core, lightweight, and code completion features. Also, define the model context window size if needed.

Core features – this model will be used for in-editor code generation, commit message generation, as a default model in chat, and other core features.
Instant helpers – this model will be used for lightweight features, such as chat context collection, chat title generation, and name suggestions.
Completion model – this model will be used for the inline code completion feature in the editor. It works only with Fill-in-the-Middle (FIM) models (see the sketch after these steps).
Context window – allows you to configure the model context window for local models. A larger window lets the model handle more context in a request, while a smaller one reduces memory usage and may improve performance. This helps balance context length with system resources. The default value is 64,000 tokens.
Click Apply to save changes.
As a result, AI Assistant uses the assigned models when the corresponding feature is triggered.
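To illustrate the FIM requirement mentioned above: for inline completion, the IDE sends the code before and after the caret, and the model fills in the missing middle. A minimal sketch of such a prompt; the special tokens are StarCoder-style and purely illustrative, as exact token names vary by model family:

```python
# Illustration of a Fill-in-the-Middle (FIM) prompt. The special tokens
# below are StarCoder-style and shown for illustration only; other model
# families (e.g. CodeLlama, Qwen-Coder) use different token names.
prefix = "def add(a, b):\n    "    # code before the caret
suffix = "\n\nprint(add(2, 3))"    # code after the caret

fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
print(fim_prompt)
# A FIM-capable model completes the middle, e.g. "return a + b",
# which the IDE then inserts at the caret as an inline suggestion.
```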
Activate JetBrains AI
If you are using AI Assistant without a JetBrains AI service subscription, some features may not work properly with models from third-party AI providers.
To ensure that all features are available and work as expected, you can purchase and activate a JetBrains AI service subscription. An active subscription covers the features that are otherwise limited or unavailable with third-party models.
To enable your JetBrains AI subscription:
Navigate to .
In the JetBrains AI section, click Activate JetBrains AI. You will be redirected to AI Chat.

Click Log in to JetBrains Account, enter your credentials, and wait for the login process to complete.
After you sign in with a JetBrains Account that has an active JetBrains AI subscription, you can start using AI Assistant with full functionality.
Switch to offline mode
Not available in IDE versions starting from 2025.3.1
If you want to restrict calls to remote models and use only the local ones, you can enable offline mode. In this mode, most cloud model calls are blocked, and all AI-related features rely on local models instead.
To enable offline mode:
Go to .
Select your local third-party provider.
In the Local models section, specify the models that you want to use for AI features.
Enable the Offline mode setting.

Click Apply to save changes.
Once you have finished the setup, you can toggle offline mode on and off whenever needed:
Click the JetBrains AI widget located in the toolbar in the window header.
Hover over the Offline Mode option and click Enable or Disable.
