IDE Services 2025.5 Help

Manage AI Enterprise

AI Enterprise lets you use different providers of AI services across your organization — the JetBrains AI service or a custom solution such as OpenAI, Azure OpenAI, Google Vertex AI, Amazon Bedrock, Hugging Face, or an OpenAI-compatible server.

You can enable all options and then choose a preferred provider for specific user profiles.

Enable AI Enterprise in your organization

  1. In the Web UI, open the Configuration page and navigate to the License & Activation tab.

  2. Scroll down to the AI Enterprise section and click Enable:

    Enable AI Enterprise
  3. In the Enable AI Enterprise dialog, choose one of the AI providers. For specific configuration instructions, refer to the provider-specific procedures later in this article.

    If you'd like to use different AI providers for specific profiles, you can easily add and enable an additional provider at any time.

  4. Set the usage limit for AI Enterprise. Do one of the following:

    • Enable the Unlimited option:

      Unlimited usage of AI Enterprise

    • Disable the Unlimited option and specify the limit on the number of AI Enterprise users:

      Limited number of AI Enterprise users
  5. Click Apply.

  6. After enabling AI Enterprise in your organization, you need to select and enable an AI provider for relevant profiles. Until then, developers won't have access to AI features and the AI Assistant plugin.

  7. Make sure developers are connected to IDE Services Server through the Toolbox App; otherwise, they won't be able to use the provisioned AI features.

Use the JetBrains AI service

By default, the AI features in JetBrains products are powered by the JetBrains AI service. This service transparently connects you to different large language models (LLMs) and enables specific AI-powered features within JetBrains products. It is driven by OpenAI and Google as the primary third-party providers, along with several proprietary JetBrains models. JetBrains AI is deployed as a cloud solution on the JetBrains side and does not require any additional configuration from you.

Add AI provider: JetBrains AI

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the AI Providers tab.

  4. In the Add AI provider dialog, select the JetBrains AI provider.

    Configure JetBrains AI
  5. Click Save.

Use your own AI provider

AI Enterprise works with Google Vertex AI, Amazon Bedrock, and selected presets powered by OpenAI. You can also connect to on-premises LLMs using Hugging Face.

Third-party AI

OpenAI Platform

Before starting, make sure to set up your OpenAI Platform account and get an API key for authentication. For more information, refer to the OpenAI documentation.

Add AI provider: OpenAI Platform

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the AI Providers tab.

  4. Click Add provider and choose OpenAI from the menu.

    Add AI Provider
  5. In the OpenAI dialog, specify the following details:

    • Select OpenAI Platform from the Preset list.

    • Provide an endpoint for communicating with the OpenAI service. For example, https://api.openai.com/v1.

    • Provide your API key to authenticate to the OpenAI API. For more details, refer to the OpenAI documentation.

    Configure OpenAI Platform
  6. (Optional) AI Enterprise uses the GPT-3.5-Turbo, GPT-4, and GPT-4o mini models for AI-powered features within JetBrains products. However, if you have the GPT-4o model available on your account, we recommend adding it to the list by clicking Add model.

  7. Click Save.
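After saving, you can sanity-check the endpoint and key from outside IDE Services. A minimal sketch, assuming your key is exported as the OPENAI_API_KEY environment variable:

  curl https://api.openai.com/v1/models \
    -H "Authorization: Bearer $OPENAI_API_KEY"

A JSON list of models confirms that the key is valid and the endpoint is reachable from your network.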

Azure OpenAI

Before enabling Azure OpenAI as your provider, you need to complete the following steps:

  1. Create an Azure OpenAI resource.

  2. Deploy the required models: GPT-3.5-Turbo, GPT-4.

  3. Obtain the endpoint and API key: navigate to your Azure OpenAI subscription | Resource Management | Keys and Endpoints.

  4. Obtain the deployment names of your models: you can find them under your Azure OpenAI subscription | Resource Management | Model Deployments | Manage Deployments.

Once you have completed the above preparation steps, you can enable Azure OpenAI in your IDE Services.
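You can also verify that the endpoint, API key, and a deployment name work together before entering them in IDE Services. A minimal sketch — the deployment name, API version, and AZURE_OPENAI_API_KEY variable are placeholders for your own values:

  curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/<DEPLOYMENT_NAME>/chat/completions?api-version=2024-02-01" \
    -H "Content-Type: application/json" \
    -H "api-key: $AZURE_OPENAI_API_KEY" \
    -d '{"messages": [{"role": "user", "content": "ping"}]}'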

Add AI provider: Azure OpenAI

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the AI Providers tab.

  4. Click Add provider and choose OpenAI from the menu.

  5. In the OpenAI dialog, specify the following details:

    • Select Azure OpenAI from the Preset list.

    • Provide an endpoint for communicating with the Azure OpenAI service. For example, https://YOUR_RESOURCE_NAME.openai.azure.com.

    • Provide your API key to authenticate to the Azure OpenAI API.

    Configure Azure OpenAI
  6. Specify the deployment names of your models. Click the gear icon next to each model to enter its name.

  7. (Optional) AI Enterprise uses the GPT-3.5-Turbo, GPT-4, and GPT-4o mini models for AI-powered features within JetBrains products. However, if you have the GPT-4o model available on your account, we recommend adding it to the list by clicking Add model.

  8. Click Save.

Google Vertex AI

Before enabling Google Vertex AI as your provider, you need to complete the following steps:

  1. Log in to your Google Cloud account or create one.

  2. Create a new service account with the role: Vertex AI Service Agent.

  3. In your service account, navigate to the Keys tab and create a new key. The key is downloaded as a JSON file, which you later upload in IDE Services:

    Create Google Service Account Key
  4. Enable the gemini-2.5-pro and gemini-2.0-flash models.

Once you have completed the above preparation steps, you can enable Google Vertex AI in your IDE Services.
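If you prefer the gcloud CLI to the Cloud Console, the preparation might look like the following sketch; the ide-services-ai account name and <PROJECT_ID> are placeholders:

  # Create the service account
  gcloud iam service-accounts create ide-services-ai --project <PROJECT_ID>

  # Grant it the Vertex AI Service Agent role
  gcloud projects add-iam-policy-binding <PROJECT_ID> \
    --member "serviceAccount:ide-services-ai@<PROJECT_ID>.iam.gserviceaccount.com" \
    --role "roles/aiplatform.serviceAgent"

  # Create the JSON key that you will upload in IDE Services
  gcloud iam service-accounts keys create key.json \
    --iam-account ide-services-ai@<PROJECT_ID>.iam.gserviceaccount.com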

Add AI provider: Google Vertex AI

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the AI Providers tab.

  4. Click Add provider and choose Google Vertex AI from the menu.

  5. In the Google Vertex AI dialog, upload the service account JSON key you created earlier and specify the required details:

    Configure Google Vertex AI
  6. Click Save.

Amazon Bedrock

AI Enterprise integrates with Amazon Bedrock, a fully managed service that provides access to a variety of high-performing foundation models. In the current version, AI Enterprise supports the following LLMs for use in the AI Assistant: Claude 3.0 Haiku, Claude 3.5 Haiku, Claude 3.5 Sonnet V2, Claude 3.7 Sonnet, Claude 4 Sonnet, Claude 4.5 Haiku, and Claude 4.5 Sonnet.

Before adding Amazon Bedrock as an AI provider in IDE Services, you need to set up authentication and access to the supported models.

Step 1. Configure authentication on the AWS side

You can choose one of the following authentication options supported by IDE Services:

Access Keys
  1. Follow the Getting Started instructions to:

    • Create an AWS account (if you don't already have one).

    • Create an AWS Identity and Access Management role with the necessary permissions for Amazon Bedrock.

    • Request access to the foundation models (FM) that you want to use.

  2. Access AWS IAM Identity Center, find your user, and review the Permissions policies section.

    • In addition to the default permission policy AmazonBedrockReadOnly, add a new inline policy for the Bedrock service.

    • Configure the new inline policy to have the Read access level for the InvokeModel and InvokeModelWithResponseStream actions.

  3. Generate an access key for your user.

    • When creating an access key, specify Third-party service as a use case.

    • You'll need to provide the access key ID and secret when you configure Amazon Bedrock in IDE Services. Make sure to save these values.
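For reference, an inline policy granting just these two actions might look like the sketch below; depending on your security requirements, you can narrow Resource down to specific model ARNs:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "bedrock:InvokeModel",
          "bedrock:InvokeModelWithResponseStream"
        ],
        "Resource": "*"
      }
    ]
  }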

IAM Role

Here is how to set up an AWS IAM role for Amazon Bedrock:

  1. Navigate to the IAM service in your AWS Console.

  2. Create Permissions Policy.

    Go to Policies | Create policy. Choose the JSON tab, paste the permissions policy below, and click Next. Name the policy (for example, JetBrains-Bedrock-Policy) and create it.

    Permissions Policy:

    { "Version" : "2012-10-17", "Statement" : [ { "Effect" : "Allow", "Action" : [ "bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream", "bedrock:ListFoundationModels", "bedrock:GetFoundationModel" ], "Resource" : "*" } ] }
  3. Create IAM Role

    Navigate to Roles | Create role.

  4. Configure Trust Policy

    Select Custom trust policy, paste the trust policy JSON below, and click Next.

    Trust Policy:

    { "Version" : "2012-10-17", "Statement" : [ { "Effect" : "Allow", "Principal" : { "AWS" : "arn:aws:iam::205930650357:role/aws-env-iam-testing-role" }, "Action" : "sts:AssumeRole", "Condition" : { "StringEquals" : { "sts:ExternalId" : "B8983A86714F2468CF7FD95329FF9D61" } } } ] }
  5. Attach Permissions

    Search for the policy you created in Step 2, select it, and click Next.

  6. Name and Create Role

    Give your role a descriptive name (e.g., JetBrains-Bedrock-Access), optionally add tags, and click Create role.

  7. Copy Role ARN

    After creation, click on the role name to view details. Copy the Role ARN and paste it in the Role ARN field in the AI provider configuration form in IDE Services.
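If you script your AWS setup, the same steps can be performed with the AWS CLI. A sketch, assuming the permissions and trust policies are saved locally as permissions.json and trust.json and the names match the examples above:

  # Create the permissions policy
  aws iam create-policy --policy-name JetBrains-Bedrock-Policy \
    --policy-document file://permissions.json

  # Create the role with the trust policy
  aws iam create-role --role-name JetBrains-Bedrock-Access \
    --assume-role-policy-document file://trust.json

  # Attach the permissions policy to the role
  aws iam attach-role-policy --role-name JetBrains-Bedrock-Access \
    --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/JetBrains-Bedrock-Policy

  # Print the Role ARN to paste into IDE Services
  aws iam get-role --role-name JetBrains-Bedrock-Access \
    --query 'Role.Arn' --output text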

Default Credentials

Default Credentials is an authentication method that relies on the AWS SDK Default Credential Provider Chain to automatically locate and load AWS credentials from supported sources. This enables applications — including the JetBrains IDE Services integration with Amazon Bedrock — to authenticate to AWS services without manually supplying static credentials.

The provider chain searches for credentials in the following order:

  1. Web identity token (OIDC) — for example, a Kubernetes service-account token used to assume an IAM role via STS AssumeRoleWithWebIdentity. More details.

  2. Environment variables — AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_SESSION_TOKEN. More details.

  3. Container credentials — credentials provided by the container engine (ECS/EKS) for tasks or pods assigned an IAM role. More details.

  4. Instance profile / EC2 metadata service — credentials obtained via the EC2 instance metadata service for an IAM role attached to the instance. More details.

Using this provider chain allows IDE Services to transparently pick up AWS credentials from the running environment, aligning with AWS best practices for credential handling and security.
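To check which identity the chain resolves to in a given environment, you can run the following command; the returned ARN shows whether the credentials come from an assumed role (IRSA, instance profile) or from static keys:

  aws sts get-caller-identity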

Set up Default Credentials authentication

To enable the Default Credentials authentication method for Amazon Bedrock in IDE Services AI Enterprise, administrators must configure AWS credential sources. The two supported options are described below.

Option 1: IRSA (Recommended for EKS)

When IDE Services runs on Amazon Elastic Kubernetes Service (EKS), the recommended approach is to use Kubernetes ServiceAccounts combined with IAM Roles for Service Accounts (IRSA) for secure credential handling. AWS documentation

  1. Create an OIDC provider for your EKS cluster (one-time setup):

    eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve

    This step associates the cluster's OIDC issuer with AWS IAM. AWS Guide: Create an IAM OIDC provider

  2. Create an IAM role with a trust policy allowing sts:AssumeRoleWithWebIdentity for the ServiceAccount.

    AWS Guide: Assign IAM roles to ServiceAccounts

  3. Annotate the Kubernetes ServiceAccount with the IAM role ARN:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: <service-account-name>
      namespace: <namespace>
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>

    AWS Guide: ServiceAccount annotations

  4. Configure IDE Services in your Helm values:

    useS3AutoConfiguration: true
    featureFlags:
      bedrock-iam-role-auth: on
      bedrock-default-credentials: on

Option 2: Environment Variables (Non-production / Testing)

If IRSA is not available in your environment, you may supply credentials via environment variables. This method is less secure and recommended only for development or testing.

  1. Create AWS access keys in IAM with permissions to access Amazon Bedrock and S3.

    Generate AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_SESSION_TOKEN.

    AWS Guide: Environment variables for credentials

  2. Create a Kubernetes secret to hold the keys:

    kubectl create secret generic aws-credentials \
      --from-literal=AWS_ACCESS_KEY_ID=<your_access_key_id> \
      --from-literal=AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
  3. Inject the environment variables into the IDE Services deployment (via Helm or manifests) to expose the keys to the container environment; a minimal sketch follows this list.

  4. Configure IDE Services:

    useS3AutoConfiguration: true
    featureFlags:
      bedrock-default-credentials: on

Important: While this method works for testing, it does not provide the credential rotation, least-privilege enforcement, or audit benefits that IRSA offers. Use it with caution.
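As a minimal sketch of step 3 above, the secret can be surfaced as environment variables in the pod template; the container name is a placeholder, and the secret name matches the one created in step 2:

  # Deployment pod template fragment: expose all keys of the secret as env vars
  containers:
    - name: ide-services   # placeholder container name
      envFrom:
        - secretRef:
            name: aws-credentials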

Step 2. Configure access to models on the AWS side

Follow the AWS instructions to request access to the following supported models:

  • Claude 3.0 Haiku

  • Claude 3.5 Haiku

  • Claude 3.5 Sonnet V2

  • Claude 3.7 Sonnet

  • Claude 4 Sonnet

  • Claude 4.5 Haiku

  • Claude 4.5 Sonnet
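Once access is granted, you can confirm that the models are visible in your account and Region with the AWS CLI, for example:

  aws bedrock list-foundation-models --region <your-region> \
    --by-provider anthropic --query 'modelSummaries[].modelId'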

Step 3. Add AI provider: Amazon Bedrock

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the AI Providers tab.

  4. Click Add provider and choose Amazon Bedrock from the menu.

    The Amazon Bedrock configuration dialog will appear.

  5. In the Region field, specify the AWS region that supports Amazon Bedrock.

  6. Select Use cross-Region inference if needed. This option automatically chooses the most suitable AWS Region within your geographic area to handle user requests, which optimizes resource utilization and ensures high model availability. It is required for the Claude 3.7 Sonnet model.

  7. Choose the authentication option you have set up on the AWS side:

    Configure Amazon Bedrock
  8. Click Add Model and choose the model you want to use from the list.

  9. In the configuration dialog, specify the model name and click Save.

  10. Save your settings.

Hugging Face

AI Enterprise allows you to use on-premises models, such as Llama 3.1 Instruct 70B powered by Hugging Face, for air-gapped operations.

To deploy and serve Llama 3.1 Instruct 70B, use the instructions provided in the official Hugging Face documentation. Refer to this section to learn about model-specific requirements.
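For illustration only, a single-node deployment with Hugging Face's Text Generation Inference server might look like the sketch below. The image tag, GPU count, and sharding flags depend on your hardware, and an access token is assumed because the Llama weights are gated:

  docker run --gpus all --shm-size 1g -p 8080:80 \
    -e HF_TOKEN=<your_hf_token> \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id meta-llama/Llama-3.1-70B-Instruct \
    --num-shard 4

The resulting server URL and token are what you later enter as the Model URL and Model API token.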

Add AI provider: Hugging Face

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the AI Providers tab.

  4. Click Add provider and choose Hugging Face from the menu.

  5. In the Hugging Face dialog, specify the following details:

    • Click Add llama-3.1 model.

      Configure Hugging Face
    • Specify the Model URL and Model API token in the Configure llama-3.1 Model dialog. Click Save.

      Configure the Llama model
  6. Click Save.

OpenAI Compatible

AI Enterprise supports integration with AI routers such as OpenRouter and LM Studio, allowing you to access a wide range of models from various pre-approved providers.

  • This setup enables you to use well-known, high-performing models like Claude and GPT, as well as custom models that may not be available otherwise. For example, you can select custom models such as Grok and DeepSeek for use in AI Assistant Chat.

  • You can operate AI Enterprise in an isolated environment by connecting to an OpenAI-compatible server deployed on-premises, such as llama.cpp, vLLM, or LM Studio.

Add AI provider: OpenAI Compatible

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the AI Providers tab.

  4. Click Add provider and choose OpenAI Compatible from the menu.

  5. In the OpenAI Compatible dialog, specify the following details:

    • Specify an endpoint for communicating with your AI router service. For example, https://openrouter.ai/api/v1.

    • Provide your Bearer token to authenticate with your AI router service. If your service requires a custom header for authentication, enter the header's Name and Value in the corresponding fields.

      Configure OpenAI Compatible
    • Click Add Model and choose the model you want to use from the list.

      If the model you are adding doesn't support Tools calling, Functions calling, or Multimedia messages, uncheck the corresponding options.

    • If adding a custom model:

      • In the Configure ... model dialog, choose the model name that matches your initial selection.

      • Set Max input/output tokens. Custom models require specifying input and output context lengths — the maximum number of tokens a model can handle per request. Tools like AI Assistant and Junie use these values to decide how much data to include as context. Set them according to the limits defined by the provider or model documentation. The input limit is especially important, since it determines how tools select and trim files or history for processing.

      • Please note that the performance of custom models cannot be guaranteed.

    • Click Save.

      Repeat these steps for additional models.

  6. Click Save.
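To verify the endpoint and token outside IDE Services, you can query the standard OpenAI-compatible model listing. For example, for OpenRouter, assuming the token is exported as OPENROUTER_API_KEY:

  curl https://openrouter.ai/api/v1/models \
    -H "Authorization: Bearer $OPENROUTER_API_KEY"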

Junie coding agent

JetBrains Junie is an AI-powered coding agent that works directly in supported IntelliJ-based JetBrains IDEs and is available as a plugin.

Unlike traditional code assistants, Junie can autonomously perform tasks such as generating code, running tests, fixing errors, and adapting to project-specific guidelines — all while keeping the developer in control. It understands project context, supports collaborative workflows, and aims to enhance both productivity and code quality.

Junie can be powered by large language models (LLMs) provided by the JetBrains AI service, or, as an alternative, it can use a specific set of models available through OpenAI and Amazon Bedrock.

Enable Junie

  1. If you have the JetBrains AI service enabled, no further configuration of AI Enterprise settings is required. To make Junie available to developers, proceed to enable it in selected profiles.

    If the JetBrains AI service is not enabled, select Azure OpenAI or OpenAI Platform, together with Amazon Bedrock, as your providers and configure them as follows:

  2. On your dashboard (home page), locate the AI Enterprise widget and click Settings.

  3. Enable and configure the following AI providers (choose either Azure OpenAI or OpenAI Platform, plus Amazon Bedrock):

    • Azure OpenAI

      Add the following models to the configuration:

      • GPT-4o

      • GPT-4o-Mini or GPT-4.1-Mini

      • GPT-4.1

      • GPT-5

    • OpenAI Platform

      Add the following models to the configuration:

      • GPT-4o

      • GPT-4o-Mini or GPT-4.1-Mini

      • GPT-4.1

      • GPT-5

    • Amazon Bedrock

      Add the following models to the configuration:

      • Claude 4.5 Sonnet

      • Claude 3.5 Haiku

      Select Use cross-Region inference.

  4. To make Junie available to developers, proceed to enable it in selected profiles.

AI Enterprise Settings

Enable additional AI providers

When enabling AI Enterprise for your organization, you get to choose only one AI provider. To enable an additional provider:

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the AI Providers tab.

  4. Click Add provider and choose one from the menu.

    Add AI Provider
  5. If you're adding Google Vertex AI, OpenAI, or Amazon Bedrock as a provider, refer to the provider-specific configuration instructions for further steps.

  6. Click Save.

Test connection to AI provider

  • If AI models stop responding, test the connection. The problem could be related to authentication, such as an expired token, or configuration changes on the AI provider's end. A failed connection test returns an error with a description to help you identify and fix the issue.

  • You can also check the connection when adding an AI provider to make sure your configuration is correct and that the API key or token you entered is valid.

To test the connection:

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. In the AI Providers tab, click Test connection next to the AI provider you want to check:

    Test connection to an AI provider

Set default AI providers

If you have more than one AI provider enabled for your organization, the providers you set as default will be preselected when you enable AI Enterprise in profiles. Setting defaults also lets you centrally switch providers for all profiles where the Default provider option is currently selected.

You can set multiple default providers, except when JetBrains AI is selected — in that case, it must be the only default provider.

To choose default providers:

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the AI Providers tab.

  4. Select one or more of the listed AI providers as Default, then confirm and save your selection.

    Set default AI provider

Set the Code Completion provider

The Mellum engine powers code completion in AI Enterprise. You can configure whether to use a self-hosted Mellum instance, the JetBrains-hosted Mellum service (JetBrains AI), or let the system automatically select the most suitable provider from available options.

To set the Code Completion provider:

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. Open the Code Completion tab.

  4. From the Provider menu, select one of the following options:

    • Auto (best available): Automatically selects the most suitable provider from those available to the particular user. If JetBrains AI is available, it is used.

    • JetBrains AI: Uses Mellum hosted by JetBrains.

    • JetBrains Mellum (self-hosted): Uses a Mellum engine installed on-premises.

  5. Click Save.

Set the Code Completion provider

Update the AI Enterprise usage limit

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. On the AI Enterprise Settings page, configure the usage limit for AI Enterprise:

    • Enable the Unlimited number of users option to let all users with AI Enterprise enabled on the profile level gain access to the AI features.

    • Disable the Unlimited number of users option and specify the limit on the number of AI Enterprise users. Users above this limit will have restricted access to the product features.

  4. Click Save.

Allow detailed data collection

  1. Navigate to Configuration | License & Activation.

  2. Scroll down to the AI Enterprise section and click Settings.

  3. In the General tab, use the Allow detailed data collection option to enable or disable detailed data collection in your organization. When you enable this option, users will be asked to grant permission for data sharing. Collecting AI interaction data helps improve LLM performance.

    Enable detailed data collection
  4. Click Save.

Disable AI Enterprise for your organization

  1. In the Web UI, open the Configuration page and navigate to the License & Activation tab.

  2. In the AI Enterprise section, click Disable.

  3. In the Disable AI Enterprise? dialog, click Disable.

    The Disable AI Enterprise dialog

Add more AI Enterprise users to your IDE Services

With the prepaid billing model, you can purchase more resources for your IDE Services license from your organization's JetBrains Account.

  1. Log in to your JetBrains Account with organization or team administrator permissions.

  2. In the menu on the left, click your organization's name.

    The menu item to access your company's profile from your JetBrains Account
  3. At the top of the page, click View licenses.

    The button to view all your company's licenses in your JetBrains Account
  4. On the license overview page, select the Team Tools & Services tab.

    The Team Tools & Services tab in a company's profile
  5. Locate the IDE Services license and click Add more resources. You may need to scroll down to find the license.

    Button to add more resources to the license
  6. The checkout page opens, where you can select the resources you want to add and pay for them.

24 November 2025