AI Enterprise in isolated infrastructure
IDE Services supports the following on-premises solutions for air-gapped operation of AI Enterprise.
OpenAI Compatible
AI Enterprise lets you integrate with on-premises LLM servers that expose an OpenAI-compatible API.
Follow the server's own instructions to deploy it on-premises, then connect to it from the IDE Services Web UI.
These servers give you access to well-known, high-performing models like Claude and GPT, as well as some custom models.
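Before registering a server in the IDE Services Web UI, it can be useful to confirm that its OpenAI-compatible endpoint answers correctly. The sketch below builds a chat completion request against the standard `/v1/chat/completions` route using only the Python standard library; the base URL, API key, and model name are placeholders for your own deployment, not values from this documentation.

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a chat completion request for any OpenAI-compatible server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # omit if your server needs no key
        },
        method="POST",
    )

# Placeholder address and credentials -- replace with your deployment's values.
req = build_chat_request("http://llm.internal:8000", "sk-local", "my-model", "ping")
# urllib.request.urlopen(req) would send the request once the server is reachable.
```

A non-error JSON response to this request is a good sign the server will also accept traffic from IDE Services.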
Hugging Face
Hugging Face integration lets you use the Llama 3.1 Instruct 70B model for core AI Assistant features.
Deploy Llama 3.1 Instruct 70B using the instructions in the official Hugging Face documentation, and refer to this section for model-specific requirements.
Configure the Hugging Face provider and connect to the model from the IDE Services Web UI.
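If the model is served with Hugging Face's Text Generation Inference (TGI), you can smoke-test the deployment with its native `/generate` endpoint before wiring it into IDE Services. The sketch below is an assumption-laden illustration: the host name is a placeholder, and it presumes a standard TGI setup.

```python
import json
import urllib.request

# Hypothetical TGI host -- replace with your deployment's address.
TGI_URL = "http://tgi.internal:8080"

def build_generate_request(prompt: str, max_new_tokens: int = 64) -> urllib.request.Request:
    """Build a request for TGI's native /generate endpoint."""
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens, "temperature": 0.2},
    }
    return urllib.request.Request(
        f"{TGI_URL}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("def fibonacci(n):")
# urllib.request.urlopen(req) would return the generated text once TGI is running.
```

Once a request like this succeeds, configure the Hugging Face provider in the IDE Services Web UI to point at the same host.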
JetBrains Mellum
Use JetBrains Mellum for lightweight features and code completion.
Follow the JetBrains Mellum instructions to install and configure it on-premises.
In the IDE Services Web UI, set Mellum as the code completion provider.