Requirements for Using Customer-Hosted Azure OpenAI LLMs
To integrate your own Azure-hosted deployments with Deepdesk, we need the following details for each deployment. Commonly used models include gpt-4o-mini and text-embedding-ada-002.
Required information per deployment
For each model deployment you want Deepdesk to use, please provide:
| Item | Description |
|---|---|
| Model / deployment name | Must match the OpenAI model name (e.g. gpt-4o-mini, text-embedding-ada-002) or use a prefix; see Deployment name and prefix below. |
| Deployment region URL | The full base URL of your Azure Cognitive Services endpoint, e.g. https://swedencentral.api.cognitive.microsoft.com/ |
| API key | A valid Azure OpenAI API key with access to the deployment. |
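The three items above map directly onto the request URL and headers used when calling an Azure OpenAI deployment. As a minimal sketch (the helper function, API version, and placeholder values are illustrative, not part of Deepdesk's actual configuration):

```python
# Sketch: how the base URL and deployment name combine into an
# Azure OpenAI chat-completions request URL. The api-version value
# shown here is an example; use the version your deployment supports.

def build_chat_url(base_url: str, deployment: str, api_version: str = "2024-02-01") -> str:
    """Compose the chat-completions URL for an Azure OpenAI deployment."""
    return (
        f"{base_url.rstrip('/')}/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = build_chat_url(
    "https://swedencentral.api.cognitive.microsoft.com/",
    "gpt-4o-mini",
)
print(url)
```

The API key is then sent alongside this URL on each request; it never appears in the URL itself.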
Authentication
You can use API key authentication (above) or, optionally, Entra ID (managed identity) authentication for your Azure OpenAI deployments. If you use Entra ID, provide the Entra ID client credentials instead of (or in addition to) the API key, and we will configure the connection accordingly.
Deployment name and prefix
Deployment names must match the OpenAI model names, e.g. gpt-4o-mini and text-embedding-ada-002. You may use a prefix to distinguish these deployments from others. The format is:
`<prefix>-<model-name>`
Examples:
- deepdesk-gpt-4o-mini
- deepdesk-text-embedding-ada-002
Tell us which prefix you use so we can configure it in Deepdesk. In Admin this is set as the deployment prefix on the LLM Config; see the LLM User Guide.
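The naming rule above can be sketched as a one-line function (the function name is illustrative, not part of Deepdesk's API):

```python
# Sketch: applying an optional prefix to an OpenAI model name,
# following the <prefix>-<model-name> format described above.

def deployment_name(model: str, prefix: str = "") -> str:
    """Return the deployment name, prefixed when a prefix is configured."""
    return f"{prefix}-{model}" if prefix else model

name = deployment_name("gpt-4o-mini", prefix="deepdesk")
```

With an empty prefix, the deployment name is simply the model name itself.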
Notes
- Quota: Ensure you have sufficient quota allocated in your Azure subscription for these models.
- Security: API keys and endpoints are stored securely and used only for request routing under your account.
See also
- LLM Overview: How LLMs and LLM Configs work
- LLM User Guide: Managing LLM Configs and customer endpoints in Admin
- LLM Gateway: Technical routing and integration