
Azure AI Foundry Private Link Setup: Secure Azure OpenAI, AI Search, and Storage End-to-End

Securing Azure OpenAI alone is not enough if Azure AI Search, Storage, or Key Vault still expose data over public paths. This guide shows how to build an end-to-end private Azure AI Foundry architecture using Private Link, Private DNS, and segmented subnets.

I
Microsoft Cloud Solution Architect
Azure AI Foundry · Azure OpenAI · Azure AI Search · Private Link · Private Endpoint · Azure Storage · Key Vault · Network Security

Why Securing Only Azure OpenAI Is Not Enough

Many teams do one important thing right: they add a private endpoint for Azure OpenAI and disable public network access.

That is a strong start, but it is not the full architecture.

An Azure AI app built in Azure AI Foundry usually depends on more than one service:

  • Azure OpenAI
  • Azure AI Search
  • Storage Account
  • Key Vault
  • App hosting layer such as App Service, AKS, or a VM

If Azure OpenAI is private but your search index, blob storage, or secrets path still uses public endpoints, your overall design is not actually private end to end.

That is the gap this article closes.

If you only need to secure Azure OpenAI itself, read the original [Azure OpenAI private endpoint guide](/blog/secure-azure-openai-private-endpoint-terraform). This article is the next step: securing the full Azure AI Foundry data path.

What We Are Building

The goal is a design where:

  • Application traffic stays on private IP space
  • DNS resolves service names to private endpoints
  • Public network access is disabled wherever practical
  • Sensitive dependencies such as model calls, vector search, secrets retrieval, and document access never rely on public internet routing

Think of it as the difference between securing one building entrance and securing the whole campus.

The Typical Azure AI Foundry Request Path

A real-world Azure AI app often looks like this:

  1. User sends a request to a private web app or internal API
  2. The app queries Azure AI Search for relevant content
  3. The app fetches blobs or documents from Storage
  4. The app reads credentials, connection strings, or certificates from Key Vault
  5. The app sends the final enriched prompt to Azure OpenAI
  6. The answer returns through the same private application path

If any one of those dependencies still uses a public endpoint, your "private AI architecture" is only partially private.

The Core Services You Usually Need to Privatize

For most Azure AI Foundry deployments, focus on these first:

1. Azure OpenAI

This is the most obvious one and usually the first service teams lock down.

2. Azure AI Search

If your retrieval layer still uses a public endpoint, sensitive document lookups and vector queries may leave your private path assumptions behind.

3. Azure Storage

This often holds:

  • Knowledge base documents
  • Prompt assets
  • Uploaded files
  • Generated outputs

Storage is one of the easiest services to leave accidentally exposed.

4. Azure Key Vault

If your app pulls secrets or certificates over public network paths, you are creating a weak point right in the middle of your supposedly private architecture.

5. App Hosting Layer

Whether you host on App Service, AKS, Container Apps, or VMs, the compute layer needs network access to all of the above over private routes and correct DNS.

Reference Architecture

At a high level, the clean pattern is:

  • One VNet dedicated to the AI application environment
  • Separate subnets for compute and private endpoints
  • Private endpoints for Azure OpenAI, Azure AI Search, Storage, and Key Vault
  • Private DNS zones linked to the VNet
  • Public network access disabled on the data plane where supported
  • NSGs and UDRs designed to allow only required east-west traffic

You do not need to make this overly complicated. In fact, the best private-link designs are usually the most boring ones.

Use a structure like this:

vnet-ai-foundry-prod (10.20.0.0/16)
├── snet-apps             (10.20.1.0/24)
├── snet-private-endpoints (10.20.2.0/24)
├── snet-build-agents     (10.20.3.0/24)   optional
└── snet-management       (10.20.10.0/24)  optional

Why separate the private endpoint subnet?

  • Easier NSG management
  • Easier IP planning
  • Easier troubleshooting when multiple private endpoints exist
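Before deploying, it is worth sanity-checking the address plan itself. A minimal sketch using Python's standard ipaddress module, with the example CIDRs from the layout above (substitute your own values):

```python
import ipaddress

# Hypothetical address plan matching the example layout; adjust to your environment.
VNET = ipaddress.ip_network("10.20.0.0/16")
SUBNETS = {
    "snet-apps": "10.20.1.0/24",
    "snet-private-endpoints": "10.20.2.0/24",
    "snet-build-agents": "10.20.3.0/24",
    "snet-management": "10.20.10.0/24",
}

def validate_plan(vnet, subnets):
    """Return a list of problems: subnets outside the VNet, or subnets that overlap."""
    problems = []
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in subnets.items()}
    for name, net in nets.items():
        if not net.subnet_of(vnet):
            problems.append(f"{name} ({net}) is outside {vnet}")
    names = list(nets)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if nets[a].overlaps(nets[b]):
                problems.append(f"{a} overlaps {b}")
    return problems

print(validate_plan(VNET, SUBNETS))  # an empty list means the plan is consistent
```

Catching an overlapping or out-of-range subnet here is much cheaper than discovering it after private endpoints have been allocated IPs.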

Private DNS Zones You Are Likely to Need

This is where many Azure AI Foundry rollouts fail.

If DNS is wrong, private link is "deployed" but the application still resolves to public endpoints or fails unpredictably.

Common private DNS zones include:

  • privatelink.openai.azure.com
  • privatelink.search.windows.net
  • privatelink.blob.core.windows.net
  • privatelink.vaultcore.azure.net

Depending on your storage usage, you may also need additional storage-related zones such as file, queue, or table endpoints.
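The core zones above can be captured as a simple lookup table, which also makes it easy to derive the distinct zone set a given deployment needs. A sketch using the public-cloud zone names (sovereign clouds use different suffixes):

```python
# Private DNS zones for the core Azure AI Foundry dependencies
# (public Azure cloud zone names; sovereign clouds differ).
PRIVATE_DNS_ZONES = {
    "openai": "privatelink.openai.azure.com",
    "search": "privatelink.search.windows.net",
    "blob": "privatelink.blob.core.windows.net",
    "vault": "privatelink.vaultcore.azure.net",
}

def zones_for(services):
    """Return the distinct private DNS zones a given set of dependencies needs."""
    return sorted({PRIVATE_DNS_ZONES[s] for s in services})

print(zones_for(["openai", "search", "blob", "vault"]))
```

Keeping this mapping in one place makes it harder to deploy a private endpoint while forgetting its matching zone.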

Service-by-Service Setup Logic

1. Azure OpenAI

For Azure OpenAI:

  • Create a private endpoint for the account
  • Attach the private DNS zone group
  • Disable public network access
  • Confirm that the application resolves the openai hostname to a private IP

Private endpoint setup for Azure OpenAI is already covered in depth in the earlier [Azure OpenAI private endpoint article](/blog/secure-azure-openai-private-endpoint-terraform), so use that as the detailed service-level reference.

2. Azure AI Search

This is the most common next dependency after Azure OpenAI.

Private-linking Azure AI Search matters because:

  • Retrieval queries may contain sensitive text
  • Index data may represent sensitive internal content
  • Public exposure makes your RAG layer much less private than you think

Design steps:

  1. Create a private endpoint for the search service
  2. Link the privatelink.search.windows.net DNS zone
  3. Confirm the hostname resolves to a private IP from the compute subnet
  4. Restrict public access according to the service capabilities and your application model
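One quick way to verify step 2 took effect: once a private endpoint DNS zone group is attached, the service's public FQDN should carry a CNAME to its privatelink counterpart, which the linked private zone then resolves to the endpoint's private IP. A small helper to compute the expected target (mysearch is a hypothetical service name):

```python
def privatelink_fqdn(hostname: str) -> str:
    """Given a service's public FQDN, return the privatelink FQDN its public
    CNAME should point at once the private endpoint DNS zone group is attached.
    e.g. mysearch.search.windows.net -> mysearch.privatelink.search.windows.net
    """
    label, _, suffix = hostname.partition(".")
    return f"{label}.privatelink.{suffix}"

print(privatelink_fqdn("mysearch.search.windows.net"))
# mysearch.privatelink.search.windows.net
```

If an nslookup from the compute subnet does not show this CNAME chain ending in a private IP, the zone group or VNet link is misconfigured.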

3. Azure Storage

Storage often becomes the hidden weak point in AI architectures.

A lot of teams private-link OpenAI and forget the blob path used for:

  • Source documents
  • Chunking pipelines
  • Model input files
  • Generated artifacts

Design steps:

  1. Private-link the storage account subresources you actually use
  2. Link the correct storage private DNS zones
  3. Disable public blob access and review account-level public network settings
  4. Test blob reads from the application subnet, not from your laptop
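Steps 1 and 2 can be made explicit with a small mapping from the subresources you actually use to the hostnames and DNS zones they imply. A sketch with contoso as a hypothetical account name and public-cloud zone names:

```python
# Private DNS zone per storage subresource (public Azure cloud names).
STORAGE_ZONES = {
    "blob": "privatelink.blob.core.windows.net",
    "file": "privatelink.file.core.windows.net",
    "queue": "privatelink.queue.core.windows.net",
    "table": "privatelink.table.core.windows.net",
    "dfs": "privatelink.dfs.core.windows.net",
}

def storage_endpoints(account: str, subresources):
    """For each subresource in use, return (public FQDN, private DNS zone to link)."""
    return {
        s: (f"{account}.{s}.core.windows.net", STORAGE_ZONES[s])
        for s in subresources
    }

print(storage_endpoints("contoso", ["blob", "file"]))
```

Each subresource needs its own private endpoint and zone; private-linking blob alone does nothing for a pipeline that also reads from file shares or Data Lake (dfs) paths.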

4. Azure Key Vault

Key Vault should be part of the same private architecture if your app depends on it for:

  • API credentials
  • Certificates
  • Encryption keys
  • Service connection secrets

Design steps:

  1. Create the private endpoint
  2. Link privatelink.vaultcore.azure.net
  3. Restrict public network access where practical
  4. Confirm secret retrieval works from the app subnet

App Service, AKS, or VM: Which Compute Pattern Works Best?

The compute choice changes the operational details, but not the network principles.

App Service

Good fit when:

  • You want less infrastructure management
  • The app is web-oriented
  • VNet integration is sufficient for outbound private access

Watch closely:

  • Regional VNet integration
  • DNS behavior
  • Outbound routing to private endpoints

AKS

Good fit when:

  • You need container orchestration
  • You run multiple AI services or ingestion jobs
  • You want tighter control over network segmentation

Watch closely:

  • DNS resolution from pods
  • Private cluster and egress design
  • Internal ingress patterns

VM or self-hosted compute

Good fit when:

  • You want maximum control
  • You are troubleshooting a first implementation
  • You need a simple test harness before platforming the app

The Most Common Misconfiguration Patterns

These are the mistakes I see most often.

1. Azure OpenAI is private, but AI Search is public

Teams check one box, feel done, and leave retrieval exposed.

2. Private endpoints exist, but DNS still resolves to public IPs

This is the classic "it works from my machine but not from the app" problem.

3. Key Vault is public because "it only stores secrets"

That logic breaks down fast in high-trust application paths.

4. Storage subresources were not mapped correctly

Blob may be private while another required subresource still uses a public path.

5. Validation was done from a laptop instead of the app subnet

Testing from your corporate laptop proves almost nothing about the final runtime path.

How to Validate the Architecture Properly

Do not stop at "the deployment succeeded."

Validate from the actual compute layer.

DNS checks

Run hostname lookups from the app runtime environment and confirm private IP resolution for:

  • Azure OpenAI
  • Azure AI Search
  • Storage endpoints
  • Key Vault
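A minimal check for all four, using only the Python standard library. The hostnames are hypothetical placeholders; run this from the app runtime environment, not your laptop:

```python
import ipaddress
import socket

def resolves_private(hostname: str) -> bool:
    """True if every IPv4 address the hostname resolves to is private.
    Run from the app subnet so you see the same DNS answers the app does."""
    infos = socket.getaddrinfo(hostname, 443, family=socket.AF_INET, type=socket.SOCK_STREAM)
    addrs = {info[4][0] for info in infos}
    return all(ipaddress.ip_address(a).is_private for a in addrs)

# Hypothetical resource names; substitute your own.
for host in [
    "my-openai.openai.azure.com",
    "my-search.search.windows.net",
    "mystorage.blob.core.windows.net",
    "my-vault.vaultcore.azure.net",
]:
    try:
        print(host, "private:", resolves_private(host))
    except socket.gaierror as exc:
        print(host, "DNS lookup failed:", exc)
```

Any "False" here means the name still resolves to a public IP from that subnet, and the private endpoint is effectively being bypassed.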

Connectivity checks

Test HTTPS connectivity to each private-linked dependency.
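A plain TCP connection test catches most routing and NSG problems before you debug at the application layer; a TLS handshake or authenticated API call is a stronger follow-up. A sketch against a hypothetical endpoint name:

```python
import socket

def can_reach(hostname: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection to hostname:port; True on success.
    DNS failures and connection refusals/timeouts both return False."""
    try:
        with socket.create_connection((hostname, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical endpoint; run from the app subnet.
print(can_reach("my-openai.openai.azure.com"))
```

If DNS resolves to a private IP but this check fails, look at NSG rules, UDRs, and the private endpoint connection state rather than the AI service itself.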

Application checks

Run a real end-to-end request:

  1. Fetch a secret
  2. Query the search index
  3. Read a storage object if your flow depends on one
  4. Call Azure OpenAI
  5. Return the response

If any part fails, do not just assume the AI service is the problem. In most cases, DNS or subnet routing is the real issue.

What to Log and Monitor

For a production-ready setup, monitor:

  • Private endpoint connection status
  • DNS resolution issues from compute
  • Key Vault access logs
  • Storage access logs
  • Azure AI Search query and service diagnostics
  • Azure OpenAI usage and network-related failures

Also route relevant diagnostics into Log Analytics or Sentinel if this environment matters to the business.

A Practical Rollout Sequence

If you are building this from scratch, use this order:

  1. Deploy the VNet and subnet structure
  2. Private-link Key Vault and Storage first
  3. Add Azure AI Search private link
  4. Add Azure OpenAI private link
  5. Integrate the compute layer into the VNet
  6. Test DNS and service connectivity one dependency at a time
  7. Disable public network access only after private validation succeeds

That order keeps the troubleshooting surface smaller.

When to Split VNets or Use Peering

A single VNet is usually enough for the first production deployment.

Split or peer networks when:

  • Multiple application teams share central AI services
  • You need environment separation across dev, staging, and production
  • You have centralized inspection or egress controls

Do not over-engineer this too early. A simple, well-tested private architecture beats a beautifully complicated one that nobody can debug.

Frequently Asked Questions

Is a private endpoint for Azure OpenAI enough to secure Azure AI Foundry?

No. It secures one dependency, but not the full application path. Azure AI Search, Storage, Key Vault, and compute routing also matter.

Which private DNS zones do I usually need?

Most commonly: privatelink.openai.azure.com, privatelink.search.windows.net, privatelink.blob.core.windows.net, and privatelink.vaultcore.azure.net.

What usually breaks first in these deployments?

DNS. Private endpoints are often created successfully while name resolution still points the app to public endpoints or the wrong private IP path.

Should I disable public network access immediately?

Only after validating private connectivity from the real compute environment. Disabling too early is a common self-inflicted outage.

Does this replace identity security controls?

No. Private networking reduces exposure, but you still need managed identities, least privilege RBAC, Conditional Access where relevant, and logging.

Closing Thought

Securing Azure AI workloads is not about private-linking one shiny service and declaring victory. It is about making sure the whole request path, from secrets to retrieval to model call, behaves like a private system.

That is what turns an Azure AI Foundry deployment from "partially locked down" into something you can defend with confidence.

I

Microsoft Cloud Solution Architect

Cloud Solution Architect with deep expertise in Microsoft Azure and a strong background in systems and IT infrastructure. Passionate about cloud technologies, security best practices, and helping organizations modernize their infrastructure.
