
Azure AI Foundry Security: Threat Model, RBAC, and Data Governance Controls (2026)

Azure AI Foundry introduces hubs, projects, and layered managed identities that fundamentally shift your Azure security model. This guide covers six critical threat scenarios — from cross-team data exfiltration to MI lateral movement — with correct RBAC design, data governance controls, and KQL queries for detection.


Why Azure AI Foundry Shifts Your Azure Security Model

Raw Azure OpenAI is a single resource with a single control plane. You scope a private endpoint, lock down the network, assign Cognitive Services roles, and you're mostly done. Azure AI Foundry introduces two structural layers on top of that: hubs and projects. A hub is a shared workspace with centralized compute, storage, Key Vault, and Container Registry connections. Projects inherit from the hub and share those connections unless explicitly overridden. That inheritance is where most security mistakes happen.

The managed identity situation compounds this. Every hub gets a system-assigned managed identity. Every project can get its own. When Foundry provisions a hub, it automatically grants that hub MI access to the connected storage account, Key Vault, and Azure Container Registry — by default. If you then create five projects under that hub, those projects can interact with the hub MI's permissions depending on how role assignments are scoped. You end up with a chain: project-level code → hub-scoped MI → Key Vault secrets → downstream systems. That chain is not always visible in the Azure portal, and it's almost never reflected in the threat models that teams draw before deploying Foundry.
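To make that chain visible for a given hub, enumerate every role assignment held by the hub's system-assigned MI. A minimal Azure CLI sketch with placeholder names (az ml output uses snake_case keys):

# Resolve the hub MI's principal ID, then list everything it can touch
HUB_MI=$(az ml workspace show \
  --name <hub-name> \
  --resource-group <rg> \
  --query identity.principal_id \
  --output tsv)

az role assignment list \
  --assignee $HUB_MI \
  --all \
  --query "[].{Role:roleDefinitionName, Scope:scope}" \
  --output table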

The third shift is the model catalog. Azure OpenAI gives you Microsoft-managed models. Foundry lets you pull from a curated catalog that includes open-source models from Hugging Face, Meta, and others, deployed as managed online endpoints in your subscription. Each of those endpoints runs inference workloads in your network, billed to your subscription, with data flowing through your storage. The security posture of a Llama 3 deployment in your Foundry project is your problem, not Microsoft's.

---

Azure AI Foundry Threat Model

Threat 1: Project-Level Access Leading to Cross-Team Data Exfiltration

Scenario: A developer has Azure AI Developer role on a hub. That role grants read access to all projects under the hub — including projects owned by other teams — through the hub's shared storage account. The developer queries model logs and conversation histories stored in the hub's connected storage, extracting prompts and outputs that contain another team's IP or customer PII.

Impact: Confidentiality breach across organizational boundaries. In regulated industries, this can constitute a data incident even if no external party is involved.

Mitigation:

  • Never assign Azure AI Developer at hub scope for users who should only access specific projects. Scope it to the project resource.
  • Isolate project data by using separate storage accounts per project instead of sharing the hub's default storage.
  • Audit role assignments monthly. A quick check with the Azure CLI:
az role assignment list \
  --scope /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<hub-name> \
  --query "[].{Principal:principalName, Role:roleDefinitionName, Scope:scope}" \
  --output table

---

Threat 2: Model Exfiltration via Inference Endpoint

Scenario: An attacker with Cognitive Services User access — or a stolen API key — runs thousands of structured queries against a fine-tuned model deployment. The queries are designed to probe the model's behavior on specific input distributions. Over time, the attacker reconstructs enough of the model's response characteristics to replicate the fine-tuned behavior in an external environment. This is a model-stealing attack, not a weight-extraction attack, but the commercial impact is equivalent.

Impact: Loss of competitive advantage from a fine-tuned proprietary model. Potential compliance implications if the model was trained on regulated data.

Mitigation:

  • Rate-limit inference endpoints. Foundry online endpoints support deployment-level traffic rules; set request_settings.max_concurrent_requests_per_instance (see the sketch after this list) and put Azure API Management in front of endpoints for quota enforcement.
  • Enable request logging on every deployment. Log the full request payload (with PII redaction if needed) so that high-volume or structured-probing patterns are detectable.
  • Require client certificates or managed identity authentication for endpoint access. Disable shared key access on the endpoint where possible.
  • Alert on daily inference volume exceeding a defined baseline per principal.
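The concurrency cap from the first mitigation can be set directly on the deployment. A sketch with placeholder names; the property path follows the Azure ML online deployment schema:

# Cap per-instance concurrency so one principal cannot saturate the endpoint
az ml online-deployment update \
  --name <deployment-name> \
  --endpoint-name <endpoint-name> \
  --workspace-name <project-name> \
  --resource-group <rg> \
  --set request_settings.max_concurrent_requests_per_instance=4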

---

Threat 3: Data Poisoning via Connected Data Stores

Scenario: The hub's connected storage account has a container used as the grounding data source for a RAG pipeline. A user with Storage Blob Data Contributor on the storage account — a role that many teams assign casually — uploads a modified document into the grounding container. The next time the RAG pipeline refreshes its index, the poisoned document becomes part of the knowledge base. The model begins returning answers that reflect the attacker-controlled content.

Impact: Integrity breach of model outputs. In customer-facing applications, this produces incorrect or manipulated responses at scale. Detection is difficult because the model is "working correctly" — it's just grounded in poisoned data.

Mitigation:

  • Treat the grounding data container as a critical data asset. Apply a separate, tightly scoped storage account for grounding data — do not co-locate it with general-purpose project storage.
  • Require PR-style review for any changes to grounding data. No human writes directly to the container; use a pipeline (Azure DevOps, GitHub Actions) with an approval gate.
  • Enable blob versioning and soft delete. Set the retention period to at least 30 days so poisoning can be detected and rolled back (see the sketch after this list).
  • Monitor for blob writes to the grounding container outside of the approved pipeline service principal.
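The versioning and soft-delete control maps to a single Azure CLI call (account name is a placeholder):

# Enable versioning plus 30-day soft delete on the grounding storage account
az storage account blob-service-properties update \
  --account-name <grounding-storage-account> \
  --resource-group <rg> \
  --enable-versioning true \
  --enable-delete-retention true \
  --delete-retention-days 30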

---

Threat 4: Managed Identity Lateral Movement

Scenario: A hub is provisioned with a system-assigned managed identity. At deployment time, Foundry automatically grants this MI Key Vault Secrets User on the connected Key Vault. A web application that integrates with the hub is also granted the hub MI's permissions through an impersonation pattern or a misconfigured federated credential. An attacker who compromises the web application can use the hub MI to query Key Vault secrets — including API keys, database connection strings, or certificates that have nothing to do with AI workloads.

Impact: A compromise of one application tier becomes a multi-target breach through the MI trust chain.

Mitigation:

  • Review every role assignment made to the hub's system-assigned MI immediately after provisioning. Foundry adds several assignments automatically — not all of them are necessary for your specific use case.
  • Prefer user-assigned managed identities for hub and project workloads so that MI lifecycle and permissions are explicit and auditable.
  • Use separate Key Vault instances for AI Foundry secrets vs. application secrets. Scope MI access to the Foundry-specific vault only.
  • Enable Key Vault diagnostic logs and alert on any secret access from identities other than the expected service principals (as shown below).
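Routing those Key Vault audit logs to Log Analytics is one command (resource IDs are placeholders):

# Send Key Vault AuditEvent logs to the central Log Analytics workspace
az monitor diagnostic-settings create \
  --name kv-audit \
  --resource <key-vault-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category":"AuditEvent","enabled":true}]'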

---

Threat 5: Supply Chain Risk from Model Catalog Imports

Scenario: A data scientist pulls a model from the Foundry model catalog — specifically a community-contributed checkpoint hosted on Hugging Face and surfaced through the Foundry registry. The model contains a serialized pickle payload that executes on load. Because Foundry provisions model deployments in your subscription's managed compute, the execution happens in your environment, under your hub's managed identity.

Impact: Remote code execution in your subscription. The attacker gains the permissions of the compute identity, which includes access to the hub's connected storage and potentially Key Vault.

Mitigation:

  • Create an Azure Policy that restricts model catalog imports to Microsoft-curated models only. You can do this with a deny policy on Microsoft.MachineLearningServices/workspaces/models/versions/write that requires the ModelCatalogName tag to match an approved list.
  • For any open-source model that must be used: pull the weights to an internal artifact store (Azure Container Registry or Blob), scan with a model security scanner such as ProtectAI's ModelScan (see the sketch after this list), and deploy from the internal source.
  • Never allow data scientists to deploy directly from the public catalog to production hubs. Use a hub separation model: experimental hub (lower trust) → approved hub (production).
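For the scanning step, ModelScan ships as a CLI; a minimal sketch with an illustrative checkpoint path:

# Scan serialized weights for embedded code before internal registration
pip install modelscan
modelscan -p ./downloaded-checkpoint/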

---

Threat 6: Audit Log Blind Spots

What Foundry does NOT log by default:

| Event | Logged? | Where to find it |
| --- | --- | --- |
| Model deployment creation | Yes | Azure Activity Log |
| Role assignment changes | Yes | Azure Activity Log |
| Inference requests (content) | No | Must enable endpoint request logging explicitly |
| Content filter policy changes | Yes | Azure Activity Log |
| Content filter bypass events | Partial | AI Foundry portal only; not in Log Analytics by default |
| Grounding data reads during RAG | No | Storage diagnostic logs only (separate enablement) |
| Model catalog import from external source | Yes | Azure Activity Log |
| Token usage per user/request | No | Must use APIM + logging layer |

The practical gap is inference content. By default, what users send to your models and what the models return is not retained anywhere auditable. For regulated industries that need to demonstrate what data the AI system processed, this is a compliance gap from day one.

Mitigation: Enable diagnostic settings on every Foundry hub and project to send logs to a Log Analytics workspace. Then additionally enable payload logging on every online deployment. Azure ML's data collector can capture request and response bodies (property paths follow the online deployment schema and may shift between CLI versions):

az ml online-deployment update \
  --name <deployment-name> \
  --endpoint-name <endpoint-name> \
  --workspace-name <hub-name> \
  --resource-group <rg> \
  --set data_collector.collections.request.enabled=true \
  --set data_collector.collections.response.enabled=true
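The diagnostic-settings half of that mitigation, as a sketch with placeholder resource IDs (verify available categories with az monitor diagnostic-settings categories list):

# Send hub/project workspace logs to the central Log Analytics workspace
az monitor diagnostic-settings create \
  --name foundry-audit \
  --resource <hub-or-project-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]'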

---

RBAC Design for Azure AI Foundry

Hub vs. Project Role Separation

The most common RBAC mistake in Foundry deployments is treating the hub as the unit of access control. It's not: the hub is an administrative boundary, and the project is the operational boundary. Roles assigned at hub scope propagate to all projects; roles assigned at project scope are isolated to that project.

The rule: No developer or data scientist gets roles at hub scope. Admins only.

Built-In Roles Reference

| Role | Scope | What it allows | Assign at |
| --- | --- | --- | --- |
| Owner | Hub | Full control including RBAC | Hub — hub admins only |
| Contributor | Hub | All operations, no RBAC | Hub — platform team only |
| Azure AI Administrator | Hub | Manage workspaces, compute, connections | Hub — AI platform team |
| Azure AI Developer | Project | Create/manage experiments, models, endpoints | Project — ML engineers |
| Azure AI Inference Deployment Operator | Project | Deploy and manage inference endpoints | Project — MLOps |
| Cognitive Services User | Project | Call inference APIs | Project — application service principals |
| Reader | Hub or Project | Read-only view | Project — auditors, stakeholders |

Do not use Contributor at the project level for application identities. Cognitive Services User is sufficient for inference workloads and gives no write permissions.

Bicep Role Assignment Example

param hubName string
param projectName string
param mlEngineerPrincipalId string
param appServicePrincipalId string

resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01' existing = {
  name: hubName
}

resource project 'Microsoft.MachineLearningServices/workspaces@2024-04-01' existing = {
  name: projectName
}

// Azure AI Developer at project scope only — NOT hub scope
var azureAiDeveloperRoleId = '64702f94-c441-49e6-a78b-ef80e0188fee'
resource mlEngineerProjectRole 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(project.id, mlEngineerPrincipalId, azureAiDeveloperRoleId)
  scope: project
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', azureAiDeveloperRoleId)
    principalId: mlEngineerPrincipalId
    principalType: 'User'
  }
}

// Cognitive Services User for the app service principal at project scope
var cogServicesUserRoleId = 'a97b65f3-24c7-4388-baec-2e87135dc908' // Cognitive Services User (not the OpenAI User role)
resource appServiceProjectRole 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(project.id, appServicePrincipalId, cogServicesUserRoleId)
  scope: project
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', cogServicesUserRoleId)
    principalId: appServicePrincipalId
    principalType: 'ServicePrincipal'
  }
}

One thing Bicep won't enforce for you: preventing Contributor at hub scope for non-admins. Enforce that with Azure Policy:

az policy assignment create \
  --name 'deny-hub-contributor-non-admin' \
  --policy '/providers/Microsoft.Authorization/policyDefinitions/<policy-def-id>' \
  --scope '/subscriptions/<sub-id>/resourceGroups/<rg>'

Use a custom policy definition targeting Microsoft.Authorization/roleAssignments/write with conditions on roleDefinitionId matching Contributor and scope matching the hub resource type.
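A sketch of that custom definition, assuming assignment at the hub's resource group (the GUID is the well-known built-in Contributor role ID; the scope condition is simplified):

# Deny new Contributor role assignments within the assigned scope
az policy definition create \
  --name 'deny-contributor-assignments' \
  --mode All \
  --rules '{
    "if": {
      "allOf": [
        {"field": "type", "equals": "Microsoft.Authorization/roleAssignments"},
        {"field": "Microsoft.Authorization/roleAssignments/roleDefinitionId", "contains": "b24988ac-6180-42a0-ab88-20f7382dd24c"}
      ]
    },
    "then": {"effect": "deny"}
  }'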

---

Data Governance Controls

Content Filtering Policies

Every model deployment in Foundry supports content filtering at the API layer. Filters operate on both inputs and outputs across four harm categories: hate, violence, sexual content, and self-harm. Each category has a threshold (low, medium, high) for both blocking and annotation.

The governance requirement is ensuring filters are enabled and the thresholds match your risk tolerance — and that they cannot be silently disabled by project members.

# Inspect which content filter (RAI) policy each deployment uses
# (Foundry model deployments surface under the AI Services account)
az cognitiveservices account deployment list \
  --name <ai-services-account-name> \
  --resource-group <rg> \
  --query "[].{Deployment:name, Filter:properties.raiPolicyName}" \
  --output table

# Assign a content filter policy when creating or updating a deployment
az cognitiveservices account deployment create \
  --name <ai-services-account-name> \
  --resource-group <rg> \
  --deployment-name <deployment-name> \
  --model-name <model-name> \
  --model-version <model-version> \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 10 \
  --rai-policy-name <content-filter-policy-name>

Prompt Shields are a separate control targeting indirect prompt injection — the threat where malicious instructions arrive via grounding documents or tool results rather than from the user directly. Enable Prompt Shields on every RAG deployment. They're configured as part of the content filter policy and surface detected attacks in the content filter annotations returned with the API response.

Data Boundary Enforcement

Azure AI Foundry respects Azure's data residency controls, but you must configure them explicitly. For EU data residency:

  • Deploy hubs in swedencentral or westeurope
  • Use locally redundant or zone-redundant storage (LRS/ZRS) for connected storage accounts; geo-redundant SKUs (GRS/RA-GRS) replicate data to a paired region that may fall outside your residency boundary
  • Use Azure Policy to deny resource creation outside approved regions for the Microsoft.MachineLearningServices resource provider

The Foundry portal will not warn you if your hub is in eastus but your storage account is in westus2. Regional separation is your responsibility.
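The region restriction maps to the built-in Allowed locations policy; a sketch scoped to the Foundry resource group (the GUID is the well-known built-in definition ID):

# Deny resource creation outside the approved EU regions
az policy assignment create \
  --name 'foundry-allowed-locations' \
  --policy '/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c' \
  --scope '/subscriptions/<sub-id>/resourceGroups/<rg>' \
  --params '{"listOfAllowedLocations":{"value":["swedencentral","westeurope"]}}'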

Connected Data Stores and Classification Posture

Every data connection you add to a Foundry project — whether an Azure Blob container, an Azure Data Lake path, or a SQL endpoint — inherits whatever classification posture you've established in that source system. If the source isn't tagged, scanned, or governed, connecting it to Foundry does not make it governed.

The practical step: before connecting any data store to a Foundry project, run a Microsoft Purview scan on it. After scanning, apply sensitivity labels to the source data. Then configure the Foundry project's data connection to enforce label-based access policies through Purview's data use governance integration.

Purview integration is configured from the Purview side rather than through a Foundry connection type: register the storage account (or other store) that Foundry connects to as a data source in your Purview account, then run a scan against it. A minimal sketch using the Purview Scanning REST API (names are illustrative; verify the current api-version against the docs):

# Register the Foundry-connected storage account as a Purview data source
curl -X PUT \
  "https://<purview-account-name>.purview.azure.com/scan/datasources/foundry-storage?api-version=2022-07-01-preview" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"kind": "AzureStorage", "properties": {"endpoint": "https://<storage-account>.blob.core.windows.net/"}}'

Once the source is registered and scanned, Purview sensitivity labels applied to blobs propagate as metadata that Foundry-connected pipelines can read and filter on. This does not automatically block access — you still need DLP policies in Purview to enforce label-based restrictions.

---

Network Security

The complete private networking architecture for Azure AI Foundry — private endpoints for OpenAI, AI Search, Storage, and Key Vault; Private DNS zone configuration; subnet segmentation — is covered in detail in the [Azure AI Foundry Private Link setup guide](/blog/azure-ai-foundry-private-link-setup). Read that first if you haven't already deployed the network layer.

Two controls that article doesn't cover in depth:

Managed network isolation modes. Every Foundry hub has a managedNetwork property with three possible isolation modes: Disabled (all outbound is open), AllowInternetOutbound (compute can reach the internet for model downloads and package installs), and AllowOnlyApprovedOutbound (all outbound traffic blocked except explicitly approved FQDNs). For production hubs handling sensitive data, the target state is AllowOnlyApprovedOutbound. This restricts both your managed compute and any pipelines running under the hub MI from making arbitrary outbound connections.

// hubName, location, and storageAccountId are assumed parameters
resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01' = {
  name: hubName
  location: location
  kind: 'Hub' // kind sits at the top level of the resource, not under properties
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    managedNetwork: {
      isolationMode: 'AllowOnlyApprovedOutbound'
      outboundRules: {
        // Example approved route: private endpoint to the hub's storage account
        storageBlob: {
          type: 'PrivateEndpoint'
          destination: {
            serviceResourceId: storageAccountId
            subresourceTarget: 'blob'
          }
        }
      }
    }
  }
}

Egress filtering for model catalog downloads. Even with AllowOnlyApprovedOutbound, model catalog deployments need outbound access to specific Microsoft endpoints to pull model weights. When you deploy a catalog model, Foundry uses managed compute to download weights from https://models.aiservices.azure.com and related CDN endpoints. If your egress is blocked at the firewall before the isolation mode takes effect (e.g., routing through Azure Firewall), add *.aiservices.azure.com and *.blob.core.windows.net to your approved outbound FQDN list for the Foundry subnet. Failing to do this results in deployment failures with misleading timeout errors rather than explicit access-denied messages.
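Those FQDN approvals can be added as outbound rules on the hub's managed network; a sketch with placeholder names (flag names follow the ml CLI extension; verify against your installed version):

# Approve model catalog FQDNs for outbound traffic from managed compute
az ml workspace outbound-rule set \
  --workspace-name <hub-name> \
  --resource-group <rg> \
  --rule allow-model-catalog \
  --type fqdn \
  --destination '*.aiservices.azure.com'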

---

Monitoring and Detection

KQL: Model Deployment Changes

AzureActivity
| where OperationNameValue =~ "Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments/write" // =~ is case-insensitive; AzureActivity casing varies
| project TimeGenerated, Caller, ResourceGroup, ResourceId, ActivityStatus, Properties
| where ActivityStatus == "Succeeded"
| order by TimeGenerated desc

Alert threshold: any model deployment outside of an approved change window, or by any caller other than the approved deployment service principal.

KQL: Role Assignment Changes on AI Foundry Resources

AzureActivity
| where OperationNameValue =~ "Microsoft.Authorization/roleAssignments/write" // case-insensitive match
| where ResourceId contains "MachineLearningServices"
| extend RoleAssignmentDetails = parse_json(Properties)
| project TimeGenerated, Caller, ResourceId,
    NewRole = RoleAssignmentDetails.roleDefinitionName,
    PrincipalId = RoleAssignmentDetails.principalId,
    ActivityStatus
| where ActivityStatus == "Succeeded"
| order by TimeGenerated desc

Alert threshold: any new role assignment at hub scope, or any assignment of Owner, Contributor, or Azure AI Administrator anywhere in the Foundry hierarchy.

KQL: Content Filter Policy Modifications

AzureActivity
| where OperationNameValue contains "contentFilters"
    or OperationNameValue contains "raiPolicies"
| where ResourceId contains "MachineLearningServices"
| project TimeGenerated, Caller, OperationNameValue, ResourceId, ActivityStatus, Properties
| where ActivityStatus == "Succeeded"
| order by TimeGenerated desc

Alert on any write or delete operation here. Content filter changes should be treated as security-relevant configuration changes, not routine ML operations.

Defender for Cloud Coverage Gaps

Defender for Cloud's AI workloads protection plan (preview as of early 2026) covers prompt injection detection, anomalous inference volume, and model access from unexpected geolocations. It does not currently cover:

  • Grounding data integrity (blob writes to RAG containers)
  • MI lateral movement through hub-to-Key Vault chains
  • Model catalog import from external sources

For those three gaps, rely on the Azure Monitor alerts and Azure Policy controls described above. Do not assume Defender for Cloud gives you complete AI workload visibility — it doesn't yet.
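For the first gap, writes to grounding containers only become visible once blob diagnostics flow into Log Analytics (the StorageBlobLogs table); a sketch with placeholder resource IDs:

# Route blob write/delete audit logs to Log Analytics
az monitor diagnostic-settings create \
  --name grounding-blob-audit \
  --resource <storage-account-resource-id>/blobServices/default \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category":"StorageWrite","enabled":true},{"category":"StorageDelete","enabled":true}]'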

---

Hardening Checklist

  • [ ] No developer roles assigned at hub scope — Azure AI Developer and below scoped to project only
  • [ ] Hub system-assigned MI permissions reviewed post-provisioning — remove unnecessary role assignments added by Foundry automatically
  • [ ] Separate storage accounts per project for grounding data — do not rely on shared hub storage
  • [ ] Grounding data containers use Azure DevOps/GitHub Actions pipeline for writes — no direct human write access
  • [ ] Blob versioning and soft delete enabled on all grounding data storage accounts (30-day retention minimum)
  • [ ] Managed network isolation mode set to AllowOnlyApprovedOutbound for production hubs
  • [ ] Private endpoints deployed for all dependent services — see [Private Link setup guide](/blog/azure-ai-foundry-private-link-setup)
  • [ ] Content filtering policies enabled on all deployments with organization-approved thresholds
  • [ ] Prompt Shields enabled on all RAG and agentic deployments
  • [ ] Request and response payload logging (data collector) enabled on all online deployments
  • [ ] Diagnostic settings configured on hub and projects — logs sent to Log Analytics workspace
  • [ ] Azure Policy denying model catalog imports from non-Microsoft sources in production hubs
  • [ ] Purview scan completed on all connected data stores before connection creation
  • [ ] KQL alerts deployed for model deployment changes, role assignment changes, and content filter policy modifications
