By Ed Freeman, Software Engineer I
Troubleshooting "NotImplemented / Access is denied" error when integrating Azure App Service with Azure Virtual Network (VNet)

We recently ran into quite an obscure error whilst trying to integrate a VNet with our app service using the Regional VNet integration (which is currently in preview).

The error gave us very little to go on - nothing beyond "NotImplemented" and "Access is denied". What's more, we were only seeing it on roughly 50% of new deployments. What could be going wrong?

TL;DR

The regional VNet integration is a preview feature. It differs from the older VNet integration (which supports VNets in other regions and Classic VNets) in that you don't need a VNet Gateway, so long as your app service and VNet live in the same region. One of the documented limitations of this feature is that:

The feature is only available from newer App Service scale units that support PremiumV2 App Service plans.

We were trying to deploy a Standard (S1) app service plan. We had assumed that our configuration wasn't in breach of this limitation, but apparently it was: some Standard app service plans are deployed onto infrastructure which doesn't support the preview VNet integration feature - i.e. infrastructure where the app service scale units don't support PremiumV2 app service plans.

But how do you ensure that your app service plan is being deployed onto supported infrastructure?

The only way you can ensure you're on this new infrastructure is to initially deploy your app service plan as P1v2, P2v2, or P3v2. You can then scale it back down to a Standard SKU (which is what we wanted), if you wish. There's no way to say "I want a Standard SKU which supports scaling to a PremiumV2 SKU."

We opted to initially deploy as a P1v2 SKU, then use PowerShell to scale back down straight after, as sketched below. You could also use another ARM template, of course, or just do it manually in the portal.
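For reference, the scale-down step can be scripted with a single cmdlet from the Az PowerShell module. This is just a sketch - the resource group and plan names below are placeholders, and it's worth checking the parameters against the module version you have installed:

  # Scale the App Service Plan back down to Standard S1 once it has been
  # created on the newer, PremiumV2-capable infrastructure.
  # "my-resource-group" and "my-app-service-plan" are placeholder names.
  Set-AzAppServicePlan `
      -ResourceGroupName "my-resource-group" `
      -Name "my-app-service-plan" `
      -Tier "Standard" `
      -WorkerSize "Small" `
      -NumberofWorkers 1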

Full story

Introduction

In our solution, we are deploying a Virtual Network (VNet) in Azure with a number of associated resources that are configured to integrate with it:

  • API App Service
  • Databricks
  • Data storage (blob)
  • SQL DB
  • Key Vault

The App Service -> VNet integration that we are using within our ARM templates is the (currently in preview) VNet integration feature (https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet#regional-vnet-integration). As mentioned above, this preview VNet integration doesn't require a VNet Gateway to be deployed, since our App Service and VNet are located in the same region.

Symptoms of error

The ARM templates for the preview App Service -> VNet integration had been working correctly on new and existing environments. However, we started to observe that the App Service resource would fail during the app service ARM template deployment in our Production subscription environment (the deployment would hang and eventually time out), with the vague error shown at the top of the blog. We confirmed that this wasn't a transient error - subsequent deployments would also fail.

Investigation

The deployments were running fine in our Development subscription for new and existing environments. The main difference between Production and Development was that we were deploying all the Development environments to uksouth, whereas all the Production environments are currently deployed to eastus2.

After changing the location of a Development environment to eastus2, we managed to recreate the above error. So we determined that it was related to the different location/region that the resources were being deployed to.

We simplified the ARM template to just deploy the VNet and App Service resources.

During testing with the simplified deployment template, we observed that roughly 50% of the deployments succeeded, while the rest resulted in the same error message, causing the deployment to hang and time out, or fail.

This intermittency of errors was unexpected. The same ARM template should result in the same behaviour.

Hypothesis 1 - ARM API version conflicts

Our initial hypothesis was that the error was related to the combination of ARM template resource API versions we were using to deploy the resources. After testing multiple combinations and still only having a 50% success rate, this was ruled out.

Hypothesis 2 - Incorrect ARM resource configuration

We then took a deeper look into the ARM template we were using to configure the App Service -> VNet integration, and cross-checked it for any inconsistencies. There is no official documentation on how to configure the (preview) VNet integration via ARM templates, but we had based our template on the helpful example provided by Scott Holden on GitHub.
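For illustration, the VNet integration part of our template looked roughly like the sketch below. The parameter names are placeholders, and the API version and property names are as we recall them from that example, so treat this as indicative rather than authoritative:

  {
    "type": "Microsoft.Web/sites/networkConfig",
    "apiVersion": "2018-02-01",
    "name": "[concat(parameters('appServiceName'), '/virtualNetwork')]",
    "dependsOn": [
      "[resourceId('Microsoft.Web/sites', parameters('appServiceName'))]",
      "[resourceId('Microsoft.Network/virtualNetworks', parameters('vnetName'))]"
    ],
    "properties": {
      "subnetResourceId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('vnetName'), parameters('subnetName'))]",
      "swiftSupported": true
    }
  }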

After testing numerous variations of the App Service configuration, we were still seeing the intermittent errors when deploying.

Hypothesis 3 - Unreliable ARM support

So, our next hypothesis was that the ARM support for the preview feature was unreliable. We subsequently reworked the deployment process to configure the VNet integration using an Azure CLI command. We had to use the Azure CLI, since (as far as we're aware) there is currently no equivalent in the Azure PowerShell module.
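The command we used was along the lines of the following (the resource names are placeholders - run az webapp vnet-integration add --help to confirm the parameters available in your CLI version):

  # Configure the regional VNet integration for an existing app service.
  # All names below are placeholders.
  az webapp vnet-integration add \
    --resource-group my-resource-group \
    --name my-app-service \
    --vnet my-vnet \
    --subnet my-integration-subnet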

Testing of this approach in Development seemed to work initially.

New error symptoms in Production

Armed with a new approach, we redeployed two Production environments that had previously resulted in the error and timed out. Both deployments (eventually) succeeded.

However, when we checked in the Azure Portal, the first environment (env1) had integrated with the VNet successfully, but the second environment (env2) hadn't. Both environments were deployed using exactly the same build artifacts, and only differed in values for inconsequential metadata passed in as release variables (ProjectName, for example).

They were both deployed to the same region - East US 2.

Calling in support and finding a resolution

At this point, a Microsoft support ticket was raised. Through collaboration with the support team, it was confirmed that env1 was deployed to a Stamp (a cluster of servers within a datacenter) in East US 2 which supported the preview VNet feature, but env2 was deployed to a different Stamp in East US 2 which didn't.

Products and features (including previews) are typically supported at a region level across Azure, and that is the case for the VNet integration according to the original announcement (back in 2017). Of particular interest is the following limitation:

Your app must be in an Azure App Service deployment that's capable of scaling up to Premium v2.

That's what was catching us out. We were only after a Standard SKU and scripted our ARM templates as such, and we were unaware that some of our app service plans were being deployed onto old infrastructure that didn't support the preview VNet integration feature.

There seems to be no way to specify that you want a Standard SKU in an App Service Plan that supports scaling up to a PremiumV2 SKU. So the only way to ensure the App Service Plan supports the PremiumV2 tiers is to deploy an App Service Plan with a SKU of P1v2, P2v2, or P3v2. This ensures the deployment targets a Stamp that supports the feature, by nature of it supporting one of those SKUs.

If you want a Standard SKU App Service Plan with the preview VNet integration feature (as we did), you can ensure a successful deployment by deploying an App Service Plan with a SKU of P1v2, P2v2, or P3v2, and then either manually scaling it down afterwards (through the Azure Portal), or scaling it down programmatically (using ARM or PowerShell, for example).
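As a sketch of the first half of that workaround, the App Service Plan resource in the ARM template would initially specify a PremiumV2 SKU (placeholder parameter names again), before being scaled back down by one of the methods above:

  {
    "type": "Microsoft.Web/serverfarms",
    "apiVersion": "2018-02-01",
    "name": "[parameters('appServicePlanName')]",
    "location": "[resourceGroup().location]",
    "sku": {
      "name": "P1v2",
      "tier": "PremiumV2",
      "size": "P1v2",
      "capacity": 1
    }
  }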

This resolution should be treated as a temporary workaround until the feature becomes Generally Available.

Conclusion

Preview features are rolled out onto different Stamps across different datacenters within Azure regions at different schedules.

In our case, since not all app service infrastructure had been upgraded to support the preview feature, some of our app service plans were being deployed onto supported infrastructure, and some weren't. We weren't knowingly breaching any of the preview feature limitations listed in the documentation, since we had had no problems with prior deployments to uksouth via ARM templates.

This was compounded by the fact that there is no documented way (that we know of) to specify that you want to deploy a Standard app service plan which supports the PremiumV2 tiers, so we had assumed that our ARM templates were satisfying this limitation.

In conclusion: if your application depends on a preview feature, you should double-check that the specific requirements under which the feature is available are met in your deployment. Even when we thought they were - they weren't!


Ed Freeman, Software Engineer I

Ed is a Software Engineer helping to deliver projects for clients of all shapes and sizes, providing best of breed technology solutions to industry specific challenges. He focusses primarily on cloud technologies, data analytics and business intelligence, though his Mathematical background has also led to a distinct interest in Data Science, Artificial Intelligence, and other related fields.

He also curates a weekly newsletter, Power BI Weekly, where you can receive all the latest Power BI news, for free.

Ed won the Cloud Apprentice of the Year at the Computing Rising Star Awards 2019.