Adventures in Dapr: Episode 2 - Azure Key Vault

In this much belated post we pick up where we left off at the beginning of the year, with the experiment to migrate the excellent Traffic Control Dapr sample from local container-based services to Azure platform services.

At the end of the previous episode we had moved the PubSub component over to using Azure Service Bus, but we were in the undesirable situation of having the service bus connection string stored in a cleartext configuration file. The goal for this episode is to store that sensitive configuration value in a more secure manner.

Given we're using Azure, our secrets store of choice is going to be Azure Key Vault - so our first job is to update the Bicep templates to provision a fresh new Key Vault.

Update the infrastructure

First we update our components.bicep module to include the Azure Key Vault resource:

resource keyvault 'Microsoft.KeyVault/vaults@2021-11-01-preview' = {
  name: keyVaultName
  location: location
  properties: {
    sku: {
      family: 'A'
      name: 'standard'
    }
    accessPolicies: [
      {
        objectId: keyVaultAccessObjectId
        permissions: {
          secrets: [
            'get'
          ]
        }
        tenantId: tenant().tenantId
      }
    ]
    tenantId: tenant().tenantId
  }
}

We can also take advantage of Azure Resource Manager to store the Service Bus connection string in the Key Vault, so we never have to handle the secret in our deployment script.

resource connection_string_secret 'Microsoft.KeyVault/vaults/secrets@2021-11-01-preview' = {
  name: serviceBusConnectionStringSecretName
  parent: keyvault
  properties: {
    contentType: 'text/plain'
    value: listKeys(servicebus_authrule.id, servicebus_authrule.apiVersion).primaryConnectionString
  }
}

We're now referencing 3 new parameters so we need to declare those:

...
param keyVaultName string
param keyVaultAccessObjectId string
param serviceBusConnectionStringSecretName string
...

Next we integrate the above addition into our main.bicep deployment template.

We need a name for our Azure Key Vault and the secret that will hold the connection string:

var keyVaultName = '${prefix}aindkv'
var serviceBusConnectionStringSecretName = 'ServiceBus-ConnectionString'

The ObjectId needed for the Key Vault access policy will need to be passed in, so we need to add a new parameter for that:

...
@description('The ObjectId of the service principal that will be granted Key Vault access')
param keyVaultAccessObjectId string
...

Add the new required parameters to the components module reference:

module components 'components.bicep' = {
  ...
  params: {
    ...
    keyVaultName: keyVaultName
    keyVaultAccessObjectId: keyVaultAccessObjectId
    serviceBusConnectionStringSecretName: serviceBusConnectionStringSecretName
    ...
  }
}

Now run the deploy.ps1 script with the same value for the -ResourcePrefix parameter you used in episode 1 and you should see the new Azure Key Vault appear - if you skipped episode 1 (shame on you!) choose a prefix that is likely to produce globally unique resource names.
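
For reference, the invocation looks something like this - the prefix value here is just an example, and the script may take other parameters besides -ResourcePrefix:

./deploy.ps1 -ResourcePrefix "aind"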

NOTE: If you want to check that the secret has been added, you will need to add yourself to the Key Vault's access policies with secret permissions.

Also note that the script has been updated since the last episode to create a dedicated service principal (for reasons that will become apparent in the next sections). There is some basic handling to reset the service principal's secret if it is not known (i.e. it was created on a previous run and the details are no longer available). The script will also output some information that you'll need later to get things running locally.

NOTE: Due to recent changes in the Azure PowerShell modules that have migrated to Microsoft Graph, this script now requires v5.1.0 or greater of the Az.Resources module, which is available in v7.x of the main Az module.
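
To give a feel for what the script is doing, the create-or-reset handling looks roughly like the sketch below. This is not the actual deploy.ps1 code - the display name is made up and error handling is omitted - but it uses the Graph-based Az.Resources cmdlets mentioned in the note above:

# Simplified sketch only - see deploy.ps1 for the real implementation.
$spName = "aind-trafficcontrol"   # hypothetical display name

$sp = Get-AzADServicePrincipal -DisplayName $spName
if (-not $sp) {
    # First run: create the app registration + service principal (this also issues an initial client secret)
    $sp = New-AzADServicePrincipal -DisplayName $spName
    $clientSecret = $sp.PasswordCredentials.SecretText
}
else {
    # The secret from an earlier run can't be retrieved, so issue a fresh one
    $clientSecret = (New-AzADSpCredential -ObjectId $sp.Id).SecretText
}

# Log the details needed to run the solution locally
Write-Host "`$env:AZURE_CLIENT_ID = `"$($sp.AppId)`""
Write-Host "`$env:AZURE_CLIENT_SECRET = `"$clientSecret`""
Write-Host "`$env:AZURE_TENANT_ID = `"$((Get-AzContext).Tenant.Id)`""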

Configure the Key Vault as a Dapr Secret Store

Now that we have a Key Vault storing our Service Bus connection string as a secret, we need to tell Dapr about it. Dapr has a dedicated Azure Key Vault secrets store component, so we start by registering that:

  1. Create a new component YAML file in dapr/components called secrets-keyvault.yaml
  2. Start by adding the sample config that uses a client secret for authentication purposes:
     apiVersion: dapr.io/v1alpha1
     kind: Component
     metadata:
       name: azurekeyvault
       namespace: default
     spec:
       type: secretstores.azure.keyvault
       version: v1
       metadata:
       - name: vaultName
         value: "[your_keyvault_name]"
       - name: azureTenantId
         value: "[your_tenant_id]"
       - name: azureClientId
         value: "[your_client_id]"
       - name: azureClientSecret
         value: "[your_client_secret]"
    
  3. Rename the component and set the namespace to match the application:
    ...
    metadata:
      name: trafficcontrol-secrets-kv
      namespace: dapr-trafficcontrol
    ...
    
  4. Since the sample uses scopes to apply access control to Dapr components, we need to ensure the applications that use the service bus can also access this secret store:
    scopes:
    - trafficcontrolservice
    - finecollectionservice
    
  5. Set the name of the Key Vault by updating the vaultName metadata key, using the output from the deploy.ps1 script. For the purposes of this exercise we'll hard-code the Key Vault name, but in the real world this would need to be configurable to support deployment to different environments:
    ...
    spec:
      type: secretstores.azure.keyvault
      version: v1
      metadata:
      - name: vaultName
        value: "<prefix>aindkv"
    ...
    

However, now we have a problem - in order to access the Service Bus connection string stored in Key Vault, we need to be able to authenticate to Azure. The Azure Key Vault secrets store implementation expects authentication using an Azure AD service principal. This means another secret, and one that we definitely do not want to store in cleartext.

  • Is this a chicken and egg problem?
  • Are we no better off now having setup the Azure Key Vault secret store?

For production scenarios the application would ideally be running on a platform that supports managed identity, in which case no credential configuration would be required. When no credentials are provided in the component YAML then Dapr will attempt to authenticate via a managed identity (it will fail if one isn't available, or if it lacks the required permissions).
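
To illustrate, a managed identity-based configuration is simply the same component with the credential metadata omitted - a minimal sketch (vault name as produced by our Bicep template):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: trafficcontrol-secrets-kv
  namespace: dapr-trafficcontrol
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "<prefix>aindkv"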

However, we want to maintain our low-friction 'dev inner-loop', so we need a solution that will work when we are running the solution locally. Adding secrets to a source-controlled config file and remembering not to accidentally commit them is far from ideal.

There are a couple of potential options, but for this post let's take the opportunity to experiment with another secrets store implementation that Dapr offers - environment variables. Rather than hard-coding the service principal details in the component YAML, we can use the secretKeyRef syntax to look up the values from environment variables - effectively nesting different secrets stores.

Configure Environment Variable Secrets Store

As before, we start with registering this additional secrets store by creating a new component YAML.

  1. Create a new component YAML file in dapr/components called secrets-envvars.yaml
  2. Add the configuration for this store, using the sample from the Dapr documentation:
    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: envvar-secret-store
      namespace: default
    spec:
      type: secretstores.local.env
      version: v1
      metadata:
    
  3. Rename the component and set the namespace to match the application:
    metadata:
      name: trafficcontrol-secrets-envvars
      namespace: dapr-trafficcontrol
    
  4. As before, set scopes so the required applications can also access this secrets store:
    ...
    spec:
      ...
    scopes:
    - trafficcontrolservice
    - finecollectionservice
    
  5. Update the secrets-keyvault.yaml file to read the service principal details from environment variables:
    ...
    spec:
      type: secretstores.azure.keyvault
      version: v1
      metadata:
      - name: azureTenantId
        secretKeyRef:
          name: AZURE_TENANT_ID
      - name: azureClientId
        secretKeyRef:
          name: AZURE_CLIENT_ID
      - name: azureClientSecret
        secretKeyRef:
          name: AZURE_CLIENT_SECRET
    ...
    
  6. Add the auth configuration section that links the Azure Key Vault component YAML to the secret store containing the secretKeyRefs:
    ...
    spec:
      type: secretstores.azure.keyvault
      ...
    auth:
      secretStore: trafficcontrol-secrets-envvars
    scopes:
    ...
    

Update PubSub Component to use Key Vault

So far we have told Dapr about our new Key Vault and provided a mechanism to authenticate to it when we are running the solution locally. Now we need to tell the PubSub component to read the Service Bus connection string from the Key Vault.

  1. Open the pubsub.yaml component YAML file
  2. Update the name attribute of the secretKeyRef property to refer to the name of the secret stored in the Key Vault (i.e. ServiceBus-ConnectionString):

NOTE: We can also use the same value for the key property, or we can just delete that property; it will work either way (see the snippet after these steps). If the value stored in Key Vault was a JSON object, rather than the simple string we're using, then the key property would let you reference a particular member inside the JSON.

...
spec:
  type: pubsub.azure.servicebus
  version: v1
  metadata:
  - name: connectionString
    secretKeyRef:
      name: ServiceBus-ConnectionString
...
  3. Update the auth section to refer to the Azure Key Vault secret store:
...
auth:
  secretStore: trafficcontrol-secrets-kv
...
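
As mentioned in the note above, supplying the optional key property explicitly also works; for a plain string secret like ours it simply repeats the secret name:

...
  - name: connectionString
    secretKeyRef:
      name: ServiceBus-ConnectionString
      key: ServiceBus-ConnectionString
...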

Testing it locally

At this point all the required configuration is in-place and we are ready to test it.

Back in the infrastructure section above, the output from deploy.ps1 included some PowerShell that will set the environment variables we referenced in the secrets-keyvault.yaml component file.

Copy/paste and execute those 3 lines into the terminal window that you are going to launch the solution from:

$env:AZURE_CLIENT_ID = "<app-id-guid>"
$env:AZURE_CLIENT_SECRET = "<password>"
$env:AZURE_TENANT_ID = "<tenant-id-guid>"

As in the first post you can use the run-all-self-hosted.ps1 script to easily launch all the services - if you've not done this before refer to the earlier post.
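
Behind the scenes each service is launched with dapr run; a single launch looks roughly like the following - the app id matches the component scopes we set earlier, but the port and paths here are illustrative assumptions rather than the sample's exact values:

dapr run `
  --app-id trafficcontrolservice `
  --app-port 6000 `
  --components-path ./dapr/components `
  -- dotnet run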

Once running you won't see that much difference, unless things don't work! (Refer to the 'Testing our changes' section of the previous post for what to look out for when running the solution.)

Troubleshooting

If the environment variables are not set correctly then Key Vault authentication issues will manifest with an error similar to this:

level=error msg="error getting secret: 
Authentication failed:\n(invalid_client)
AADSTS7000215: Invalid client secret provided. 
Ensure the secret being sent in the request is 
the client secret value, not the client secret 
ID, for a secret added to app '<app-id-guid>'

If the environment variables are not defined (as opposed to being defined with incorrect values), then the Key Vault secret store will fall back to attempting managed identity-based authentication. When running locally this will produce an error similar to the following:

level=warning msg="failed to init secret store
secretstores.azure.keyvault/v1 named 
trafficcontrol-secrets-kv: no suitable token 
provider for Azure AD; errors: 
Managed Identity Credential: no managed identity 
endpoint is available"

Review

If everything worked you should have seen the sample app running exactly as it did at the end of the previous post; however, with our efforts we have achieved the following:

  • Improved the management of the security-sensitive Service Bus connection string by storing it in a Key Vault rather than a local file, without us ever having to directly handle the secret itself
  • Used Dapr's environment variable secret store to streamline the use of Key Vault for the dev inner-loop
  • Proved that we can layer secret stores within Dapr components, whereby one secret store depends on another one

The plan for the next episode is to swap out the remaining container-based infrastructure service (MQTT) for Azure Storage Queues.
