Reliable and scalable infrastructure: Layers

This is a series of posts:

  1. Introduction
  2. Principles
  3. Layers (this post)
  4. Traffic
  5. Secrets

When designing your service’s infrastructure, you need to remember that your deployment (or scale, more on that below) unit can go down at any point in time and for any period of time. And it doesn’t matter what the underlying technology is, whether it’s a Service Fabric cluster, a Kubernetes cluster, or a WebForms application running off Azure Websites aka App Service.

Usually a deployment is to blame, whether yours or your upstream dependency’s. Behind a deployment there usually was a change. Behind the change, a mistake. And behind the mistake, a human being.

A maxim I learned in college (I’m paraphrasing here from Russian, though) says:

Any bug you find is at least the second-to-last one.

Because human engineers tend to make mistakes while making changes, there will always be one more bug out there.

What can’t you do? Change human nature. What can you do, though? Prepare yourself and your service’s infrastructure for failure.

Let’s consider two scenarios when your deployment has failed:

  • It has failed and the service is now in an unrecoverable state, so you have to delete everything in order to start from scratch. For example, subsequent deployments fail with 500s because an upstream dependency fails.
  • It has failed and the service is in an unrecoverable state, but you cannot delete everything in order to start from scratch because something blocks you. For example, a security incident has occurred and the security team asks you not to touch anything. Or the service team needs time to investigate the reasons for the failure and asks you not to change anything.

What do you do in either case? The answer lies in how you should have modeled your infrastructure to be better prepared.

Let’s divide the infrastructure into multiple layers, each with its own role, lifecycle, and security and compliance boundaries. Often each layer also corresponds to its own set of secrets (certificates, mostly) that are shared downwards but are isolated upwards.

  • Cross-cloud
  • Cloud
  • Global
  • Environment
  • Data center
  • Scale unit

Let’s describe and explain each of them. The terminology is mine and might diverge from similar but more widely accepted industry terms. I’m happy to adjust it based on feedback.

Cross-cloud. Super global, across all clouds. Everything that happens over the public Internet. The best examples would be public DNS and email. Even sovereign (national) clouds use both the public Internet and DNS, unless we’re talking about air-gapped solutions.

Cloud. Super global within a cloud and across its environments. Same as above, but different clouds are now isolated from each other. However, there is still no isolation between environments. This layer should be used relatively rarely and not be considered a permanent solution, unless it’s strictly necessary or otherwise unavoidable. But even so, you should immediately start seeking a way to escape it. An example would be a secret for an external monitoring mechanism, when all environments and endpoints are monitored by a single external service.

Global. Considering the existence of the prior two layers, it’s not universally global. But it divides the plane into two principal parts that provide the minimum necessary separation: production and pre-production. An example would be a secret for an AAD application that has Prod and PPE versions, or the root DNS zone service.example.com.

Environment. Environments are separated from one another by various physical boundaries and share nothing in common. For example, the Integration environment uses the DNS zone int.service.example.com while the Test environment uses test.service.example.com.

Data center. In other words, a region in a cloud. Represents all the resources and secrets that are necessary to serve traffic (or do other work) in a particular geographic location but that are not part of a scale unit (see below). This means these resources and secrets are created before a scale unit is created and continue to exist if a scale unit is deleted. Each environment consists of at least one (or more) such data center. They can be further grouped into pairs or subdivided into availability zones. Candidate resource types would be Key Vaults (you don’t want to recreate secrets every time), Managed Identities (for the same reason), IPs (created once, they act as static), regional DNS records (e.g. westus2.int.service.example.com), and the Traffic Manager profiles those DNS records are CNAMEs to.
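
For illustration, here is a minimal Az PowerShell sketch (with hypothetical zone, resource group, and profile names) of the regional DNS record from the example above: a CNAME in the environment zone that lives at the data center layer and points to a Traffic Manager profile, so it survives any scale unit behind it being deleted and recreated.

# A sketch with hypothetical names: the record set belongs to the data center
# layer, so it is created once and outlives the scale units behind it.
$cname = New-AzDnsRecordConfig -Cname "service-int-westus2.trafficmanager.net"

New-AzDnsRecordSet `
    -Name "westus2" `
    -ZoneName "int.service.example.com" `
    -ResourceGroupName "service-int-westus2-dc" `
    -RecordType CNAME `
    -Ttl 3600 `
    -DnsRecords $cname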

Scale unit. The smallest unit of deployment. The on-prem analogue would be a server; in the cloud it’s a VM scale set, a Service Fabric cluster, a Kubernetes cluster, etc. It groups all the resources needed to create such a cluster. These resources should be deleted and recreated all together if something goes wrong. Each data center consists of at least one (or more) such scale unit. The reasons for creating more than one would be scalability, when one cluster is not enough to sustain the load, and reliability, when one goes down and you cannot fail traffic over out of the region.

To be continued…


Reliable and scalable infrastructure: Principles

This is a series of posts:

  1. Introduction
  2. Principles (this post)
  3. Layers
  4. Traffic
  5. Secrets

First and foremost, you have to treat your service’s infrastructure as you treat your service’s code. In other words, as infrastructure as code. This may include techniques that are now common in general engineering processes, such as:

  • Gated build. Each change is built and verified. If it is an ARM template, you can run Test-AzResourceGroupDeployment (see the sketch after this list)
  • Gated deployment. Each change is not just synthetically validated for syntax correctness but actually deployed to a test cluster, alongside the basic infrastructure services if possible, which combined helps ensure the changes are valid and functional
  • Continuous Integration (CI). Each change is immediately merged into the main branch and a ready-for-production build is produced
  • Continuous Delivery (CD). Each build is immediately deployed to an early test environment and the appropriate tests are performed. Then to another environment, then another.
  • Safe Deployment Practices (SDP). Each build is not deployed to all available environments simultaneously but instead is slowly rolled out across environments and regions. These are grouped by kind (prod, pre-prod), geography (North America, Europe, Asia), type of customers (internal, partners, public), and so on.
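
For the gated build bullet, here is a rough sketch of what that validation step could look like (hypothetical resource group and file names, not a definitive pipeline step):

# Validate the template and its parameters against a test resource group;
# Test-AzResourceGroupDeployment returns the validation errors, if any.
$errors = Test-AzResourceGroupDeployment `
    -ResourceGroupName "service-gated-build" `
    -TemplateFile ".\templates\scaleunit.json" `
    -TemplateParameterFile ".\parameters\scaleunit.test.json"

if ($errors) {
    $errors | ForEach-Object { Write-Error $_.Message }
    exit 1
}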

You may refer to the Build and Deployment sections of the Twelve-Factor App for more ideas on how the CI/CD process for both your services and infrastructure should look.

Employing these and other techniques will help you to achieve multiple goals:

  • Increase the confidence in the changes
  • Increase the overall quality of the infrastructure by decreasing the number of errors slipping into production
  • Allow you to catch issues early in the rollout
  • Increase the overall time-to-production, the total time it takes for a new feature or a fix to reach the target environment

To be continued…


3 ways to assign access policy for user-assigned managed identity on key vault using ARM template

This post is a summary of my experience dealing with user-assigned managed identities and key vaults in Azure. It explores multiple ways to achieve the same result: assigning access policies using an ARM template. Each of the ways has its own pros and cons.

First, the simplest: create a key vault with a preassigned access policy:

{
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults",
      "apiVersion": "[variables('kvApiVersion')]",
      "name": "[parameters('kvName')]",
      "location": "[parameters('location')]",
      "properties": {
        "tenantId": "[variables('tenantId')]",
        "accessPolicies": "[parameters('accessPolicies')]",
        "sku": {
          "name": "Standard",
          "family": "A"
        }
      }
    }
  ]
}

The pros of this approach are the same as its cons: you have to know all access policies ahead of time. That works, but only in the simplest scenarios, such as for security groups, since they’re created outside of ARM and have a static, well-known OID (object ID).
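
For example, the group’s object ID can be looked up once and passed in as a deployment parameter. A rough sketch with hypothetical group, vault, and template names:

# Resolve the security group's well-known object ID and pass it into the
# accessPolicies parameter of the template above (hypothetical names).
$group = Get-AzADGroup -DisplayName "svc-keyvault-readers"

New-AzResourceGroupDeployment `
    -ResourceGroupName "service-int-westus2-kv" `
    -TemplateFile ".\templates\keyvault.json" `
    -TemplateParameterObject @{
        kvName         = "kv-service-int-westus2"
        location       = "westus2"
        accessPolicies = @(
            @{
                tenantId    = (Get-AzContext).Tenant.Id
                objectId    = $group.Id
                permissions = @{ secrets = @("get", "list") }
            }
        )
    }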

Second: create a key vault, then a user-assigned managed identity, and then add an access policy:

{
  "variables": {
    "uaidRef": "[concat('Microsoft.ManagedIdentity/userAssignedIdentities/', parameters('uaidName'))]",
  },
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults",
      "apiVersion": "[variables('kvApiVersion')]",
      "name": "[parameters('kvName')]",
      "location": "[parameters('location')]",
      "properties": {
        "tenantId": "[variables('tenantId')]",
        "accessPolicies": [],
        "sku": {
          "name": "Standard",
          "family": "A"
        }
      }
    },
    {
      "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
      "apiVersion": "[variables('idApiVersion')]",
      "name": "[parameters('uaidName')]",
      "location": "[parameters('location')]"
    },
    {
      "type": "Microsoft.KeyVault/vaults/accessPolicies",
      "name": "[concat(parameters('kvName'), '/add')]",
      "apiVersion": "[variables('kvApiVersion')]",
      "properties": {
        "accessPolicies": [
          {
            "tenantId": "[variables('tenantId')]",
            "objectId": "[reference(variables('uaidRef'), variables('idApiVersion')).principalId]",
            "permissions": "[variables('uaidPermissions')]"
          }
        ]
      },
      "dependsOn": [
        "[concat('Microsoft.KeyVault/vaults/', parameters('kvName'))]",
        "[concat('Microsoft.ManagedIdentity/userAssignedIdentities/', parameters('uaidName'))]",
      ]
    }
  ]
}

The main drawback of this one is the effect of eviction. Since a deployment of an ARM template is effectively a PUT on the respective resource, immediately after the vault resource is deployed it has no access policies. This means all requests to access it will fail with 403 until the respective policies are added back. The time window might be relatively short, but it still exists, which may and will cause outages and incidents.

Moreover, Key Vault doesn’t support adding access policies in parallel, which means that if there are multiple policies to add, they must be added sequentially. Each takes several seconds, which stretches the window to a minute or more. If this is a production environment, that is guaranteed to have customer impact; it makes it impossible to deploy transparently and without interruption of running services, and it violates one of the core principles of cloud and enterprise grade infrastructure.

Finally, third: create a user-assigned managed identity, then create a key vault with a preassigned access policy:

{
  "variables": {
    "uaidRef": "[concat('Microsoft.ManagedIdentity/userAssignedIdentities/', parameters('uaidName'))]"
  },
  "resources": [
    {
      "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
      "apiVersion": "[variables('idApiVersion')]",
      "name": "[parameters('uaidName')]",
      "location": "[parameters('location')]"
    },
    {
      "type": "Microsoft.KeyVault/vaults",
      "apiVersion": "[variables('kvApiVersion')]",
      "name": "[parameters('kvName')]",
      "location": "[parameters('location')]",
      "properties": {
        "tenantId": "[variables('tenantId')]",
        "accessPolicies": [
          {
            "tenantId": "[variables('tenantId')]",
            "objectId": "[reference(variables('uaidRef'), variables('idApiVersion')).principalId]",
            "permissions": "[variables('uaidPermissions')]"
          }
        ],
        "sku": {
          "name": "Standard",
          "family": "A"
        }
      },
      "dependsOn": [
        "[concat('Microsoft.ManagedIdentity/userAssignedIdentities/', parameters('uaidName'))]"
      ]
    }
  ]
}

This one basically combines the pros of the first two and, in my mind, has no cons. It eliminates the window altogether: the key vault never ends up without access policies, not even for a moment.


Reading books vs writing one

I have an issue with reading books. I read blogs and articles on the Internet often, but physical books almost never. Back in the day when I was living in Moscow, I used to commute to college and work an hour each way every day and had plenty of time for reading. Then, after moving to the US, driving to work instead of taking public transport, having kids, and now permanently working from home, I have neither the time nor much of the desire.

A friend of mine once gave me a piece of sound advice: find more time to read books to advance my career. And he’s probably right, I should. On the other hand, I can write a book of my own. I would title it:

Reliable and scalable infrastructure in Azure

Also compliant and using Service Fabric.

So here starts a series of blog posts which hopefully one day will be compiled into a book.

So far I’ve come up with the following sections:

  1. Introduction (this post)
  2. Principles
  3. Layers
  4. Traffic
  5. Secrets

Carnation Anapa Winery, vol 3, day 4: yeast

Due to the pandemic and workaholism, everything takes longer this year.

I’m adding 5 g of RC212 by Cellar Science (batch #52495, whatever that means) to the 5-gallon bucket of Petite Sirah. But first, to avoid shock, I’m diluting the yeast in a small amount of boiled water cooled down to 106°F.


Carnation Anapa Winery, vol 3, day 3: Potassium Metabisulfite

Last time, the batch where I added potassium metabisulfite turned out much better than the one where I did not. So this time I’m adding it to both buckets of must, ~1.5 x ¼ tsp per 5 gallons.


Carnation Anapa Winery, vol 3, day 2: weighing

Some precalculations:

  • My weight: 74.15 kg
  • Empty bucket: 1.15 kg
  • Total: 75.45 kg

Bucket #1 (CS):

  • Total: 86.80 kg
  • Grapes: 12.65 kg

Bucket #2 (CS):

  • Total: 85.80 kg
  • Grapes: 11.65 kg

Bucket #3 (PS):

  • Total: 89.15 kg
  • Grapes: 15.00 kg

Bucket #4 (PS):

  • Total: 90.05 kg
  • Grapes: 15.9 kg

Which adds up to:

  • Cabernet Sauvignon: 24.3 kg
  • Petite Sirah: 30.9 kg
  • Total: 55.2 kg (121.695 lbs)

Carnation Anapa Winery, vol 3, day 1: The journey continues

It’s that time of year when I drive to my friends at Carthage Vineyard in Zillah, WA and pick what’s left after the harvest season.

This year it was 2 buckets of Cabernet Sauvignon and 2 buckets of Petite Sirah.


How to configure Service Fabric to use AAD for client authentication

This blog post is intended to complement the official doc, which I personally don’t find helpful or comprehensive enough.

The configuration that works for me consists of 3 parts:

  1. Cluster ARM template change
  2. An AAD app for the cluster’s client identity (let’s call it the client app)
  3. An AAD app for the users to access SFE, Service Fabric Explorer (let’s call it the cluster app)

First, you make the changes in your ARM template for the cluster and deploy it:

"variables": {
  "clientAadAppId": "{client app id}",
  "clusterAadAppId": "{cluster app id}"
},
"resources": [
  {
    "type": "Microsoft.ServiceFabric/clusters",
    "apiVersion": "[variables('sfApiVersion')]",
    "name": "[parameters('clusterName')]",
    "location": "[parameters('location')]",
    "properties": {
      "addonFeatures": [],
      "azureActiveDirectory": {
        "tenantId": "[subscription().tenantId]",
        "clientApplication": "[variables('clientAadAppId')]",
        "clusterApplication": "[variables('clusterAadAppId')]"
      },
      "certificateCommonNames": {},
      "clientCertificateCommonNames": [],
      "clientCertificateThumbprints": [],
      "diagnosticsStorageAccountConfig": {},
      "fabricSettings": [],
      "reliabilityLevel": "[variables('reliabilityLevel')]",
      "upgradeMode": "Automatic",
      "vmImage": "Windows"
    }
  }
]

Then you create the two AAD applications and edit their manifests.

For the client app, where you specify the Microsoft Graph and cluster app IDs:

"requiredResourceAccess": [
  {
    "resourceAppId": "00000003-0000-0000-c000-000000000000",
    "resourceAccess": [
      {
        "id": "{random guid}",
        "type": "Scope"
      }
    ]
  },
  {
    "resourceAppId": "{cluster app id}",
    "resourceAccess": [
      {
        "id": "{your guid}",
        "type": "Scope"
      }
    ]
  }
],
"oauth2Permissions": [
  {
    "adminConsentDescription": "Allow the application to access SF Cluster Management application on behalf of the signed-in user.",
    "adminConsentDisplayName": "Access SF Cluster",
    "id": "{your guid}",
    "isEnabled": true,
    "lang": null,
    "origin": "Application",
    "type": "User",
    "userConsentDescription": "Allow the application to access SF Cluster Management application on your behalf.",
    "userConsentDisplayName": "Access SF Cluster",
    "value": "user_impersonation"
  }
]

And for the cluster app, where you specify which roles have which permissions:

"appRoles": [
  {
    "allowedMemberTypes": [
      "User"
    ],
    "description": "ReadOnly roles have limited access",
    "displayName": "ReadOnly",
    "id": "{random guid}",
    "isEnabled": true,
    "lang": null,
    "origin": "Application",
    "value": "User"
  },
  {
    "allowedMemberTypes": [
      "User"
    ],
    "description": "Admins roles can perform all tasks",
    "displayName": "Admin",
    "id": "{random guid}",
    "isEnabled": true,
    "lang": null,
    "origin": "Application",
    "value": "Admin"
  }
]

Then add your cluster’s SFE endpoint as a redirect URI in the Authentication section:

https://{clusterName}.{clusterLocation}.cloudapp.azure.com:19080/Explorer/index.html

And finally, go to the cluster app’s Overview, click Managed application in local directory, select Users and groups, and assign the roles to the AAD groups you want to be Users or Admins.
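
To sanity-check the setup, you can connect to the cluster with AAD authentication from the Service Fabric PowerShell module (placeholder endpoint and thumbprint below); a sign-in prompt should appear, and a member of one of the assigned groups should get the corresponding role:

# Connect using AAD instead of a client certificate; the browser prompt that
# opens should accept a user from one of the groups assigned above.
Connect-ServiceFabricCluster `
    -ConnectionEndpoint "{clusterName}.{clusterLocation}.cloudapp.azure.com:19000" `
    -AzureActiveDirectory `
    -ServerCertThumbprint "{cluster certificate thumbprint}"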

That’s it, folks!


How to hook up child DNS zone into parent by updating its NS records using ARM template

Imagine a scenario: you have one global DNS zone in the Prod subscription and several child DNS zones, one per environment, each in its own subscription, e.g.:

  • infra.example.com
    • Subscription: Prod
  • dev.infra.example.com
    • Subscription: Dev
  • test.infra.example.com
    • Subscription: Test
  • prod.infra.example.com
    • Subscription: Prod

Each zone is created using its own ARM template. But in order for a child zone to start working, you need to hook it up into the parent zone by adding its NS records to it, e.g.:

  • dev.infra.example.com
    • NS
      • ns1-01.azure-dns.com.
      • ns1-01.azure-dns.net
      • ns1-01.azure-dns.org.
      • ns1-09.azure-dns.info.
  • infra.example.com
    • dev
      • NS
        • the records must be inserted here

Here’s how to achieve that using ARM template:

{
  "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment": {
      "type": "string"
    },
    "globalSecretsSubscriptionId": {
      "type": "string"
    },
    "globalSecretsResourceGroupName": {
      "type": "string"
    },
    "globalDnsZoneName": {
      "type": "string"
    },
    "envDnsZoneName": {
      "type": "string"
    }
  },
  "variables": {
    "deploymentApiVersion": "2019-09-01",
    "dnsApiVersion": "2018-05-01"
  },
  "resources": [
    {
      "name": "[parameters('envDnsZoneName')]",
      "type": "Microsoft.Network/dnsZones",
      "apiVersion": "[variables('dnsApiVersion')]",
      "location": "global"
    },
    {
      "name": "[format('DNS-Global-{0}', parameters('environment'))]",
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "[variables('deploymentApiVersion')]",
      "subscriptionId": "[parameters('globalSecretsSubscriptionId')]",
      "resourceGroup": "[parameters('globalResourceGroupName')]",
      "properties": {
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "resources": [
            {
              "name": "[format('{0}/{1}', parameters('globalDnsZoneName'), parameters('environment'))]",
              "type": "Microsoft.Network/dnsZones/NS",
              "apiVersion": "[variables('dnsApiVersion')]",
              "properties": {
                "TTL": 3600,
                "NSRecords": "[reference(resourceId('Microsoft.Network/dnszones/NS', parameters('envDnsZoneName'), '@'), variables('dnsApiVersion')).NSRecords]"
              }
            }
          ]
        }
      },
      "dependsOn": [
        "[concat('Microsoft.Network/dnsZones/', parameters('envDnsZoneName'))]"
      ]
    }
  ]
}

Here’s what it does:

  1. Creates the child zone in the current subscription and resource group
  2. Updates the parent zone in its own subscription and resource group, creating an NS record set with the values of the child zone’s NS records
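
For completeness, here’s a rough sketch of how such a template could be deployed (hypothetical names; the identity running the deployment needs permissions in both the child zone’s and the parent zone’s subscriptions):

# Deploy into the Dev subscription; the nested deployment updates the parent
# zone in the subscription and resource group passed as parameters.
Set-AzContext -Subscription "{Dev subscription id}"

New-AzResourceGroupDeployment `
    -ResourceGroupName "dns-dev" `
    -TemplateFile ".\templates\child-dns-zone.json" `
    -environment "dev" `
    -globalSecretsSubscriptionId "{Prod subscription id}" `
    -globalSecretsResourceGroupName "dns-global" `
    -globalDnsZoneName "infra.example.com" `
    -envDnsZoneName "dev.infra.example.com"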

Happy deployment!
