Reducing Developer Cycle Time using Dapr and AKS

One man's journey rapidly building a prototype using AKS and Dapr.

This blog post is part of the 2021 C# Advent series. Each day, you can enjoy not one but two new articles from awesome bloggers! Check it out! :) 

I'm currently in the middle of working on a small side project that one day I hope could turn into a real product (or a company? a guy can dream!). This isn't my first foray into a "startup" idea - it's my third! However, it is my first attempt starting from my own concept; the first two projects had me helping build someone else's vision.

Reflecting back on the first two projects (with several more years of industry experience behind me) and focusing solely on the technical execution, while each project had its own issues, one common theme stands out: developer cycle time.

Cycle Time

I found this image from a Microsoft DevOps article really impactful when trying to understand both what cycle time is and how it impacts my project flow.

Image Credit: Microsoft DevOps Documentation

When a developer is writing code, being able to quickly validate changes and make revisions is critical to keeping cycle time low.

Both of my prior projects struggled with cycle time for similar reasons, but the common thread was the inability to reliably test and run code locally.

Project Analysis

Project 1: In this project we did not have a great test culture. One could claim that we were trying to "move fast," but the reality is that we did most of our testing in production.

What does this mean for cycle time? For each task/code change:

  1. Developers would deploy changes to production
  2. If any bugs were found, re-deploy the old image
  3. Fix all changes locally
  4. Push their branch to produce a deployable build and back to (1)

Ignoring for a moment the sin of testing in production: only once a developer got out of this loop could they check their code into master. This process was insane! Step 4 alone took ~20 minutes between image creation and deployment. Two or three missed bugs could cost a developer ~1 hour in their day, and considering we were all working on this in addition to a day job, it killed productivity.

Project 2: We had decent local testing here. A developer could (somewhat) safely assume no regressions if our local tests passed. However, this project was a lot more complicated than the first - we had many external dependencies (Azure Functions, SparkPost, SQL, Azure Table) and a big monolith application. It was incredibly hard to run this locally and experiment with the APIs. The APIs themselves were incredibly overloaded - each API accomplished 2-3 independent tasks.

Eventually we fell into a similar cycle as Project 1. However, deployments here took even longer given the extra cycles needed to deploy changes to dependencies.

Today - Project 3

Admittedly, I now have several more years of professional experience, during which I designed and built new microservices at two different companies. I have seen two very different engineering cultures and could model my new project after the traits I liked from both companies.


  1. API driven design built by single purpose* microservices.
  2. Everything can be run, built, and tested locally without the need for complicated setup. I had an internal goal of an imaginary new-hire having everything running in 30 minutes.
  3. Local/cloud dependency equivalence - While working on Project 2, I found myself often writing code like this:

string foo;
if (env.IsProduction()) {
    foo = "the production value"; // elided in the original post
} else {
    foo = "something that would work locally";
}
  4. Device agnostic - Daily I switch between Windows and Mac, and I would never want to force one or the other on a developer (as much as possible).

(*) Single purpose microservices within reason. I have read how some companies take this rule, in my opinion, a bit too far (looking at you, Lyft).

I was surprised at how easy it was to live within these four rules. When I started building out the project - GitHub repo, deployment space, etc. - I was worried that I would get so bogged down adhering to these tenets that I would never make progress.

I was shocked. Within one evening I had a working Azure Kubernetes Service (AKS) cluster with two microservices making service-to-service calls. The next evening I had plumbed in a secret store (KeyVault). On the third evening, I was able to get Envoy running as an API gateway. Finally, I tied in local development using Docker Compose and some Dapr tricks.

Dapr Trick

Local Secrets

My favorite "Dapr trick" was making use of named secret stores. When my application is deployed to AKS, I have Kubernetes and Dapr configured to read from a KeyVault secret store. However, locally, I want to point to a JSON file on disk. Achieving this was remarkably simple. Consider the Program file below from one of my microservices:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration(config =>
        {
            var daprClient = new DaprClientBuilder().Build();
            // Pull the named secrets from the "azurekeyvault" secret store
            // into the app's configuration.
            config.AddDaprSecretStore(
                "azurekeyvault",
                new List<DaprSecretDescriptor> {
                    new DaprSecretDescriptor("appinsightskey"),
                    new DaprSecretDescriptor("sqlconnectionstring") },
                daprClient);
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });


Above, I have a named Secret Store called "azurekeyvault." I also reference two secrets "appinsightskey" and "sqlconnectionstring." When this is run locally, I couldn't care less what the Application Insights Key is (app insights will no-op on a bad key), but I do want the SQL connection string to point to a local DB.
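For local runs, the file-based store simply reads a JSON file from disk. A minimal secrets-dev.json might look like this (the values are placeholders for illustration, not my real configuration; a bad Application Insights key is fine because app insights will no-op on it):

```json
{
  "appinsightskey": "00000000-0000-0000-0000-000000000000",
  "sqlconnectionstring": "Server=localhost,1433;Database=dev;User Id=sa;Password=<local-password>;"
}
```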

Now, let's take a look at my local Docker Compose file for one of these microservices.

services:
  account:
    image: ${DOCKER_REGISTRY-}account
    build:
      context: .
      dockerfile: Account/Dockerfile
    ports:
      - "53000:50001"

  account-dapr: # sidecar service name reconstructed; the label was lost in extraction
    image: "daprio/daprd:latest"
    command: [
      "./daprd",
      "-app-id", "account",
      "-app-port", "80",
      "-components-path", "./components"
    ]
    depends_on:
      - account
    network_mode: "service:account"
    volumes:
      - "./components/:/components"

Finally, compare the two component files below:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
  namespace: default
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: havenakskv
  - name: spnTenantId
    value: "xxxxxxxxxxxxxxx"
  - name: spnClientId
    value: "xxxxxxxxxxxxxxx"
  - name: spnCertificate
    secretKeyRef:
      name: kvcert
      key: kvcertkey
auth:
  secretStore: kubernetes
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
  namespace: default
spec:
  type: secretstores.local.file
  version: v1
  metadata:
  - name: secretsFile
    value: ./components/secrets-dev.json
  - name: nestedSeparator
    value: ":"

The top component file is applied to the Kubernetes cluster, while the bottom component file is mounted into the Docker container when deployed locally. Notice the "-components-path" flag and volume mount in the sample Docker Compose.

The trick here is that both are named "azurekeyvault."
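Because application code only ever references the component name, direct secret reads are environment-agnostic too. Here is an illustrative sketch (not from my project source) using the Dapr .NET SDK's DaprClient.GetSecretAsync:

```csharp
using System.Collections.Generic;
using Dapr.Client;

// Illustrative sketch: the same call resolves against KeyVault in AKS
// and against secrets-dev.json locally, because both environments
// register a secret store component named "azurekeyvault".
var client = new DaprClientBuilder().Build();
Dictionary<string, string> secret =
    await client.GetSecretAsync("azurekeyvault", "sqlconnectionstring");
string connectionString = secret["sqlconnectionstring"];
```

No if (env.IsProduction()) branch required - the environment-specific detail lives entirely in the component files.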

(I'm not showing the Kubernetes manifest because there really isn't anything interesting... Kubernetes knows how to manage components and Dapr consumes them!)

How did I know that I truly achieved cycle time success?

I normally work on a desktop in my home office, but my fiancé and I were leaving for a vacation... no problem! I cloned my GitHub repo, and within a few minutes of Visual Studio + Docker downloads, everything had deployed locally. I was able to make a change, write tests, and experiment on my local cluster all within 30 minutes of setup (okay... the Visual Studio download/install took some time - perhaps I need a personal CDN?). Victory!