| Comments

Okay, so I won’t quit my day job in favor of trying to come up with a witty title for a blog post.  But this is one thing that I’m proud to see our team deliver: one of the fastest ways to get your ASP.NET app to a container service on Azure (or elsewhere) without having to know what containers are or learn new things.  No really!

Cloud native

Well if you operate in the modern web world you’ve heard this term ‘cloud native’ before. And everyone has an opinion on what it means. I’m not here to pick sides, and I think it means a lot of different things. One commonality most seem to agree on is that deploying a service to the cloud in a ‘cloud native’ way involves leveraging containers. If you aren’t familiar with containers, go read here: What is a container? It’s a good primer on what they are technically and also some of the benefits. Once you educate yourself you’ll be able to declare yourself worthy to nod your head in cloud native conversations and every once in a while throw out comments like “Yeah, containers will help us here.” or something like that. Instantly you will be seen as smart and an authority and the accolades will start pouring in. But then you may actually have to do something about it in your job/app. Hey, don’t blame me, you brought this on yourself with those arrogant comments! Have no fear, Visual Studio is here to help!

Creating and deploying a container

If you haven’t spent time working with containers, you will likely be introduced to new concepts like Docker, Dockerfile, compose, and perhaps even YAML. In creating a container, you typically need a definition of what your container is, and generally this will be a Dockerfile. A typical Dockerfile for a .NET Web API looks like this:

#See https://aka.ms/customizecontainer to learn how to customize your debug container and how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["CommerceApi.csproj", "."]
RUN dotnet restore "./CommerceApi.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "CommerceApi.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "CommerceApi.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CommerceApi.dll"]

You can see a few concepts here that you’d have to understand, and that’s not the purpose of this post. You’d then need to use Docker to build this container image and also to ‘push’ it to a container registry like Azure Container Registry (ACR). For a developer this would mean you’d likely have Docker Desktop installed, which brings this set of tools to you locally to execute within your developer workflow. As you develop your solution, you’ll have to keep your Dockerfile updated as it involves more projects, version changes, path changes, etc. But what if you just have a simple service, you’ve heard about containers, and you just want to get it to a container service as quickly and simply as possible? Well, in Visual Studio we have you covered.
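
For context, that manual inner loop looks roughly like this – a rough sketch assuming Docker Desktop, the Dockerfile above, and an ACR instance named myregistry (a placeholder):

# Build the image from the Dockerfile above (run from the solution root)
docker build -t commerceapi:latest .

# Log in to the Azure Container Registry (uses the Azure CLI)
az acr login --name myregistry

# Tag the image for the registry and push it
docker tag commerceapi:latest myregistry.azurecr.io/commerceapi:latest
docker push myregistry.azurecr.io/commerceapi:latest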

Publish

Yeah yeah, ‘friends don’t let friends…’ – c’mon, let people be (more on that later). In VS we have a great set of tools to help you rapidly get your code to various deployment endpoints. Since containers are ‘the thing’ lately as of this writing, we want to help you remove concepts and get there fast as well…in partnership with Azure. Last year Azure launched a new service called Azure Container Apps (ACA), a managed container environment that helps you scale your app. It’s a great way to get started with container deployments easily and still have manageability and scale. Let me show you how we help you get to ACA quickly, from your beloved IDE, with no need for a Dockerfile or other tools. You’ll start with your ASP.NET Web project and the Publish flow (yep, right-click publish). From there, choose Azure and notice Azure Container Apps right there for you:

Visual Studio Publish dialog

After selecting that, Visual Studio (VS) will help you either select existing resources that your infrastructure team helped set up for you or, if you’d like and have access to create them, create new Azure resources all from within VS without having to go to the portal. You can then select your ACA instance:

Visual Studio Publish dialog with Azure

And then the container registry for your image:

Visual Studio Publish dialog with Azure

Now you’ll be presented with an option on how to build the container. Notice two options because we’re nice:

Publish with .NET SDK selection

If you still have a Dockerfile and want to go that route (read below) we enable that for you as well. But the first option is leveraging the .NET SDK that you already have (using the publish targets for the SDK). Selecting this option will be the ‘fast path’ to your publishing adventure.
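
Under the covers this option uses the SDK’s container publish support, so the same thing can be done from a command line – a minimal sketch, assuming the .NET 7 SDK with the container tooling available (the tag and registry values are placeholders, and this mirrors what the generated CI workflow later in this post runs):

dotnet publish CommerceApi.csproj -c Release -p:PublishProfile=DefaultContainer -p:ContainerImageTag=1.0.0 -p:ContainerRegistry=myregistry.azurecr.io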

Then click finish and you’re done: you now have a profile ready to push a container image to a registry (ACR) and then to a container app service (ACA) without having to create a Dockerfile, learn a new concept, or install other tools. Click publish and you’ll see the completed results, and you will now be able to strut back into your manager’s office/cube/open-space bean bag and say “Hey boss, our service is all containerized and in the cloud ready to scale…where’s my promo?”

Publish summary page

VS has helped with millions of cloud deployments every month whether they be to VMs, PaaS services, Web Deploy to on-metal cloud-hosted machines, and now easily to container services like ACA.  It’s very helpful and fast, especially for those dev/test scenarios as you iterate on your app with others.

Leveraging continuous integration and deployment (CI/CD)

But Tim, friends don’t let friends right-click publish! Pfft, again I say, do what makes you happy and productive. But also, I agree ;-). Seriously though, I’ve become a believer in CI/CD for EVERYTHING I do now, no matter the size of the project. It raises confidence with repeatable builds and creates a better environment for collaboration. And here’s the good thing: VS is going to help you bootstrap your efforts here easily as well – EVEN WITH CONTAINERS! Remember that step where we selected the SDK to build our container? Well if your VS project is within a GitHub repository (free for most cases these days, you should use it!), we’ll offer to generate an Actions workflow, which is GitHub’s CI/CD system:

Publish using CI/CD

In choosing a CI/CD workflow, the CI system (in this case GitHub Actions) needs to know some more information: where to deploy, some credentials to use for deployment, etc. The cool thing is even in CI, Visual Studio will help you do all of this setup including retrieving and setting these values as secrets on your repo! Selecting this option would result in this summary for you:

GitHub Actions summary page

And the resulting workflow in an Actions YAML file in your project:

name: Build and deploy .NET application to container app commerceapp
on:
  push:
    branches:
    - main
env:
  CONTAINER_APP_CONTAINER_NAME: commerceapi
  CONTAINER_APP_NAME: commerceapp
  CONTAINER_APP_RESOURCE_GROUP_NAME: container-apps
  CONTAINER_REGISTRY_LOGIN_SERVER: XXXXXXXXXXXX.azurecr.io
  DOTNET_CORE_VERSION: 7.0.x
  PROJECT_NAME_FOR_DOCKER: commerceapi
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout to the branch
      uses: actions/checkout@v3
    - name: Setup .NET SDK
      uses: actions/setup-dotnet@v3
      with:
        include-prerelease: True
        dotnet-version: ${{ env.DOTNET_CORE_VERSION }}
    - name: Log in to container registry
      uses: azure/docker-login@v1
      with:
        login-server: ${{ env.CONTAINER_REGISTRY_LOGIN_SERVER }}
        username: ${{ secrets.timacregistry_USERNAME_F84D }}
        password: ${{ secrets.timacregistry_PASSWORD_F84D }}
    - name: Build and push container image to registry
      run: dotnet publish -c Release -r linux-x64 -p:PublishProfile=DefaultContainer -p:ContainerImageTag=${{ github.sha }} --no-self-contained -p:ContainerRegistry=${{ env.CONTAINER_REGISTRY_LOGIN_SERVER }} -bl
    - name: Upload binlog for investigation
      uses: actions/upload-artifact@v3
      with:
        if-no-files-found: error
        name: binlog
        path: msbuild.binlog
  deploy:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - name: Azure Login
      uses: azure/login@v1
      with:
        creds: ${{ secrets.commerceapp_SPN }}
    - name: Deploy to containerapp
      uses: azure/cli@v1
      with:
        inlineScript: >
          az config set extension.use_dynamic_install=yes_without_prompt

          az containerapp registry set --name ${{ env.CONTAINER_APP_NAME }} --resource-group ${{ env.CONTAINER_APP_RESOURCE_GROUP_NAME }} --server ${{ env.CONTAINER_REGISTRY_LOGIN_SERVER }} --username ${{ secrets.timacregistry_USERNAME_F84D }} --password ${{ secrets.timacregistry_PASSWORD_F84D }}

          az containerapp update --name ${{ env.CONTAINER_APP_NAME }} --container-name ${{ env.CONTAINER_APP_CONTAINER_NAME }} --resource-group ${{ env.CONTAINER_APP_RESOURCE_GROUP_NAME }} --image ${{ env.CONTAINER_REGISTRY_LOGIN_SERVER }}/${{ env.PROJECT_NAME_FOR_DOCKER }}:${{ github.sha }}
    - name: logout
      run: >
        az logout

Boom! So now you CAN use right-click publish and still get started with CI/CD deploying to the cloud!  Strut right back into that office: Hey boss, I took the extra step and set up our initial CI/CD workflow for the container service so the team can just focus on coding and checking it in…gonna take the rest of the week off.

Cool, but I have advanced needs…

Now, now, I know there will always be cases where your needs are different, this is too simple, etc., and YOU ARE RIGHT! There are limitations to this approach, which we outlined in our initial support for the SDK container build capabilities. Things like customizing your base container image, tag names, ports, etc. are all easily customizable in your project file as they feed into the build pipeline, so we have you covered on this type of customization. As your solution grows and your full microservices needs get more complex, you may outgrow this simplicity…we hope that means your app is hugely successful and the profits are rolling in! You’ll likely grow into the Dockerfile scenarios and that’s okay…you’ll have identified your needs and already set up a starting CI/CD workflow that you can progressively grow as needed. We will continue to listen and see about ways we can improve this capability as developers like you give us feedback!
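
For example, a few of those knobs can be set right in the project file as MSBuild properties – a hedged sketch based on the .NET 7 SDK container build properties (the values here are purely illustrative):

<PropertyGroup>
  <!-- Override the inferred base image -->
  <ContainerBaseImage>mcr.microsoft.com/dotnet/aspnet:7.0</ContainerBaseImage>
  <!-- Name and tag(s) for the produced image -->
  <ContainerImageName>commerceapi</ContainerImageName>
  <ContainerImageTags>1.0.0;latest</ContainerImageTags>
</PropertyGroup>
<ItemGroup>
  <!-- Port(s) the container exposes -->
  <ContainerPort Include="80" Type="tcp" />
</ItemGroup>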

Summary

Our goal in Visual Studio is to help you be productive with a range of tasks. Moving to ‘cloud native’ can be another thing your team has to worry about, and as you start your journey (or perhaps look to simplify a bit) VS aims to be your partner there and continue to help you be productive in getting your code to the cloud quickly, with as much friction removed from your normal workflow as possible. Here are a few links to read more about these capabilities in more corporate speak:

Thanks for reading!

| Comments

I’ve had a love/hate relationship with CI/CD for a long while, ever since I remember it being a thing. In those early days the ‘tools’ were basically everyone’s homegrown scripts, batch files, random daemon hosts, etc. Calling something a workflow was a stretch. It was for that reason I just wasn’t a believer; it was just too ‘hard’ for the average dev. I, like many, would build from my machine and direct deploy or copy over to file shares (NOTE: LOTS of people still do this). Well, the tools have gotten WAY better across the board from many different vendors, and you have great options now. I’ve been privileged to work with Damian Brady and Abel Wang, who have educated me on the ways of CI/CD a bit. I know Damian has a mantra about right-click publish, but that only made me want to make it simpler for devs.

NOTE: Did you know that for most projects in .NET working in VS you can use right-click Publish to generate a CI/CD workflow for you, further reducing the complexity?

Well, I’m a believer now and I make it part of my mission to improve the tool experience for .NET devs and also to convince/advocate for .NET developers to use CI/CD even in the smallest of projects. I’ve honed my own workflows to the point where I truly just worry about development…releases take care of themselves. It’s glorious and frees up so much time. I go out of my way now when I see friends’ projects that are on GitHub but not using Actions, for example. Recently I was working with Mads Kristensen on some things and asked him if he’d consider using Actions. And in a few minutes I submitted a first PR to one of his projects showing how simple it was. I started from my own `dotnet new workflow` tool, as not all project types support the right-click Publish—>Actions work Visual Studio has done yet. This helps get started with the basics.

After a few back-and-forths with Mads, he wanted to encapsulate more…the files were too busy for him LOL. Enter composite Actions (or technically composite run steps). This was my chance to look into these as I hadn’t really had a need yet. You should read the docs, but my lay explanation is that composite run steps enable you to basically templatize some of your steps into a single encapsulation…and VERY simply.

Screenshot of GitHub Action YAML file

Let’s look at one example driven by Mads’ desires. Mads’ projects are usually Visual Studio extensibility projects and require a few more things to build than just the .NET SDK. In this particular instance Mads needed the .NET SDK, NuGet, and MSBuild to be set up. No problem, I started out with this, because duh, why not:

  # prior portion of jobs removed for brevity
  steps:
    - name: Setup dotnet
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 6.0.x

    - name: Setup MSBuild
      uses: microsoft/setup-msbuild@v1

    - name: Setup NuGet
      uses: NuGet/setup-nuget@v1

But wanting less text, we discussed and I encapsulated these three in one single step using a new composite action. Creating a composite action is simple and enables you to deploy it in a few ways. First, you can just keep it in your own repo itself without having to release anything, etc. This is helpful when yours are very repo-specific and nobody is sharing them across orgs/repos. Let’s look at the above and how we might encapsulate it. I still want to enable SDK version input, so I need an input parameter for that. So in the repo I’ll create two new folders under the .github/workflows folder, creating a new path called .github/workflows/composite/bootstrap-dotnet, and then place a new action.yaml file in that directory. My action.yaml file looks like this:

# yaml-language-server: $schema=https://json.schemastore.org/github-action.json
name: 'Setup .NET build dependencies'
description: 'Sets up the .NET dependencies of MSBuild, SDK, NuGet'
branding:
  icon: download
  color: purple
inputs:
  dotnet-version:
    description: 'What .NET SDK version to use'
    required: true
    default: 6.0.x
  sdk:
    description: 'Setup .NET SDK'
    required: false
    default: 'true'
  msbuild:
    description: 'Setup MSBuild'
    required: false
    default: 'true'
  nuget:
    description: 'Setup NuGet'
    required: false
    default: 'true'
runs:
  using: "composite"
  steps:
    - name: Setup dotnet
      if: inputs.sdk == 'true'
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: ${{ inputs.dotnet-version }}

    - name: Setup MSBuild
      if: inputs.msbuild == 'true' && runner.os == 'Windows'
      uses: microsoft/setup-msbuild@v1

    - name: Setup NuGet
      if: inputs.nuget == 'true'
      uses: NuGet/setup-nuget@v1

Let’s break it down. Composite actions still have the same setup as other custom actions, enabling you to have branding/name/description/etc. as well as inputs, as I’ve defined in the inputs section. I can then use these inputs in the later steps (the if conditions and the dotnet-version value). As you can see, this action basically is a template for other steps that use other actions…simple!!! Now in the primary workflow for the project it looks like this:

# yaml-language-server: $schema=https://json.schemastore.org/github-workflow.json
name: "PR Build"

on: [pull_request]
      
jobs:
  build:
    name: Build 
    runs-on: windows-2022
      
    steps:
    - uses: actions/checkout@v2

    - name: Setup .NET build dependencies
      uses: ./.github/workflows/composite/bootstrap-dotnet
      with:
        nuget: 'false'

Notice the path to the composite action in the uses value, pointing at the new folder structure. Now when this workflow runs it will bring this composite action in and also run its steps…beautiful. If the action is more generic and you want to move it out of the repo, you can do that. In fact, in this case we did just that and you can see it at timheuer/bootstrap-dotnet and use it just like any other action in your setup. An example of the above changed to use the published action is as simple as:

# yaml-language-server: $schema=https://json.schemastore.org/github-workflow.json
name: "PR Build"

on: [pull_request]

jobs:
  build:
    name: Build 
    runs-on: windows-2022
      
    steps:
    - uses: actions/checkout@v2

    - name: Setup .NET build dependencies
      uses: timheuer/bootstrap-dotnet@v1
      with:
        nuget: 'false'
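
For that reference to resolve, the composite action just needs to live at the root of its own repository with a version tag it can be pinned to – a rough sketch of one way to publish it (the tag and branch names here are assumptions):

# New repo with action.yaml at the root (plus a README for the listing)
git init bootstrap-dotnet && cd bootstrap-dotnet
git add action.yaml README.md
git commit -m "Initial composite action"

# Tag a version so consumers can pin to timheuer/bootstrap-dotnet@v1
git tag v1
git remote add origin https://github.com/timheuer/bootstrap-dotnet.git
git push -u origin main --tags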

Done! What’s also great is that because this still is a legit GitHub Action, you can publish it on the marketplace for others to discover and use (hence the branding). Here is the one we just demonstrated above in the marketplace:

Screenshot of GitHub Action marketplace listing

So that’s a simple example of truly a template/merge of other existing actions. But can you use this method to create a custom action that just uses script for example, like PowerShell? YES! Let’s take another one of these examples that uploads the VSIX from our project to the Open VSIX gallery. Mads was using a PowerShell script that does his upload for him, so I’m copying that into a new composite action and making some inputs and then he can use it.  Here’s the full composite action:

# yaml-language-server: $schema=https://json.schemastore.org/github-action.json
name: 'Publish to OpenVSIX Gallery'
description: 'Publishes a Visual Studio extension (VSIX) to the OpenVSIX Gallery'
branding:
  icon: upload-cloud
  color: purple
inputs:
  readme:
    description: 'Path to readme file'
    required: false
    default: ''
  vsix-file:
    description: 'Path to VSIX file'
    required: true
runs:
  using: "composite"
  steps:
    - name: Publish to Gallery
      id: publish_gallery
      shell: pwsh
      run: |
        $repo = ""
        $issueTracker = ""

        # If no readme URL was specified, default to "<branch_name>/README.md"
        if (-not "${{ inputs.readme }}") {
          $readmeUrl = "$Env:GITHUB_REF_NAME/README.md"
        } else {
          $readmeUrl = "${{ inputs.readme }}"
        }

        $repoUrl = "$Env:GITHUB_SERVER_URL/$Env:GITHUB_REPOSITORY/"

        [Reflection.Assembly]::LoadWithPartialName("System.Web") | Out-Null
        $repo = [System.Web.HttpUtility]::UrlEncode($repoUrl)
        $issueTracker = [System.Web.HttpUtility]::UrlEncode(($repoUrl + "issues/"))
        $readmeUrl = [System.Web.HttpUtility]::UrlEncode($readmeUrl)

        # $fileNames = (Get-ChildItem $filePath -Recurse -File)
        $vsixFile = "${{ inputs.vsix-file }}"
        $vsixUploadEndpoint = "https://www.vsixgallery.com/api/upload"

        [string]$url = ($vsixUploadEndpoint + "?repo=" + $repo + "&issuetracker=" + $issueTracker + "&readmeUrl=" + $readmeUrl)
        [byte[]]$bytes = [System.IO.File]::ReadAllBytes($vsixFile)
             
        try {
            $webclient = New-Object System.Net.WebClient
            $webclient.UploadFile($url, $vsixFile) | Out-Null
            'OK' | Write-Host -ForegroundColor Green
        }
        catch{
            'FAIL' | Write-Error
            $_.Exception.Response.Headers["x-error"] | Write-Error
        }

You can see it is mostly a PowerShell script, with the inputs defined at the top. And here it is in use in a project:

# other steps removed for brevity in snippet
  publish:
    runs-on: ubuntu-latest
    steps:

      - uses: actions/checkout@v2

      - name: Download Package artifact
        uses: actions/download-artifact@v2
        with:
          name: RestClientVS.vsix

      - name: Upload to Open VSIX
        uses: timheuer/openvsixpublish@v1
        with:
          vsix-file: RestClientVS.vsix

Pretty cool when your custom action is a script like this and you don’t need to do any funky containers, or have a node app that just launches pwsh.exe or stuff like that. LOVE IT! Here’s the repo for this one to see more: timheuer/openvsixpublish.

This will definitely be the first approach I consider when needing other simple actions for my projects or others. The simplicity and flexibility in ‘templatizing’ some steps is really great!

Hope this helps!

| Comments

Well with the release of .NET 6, lots of excitement around the platform and me being the nerd I am with a love for cycling, it’s time to open up for another round of ordering for the highly exclusive limited edition .NET Cycling Kit (jersey and bib shorts).

.NET Cycling Kit

Last year I created these using the .NET Foundation assets and in accordance with the brand guidelines for the .NET project (did you know we had brand guidelines?! Me neither, but now you do).  Well, we’re opening it up for another exclusive round (this is round 3) and probably the last (famous last words).

Here’s the details:

  • These are high-quality race-fit cycling kits…yes they are not cheap…nor is the quality.
  • These are race-fit, not ‘club fit’ so they are meant to fit tighter and in riding position
  • Sizing: Eliel Cycling Sizing Guide (for context I am 5’9” [175cm] and ~195lbs [88.5kg] and I prefer a Large top and Large bib shorts)
  • These are all 100% custom made-to-order – there is no ‘stock’ for immediate ship
  • Upon store closing these will take 10-12 weeks of production – you must be patient :-)
  • They are custom, sales final, no returns
  • They look awesome and are comfortable
  • You will be the envy of all cyclists who are .NET developers that didn’t get one
  • I make NO money on this
  • Microsoft makes no money on this
  • Microsoft has nothing to do with this
  • There is no telemetry collected on the kit

So how do you get one?  Simple, click here: .NET Cycling Kit (Round 3) – you order direct from the manufacturer in California and pay them directly.  Please read the site and details carefully, including the notes for EU ordering.

If you have any questions about these, the best place to ask is to ping me on Twitter @timheuer; I may be able to answer questions on sizing or otherwise.  I would love to see more .NET cyclists with their kits in the wild worldwide!

| Comments

Last night I got a tweet asking me how one could halt a CI workflow in GitHub Actions on a condition.  This particular condition was if the code coverage tests failed a certain coverage threshold.  I’m not a regular user of code coverage tools like Coverlet, but I went Googling for some answers and oddly did not find the obvious answer that was pointed out to me this morning.  Regardless, the journey to discover an alternate means was interesting to me, so I’ll share what I did.  It feels super hacky, but it works, and it’s a similar method I used for passing some version information in other workflows.

First, the simple solution: if you are using Coverlet and want to fail a build (and thus a CI workflow), use the MSBuild integration option and then you can simply run:

dotnet test /p:CollectCoverage=true /p:Threshold=80

I honestly felt embarrassed that I didn’t find this simple option, but oh well, it is there and is definitely the simplest route if you can use it.  But there you have it.  When used in an Actions workflow, if the threshold isn’t met this will fail that step and you are done.
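
Dropped into a workflow, that is just a normal test step – a minimal sketch (the project path and threshold value are placeholders):

- name: Test with coverage threshold
  # Coverlet's MSBuild integration fails the command (and this step) if line coverage drops below 80%
  run: dotnet test XUnit.Coverlet.MSBuild/XUnit.Coverlet.MSBuild.csproj /p:CollectCoverage=true /p:Threshold=80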

Picture of failed GitHub Actions step

Creating your condition to inspect

But let’s say you need to fail for a different reason, or, as in this example, you couldn’t use the MSBuild integration and instead are just using the VSTest integration with a collector.  We’ll use this code coverage scenario as an example, but the key step here is focusing on how to fail a step.  Your condition may be anything, but I suspect it is usually based on some previous step’s output or value.  Well first, if you are relying on previous steps’ values, be sure you understand the power of using outputs.  This is, I think, the best way to ‘set state’ for certain things between steps.  A step can either set an output value in the Action itself, or you can do this in the workflow YAML using a shell command that calls the ::set-output method.  Let’s look at an example…first the initial step (again using our code coverage scenario):

- name: Test
  run: dotnet test XUnit.Coverlet.Collector/XUnit.Coverlet.Collector.csproj --collect:"XPlat Code Coverage"

This basically will produce an XML output ‘report’ that contains the values we want to extract.  Namely it’s in this snippet:

<?xml version="1.0" encoding="utf-8"?>
<coverage line-rate="0.85999999999" branch-rate="1" version="1.9" timestamp="1619804172" lines-covered="15" lines-valid="15" branches-covered="8" branches-valid="8">
  <sources>
    <source>D:\</source>
  </sources>
  <packages>
    <package name="Numbers" line-rate="1" branch-rate="1" complexity="8">
      <classes>

I want the line-rate value (line 2) in this XML to be my condition…so I’m going to create a new Actions step to extract the value by parsing the XML using a PowerShell cmdlet.  Once I have that I will set the value as the output of this step for later use:

- name: Get Line Rate from output
  id: get_line_rate
  shell: pwsh  
  run: |
    $covreport = get-childitem -Filter coverage.cobertura.xml -Recurse | Sort-Object -Descending -Property LastWriteTime -Top 1
    Write-Output $covreport.FullName
    [xml]$covxml = Get-Content -Path $covreport.FullName
    $lineRate = $covxml.coverage.'line-rate'
    Write-Output "::set-output name=lineRate::$lineRate"

As you can see in lines 2 and 9 I have set a specific ID for my step and then used the set-output method to write a value to an output of the step named ‘lineRate’ that can be later used.  So now let’s use it!

Evaluating your condition and failing the step manually

Now that we have our condition, we want to fail the run if the condition evaluates a certain way…in our case if the code coverage line rate isn’t meeting our threshold.  To do this we’re going to use a specific GitHub Action called actions/github-script which allows you to run some of the GitHub API directly in a script.  This is great as it allows us to use the core library which has a set of methods for success and failure!  Let’s take a look at how we combine the condition with the setting:

- name: Check coverage tolerance
  if: ${{ steps.get_line_rate.outputs.lineRate < 0.9 }}
  uses: actions/github-script@v3
  with:
    script: |
        core.setFailed('Coverage test below tolerance')

Okay, so we did a few things here.  First we are defining this step as executing the core.setFailed() method…that’s specifically what this step will do, that’s it…it will fail the run with a message we put in there.  *BUT* we have put a condition on the step itself using the if condition checking.  In line 6 we are executing the setFailed function with our custom message that will show in the runner log.  On line 2 we have set the condition for if this step even runs at all.  Notice we are using the ID of a previous step (get_line_rate) and the output parameter (lineRate) and then doing a quick math check.  If this condition is met, then this step will run.  If the condition is NOT met, this step will not run, but also doesn’t fail and the run can continue.  Observe that if the condition is met, our step will fail and the run fails:

Failed run based on condition

If the condition is NOT met the step is ignored, the run continues:

Condition not met

Boom, that’s it! 

Summary

This was just one scenario but the key here is if you need to manually control a fail condition or otherwise evaluate conditions, using the actions/github-script Action is a simple way to do a quick insertion to control your run based on a condition.  It’s quick and effective for some scenarios where your steps may not have natural success/fail exit codes that would otherwise fail your CI run.  What do you think? Is there a better/easier way that I missed when you don’t have clear exit codes?

Hope this helps someone!

| Comments

One of the biggest things that I’ve wanted (and have heard others want) when adopting GitHub Actions is the use of some type of approval flow.  Until now (roughly the time of this writing) that wasn’t easily possible in Actions.  The concept of how Azure Pipelines does it is so nice and simple to understand in my opinion, and a lot of the attempts by others stitching various Actions together made it tough to adopt.  Well, announced at GitHub Universe, required reviewers are now in beta for Actions customers!!!  Yes!!!  I spent some time setting up a flow with an ASP.NET 5 web app and Azure as my deployment to check it out.  I wanted to share my write-up in hopes it might help others get started quickly as well.  First I’ll acknowledge that this is the simplest getting started you can have and your workflows may be more complex, etc.  If you’d like a primer and to see some other updates on Actions, be sure to check out Chris Patterson’s session from Universe: Continuous delivery with GitHub Actions.  With that, let’s get started!

Setting things up

First we’ll need a few things to get started.  These are things I’m not going to walk through here but will explain briefly what/why it is needed for my example.

  • An Azure account – I’m using this sample with Azure as my deployment because that’s where I do most of my work.  You can get a free Azure account as well and do exactly this without any obligation.
  • Set up an Azure App Service resource – I’m using App Service Linux and just created it using basically all the defaults.  This is just a sample so those are fine for me.  I also created these using the portal to have everything set up in advance (if you prefer the CLI, see the sketch after this list).
  • I added one Application Setting to my App Service called APPSERVICE_ENVIRONMENT so I could just extract a string noting which environment I was in and display it on the home page.
  • In your App Service create a Deployment Slot and name it “staging” and choose to clone the main service settings (to get the previous app setting I noted).  I then changed the app setting value for this deployment slot.
  • Download the publish profile for each of your production and staging instances individually and save those somewhere for now, as we’ll refer back to them in the next step.
  • I created an ASP.NET 5 Web App using the default template from Visual Studio 2019.  I made some code changes in the Index.cshtml to pull from app settings, but otherwise it is unchanged.
  • I used the new Git features in Visual Studio to quickly get my app to a repository in my GitHub account and enabled Actions on that repo.
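
If you’d rather script the Azure pieces than click through the portal, a rough az CLI sketch of the same setup looks like this (all resource names are placeholders, and the exact --runtime string varies by CLI version):

# Resource group, Linux plan (Standard tier or above for deployment slots), and the web app
az group create --name approval-sample-rg --location westus2
az appservice plan create --name approval-sample-plan --resource-group approval-sample-rg --is-linux --sku S1
az webapp create --name approval-sample-app --resource-group approval-sample-rg --plan approval-sample-plan --runtime "DOTNETCORE|5.0"

# The app setting the sample home page reads
az webapp config appsettings set --name approval-sample-app --resource-group approval-sample-rg --settings APPSERVICE_ENVIRONMENT=Production

# Staging slot cloned from the production configuration, then override its app setting
az webapp deployment slot create --name approval-sample-app --resource-group approval-sample-rg --slot staging --configuration-source approval-sample-app
az webapp config appsettings set --name approval-sample-app --resource-group approval-sample-rg --slot staging --settings APPSERVICE_ENVIRONMENT=Staging

# Download a publish profile for each environment
az webapp deployment list-publishing-profiles --name approval-sample-app --resource-group approval-sample-rg --xml > production.publishsettings
az webapp deployment list-publishing-profiles --name approval-sample-app --resource-group approval-sample-rg --slot staging --xml > staging.publishsettings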

That’s it!  With those basics set up I can get started with the next steps of building out the workflow.  I should note that the steps I’m outlining here are free for GitHub public repositories.  For private repositories you need to be a GitHub Enterprise Server customer.  Since my sample is public I’m ready to go!

Environments

The first concept is Environments.  These are basically a separate segmented definition of your repo that you can associate secrets and protection rules with.  This is the key to the approval workflow as one of the protection rules is reviewers required (aka approvers).  The first thing we’ll do is set up two environments: staging and production.  Go to your repository settings and you’ll see a new section called Environments in the navigation. 

Screenshot of environment config

To create an environment, click the New Environment button and give it a name.  I created one called production and one called staging.  In each of these you can do things independently, like secrets and reviewers.  Because I’m a team of one person my reviewer will be me, but you could set up others, like maybe a build engineer to approve staging deployments and a QA team for production deployments.  Either way, click the Required reviewers checkbox, add at least yourself, and save the protection rule.

NOTE: This area may expand more to further protection rules but for now it is reviewers or a wait delay.  GitHub indicates others may be in the future.

Now we’ll add some secrets.  With Environments, you can have independent secrets for each environment.  Maybe you want to have different deployment variables, etc. for each environment, this is where you could do it.  For us, this is specifically what we’ll use the different publish profiles for.  Remember those profiles you downloaded earlier, now you’ll need them.  In the staging environment create a new secret named AZURE_PUBLISH_PROFILE and paste in the contents of your staging publish profile.  Then go to your production environment settings and do the same using the same secret name and use the production publish profile you downloaded earlier.  This allows our workflow to use environment-specific secret settings when they are called, but still use the same secret name…meaning we don’t need AZURE_PUBLISH_PROFILE_STAGING naming as we’ll be marking the environment in the workflow and it will pick up secrets from that environment only (or the repo if not found there – you can have a hierarchy of secrets effectively).
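
If you’d rather script the secrets than paste them in the UI, the GitHub CLI can target an environment directly – a hedged sketch, assuming the gh CLI is authenticated against the repo and the publish profiles were saved to the file names shown (those names are placeholders):

# Same secret name in each environment, with environment-specific contents
gh secret set AZURE_PUBLISH_PROFILE --env staging < staging.publishsettings
gh secret set AZURE_PUBLISH_PROFILE --env production < production.publishsettings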

Okay we’re done setting up the Environment in the repo…off to set up the workflow!

Setting up the workflow

To get me quickly started I used my own template so I could `dotnet new workflow` in my repo root using the CLI.  This gives me a strawman to work with.  Let’s build out the basics, we’re going to have 3 jobs: build, deploy to staging, deploy to prod.  Let’s get started.  The full workflow is in my repo for this post, but I’ll be extracting snippets to focus on and show relevant pieces here.

Build

For build I’m using my standard implementation of restore/build/publish/upload artifacts which looks like this (with some environment-specific keys):

jobs:
  build:
    name: Build
    if: github.event_name == 'push' && contains(toJson(github.event.commits), '***NO_CI***') == false && contains(toJson(github.event.commits), '[ci skip]') == false && contains(toJson(github.event.commits), '[skip ci]') == false
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Setup .NET Core SDK ${{ env.DOTNET_CORE_VERSION }}
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: ${{ env.DOTNET_CORE_VERSION }}
    - name: Restore packages
      run: dotnet restore "${{ env.PROJECT_PATH }}"
    - name: Build app
      run: dotnet build "${{ env.PROJECT_PATH }}" --configuration ${{ env.CONFIGURATION }} --no-restore
    - name: Test app
      run: dotnet test "${{ env.PROJECT_PATH }}" --no-build
    - name: Publish app for deploy
      run: dotnet publish "${{ env.PROJECT_PATH }}" --configuration ${{ env.CONFIGURATION }} --no-build --output "${{ env.AZURE_WEBAPP_PACKAGE_PATH }}"
    - name: Publish Artifacts
      uses: actions/upload-artifact@v2
      with:
        name: webapp
        path: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}

Notice this job is ‘build’ and ends with uploading some artifacts to the job.  That’s it, the core functionality is to build/test this and store the final artifacts.

Deploy to staging

Next job we want is to deploy those bits to staging environment, which will be our staging slot in our Azure App Service we set up before.  Here’s the workflow job definition snippet:

  staging:
    needs: build
    name: Deploy to staging
    environment:
        name: staging
        url: ${{ steps.deploy_staging.outputs.webapp-url }}
    runs-on: ubuntu-latest
    steps:
    # Download artifacts
    - name: Download artifacts
      uses: actions/download-artifact@v2
      with:
        name: webapp

    # Deploy to App Service Linux
    - name: Deploy to Azure WebApp
      uses: azure/webapps-deploy@v2
      id: deploy_staging
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
        slot-name: staging

In this job we download the previously published artifacts to be used as our app to deploy.  Observe a few other things here:

  • I’ve declared that this job ‘needs’ the ‘build’ job to start.  This ensures a sequence workflow.  If build job fails, this doesn’t start.
  • I’ve declared this job an ‘environment’ and marked it using staging which maps to the Environment name we set up on the repo settings.
  • In the publish phase I specified the slot-name value mapping to the Azure App Service slot name we created on our resource in the portal.
  • Specify getting the AZURE_PUBLISH_PROFILE secret from the repo

You’ll also notice the ‘url’ setting on the environment.  This is a cool little delighter that you should use.  One of the outputs of the Azure web app deploy action is the URL to where it was deployed.  I can extract that from the step and put it in this variable.  The GitHub Actions summary will now show this final URL in the visual map of the workflow.  It is a small delighter, but you’ll see it’s useful a bit later.  Notice I don’t put any approver information in here.  By declaring this job in the ‘staging’ environment, it will follow the protection rules we previously set up.  So in fact, this job won’t run unless (1) the build job completes successfully and (2) the protection rules for the environment are satisfied.

Deploy to production

Similarly to staging we have a final step to deploy to production.  Here’s the definition snippet:

  deploy:
    needs: staging
    environment:
      name: production
      url: ${{ steps.deploy_production.outputs.webapp-url }}
    name: Deploy to production
    runs-on: ubuntu-latest
    steps:
    # Download artifacts
    - name: Download artifacts
      uses: actions/download-artifact@v2
      with:
        name: webapp

    # Deploy to App Service Linux
    - name: Deploy to Azure WebApp
      id: deploy_production
      uses: azure/webapps-deploy@v2
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}

This is almost identical to staging except we changed:

  • Needs ‘staging’ to complete before this runs
  • Changed the environment to production to follow those protection rules
  • Removed the slot-name for deployment (default is production)
  • Changed the URL output value to the value from this job

Notice that we have the same AZURE_PUBLISH_PROFILE secret used here.  Because we are declaring environments, we will get the environment-specific secret in these job scopes.  It’s helpful to have a common name that just maps to different environments rather than many one-off secrets – at least in my opinion it is.

That’s it, we now have our full workflow to build –> deploy to staging with approval –> deploy to production with approval.  Let’s see it in action!

Trigger the workflow

Once we have this workflow, we can commit/push the workflow file and it should trigger a run itself.  Otherwise you can make a different code change/commit/push to trigger it as well.  We get a few things here when the run happens.

First we get a nicer visualization of the summary of the job:

Screenshot of summary view

When the protection rules are hit, a few things happen.  Namely, the run stops and waits, and the reviewers are notified.  The notification happens through the standard GitHub notification means.  I have email notifications on, and so I got an email like this:

Picture of email notification

I can then click through and approve the workflow step and add comments:

Screenshot of approval step

Once that step is approved, the job runs.  On the environment job it provides a nice little progress indicator of the steps:

Picture of progress indicator

Remember that URL setting we had?  Once that job finished, you’ll see it surface in that nice summary view to quickly click through and test your staging environment:

Picture of the URL shown in summary view in step

Once we are satisfied with the staging environment we can then approve the next workflow and the same steps happen and we are deployed to production!

Screenshot of final approval flow

And we’re done!

Summary

The concept of approvals in Actions workflows has been a top request I’ve heard, and I’m glad it is finally there!  I’m in the process of adding it as an extra protection to all my public repo projects; whether it be for a web app deployment or a NuGet package publish, it is a helpful protection to put in place in your Actions.  It’s rather simple to set up, and if you have a relatively simple workflow it is equally simple to configure and modify to incorporate it.  More complex workflows might require a bit more thought but are still simple to augment.  I’ve posted my full sample in the repo timheuer/actions-approval-sample where you can see the full workflow file.  This was fun to walk through and I hope this write-up helps you get started as well!