Azure CLI is a powerful tool. Bake it into Docker and you have the perfect toolkit for running your container-based CI jobs targeting Azure. If only it were that simple.
Running CI/CD in today’s world is mostly container based. All the popular CI services (GitHub, GitLab, Bitbucket, Cloud Build, CircleCI, Drone and so on) pretty much rely on your CI jobs running in containers. Azure DevOps is an exception here: it introduced the concept of “container jobs” only about a year ago, so its support for this isn’t that mature yet.
Taking advantage of containers makes the CI/CD process very flexible. You are not tied to a specific set of tools (or even to the CI service); you pick and choose the ones that best fit your requirements. In most cases, though, it also means that the underlying OS is Linux and the container runtime is Docker. Some CI services are starting to support Windows-based containers as well, but they are still a minority.
Of course, one could argue that you should use a declarative approach and handle everything with IaC, in Azure’s case with ARM templates or Terraform. In reality you just can’t achieve full automation with those alone; that’s when scripting comes into play. Companies coming from e.g. an OS background prefer bash and Azure CLI over the other options here. And when you start thinking about automation, you soon realize those scripts should be tested in the same context your CI is using.
You basically have two options:
1) Build your own Docker image and install Azure CLI there
2) Use the official image that Microsoft provides
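If you go with the second option, pulling and smoke-testing the image locally looks roughly like this (the `mcr.microsoft.com/azure-cli` name is what Microsoft publishes today; check the current listing if it has moved):

```shell
# Pull the official Azure CLI image
docker pull mcr.microsoft.com/azure-cli

# Smoke test: print the CLI version from inside the container
docker run --rm mcr.microsoft.com/azure-cli az --version
```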
I recommend the second option if you’re not interested in re-inventing the wheel and maintaining the container image yourself. It does come with a couple of downsides, though: the image is quite big (1.13 GB currently) and it is based on the Alpine Linux distro.
Microsoft also offers basic instructions for getting started with running Azure CLI in Docker.
Now, this is where the basic instructions fall short. With automation in mind, you want to test these things using a Service Principal from day one.
Start by creating a new Service Principal:
# Login with your user account (which has the needed privileges to create new SPNs)
$ az login
Save these credentials to a local environment file
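A minimal sketch of the flow, once you are logged in with your user account; the SPN name, role and variable names below are examples, not requirements:

```shell
# Create the Service Principal (name and role are examples)
az ad sp create-for-rbac --name "my-ci-spn" --role Contributor

# The command prints JSON containing appId, password and tenant.
# Save them to .env.local (the variable names are just a convention):
cat > .env.local <<'EOF'
AZURE_SP_APP_ID=<appId from the output>
AZURE_SP_PASSWORD=<password from the output>
AZURE_SP_TENANT=<tenant from the output>
EOF
```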
Run the Docker container locally and test login with Service Principal:
$ docker run --env-file ./.env.local -it --rm \
    mcr.microsoft.com/azure-cli
Note: It’s always a good idea to lock down the version of your Azure CLI by pinning the image tag instead of relying on :latest.
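Putting it together, a sketch of a non-interactive login with a pinned tag (the tag is a placeholder, and the variable names are assumptions; use whatever keys you put in your env file):

```shell
# Pin the tag instead of relying on :latest (version is a placeholder).
# Single quotes matter: the variables must expand inside the container,
# where --env-file has made them available, not on the host.
docker run --env-file ./.env.local --rm \
  mcr.microsoft.com/azure-cli:2.0.80 \
  sh -c 'az login --service-principal \
    -u "$AZURE_SP_APP_ID" \
    -p "$AZURE_SP_PASSWORD" \
    --tenant "$AZURE_SP_TENANT"'
```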
The Alpine Linux distro does not support the date command’s -d option out of the box. Many of the examples on the MS docs site for generating SAS tokens rely on it. To enable it you need to install some extras into the container on the fly:
# add coreutils package to support the -d option
$ apk add --no-cache coreutils
After this you can use the date command more flexibly:
# Current time
$ date -u '+%Y-%m-%dT%H:%MZ'
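With coreutils installed, computing a SAS expiry timestamp, say 30 minutes ahead, becomes a one-liner (the format string matches what Azure’s SAS parameters expect; the offset is an example):

```shell
# Expiry 30 minutes from now; -d requires GNU date from coreutils
expiry=$(date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ')
echo "$expiry"
```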
AzCopy is a powerful tool for copying or moving data to Azure Storage. About 99.9% of Azure projects out there use Azure Blob Storage for various data needs. If you need to, let’s say, move hundreds of files to blob storage efficiently, this is the tool you should be using. Nowadays it supports both Azure AD and SAS as authorization mechanisms, but to cover all scenarios SAS is still the only option. Building the SAS token from a Docker container is an art of its own as well.
AzCopy can be installed into the container on the fly:
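A sketch of an on-the-fly install, using the documented `aka.ms` download alias for the Linux build (the extraction path relies on the tarball’s current `azcopy_linux_amd64_*` layout, so treat it as an assumption):

```shell
# Download and unpack AzCopy v10, then move the binary onto the PATH
wget -O azcopy.tar.gz https://aka.ms/downloadazcopy-v10-linux
tar -xzf azcopy.tar.gz
cp azcopy_linux_amd64_*/azcopy /usr/local/bin/azcopy
chmod +x /usr/local/bin/azcopy
```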
A working solution for generating a SAS token:
#STORAGE_ACCOUNT_NAME=<your storage account name - passed in from env variable>
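A sketch of generating a container-level SAS with the CLI, assuming `STORAGE_ACCOUNT_NAME` is already set as noted above; the container name, permission set and 30-minute expiry are examples:

```shell
CONTAINER_NAME=my-container   # example name

# Expiry 30 minutes from now (needs the coreutils date trick from earlier)
SAS_EXPIRY=$(date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ')

# Permissions: (a)dd (c)reate (d)elete (l)ist (r)ead (w)rite
SAS_TOKEN=$(az storage container generate-sas \
  --account-name "$STORAGE_ACCOUNT_NAME" \
  --name "$CONTAINER_NAME" \
  --permissions acdlrw \
  --expiry "$SAS_EXPIRY" \
  --output tsv)
```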
Example of copying files to blob storage from your local file system with AzCopy and SAS:
# Create local folder `files` and put some files there for testing
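A sketch, assuming `STORAGE_ACCOUNT_NAME`, `CONTAINER_NAME` and `SAS_TOKEN` hold the storage account name, the target blob container and a valid SAS token (these are our example variable names):

```shell
# Some example files to upload
mkdir -p files
echo "hello" > files/index.html

# Copy the folder contents to the blob container using the SAS token
azcopy copy "./files/*" \
  "https://${STORAGE_ACCOUNT_NAME}.blob.core.windows.net/${CONTAINER_NAME}?${SAS_TOKEN}" \
  --recursive
```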
You can find a more complete example of utilizing these snippets in a Single Page App’s CD pipeline here.
These examples are based on a real-world project. There are a couple of things I’ve stumbled on that are worth keeping in mind:
- The tool can only operate at the blob container level, meaning you cannot copy directly to the root of a storage account; you always need a blob container first
- It supports synchronization, but that feature is not really suitable for CI/CD scenarios. If you need to replace existing files with new ones, it’s always better to remove the current files and then copy the new ones; it doesn’t support “updating over existing ones” when using the sync feature
Windows 10 / WSL
When working from WSL, take note that the clock inside the Docker container often drifts from the host’s current time. This is a known bug in Docker for Windows. The best you can do is restart Docker for Windows before starting to work on your container-based CI/CD scripts. You will save a lot of time troubleshooting issues with Azure CLI and blob storage, trust me.