How To Dockerise .NET Core Apps

Today’s header image was created by Erwan Hesry at Unsplash
In my most recent blog post, I gave you a little background information on docker and containerisation (click here for a quick refresher), but in this blog post I’m going to walk through how you would add docker support to a pre-existing code base. It’s incredibly easy to do: create one file -> fill in the content -> win.
Before we get any further into this blog post, I want to let you know that it’s not a short one. If you want a much faster introduction to docker, I would recommend taking a look at the YouTube video that Allen from Coding Blocks put out on creating a Blazor app in Docker. He covers the basics in 28 minutes.
A Quick Refresher
- Docker can be incredibly useful for defining your build and runtime environments, along with scripting the build steps.
- The core technologies behind docker are the Moby Project and the Go programming language (you don’t need to know or learn Go in order to use docker)
- Docker uses dockerfiles, which are plain text files
Installing Docker
I won’t talk through how to install docker, as the documentation for doing that is really quite good and differs depending on the operating system that you’re running.
A Quick Note On Windows
What I will say is that, at the time of writing this article, if you are running Windows 10 then you’ll need to make sure that you are running version 1709 or later. To figure out which version of Windows 10 you’re running:
- Hit Windows + R to bring up the Run prompt
- Type “winver” (without the quotes)
- Hit return
You should see a window which looks a little like this:
I took that screen shot on a newly spun up Windows 10 VM, which is running Windows 10 1803.
Where To Start With Dockerising Something
The first thing you need to figure out is which version of the .NET Core SDK your app should be built against and which version of the .NET Core runtime your app should be running. You should know this already (that is, if your specifications are laid out fully).
If you’re building a personal project (or just aren’t sure), then you can find out which versions of the SDKs and runtimes you have installed by running some commands. The first will tell us which version of the SDK you’re using:
dotnet --version
1.x
If this returns 1.x, then you’ll have to look in the .NET Core SDK install directory on your computer in order to see which versions of the SDKs and runtimes you have installed. You can find this on a Unix-like system (i.e. macOS or a Linux distribution) by running the following command:
find / -name dotnet
If you’re on a Windows machine, then the default installation path will be: “C:\Program Files\dotnet\sdk”, as in the following screen shot:
I took that screen shot on the VM I mentioned earlier
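If you’d rather not go digging through Explorer, you can list that same directory from a terminal instead. This is just a plain directory listing, so the exact output will depend on which SDKs you have installed:

dir "C:\Program Files\dotnet\sdk"

Each sub-directory is named after an installed SDK version.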
2.x
If you’re running 2.x, then you can run the following commands to check which versions of the SDK you have installed:
dotnet --list-sdks
And this one to get the versions of the runtime that you have installed:
dotnet --list-runtimes
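The exact versions and install paths will differ from machine to machine, but the output from those two commands will look a little like this (the values below are purely illustrative):

dotnet --list-sdks
2.1.300 [C:\Program Files\dotnet\sdk]

dotnet --list-runtimes
Microsoft.AspNetCore.App 2.1.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]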
Once you know which versions of the SDK and runtime you’re using, you’ll have to check that your global.json has the version of the SDK listed in it.
You ARE using a global.json, right? It’s an essential pattern for .NET Core development. If you’re not sure what the global.json file is for, then you can read what I had to say about it here (the linked post is from May 18th, 2017).
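As a quick illustration, a minimal global.json does nothing more than pin the SDK version; the version number below is just an example, so swap in whichever SDK version you found above:

{
  "sdk": {
    "version": "2.1.300"
  }
}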
Once you’ve done that, you can get on with dockerising your application.
Docker This, Docker That
Once you have the SDK and runtime versions, you need to find two base images for your dockerfile (I’ll describe what base images are in a moment).
You’ll need two because we’re going to build your app within one docker image, and run your app within a completely separate docker image.
Base Images
Docker images are made up of other images, kind of like an onion (or an ogre). I’ll go into more detail in part three of this series, but the basic idea is that you select a pre-made image which has everything that you need and nothing more.
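You don’t have to pull a base image up front (docker will download anything it doesn’t already have when you build), but if you want to grab one ahead of time you can do it explicitly. For example, this pulls the SDK image we’ll end up choosing below:

docker pull microsoft/dotnet:2.1-sdk-alpine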
Since we’re going to be building .NET Core code and running an ASP NET Core application, we need to head over to the Microsoft DockerHub repository. This is where the official Microsoft docker images for all of the .NET ecosystem frameworks live (if you don’t remember what the ecosystem is, take a look at this article of mine).
We’re primarily interested in the dotnet repository. This is where all of the .NET Core SDK and ASP NET Core runtime images are stored, so we’d better check it out.
You’ll see a massive number of options to choose from under the “Complete set of Tags” section. Here’s a short screenshot of some of them (this list is correct at the time of writing this article):
With your SDK and runtime versions in hand, let’s pick a build image.
We want something either running Linux or Windows (the Windows images are listed after the Linux ones in the repo); it must have the same SDK version number and be marked as “SDK”. Let’s say that you’re building a Blazor app: you’ll want to choose one of the many images under the “.NET Core 2.1 RC1 tags” header. Exactly which one doesn’t matter at the moment (I’ll go into what all of this “stretch”, “jessie” and “alpine” business means in part three of this series). For argument’s sake, you could choose to use “2.1.300-rc1-sdk-stretch” for your SDK image, as it’s the first listed. But let’s take the “alpine” one instead (“2.1-sdk-alpine”).
Note
While you’re still learning docker, it doesn’t really matter which image you choose, as long as it has the correct SDK version. Unless you’re running docker on Windows, in which case you’ll need to check whether docker is running Windows or Linux containers.
You can do this by right clicking the docker icon in your system tray and looking at the context menu:
If it lists “Switch to Windows containers…” (like in the screen shot above), then you’re running docker with Linux containers. Whereas, if it lists “Switch to Linux containers…”, then you’re running docker with Windows containers.
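If you’d rather check from the command line, you can ask the docker CLI which operating system the daemon is using for containers; it should print either linux or windows:

docker version --format "{{.Server.Os}}"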
Before we create our docker container, I just want to say that this first pass is not going to be optimised at all. It’s going to be a large, slow to build container.
That being said…
dwCheckApi, Meet Docker
I’m going to use dwCheckApi for the remainder of this post (you can too, as it’s available on my GitHub), but you can use whichever project you’d like.
Before we start, let’s take a quick look at the project structure, as a kind of reminder:
This isn’t the best layout for source files, but it’ll do. You’ve more than likely got something like this:
- src
  - contains all of your source files
  - each project in its own directory
- tests
  - a directory for each set of tests
Which is a better layout (and something I’m going to move dwCheckApi towards).
Anyway. Let’s dockerise dwCheckApi.
Another Heading?!
Create a file in the root of the project called “dockerfile” – this file will contain all of the configuration for our docker image – and paste the following code into it:
FROM microsoft/dotnet:2.1-sdk-alpine AS build
# Set the working directory within the container
WORKDIR /src
# Copy all of the source files
COPY . .
# Restore all packages
RUN dotnet restore ./dwCheckApi/dwCheckApi.csproj
# Build the source code
RUN dotnet build ./dwCheckApi/dwCheckApi.csproj
# Ensure that we generate and migrate the database
WORKDIR ./dwCheckApi.Persistence
RUN dotnet ef database update
# Publish application
WORKDIR ..
RUN dotnet publish ./dwCheckApi/dwCheckApi.csproj --output "../../dist"
# Copy the created database
RUN cp ./dwCheckApi.Persistence/dwDatabase.db ./dist/dwDatabase.db
# Build runtime image
FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS app
WORKDIR /app
COPY --from=build /dist .
ENV ASPNETCORE_URLS http://+:5000
ENTRYPOINT ["dotnet", "dwCheckApi.dll"]
There really is a lot going on here, but I’ll talk you through it, line by line.
First: the build image:
FROM microsoft/dotnet:2.1-sdk-alpine AS build
# Set the working directory within the container
WORKDIR /src
# Copy all of the source files
COPY . .
What we’ve done here is told docker that we’d like to base our docker image on one which has already been created. We’re going to use “microsoft/dotnet:2.1-sdk-alpine” and give it a name of “build” (this is so that we can refer to it later).
Then we create a directory within that image called “src”. This one command does the equivalent of the following two shell commands:
mkdir -p src
cd src
Then we copy everything from the “docker context” into the src directory in our image.
The Docker What?!
When you build a docker image (which we’ll do in a minute), you pass in what’s called the context. The context is the directory on your hard drive which contains all of the source files that docker will use within the context of building the image (see what they did there?).
When we build our image, we’ll pass in the directory with our source code (and the dockerfile, too) as our context.
This can quickly become bloated and cause docker image builds to be slow; I’ll talk you through how to reduce the size of your context (ooh err) in part four of this series.
Restoring and Building
The next two lines are standard package restore and build stuff:
# Restore all packages
RUN dotnet restore ./dwCheckApi/dwCheckApi.csproj
# Build the source code
RUN dotnet build ./dwCheckApi/dwCheckApi.csproj
This is slightly optimised already, but that’s only because I like to do restores separately from builds. The .NET Core CLI tooling has done implicit restores when doing builds for a long time now (since .NET Core 2.0, that is).
If you don’t add or remove packages, then the restore step only needs to happen within the docker image once. This is because each command is cached (I’ll go into a little more detail in part 3, so stick around for that), and all cached commands are used whenever possible.
I can already hear the sighs of relief from those of you who have used NPM in the past. And, yes I will be discussing how to optimise NPM builds in part 4.
Migrating the Database
This bit is a little weird.
If you take a look at how dwCheckApi works, you’ll see that there’s a Persistence layer (which I’ve spelt incorrectly, but hey ho). This layer deals with talking to a database, via EF Core. To ensure that the database exists and that it has all migrations applied to it, I’m going to force EF Core to apply them. That’s what this command does:
# Ensure that we generate and migrate the database
WORKDIR ./dwCheckApi.Persistence
RUN dotnet ef database update
Essentially, this is changing into the dwCheckApi.Persistence directory, then running the EF Core command to apply any migrations.
You don’t have to do this part. I’m only doing it here because dwCheckApi uses a SQLite file which is stored in the root directory and my seed data is always updated before I apply any migrations. If you’re using SQL Server and are doing it correctly (i.e. it’s being served by a separate server to your app), then you’ll want to get the migration scripts and run them, via SQL Server Management Studio, against your DB server.
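If you go down that route, EF Core can generate the SQL for you. Run something like the following from the directory of the project which holds your migrations; the output file name is just an example, and the --idempotent switch makes the script safe to run against a database which already has some of the migrations applied:

dotnet ef migrations script --idempotent --output migrations.sql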
Pro tip:
You should never host your database within a container. The short version of why is: if the container dies or crashes, then your database will be lost – this is because containers are meant to be ephemeral.
Publishing
Running the built version of your application is fine. But to get the most performance from it with the least number of files, you’ll want to publish it.
Note
“Publish” here doesn’t mean publishing the application to Azure (or similar). It relates to preparing your application for deployment; this includes things like letting Roslyn (the compiler) apply any optimisations.
This is specifically what the next three lines are doing:
# Publish application
WORKDIR ..
RUN dotnet publish ./dwCheckApi/dwCheckApi.csproj --output "../../dist"
But because the previous step took place in the dwCheckApi.Persistence directory, we first have to move up one directory (the WORKDIR command works similarly to the cd command) before attempting to publish the app.
We’re also storing the contents of the publish action in a “dist” directory, one level up. This means that the docker container will look a little like this:
I recreated the steps in the dockerfile, in order to create this screenshot
Optionally Copy the Database
As I said earlier, dwCheckApi uses a SQLite database file. When the EF Core migrations command is run, the SQLite database file is created, with all migrations applied, in the dwCheckApi.Persistence directory (unless it already exists, in which case the migrations are applied to the existing file). So we need to copy it out to the dist directory, which is what the next line in the dockerfile does:
# Copy the created database
RUN cp ./dwCheckApi.Persistence/dwDatabase.db ./dist/dwDatabase.db
And Now For the Runtime
Now that we’ve built the app, we need to run it. In order to run it, we need a runtime image. Because ASP.NET Core is not .NET Core, but a thing which can run on .NET Core, the image we’ve used to build it might not have everything needed to provide a stable runtime.
As such, we need to find an ASP.NET Core Runtime image. Luckily, Microsoft supply one for us, and you can see it in the image above. We’ll use “2.1-aspnetcore-runtime-alpine” as our runtime image:
# Build runtime image
FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS app
WORKDIR /app
This should start to look a little familiar now.
We’re basing our runtime image on the “2.1-aspnetcore-runtime-alpine” image and we’ll give it the name “app”. We’re also creating a directory within it called “app”.
Copying From the Build Image
In order to get the published app from the build image, we’ll have to copy it over. That’s what the next line does:
COPY --from=build /dist .
The important part here is the “--from=build” part of the copy command. Reading the command, left to right, we have:
- Copy (“COPY”)
- From the build image’s (“--from=build”)
- dist directory (“/dist”)
- to here (“.”)
Setting up the ASP.NET Core Port
Docker allows you to set up environment variables via the ENV command, which is what the next line does:
ENV ASPNETCORE_URLS http://+:5000
What we’re doing here is telling Kestrel that we want the application to respond on port 5000, using the HTTP scheme (for real-world applications, you’ll want to make sure that it handles HTTPS by default, obviously).
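Because this is just an environment variable, you can also override it when you start a container rather than baking a different value into the image. A quick sketch, with an arbitrary port and a placeholder for the image name:

docker run --rm -e ASPNETCORE_URLS=http://+:8080 -p 8080:8080 <your image name>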
Setting the Entry Point
All dockerfiles should have a single entry point (during a recent live stream with Jeff Fritz, Scott Hanselman recommended a pattern which has more than one entry point, but this is a more advanced usage of docker). The entry point tells docker what it should do when you start a container (an image runs within a container; more on this in part 3). Without this line, docker wouldn’t know how to start your application. As we’re using .NET Core to run our ASP.NET Core application, our entry point statement is really simple:
ENTRYPOINT ["dotnet", "dwCheckApi.dll"]
This says that, when the container starts, we want to run:

dotnet dwCheckApi.dll

which starts the published app, and is roughly the equivalent of running “dotnet run” against the project during development.
And that’s our dockerfile created.
But how do you create the image?
Creating The Image
This part is simple. You’ll need to open a terminal and head over to the directory with your dockerfile in it (in my example, I could open a terminal in my dwCheckApi root directory), then run the following command:
docker build .
This command tells the docker CLI to build our docker image and pass in the current directory as the build context (which is what the “.” character is all about).
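As an aside, docker looks for a file called dockerfile (or Dockerfile) in the root of the context by default. If you keep yours somewhere else, or give it a different name, you can point the build at it with the -f switch; the path here is just an example:

docker build -f ./docker/dockerfile .

For this post, though, we’ll stick with the plain docker build . from above.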
When you run this command you’ll get a whole lot of output messages. The first thing that docker will do is download your base images:
Then it will start executing the steps that you have outlined in your dockerfile:
Eventually, the docker image will be built:
You’ll now have a docker image. But we can’t refer to it by name. By getting docker to list all available images, you’ll see why:
docker image ls
The output from that command will look a little like this:
REPOSITORY          TAG                            IMAGE ID        CREATED          SIZE
<none>              <none>                         106a03efe0d4    8 minutes ago    168MB
microsoft/dotnet    2.1-sdk-alpine                 c751b3a7f4de    10 days ago      1.46GB
microsoft/dotnet    2.1-aspnetcore-runtime-alpine  4933ffee8f5b    10 days ago      163MB
microsoft/dotnet    2-sdk                          2ac9a416f201    10 days ago      1.77GB
microsoft/dotnet    2.0-sdk                        2ac9a416f201    10 days ago      1.77GB
The image we just created is the one at the top of that list (with <none> for both its repository and its tag), but there’s no friendly way to run it because it doesn’t have a name. What we’ll need to do is rebuild the image and “tag” it.
Tagging an image means giving it some kind of useful identifier. We’ve used tags already (when we pulled our build and runtime images earlier), so let’s rebuild our image and add a tag to it:
docker build . --tag dwcheckapi.image
The first thing you’ll notice, when you run this line, is that creating the docker image will be super fast. That’s because docker caches every step that it performs. This means that rebuilding a previously created image will only actually build the stuff which has changed, and will use the cached version of everything else.
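You don’t strictly have to rebuild, either. You can attach a name to an image which already exists with the docker tag command, using the image ID from the listing above:

docker tag 106a03efe0d4 dwcheckapi.image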
After re-running the build command and applying a tag, the list of images will look something like this:
REPOSITORY          TAG                            IMAGE ID        CREATED           SIZE
dwcheckapi.image    latest                         106a03efe0d4    15 minutes ago    168MB
microsoft/dotnet    2.1-sdk-alpine                 c751b3a7f4de    10 days ago       1.46GB
microsoft/dotnet    2.1-aspnetcore-runtime-alpine  4933ffee8f5b    10 days ago       163MB
microsoft/dotnet    2-sdk                          2ac9a416f201    10 days ago       1.77GB
microsoft/dotnet    2.0-sdk                        2ac9a416f201    10 days ago       1.77GB
So How Do I Run It?
Running an image means starting a docker container and having it spin up an instance of the image within that container. To do that, we need to run the following command:
docker run --rm -p 5000:5000 --name running.dwcheckapi dwcheckapi.image
This tells the docker CLI that we want to run a new container; forward all requests on port 5000 of our machine to port 5000 of the container (which is what the “-p 5000:5000” switch does); give the container a name (if we don’t supply a name, one will be generated for us); delete the running container when we stop it (which is what the “--rm” switch does); and use our newly created image.
As soon as the container is started, we’ll start seeing logging messages from the .NET Core runtime:
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
      User profile is available. Using '/root/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[58]
      Creating key {3506abf3-75ce-4e27-94a0-392ed573c76a} with creation date 2018-05-19 17:03:08Z, activation date 2018-05-19 17:03:08Z, and expiration date 2018-08-17 17:03:08Z.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {3506abf3-75ce-4e27-94a0-392ed573c76a} may be persisted to storage in unencrypted form.
info: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[39]
      Writing data to file '/root/.aspnet/DataProtection-Keys/key-3506abf3-75ce-4e27-94a0-392ed573c76a.xml'.
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:5000
Application started. Press Ctrl+C to shut down.
Guess what happens when you head over to localhost:5000?
That’s dwCheckApi running inside of a docker container, on your local machine.
What’s great about this is that you don’t actually have to have either the .NET Core SDK or runtime installed on your computer in order to build and run a .NET Core application. How cool is that?
To stop and remove the running container, go back to the terminal where docker is running (and where the log messages are being written) and hit Ctrl+C.
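If you’d rather stop it from another terminal, you can list running containers with docker ps and then stop the container by the name we gave it (because we passed --rm, stopping it will also remove it):

docker ps
docker stop running.dwcheckapi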
Conclusion
This is as far as I’m going to go with this part of the blog post.
We’ve taken a source code repo, created a dockerfile for it, created an image from that dockerfile, then spun up a docker container. We’ve covered a whole lot of stuff, and that’s just the tip of the iceberg. So we’ll leave it there.
In part 3 of this blog post series, I’ll go into a little on how docker actually works. Until then, let me know how you get on with dockerising your applications in the comments below.