Hosting Options – Digital Ocean Droplets

Today’s header image was created by Ishan @seefromthesky; the original source for the image is available here.
Hosting
When most developers talk about hosting they’re referring to a website (unless they’re talking about a party; and if that’s so, where’s my invite?), but in this post we’ll be looking at hosting a .NET Core application.
Applications built with .NET Core are usually web based, but it’s entirely possible to create command line applications with .NET Core too; the web applications built with .NET Core are actually just console applications which act as servers (kind of). There are even libraries for building GUI applications with .NET Core: things like Xamarin, GTK#, and the like.
Anyway, after you’ve built your .NET Core application you’ll need a place to host it so that your users can access it. In the days of the classic .NET Framework, hosting was limited to Windows based servers, unless you were brave enough to run a production app on a Linux server with the Mono Framework installed. I say “brave” here because a lot of .NET devs (myself included) typically had little experience of Linux back in the early days of Mono.
But now that .NET Core is cross platform, we can host our applications anywhere, right? Well, anywhere that’s running a compatible OS.
[Image: the operating systems that .NET Core supports]
Correct as of: 9th May 2017
I’ve already written about publishing a .NET Core application with Azure in the past, and Azure is pretty cool; if you want to read that article, it can be found here. In this post, I’m going to talk through the steps required to create a Digital Ocean droplet running Ubuntu 16.04 (I’ll go into why I chose this distribution and version in a moment), install the latest stable version of the .NET Core SDK on it, and use Nginx as our server (I’ll also explain why I use Nginx in front of Kestrel in a moment).
Digital Ocean
Digital Ocean are a cloud hosting company; they offer a wide range of Linux virtual machines and what they call “One Click Apps” (which are pre-configured installs of common web applications).

[GIF: creating a Droplet, browsing the One Click Apps list]
As with other cloud hosting providers (Azure, for example) there is a cost involved. Heading down the cloud hosted path, rather than hosting your web application yourself, could prove costly – especially if you end up having a lot of users.
Each of the VMs or One Click Apps that a Digital Ocean user creates is called a Droplet (see what they did there? Ocean… Droplet), and each Droplet is self contained. A Droplet is a fully hosted virtual machine running a distribution of Linux (the exact distribution depends on what you select when creating your Droplet, obviously), and you are given complete control over it.
As you can see from the above gif, there is a One Click App with .NET Core and PowerShell installed on Ubuntu 16.04, but I like to do things the hard way. Well, not the hard way per se; I just like to have full control over what I’ve installed, and I don’t need PowerShell.

Preparing Application Code
Before we do any of that though, we need to make a few tweaks to the code that we’ll be using.
Oh, I didn’t mention which of my .NET Core applications I was going to use to achieve this, did I? If you guessed dwCheckApi, then you guessed correctly. If you’d like a refresher on this app, then take a look through the articles that I wrote about its development here.
The first thing that we want to do is pull down the latest code; we can do this in the terminal with the following command:

```bash
git clone https://github.com/GaProgMan/dwCheckApi.git
```
This will pull the code down into the directory we’re in; we’ll then need to change into the src directory (so that we can build in a moment).
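Assuming that the repository was cloned into a directory called dwCheckApi (which is git’s default behaviour), that’s:

```bash
cd dwCheckApi/src
```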
Now we need to alter the Configure method in Startup.cs. We won’t be using IIS as a reverse proxy (forwarding outside requests to localhost:5000, which is where our application will be running), so we need to use the ForwardedHeaders middleware.
Firstly, we need to include the ForwardedHeaders namespace:
```csharp
using dwCheckApi.DatabaseContexts;
using dwCheckApi.Services;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.EntityFrameworkCore;
using Microsoft.AspNetCore.HttpOverrides;
```
Once we’ve done that, we need to use the ForwardedHeaders middleware. Because each piece of middleware in .NET Core forms part of a pipeline through which all requests pass, we want to add this before any other middleware:
```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    // because we'll be using a reverse proxy other than IIS, we need to make sure that the
    // headers are passed through with our requests
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });

    app.UseCors("CorsPolicy");
    app.UseMvc();
}
```
One last thing that we need to do (which will save us time when we deploy) is to force the database to migrate on startup.
```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    // because we'll be using a reverse proxy other than IIS, we need to make sure that the
    // headers are passed through with our requests
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });

    app.UseCors("CorsPolicy");
    app.UseMvc();

    // migrate and seed the database using an extension method
    using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>().CreateScope())
    {
        var context = serviceScope.ServiceProvider.GetService<DwContext>();
        context.Database.Migrate();
        context.EnsureSeedData();
    }
}
```
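If you’re wondering what EnsureSeedData looks like, it’s an extension method on DwContext. Here’s a minimal sketch of the pattern (the entity names and seed data below are purely illustrative; the real dwCheckApi implementation is more involved):

```csharp
using System.Linq;

namespace dwCheckApi.DatabaseContexts
{
    public static class DwContextExtensions
    {
        public static void EnsureSeedData(this DwContext context)
        {
            // Illustrative only: this assumes a Books DbSet on the context;
            // bail out if the database has already been seeded
            if (context.Books.Any())
            {
                return;
            }

            // Add whatever seed data the application needs, then save
            context.Books.Add(new Book
            {
                BookName = "The Colour of Magic",
                BookOrdinal = 1
            });
            context.SaveChanges();
        }
    }
}
```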
The code is now ready to be published, but let’s check that it actually builds and runs before we publish to (what is effectively) production:
```bash
dotnet restore
dotnet run
```
There shouldn’t be any errors, but if there are you’ll need to fix them before we can continue. We’ll also need to check that the application still runs without any issues (it would be pretty pointless to publish an application to production if it didn’t run). Once the application has started, head over to the address that .NET Core gives you (I was given localhost:5000) and you should get a screen similar to this one:

[Screenshot: dwCheckApi responding in the browser]
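If you’d rather check from a second terminal instead of a browser, a quick curl against that address will do the same job (this assumes the default localhost:5000 address):

```bash
# -i includes the response headers, so we can see the status code
# that the application responds with
curl -i http://localhost:5000/
```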
Now that we know our application is running locally, we’d better publish it to our production environment.
Setting Up A Digital Ocean Droplet
The first thing we need to do is create an account on Digital Ocean. You can do this by heading over to their homepage and running through the account creation wizard. You’ll need to provide payment details at some point during account creation; Digital Ocean is not a free service, but it’s worth trying out for a month at the very least. You can use this referral link and get $10 of free credit, on me, when you create your account.
Once you’ve created your account, you’ll need to create a Droplet (remember from earlier: Droplets are the virtual machines that you host your applications on). You can do this, after you’ve logged in, by clicking the “Create Droplet” button.

[Screenshot: the “Create Droplet” button]
On the next screen, we’re going to choose “Ubuntu 16.04.2 x64” as the distribution; choose a hosting price plan (back when I started playing with Digital Ocean, I chose the $5 level and it was more than enough, but your mileage may vary depending on what you want to do with your Droplet); pick a data centre region (you’re better off picking one that’s geographically near you, for faster upload speeds); add an SSH key; and give the Droplet a host name. Here is an example (I’ve not added an SSH key here, though):

[Screenshot: example Droplet creation settings]
I chose Ubuntu 16.04.2 when I was experimenting with Digital Ocean because I’ve been using Ubuntu (on and off) since version 7 was released. In fact, it’s my daily driver OS on my main home PC (I’m typing this blog post up using Chromium on Ubuntu MATE 16.04, for example).
SSH
We’re going to use SSH to communicate with our new Droplet, so it might be worth taking a few minutes to get an idea of what it is before moving on (this is going to be an over-simplified explanation, by the way).
SSH stands for Secure Shell; it’s a protocol for connecting to a remote computer and issuing it commands, all over a connection secured with public-key cryptography. SSH is used by most system admins as a way of remotely running commands and scripts on the machines that they manage. We’ll do pretty much the same on our Droplet.
Depending on the operating system that you’re running, you might have to install software to allow you to connect to a remote machine using SSH. If you’re running Windows, you’ll need to install something like PuTTY; but if you’re running MacOS or a Linux distribution, then you can connect from the terminal.
Connecting to the Droplet with SSH
I’m going to assume that you have access to SSH in your terminal, but the instructions are quite similar for PuTTY (you’ll be filling in boxes on a GUI rather than issuing commands).
The first thing we’ll need is the IP address of our Droplet; we can find this on the Digital Ocean Cloud profile page. Here is my profile page (with the IP address blurred out). Under the IP address header, we’ll find the IP address. For the following examples, I’ll use a fictional IP address of 123.45.67.123.
SSH on Unix-Like OSs
With this IP address in hand, we’ll be able to connect to the Droplet using the following command:

```bash
ssh root@123.45.67.123
```
You’ll then be prompted for your SSH key or password (if you didn’t create an SSH key, then a password will have been generated and emailed to you) and you’re in. You’ll also be prompted to create a new password for the root user; make sure that you remember it, because you’ll need it again very soon.
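As an optional convenience, you can save the connection details in your local ~/.ssh/config file, so that you don’t have to remember the IP address (I’m using the fictional address from above, and ‘my-droplet’ is just an illustrative alias):

```text
Host my-droplet
    HostName 123.45.67.123
    User root
```

After that, running ssh my-droplet is all you need to connect.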
SSH on Windows using PuTTY
Using PuTTY to connect to our Droplet is really simple. Once PuTTY is installed (remember, you can get it from here), run it and you’ll see something like this:

[Screenshot: the PuTTY configuration window]
Make sure that the SSH radio button is selected, and that you’ve typed in the IP address of your Droplet:

[Screenshot: PuTTY with the Droplet’s IP address entered]
Then, in the SSH area (on the left hand side, under Connection), ensure that “SSH protocol version” is set to 2.
From here, you can click on “Open” to open a terminal and begin connecting to the Droplet. As with the Unix-like SSH steps, you’ll be prompted for your user name and password (this time, however, you’ll also be asked who you want to log in as):

[Screenshot: the PuTTY terminal prompting for a login]
In a real environment, you would create a non-root user and use that one to connect to your Droplet. But for now, we’ll use the root user.
Installing .NET Core
The instructions for installing .NET Core on Ubuntu 16.04 are available on the .NET Core website (you can read them by clicking here). The steps required boil down to two sets of commands:
```bash
sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 417A0893
sudo apt-get update
```
The above commands add the .NET Core package feed and its signing key to Ubuntu’s package source lists.
```bash
sudo apt-get install dotnet-dev-1.0.3
```
Keen-eyed readers will notice that we’re not installing the (at the time of writing) latest version of the .NET Core SDK here. This is because when I did a search for all packages with ‘dotnet-dev’ in the name:

```bash
sudo apt-cache search dotnet-dev
```
I was given the following output:
```text
dotnet-dev-1.0.0-preview2-003121 - Microsoft .NET Core 1.0.0 - SDK Preview 2
dotnet-dev-1.0.0-preview2-003131 - Microsoft .NET Core 1.0.1 - SDK 1.0.0 Preview 2-003131
dotnet-dev-1.0.0-preview2-003156 - Microsoft .NET Core 1.0.3 - SDK 1.0.0 Preview 2-003156
dotnet-dev-1.0.0-preview2-1-003177 - Microsoft .NET Core 1.1.0 - SDK 1.0.0 Preview 2.1-003177
dotnet-dev-1.0.0-preview2.1-003155 - Microsoft .NET Core 1.1.0 Preview1 - SDK 1.0.0 Preview 2.1-003155
dotnet-dev-1.0.0-preview3-004056 - Microsoft .NET Core 1.0.1 - SDK Preview 3
dotnet-dev-1.0.0-preview4-004233 - Microsoft .NET Core 1.0.1 - SDK Preview 4
dotnet-dev-1.0.0-rc3-004530 - Microsoft .NET Core 1.0.3 - SDK RC 3
dotnet-dev-1.0.0-rc4-004769 - Microsoft .NET Core 1.0.3 - SDK RC 4
dotnet-dev-1.0.0-rc4-004771 - Microsoft .NET Core 1.0.3 - SDK RC 4
dotnet-dev-1.0.1 - .NET Core SDK 1.0.1
dotnet-dev-1.0.3 - .NET Core SDK 1.0.3
```
This output doesn’t include a reference to ‘dotnet-dev-1.0.4’, so we can’t install that version.
Now that the .NET Core SDK is installed, we can check the version number with:

```bash
dotnet --version
```

The above should return 1.0.3.
Incidentally, as this post was being edited and proof read (I do this a few days before going live), Microsoft pushed the packages for versions 1.0.4 and 2.0.0 Preview 1, as a result of announcing them at Build 2017.
Installing Nginx
Nginx is a lightweight web server with support for reverse proxying, and it’s what we’re going to put in front of Kestrel.
We’re not going to expose Kestrel directly to the web because it isn’t yet ready to be a web facing server. There are a number of attacks that Kestrel doesn’t yet defend against, and Microsoft do not recommend using it for web facing applications without a more fully featured web server sat in front of it:
If you expose your application to the Internet, you must use IIS, Nginx, or Apache as a reverse proxy server
…
The most important reason for using a reverse proxy for edge deployments (exposed to traffic from the Internet) is security. Kestrel is relatively new and does not yet have a full complement of defenses against attacks. This includes but isn’t limited to appropriate timeouts, size limits, and concurrent connection limits.
The above quote was taken from https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/
So we’ll install Nginx and use that as our reverse proxy server. To install Nginx issue the following command:
```bash
sudo apt-get install nginx
```
Once Nginx has been installed, we need to start it:
```bash
sudo service nginx start
```
But that isn’t all; we also need to allow Nginx through the firewall. Depending on the traffic that you want Nginx to receive, you can open either port 80, port 443, or both. To allow Nginx through the firewall, we’re going to inform ‘ufw’ (which stands for Uncomplicated Firewall). To allow HTTP-only traffic, we should run:
```bash
sudo ufw allow 'Nginx HTTP'
```
If we wanted to allow HTTPS-only traffic, we would run:

```bash
sudo ufw allow 'Nginx HTTPS'
```
And if we wanted to allow both HTTP and HTTPS traffic, we would run:

```bash
sudo ufw allow 'Nginx Full'
```
Since this is a demo project, and we’re not dealing with sensitive requests, we can allow HTTP for now.
But before we continue, we need to make sure that we don’t wipe out the SSH rule in ufw. To ensure that we still have a rule which allows SSH connections, run the following command:

```bash
sudo ufw allow ssh
```

What we’ve done so far shouldn’t affect the SSH rule, but we’ll explicitly add it here just in case. Anecdotally, I was once unable to connect to a different Droplet via SSH, and it turned out that I’d wiped out the SSH rule there. Talk about whoops.
We can check the status of our firewall with the following command:
```bash
sudo ufw status
```
Which should give output similar to the following:
```text
Status: active

To                         Action      From
--                         ------      ----
Nginx HTTP                 ALLOW       Anywhere
Nginx HTTP (v6)            ALLOW       Anywhere (v6)
```
If you’re given the following:

```text
Status: inactive
```

Then you can start ufw with the following:

```bash
sudo ufw enable
```
Let’s check that Nginx is running correctly; we can do that with the following command:

```bash
systemctl status nginx
```
Which should give output similar to the following:
```text
nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: en
   Active: active (running) since Mon 2017-05-08 20:05:32 UTC; 1 day 15h ago
 Main PID: 3561 (nginx)
    Tasks: 2
   Memory: 3.4M
      CPU: 126ms
   CGroup: /system.slice/nginx.service
           ├─3561 nginx: master process /usr/sbin/nginx -g daemon on; master_pro
           └─4393 nginx: worker process
```
Reverse Proxy Configuration
We chose Nginx because it has support for reverse proxying, so let’s set that up now (otherwise we won’t be able to access our application).
First we need to install nano, which is a text editor:
```bash
sudo apt-get install nano
```
Now that we have nano installed, we can edit the Nginx configuration with it:
```bash
sudo nano /etc/nginx/sites-available/default
```
This will give you a screen similar to the following:

[Screenshot: nano editing the default Nginx site configuration]
The configuration file is quite big; here is mine:
```nginx
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    # Secure Nginx from clickjacking
    add_header X-Frame-Options "SAMEORIGIN";

    # MIME-type sniffing
    add_header X-Content-Type-Options "nosniff"; # thanks to commenter Fredrik Jonsén for pointing this out

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        #try_files $uri $uri/ =404;

        # .NET Core config
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```
I’ve removed everything that isn’t a useful comment or a configuration option. The best option here is to make your config match mine (pro tip: PuTTY and most SSH terminals treat a right click as the “paste” command; just sayin’). Once you’ve done that (and double checked it), you can save your changes with Ctrl+X, then select ‘y’ (to overwrite the file).
After the configuration has been saved, we need to test it by running the following command:

```bash
sudo nginx -t
```
If the response you get doesn’t indicate success, then you’ll need to edit the configuration with nano again and correct the errors it reports. Here is the success message I was given:
```text
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
Finally, we need to reload Nginx to apply our configuration. You can do that with the following command:
```bash
sudo nginx -s reload
```
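One thing worth noting before we move on: the proxy_pass line in the config above assumes that our application is listening on localhost:5000, which is Kestrel’s default. If you ever need to use a different port, you can set it explicitly in Program.cs. Here’s a sketch based on the standard .NET Core 1.x host setup (the UseUrls call is the only addition):

```csharp
using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace dwCheckApi
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                // this address must match the proxy_pass value in the Nginx config
                .UseUrls("http://localhost:5000")
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}
```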
Creating a Directory for our Application
We’ll need to install our application somewhere, so let’s store it in its own directory. We’ll choose var as the parent directory for now.
Here’s what Wikipedia says about the var directory:
Stands for variable. A place for files that may change often – especially in size, for example e-mail sent to users on the system, or process-ID lock files.
The above quote is taken from the Wikipedia article on the Unix Filesystem
We’ll need to change to the var directory on our Droplet and create a subdirectory for our app. So let’s do that:

```bash
cd /var
mkdir your-app-name
```
You’ll need to replace ‘your-app-name’ with the name of your application. Since I’m using dwCheckApi, I’ll run the following:

```bash
mkdir dwCheckApi
```
Then we need to make sure that we own the directory (this is a Unix security and permissions thing), which we can do with:

```bash
sudo chown root your-app-name
```
Again, swapping ‘your-app-name’ for the name of the directory you just created. As with the previous step, here is the command I ran:

```bash
sudo chown root dwCheckApi
```
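As an aside: since we’re logged in as root, root ownership is fine for this walkthrough. In a real deployment, though, you’d likely create a dedicated user for the application and hand the directory over to that user instead; something along these lines (the user name here is illustrative):

```bash
# create a dedicated, unprivileged user for the application
sudo adduser dwcheckapi
# give that user ownership of the application directory
sudo chown -R dwcheckapi:dwcheckapi /var/dwCheckApi
```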
Publishing to A Digital Ocean Droplet
Before we can publish our application, we need to build a release version of it. We have two options here; we can build either:
- A framework-dependent deployment (FDD)
- A self-contained deployment (SCD)
Here is how Microsoft describes the differences between the two build types:
- Framework-dependent deployment. As the name implies, framework-dependent deployment (FDD) relies on the presence of a shared system-wide version of .NET Core on the target system. Because .NET Core is already present, your app is also portable between installations of .NET Core. Your app contains only its own code and any third-party dependencies that are outside of the .NET Core libraries. FDDs contain .dll files that can be launched by using the dotnet utility from the command line. For example, dotnet app.dll runs an application named app.
- Self-contained deployment. Unlike FDD, a self-contained deployment (SCD) doesn’t rely on the presence of shared components on the target system. All components, including both the .NET Core libraries and the .NET Core runtime, are included with the application and are isolated from other .NET Core applications. SCDs include an executable (such as app.exe on Windows platforms for an application named app), which is a renamed version of the platform-specific .NET Core host, and a .dll file (such as app.dll), which is the actual application.
The above quote is taken from the Microsoft documentation on deploying .NET Core applications.
We could build either of these release types, but since we already have .NET Core installed on the Droplet, we’ll build an FDD version. This gives us the added bonus of a much smaller deliverable which, in turn, means a much shorter upload time.
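For comparison, if we did want an SCD build, we’d need a runtime identifier for the target platform declared in the project file and passed to dotnet publish; something like the following (hedging a little here, as the exact runtime identifier depends on the target OS and your project setup):

```bash
# FDD (what we're doing): relies on .NET Core being installed on the Droplet
dotnet publish -c release

# SCD (for comparison): bundles the runtime with the app; needs a runtime
# identifier (e.g. ubuntu.16.04-x64) declared in the project file first
dotnet publish -c release -r ubuntu.16.04-x64
```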
On your development machine (i.e. not the Droplet), go back to the terminal we used at the beginning of this post (to get the latest version of the code and build it), and run the following command:

```bash
dotnet publish -c release
```
This will create a directory called release within your bin directory, and within that a release build of our application. Here is a screen shot of the release directory in VS Code on my machine.
We now need to send the files up to the Droplet. We’ll use FileZilla to create an SFTP connection to the server and publish our application to the directory we created in the previous section (you can use whichever SFTP application you wish, but I like FileZilla, so I’ll use that).
In FileZilla we need to set the following for our connection:

- Host: sftp://123.45.67.123
- Username: root
- Password: the password you were emailed, or the SSH key
As with the earlier steps, I’ve used 123.45.67.123 as a fake IP address. As an example, here is a screenshot showing those settings.
Once you’ve supplied the connection information, click “Quickconnect” and FileZilla will establish an SFTP connection. You’ll be asked whether you want to trust the unknown host key; this is because FileZilla has never connected to this server before, so it hasn’t seen the host key that the server returns. Click “OK” and your connection will be complete.
You should see something similar to this:

[Screenshot: FileZilla connected to the Droplet]
In the lower left window (labelled Local Site), navigate to the publish directory where the release build was created (you can check what this is by going back to the terminal you used to build the release version of the application), and in the lower right window (labelled Remote Site) navigate to ‘/var/your-app-name’, replacing ‘your-app-name’ with the name of the directory you created earlier.
As a pro tip: if you know the path of the directory you want to navigate to, just type it into the dropdown above the relevant directory listing.
Here is a screen shot showing both directories for my set up:

[Screenshot: FileZilla showing the local publish directory and the remote application directory]
Select all of the files in the “publish” directory on the Local Site (i.e. your machine, on the left), right click on them, and select “Upload”; FileZilla will start the process of uploading the files to your Droplet.
Once all of the files have been SFTP’d (can SFTP be a verb?) to the Droplet, head back to your SSH session and issue the following commands:
```bash
cd /var/dwCheckApi
dotnet dwCheckApi.dll
```
You’ll have to substitute ‘dwCheckApi’ with the name of your app (unless you’re using dwCheckApi to play along).
Then all we need to do is point our browser at the IP address of our Droplet (remember, I’ve been using 123.45.67.123 as an example throughout this post) and we’ll see our application running in all of its glory.
And here is the response to sending a character search query.
Caveat
To run our .NET Core application, we need to issue the following command:
```bash
dotnet dwCheckApi.dll
```
However, as soon as we close our SSH connection (pro tip: to disconnect from SSH, use Ctrl+D), the running application will be stopped. This is because it’s running as a child of the SSH session. To enable our application to stay alive after we’ve closed the SSH connection, we’d have to use a service.
Looking at the length of this post as it stands, I’m a little reluctant to throw in extra content about services and how to set one up so that our application continues to run (and you’re more than likely going to think that I’m trying to cheat you out of some knowledge here). However, there is an article in the .NET Core documentation about this very thing. You can read it here, if you wish.
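That said, to give you a flavour of what’s involved: one common approach is a systemd service definition. A minimal sketch (using the /var/dwCheckApi directory from earlier; the file name and identifiers are illustrative) might look something like this, saved as /etc/systemd/system/dwCheckApi.service:

```ini
[Unit]
Description=dwCheckApi running behind Nginx

[Service]
WorkingDirectory=/var/dwCheckApi
ExecStart=/usr/bin/dotnet /var/dwCheckApi/dwCheckApi.dll
# restart the application if it crashes, waiting 10 seconds between attempts
Restart=always
RestartSec=10
SyslogIdentifier=dwcheckapi
Environment=ASPNETCORE_ENVIRONMENT=Production

[Install]
WantedBy=multi-user.target
```

You’d then run sudo systemctl enable dwCheckApi followed by sudo systemctl start dwCheckApi, and the application would survive the SSH session being closed. But do read the documentation article for the full details.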
Conclusion
There’s a little more setup involved in publishing to a Digital Ocean Droplet than in publishing to Azure. However, if you’re not running Visual Studio on Windows or on MacOS, then you’ll have to publish to Azure via SFTP anyway.
I really quite like running all of the commands and building up my server by hand, but that’s personal taste.
There’s a lot to take in here, but it’s worth knowing. At the very least, it’s worth knowing how to set up a .NET Core application on a non-Azure cloud VM (in my opinion, at least).
I apologise for the sheer length of this post, but I wanted to be a little thorough where I thought it necessary. Also, this post is my longest yet (clocking in at 3.5 thousand words). Eep.
For those who want to try out a Digital Ocean Droplet, you can use this referral link and get $10 of credit, on me, when you create your account. Aren’t I nice?