Monday, 11 May 2020

Deploy a Containerized .NET Core 3.1 Web API to Kubernetes

It's been a while since I last wrote and it's been a very interesting journey so far. Busy as usual, it's been hard to find the time to write something meaningful here. This time I'm bringing you something I've been playing with for a while and that I hope can bring your development to another level. In this article, I will show you how easy it is to containerize your .NET Core applications into a Docker container and then deploy them to a Kubernetes cluster. I'm also preparing my Raspberry Pi cluster to become my main Kubernetes cluster and I will write about it once everything is up and running.

First, you'll need the following tools and requirements (Note that I'm using Windows 10 as my main OS, so everything will be configured around it):

1) Docker Desktop for Windows:
At the time of this article, I'm using the latest version of Docker Desktop. You can get it from the Docker website and follow the installation steps there.

2) Enable Kubernetes
Once everything is installed, open Docker preferences and select Kubernetes. Enable all the options there and restart Docker. That should bring back Docker and Kubernetes and you should see them online as in the picture below:

3) Enable File Sharing
You need to make sure you enable File Sharing so the container can see your local drive.

4) Get VS Code and Install Docker plugin (v1.1.0)
I won't go into much detail about the .NET Core installation as I'm guessing you should all be able to build and run .NET Core applications locally. In this step, you'll need to download VS Code if you haven't already, along with the Docker plugin, which will allow us to create Dockerfiles easily as it already includes a pre-built template.

5) Get VS Code Kubernetes plugin (v1.2.0)
Install the plugin to generate the YAML files that will be needed to configure the cluster. This requires the YAML component and at the time of this article, I was using version 0.8.0.

6) API Testing
In order to test the API, I recommend using Swagger as it will easily expose the API functions and arguments that you can test. Alternatively, you can use Postman, which is also a great tool for this.

Dockerizing your Web API
Now we are ready to go to our Web API project and create a Dockerfile. Note that I'm using a complex project for this exercise and not just a simple hello world. This API serves additional complex requests and it also requires a database. It also runs a few services and exposes most of the functionality via Swagger UI.

Go to the folder of your project and type "code ." to launch VS Code on the project. Then go to View -> Command Palette, type "docker add" as in the picture below and select "Docker: Add Docker Files to Workspace":

Once you add the template, it will ask you for the following details:
- Application platform: ASP.NET Core.
- Operating System (container): Linux
- Ports to open: 80 and 443

The example used is a bit more complex and it includes several dependencies that need to be copied across during the creation of the container. Below you can find a dependency diagram which shows the project structure:
The application consists of a Web API that allows you to submit trading orders and it includes a service layer that performs the triggers internally. It also connects to a SQL Server DB which sits outside the container on my local machine. So, the connection string in the project needs to point to my local machine plus the SQL Server port (1433). Sample connection string (host omitted here): "DefaultConnection": "user id=user; password=password;Initial Catalog=TradingPlatform;Data Source=<your-host>,1433;". Note that from inside a Docker Desktop container, host.docker.internal resolves to the host machine, which is a handy value for the Data Source host.

The final Dockerfile should look as follows:

The file specifies an ASP.NET Core 3.1 base layer, switches to the /app working directory and exposes ports 80 and 443. The second image includes the SDK; it copies all our source code in, runs the different commands to restore the dependencies and finally runs the dotnet build command on our solution.
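The generated file isn't reproduced here, so below is a sketch of what it typically looks like for this kind of project. The project and solution names (TradingPlatform) are illustrative placeholders, not the actual file names; adjust them to your own:

```dockerfile
# Base layer: ASP.NET Core 3.1 runtime, working directory /app, ports 80/443
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

# Build layer: full SDK, copy the source in, restore and build
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["TradingPlatform/TradingPlatform.csproj", "TradingPlatform/"]
RUN dotnet restore "TradingPlatform/TradingPlatform.csproj"
COPY . .
WORKDIR "/src/TradingPlatform"
RUN dotnet build "TradingPlatform.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "TradingPlatform.csproj" -c Release -o /app/publish

# Final image: runtime base plus the published output only
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TradingPlatform.dll"]
```

The multi-stage build keeps the final image small: the SDK is only present in the intermediate build layer, and the final image contains just the runtime and the published output.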

Now that we have our Dockerfile, we can build the image: open a new terminal via VS Code and type "docker build -t trading-platform:v1 .". The output should look like this:

If everything works as expected, the image should now contain the API and we should be able to see that it was built correctly (using the command docker images):

Now we just need to run our container (docker run -it --rm -p 8080:80 trading-platform:v1) and test that everything is working correctly. Note that host port 8080 is mapped to container port 80.

As you can see in the image below, the API is up and running and I can explore it via Swagger UI on localhost:8080/swagger/api/index.html, which is the port we have exposed through our container:

Deploying it to Kubernetes

We now have a Docker container with an ASP.NET Core Web API that talks to a SQL Server DB, and it's time to deploy it to a Kubernetes cluster. Your Kubernetes cluster should already be up and running if you followed the steps above during the Docker installation.

Check that your Kubernetes context is switched to docker-desktop. You can verify the configuration by running "kubectl config get-contexts":

This will allow us to select the context we want to work with. Note that I have additional clusters created on my local machine.

Now we need to create the deployment file (deployment.yml). Generate a new deployment.yml file in your folder, open it in VS Code and type "deployment"; that will bring up the inline completion from the Kubernetes plugin. Then fill in the gaps with the information you need to set up the cluster and pods, as shown below:

We will provide a deployment name, "trading-platform-deployment", the name of the pod, "trading-platform-pod", and the name of the Docker image to use, which in our case is "trading-platform:v1". We will then specify port 80 as the container port.
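For reference, here is a sketch of what a deployment.yml with those names could look like. The "app" label is my own choice (any consistent label works), and imagePullPolicy: IfNotPresent is an assumption that tells Kubernetes to use the locally built image rather than trying to pull it from a registry:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: trading-platform-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: trading-platform-pod
  template:
    metadata:
      labels:
        app: trading-platform-pod
    spec:
      containers:
        - name: trading-platform-pod
          image: trading-platform:v1       # the image we built earlier
          imagePullPolicy: IfNotPresent    # use the local image, don't pull
          ports:
            - containerPort: 80
```

Apply it with "kubectl apply -f deployment.yml" and check the result with "kubectl get deployments" and "kubectl get pods".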
If everything goes well, you should see your deployment ready in Kubernetes, along with the pods. We can also see that the app is running by inspecting the logs:

In order to make it publicly available, we need to provide a service layer that will give us the IP to reach the container. Now we need to generate a service.yml file (press Ctrl+Space to trigger the Kubernetes plugin and select the deployment service option to scaffold the service template):

The service is linked to the pod; we specify the port we want to expose (8080), the port in the container (80) and the type of service, which is LoadBalancer in our case. We can now see that everything is running correctly using the following commands:
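A sketch of the matching service.yml follows. The selector is an assumption that must match whatever pod label you used in your deployment, and the service name matches the delete command at the end of this article:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: trading-platform-service
spec:
  type: LoadBalancer
  selector:
    app: trading-platform-pod   # must match the pod label in the deployment
  ports:
    - port: 8080                # port exposed outside the cluster
      targetPort: 80            # port the container listens on
```

Apply it with "kubectl apply -f service.yml" and verify with "kubectl get services"; on docker-desktop, the LoadBalancer external IP resolves to localhost.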

And presto! Now we have our API running in a Docker container and deployed to a Kubernetes cluster. And to make it more realistic, this is a complex project with services and additional libraries, as well as a connection to a SQL Server DB.

If we browse to http://localhost:8080/swagger/api/index.html we will be able to reach the API.

Once completed, if you want to stop the service and pod, type:
- kubectl delete service trading-platform-service
- kubectl delete pod trading-platform-deployment-6bf776f966-8xs7r

Saturday, 10 August 2019

Date and Time in Console log in ASP.NET Core

One of the challenges that I have found so far when moving to ASP.NET Core is that the default built-in console logger is not as verbose as I was expecting when displaying dates and times. To me this is the most obvious thing to show, as I want to see when something happened:

As you can see in the image above, you can appreciate all the actions hitting the different controllers, but there is no timestamp. This was raised as an issue with the aspnet team but it was deferred as it was not considered a critical piece of functionality... I guess it depends on who decides what's critical. In any case, I'm the bearer of good news with the solution to your problems.

Serilog to the rescue!

With Serilog you can easily create a template to display the message in any way we see fit. To accomplish this, here are the different components that you need (I'm already using the latest .NET Core 3.0 Preview 7):

Install the following packages:
  1. - Serilog.AspNetCore (latest, at the time of this example, was 2.1.2-dev-00028)
  2. - Serilog.Sinks.Console (latest, at the time of this example, was 3.1.2-dev-00779)
Add the following code to your Program.cs file:
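The snippet isn't reproduced here, so below is a minimal sketch of the wiring in Program.cs. The output template is just an example (the {Timestamp:...} token is what adds the date and time), and the builder shape assumes the standard ASP.NET Core web host template:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Serilog;

public class Program
{
    public static void Main(string[] args)
    {
        // Configure a console sink with a timestamped output template
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console(outputTemplate:
                "[{Timestamp:yyyy-MM-dd HH:mm:ss} {Level:u3}] {Message:lj}{NewLine}{Exception}")
            .CreateLogger();

        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseSerilog() // replace the built-in logger with Serilog
            .UseStartup<Startup>();
}
```

Any format string accepted by .NET date formatting works inside the {Timestamp:...} token, so you can make it as terse or as verbose as you like.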

And now you will see the log that you need:

Hope it helps!

Sunday, 4 August 2019

Enable .NET Core 3.0 Preview 7 on Visual Studio 2019

I've started migrating all my ASP.NET Core 2.1 apps to the latest ASP.NET Core 3.0, but as it's still marked as a preview as of July 2019, you will have to tell Visual Studio to allow these kinds of packages. So for now, to create ASP.NET Core 3.0 projects with VS2019 (version 16.2.1), you should do the following:

  • Enable preview releases of .NET Core SDK
    • Click Tools | Options in the top menu
    • Expand Environment | .NET Core 
    • Ensure that "Use previews of the .NET Core SDK (requires restart)" checkbox is checked

Once you restart Visual Studio and install all the necessary components mentioned above, you should see the following option under Target framework in your project:

Now, making your project build and work is another story, but at least the framework is there for you to play with!

Happy Coding!

Sunday, 16 September 2018

Creating a File Server using ASP.NET Core 2.1 Static Files Middleware

One of the coolest features of ASP.NET Core is the ability to serve static files on HTTP requests without any server-side processing. This means that you can leverage this technology to build your own file server to serve static files over the wire. I managed to code a quick solution over the weekend to be able to upload/download files to one of my servers easily with this technology.

It all started with this Tweet:

The issue here is that both of us spend loads of time taking pictures and videos of our daughter and end up sharing all this data via WhatsApp. By the end of the day we both have the same information but in different resolutions and qualities, which makes our life quite difficult when trying to guess which picture has the highest quality for printing (we tend to print most of the pictures). So we needed something simpler and quicker where we could store all these pictures and videos (keeping the highest possible quality) and that would be easier for both of us to share.

.NET Core and Ngrok to the rescue. 

With Ngrok, I can easily expose one of my websites to the world via one of the ngrok tunnels. I own a professional account with them and it has really made my life much easier, as I can expose whatever I need to the world without having to tinker with my router. This helps me to expose services from my Raspberry Pis and from my servers.

Using the .NET Core static files middleware, I was able to build a quick solution (accessible via any browser and mobile responsive) in just 300 lines of code.

The main features of the application are:
  • Multiple file uploader (up to 300 Mb).
  • Multiple file downloader.
  • Daily browsable functionality (it allows you to navigate on each day to see the list of files uploaded).
  • Thumbnail automatic generation using MagicScaler.
  • Automatic movie conversion via CloudConvert (this allows me to share videos between mobile devices as iPhones generate .mov files which cannot be played on Android devices).
  • Keep the existing quality of the file (full size). This means uploading huge files into the File server.
  • Cookie authentication with Policies.

You can see the flow in the image below:

Sample code for the file uploader can be found below:
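The original sample isn't included inline, so here is a minimal sketch of what a multi-file upload action could look like. The route, the _environment field and the date-based folder scheme are illustrative assumptions, not the actual implementation:

```csharp
[HttpPost("upload")]
[RequestSizeLimit(314_572_800)] // ~300 MB, matching the uploader limit
public async Task<IActionResult> Upload(List<IFormFile> files)
{
    // Store uploads in a per-day folder so they can be browsed by date
    var folder = Path.Combine(_environment.WebRootPath,
                              DateTime.UtcNow.ToString("yyyy-MM-dd"));
    Directory.CreateDirectory(folder);

    foreach (var file in files.Where(f => f.Length > 0))
    {
        var target = Path.Combine(folder, Path.GetFileName(file.FileName));
        using (var stream = new FileStream(target, FileMode.Create))
        {
            await file.CopyToAsync(stream); // copy the raw bytes - full quality kept
        }
    }
    return Ok(new { uploaded = files.Count });
}
```

Path.GetFileName strips any client-supplied directory components, which avoids path traversal when writing the file to disk.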

To list the files, I use the File provider in ASP.NET Core which uses the static file middleware to locate static files, in my case pictures and videos.
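As a sketch of that idea, enumerating one of the daily folders with a physical file provider could look like this (the date folder name and the env variable are illustrative; this would live inside a controller or service):

```csharp
using Microsoft.Extensions.FileProviders;

// Point a physical file provider at a daily folder and enumerate its files
var provider = new PhysicalFileProvider(Path.Combine(env.WebRootPath, "2018-09-16"));
foreach (IFileInfo file in provider.GetDirectoryContents(string.Empty))
{
    Console.WriteLine($"{file.Name} ({file.Length} bytes)");
}
```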

There are quite a lot of things to do to improve the application, but now anyone who uses it can easily upload/download pictures from one of my servers, which is monitored and constantly backed up so no picture gets lost. Also, segregating pictures by date helps a lot to find the one you are looking for.

I'm really impressed with the latest release of .NET Core (note that this has been built with .NET Core 2.1 and there is already a 2.2 preview version available) as the requests are really fast and you can't notice any lag, even browsing from your phone on 3G, which gives a nice user experience.

This is how it looks when browsing from my phone:

Source code will be available soon on my GitHub page as I'm trying to figure out an issue with HTTPS redirection, which still does not work correctly, and without it, it doesn't make sense to publish.

Jordi Corbilla

Sunday, 1 July 2018

Configure TeamCity to access private GitHub Repositories

One of the challenges I have been facing lately after moving to private repositories on GitHub is the ability to access them via TeamCity. The issue is that now the repository is not accessible via https and you have to find an alternative to retrieve the source code of your repository securely.

For this task, I will show you how to use GitHub Deploy keys and how to configure TeamCity to interact with your private repository.

The overall idea can be seen in the figure above. First, we will have to create the keys so we can place them in the required sections. To generate them, you can just use Git Bash and the command below:
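The command is a standard ssh-keygen invocation; a sketch follows. The email comment is just a label (use your own), and I'm writing the keys into the current directory here for illustration, whereas by default ssh-keygen puts them in ~/.ssh:

```shell
# Generate a 4096-bit RSA key pair with no passphrase (-N "")
ssh-keygen -t rsa -b 4096 -C "deploy-key@example.com" -f ./id_rsa -N ""
ls -l id_rsa id_rsa.pub
```

If you want the key protected, replace the empty -N argument with a passphrase; you will need to provide that same passphrase later in the TeamCity configuration.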

Once finished, you will have two keys, the private and the public one.

Installing the public key in GitHub:

The operation above should've produced 2 keys (files):

  • id_rsa (private key)
  • id_rsa.pub (public key)

Open the public key file or run the following command in your Git Bash console to copy its content into the clipboard: clip < ~/.ssh/id_rsa.pub

Now, go to your private repository on GitHub and select "Settings" and then "Deploy keys". Once there, click on "Add deploy key" and paste the content of the file you opened before / copied into the clipboard.

Once completed, you should see something like the image below (note that the image below shows that the key has been already used):

Installing the private key in TeamCity:

The following operations were done on the latest version of TeamCity at the time of publication of this article (2018.1, build 58245). I tried version 2017 initially and the configuration didn't work (just so you know, if you are still on any version prior to 2018.1):

Click on your project overview and click on "Edit Project Settings". Select "SSH Keys" and click the "Upload SSH Key" button to upload your id_rsa file:

Now the SSH key will be available in your VCS root. Go to your build step and add a Git VCS root that will pull the source code from the repository. The parameters that you have to configure are as follows:

  • VCS Root Name: Name of your VCS.
  • Fetch URL: URL of your repository in SSH format, e.g. git@github.com:<user>/<repo>.git (not in HTTPS format, as that will not be available because the repository is private). In this case you will have to swap the HTTPS URL for the git one as shown in the sample below:
  • Default branch: refs/heads/master
  • Authentication method: Uploaded Key
  • Username: empty (don't type anything here)
  • Uploaded Key: id_rsa (the one we've just uploaded)
  • Password: type the passphrase you configured for your private key, if any.

If you now test the connection, it should be successful:

If you have a look at your project, you will see that the project is successfully connecting to your repository and pulling out the changes that are pending to be implemented in your pipeline:

I hope you find it useful as I have spent quite a lot of time just trying to find the right approach.