Sunday, 1 July 2018

Configure TeamCity to access private GitHub Repositories

One of the challenges I have been facing lately, after moving to private repositories on GitHub, is accessing them from TeamCity. The issue is that the repository is no longer accessible via plain HTTPS, so you have to find an alternative way to retrieve the source code of your repository securely.

For this task, I will show you how to use GitHub Deploy keys and how to configure TeamCity to interact with your private repository.


The overall idea can be seen in the figure above. First, we have to create the keys so we can place them in the required sections. To generate the keys, you can use Git Bash and the command below:
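A typical invocation looks like this (the exact flags are my suggestion; -f sets the output file names and -N "" gives an empty passphrase, so omit -N if you want to be prompted for one):

```shell
# Generate an RSA key pair for the deploy key.
# Produces ~/.ssh/id_rsa (private) and ~/.ssh/id_rsa.pub (public).
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f ~/.ssh/id_rsa -N ""
```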


Once finished, you will have two keys, the private and the public one.

Installing the public key in GitHub:

The operation above should have produced two keys (files):

  • id_rsa (private key)
  • id_rsa.pub (public key)

Open the file id_rsa.pub, or run the following command in your Git Bash console to copy the content of the file to the clipboard: clip < ~/.ssh/id_rsa.pub

Now, go to your private repository on GitHub, select "Settings" and then "Deploy Keys". Once there, click on "Add deploy key" and paste the content you copied to the clipboard.


Once completed, you should see something like the image below (note that the image shows a key that has already been used):


Installing the private key in TeamCity:

The following steps were performed on the latest version of TeamCity at the time of publication of this article (2018.1, build 58245). I initially tried version 2017 and the configuration didn't work, so bear this in mind if you are still on any version prior to 2018.1:

Click on your project overview and click "Edit Project Settings". Select "SSH Keys" and click the "Upload SSH Key" button to upload your id_rsa file:


Now the SSH key will be available to your VCS roots. Go to your build configuration and add a Git VCS root that will pull the source code from the repository. The parameters you have to configure are as follows:

  • VCS Root Name: the name of your VCS root.
  • Fetch URL: the URL of your repository in git (SSH) format, not HTTPS format, as HTTPS is not available for a private repository. You will have to swap the HTTPS URL for the git one, as shown in the sample below:
  • Default branch: refs/heads/master
  • Authentication method: Uploaded Key
  • Username: leave empty (don't type anything here)
  • Uploaded Key: id_rsa (the one you've just uploaded)
  • Password: the passphrase you configured for your private key, if any.
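As a concrete example of the Fetch URL change (user and repository names here are hypothetical):

```
https://github.com/youruser/yourrepo.git    <-- HTTPS format, won't work for a private repo
git@github.com:youruser/yourrepo.git        <-- SSH (git) format that works with the uploaded key
```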

If you now test the connection, it should be successful:


If you have a look at your project, you will see that it successfully connects to your repository and pulls the pending changes into your pipeline:

I hope you find this useful, as I spent quite a lot of time just trying to find the right approach.

Jordi

Sunday, 15 April 2018

Detecting Ajax Requests In ASP.NET MVC 5

Security is one of my major concerns nowadays, and it is quite common for someone to try to browse to a particular URL given the chance. To prevent this, we can detect whether a request came from an Ajax call or from a normal browser request.

Within ASP.NET MVC 5 applications it is quite easy to check whether a request was made via AJAX, through the IsAjaxRequest() extension method available on the Request object. IsAjaxRequest() works by simply checking for the X-Requested-With header.

The example below shows how to wrap this functionality inside an action filter so you can add this check to any controller you want.

Creating the Action Filter:
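A minimal sketch of such a filter (the attribute name AjaxOnlyAttribute and the choice of returning a 404 are my own) could look like this:

```csharp
using System.Web.Mvc;

// Action filter that rejects any request that was not made via AJAX.
public class AjaxOnlyAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // IsAjaxRequest() checks for the X-Requested-With: XMLHttpRequest header.
        if (!filterContext.HttpContext.Request.IsAjaxRequest())
        {
            filterContext.Result = new HttpNotFoundResult();
        }
        base.OnActionExecuting(filterContext);
    }
}
```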

Then just use the attribute in any of the actions you want to perform this operation:
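Assuming the filter is named AjaxOnlyAttribute (the action name below is just an illustration), applying it is as simple as decorating the action:

```csharp
[AjaxOnly]
public ActionResult GetLatestItems()
{
    // This action can now only be reached via an AJAX call.
    return PartialView("_LatestItems");
}
```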

And now, if you try to browse that action from a normal request you will get the following error:


Now you can make your site more secure without worrying about someone browsing directly to the action.

Monday, 25 September 2017

Make your Delphi applications pop with Font Awesome!

Yes, you heard right: Font Awesome! You can use their icons to make your desktop applications pop. Nowadays I use it to make my websites look nicer, without having to worry about finding icons for my apps, editing them and so on. Font Awesome is one of the smartest things you can use to make your applications pop.

Use Font Awesome icons in Desktop apps

First download Font Awesome and install it in your computer. At the time of this article I was using version 4.7.0 so I downloaded font-awesome-4.7.0.zip and installed the FontAwesome.otf file on my Windows 10 machine:



Font Awesome provides a cheatsheet that can be used to copy and paste the icons directly in your app so you don't have to worry about memorising any particular code to make the icon appear:



Nowadays I use the common approach where I buy certain icons or draw them myself using Photoshop (although this second option is quite time consuming and I only do it when I want to achieve the best results).


I'm sure you are all familiar with this approach: you add your icon in bmp format to an ImageList component, link the ImageList to the button and select the icon index so it appears on your button, as displayed in the image above. The problem arises when you want different sizes of that button, as you then have to provide matching icon sizes for each, and so on.

So one of the things I tend to do now in my applications, and which makes them look a bit more consistent (in terms of user experience, as the user sees the same style of icon throughout the application), is to use Font Awesome:


The animation above shows how to easily replace one of your icons with a Font Awesome icon:

  • Locate the icon you want in the Font Awesome cheat-sheet.
  • Copy the icon image (not the Unicode code).
  • Locate the component you want to add the icon to.
  • Select Font Awesome font.
  • Paste the icon in the caption or text zone.
  • Adjust size to your needs.
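The same replacement can also be done in code. Here is a small sketch (component and form names are illustrative; it assumes FontAwesome.otf is installed and registered under the font name 'FontAwesome', and uses U+F015, the fa-home glyph from the 4.7 cheatsheet):

```delphi
procedure TForm1.ApplyFontAwesomeIcon;
begin
  // Switch the button's font to Font Awesome and paste the glyph as the caption.
  Button1.Font.Name := 'FontAwesome';
  Button1.Font.Size := 14;
  Button1.Caption := #$F015; // fa-home
end;
```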

In the images below you can compare the before/after and you'll see that the difference is noticeable:

Before (mixture of icons in bmp format plus some png images made with Photoshop):


After (Font Awesome fonts replacing all the icons):

Notice that now I can even include icons where there used to be only text! This way I can compose my headers more nicely and include a very descriptive icon.

You will need the Font Awesome font installed on your machine or the target machine in order to take advantage of this functionality. I used the latest Delphi 10.2 Tokyo for this one, in case anyone was wondering.

The example above is for VCL only, but it should also work for FMX applications.

Example with FMX:


Jordi
Embarcadero MVP

Thursday, 27 July 2017

JSON RTTI Mapper with Delphi

One of the patterns I have observed a lot while playing with JSON streams is that I create my object based on the JSON stream and then set the properties of that particular object manually, so I can work with the object rather than with the JSON itself.

Let's observe the following example. Imagine that we have the following JSON stream:
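A representative stream (the values and lowercase field names are illustrative) would be:

```json
[
  { "name": "John", "surname": "Smith", "age": 30, "address": "1 Main Street, London" },
  { "name": "Jane", "surname": "Doe", "age": 28, "address": "2 High Street, Manchester" }
]
```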


As you can see, this JSON represents a list of employees, and each employee has the properties Name, Surname, Age and Address. So if we want to hold this in a TList<T>, we will have to create a class TEmployee with those properties and then manually assign each JSON parameter to the corresponding property, as in the example below:

The code is quite straightforward: we need to loop through the JSON array and populate the list.
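A sketch of that manual mapping, using the System.JSON unit (field names assumed lowercase, as in a typical JSON payload), could look like this:

```delphi
uses
  System.JSON, System.Generics.Collections;

function ParseEmployees(const JsonText: string): TList<TEmployee>;
var
  JsonArray: TJSONArray;
  Item: TJSONValue;
  Employee: TEmployee;
begin
  Result := TList<TEmployee>.Create;
  JsonArray := TJSONObject.ParseJSONValue(JsonText) as TJSONArray;
  try
    for Item in JsonArray do
    begin
      // Assign each JSON field to the matching object property by hand.
      Employee := TEmployee.Create;
      Employee.Name := Item.GetValue<string>('name');
      Employee.Surname := Item.GetValue<string>('surname');
      Employee.Age := Item.GetValue<Integer>('age');
      Employee.Address := Item.GetValue<string>('address');
      Result.Add(Employee);
    end;
  finally
    JsonArray.Free;
  end;
end;
```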

If you look closely, you will see that we are basically mapping a field in the JSON object called "name" to an object property called "Name". To put it simply, it would literally be something like this:

Any mapper out there does one simple job: mapping a field from one source to another.

So the question is: how do we achieve this in a cleverer way? Easy, let's use RTTI to map those properties!

Using the methods TypInfo.GetPropList and TypInfo.SetStrProp, you can easily explore the list of published properties of your class and set their values. To make use of these RTTI capabilities, you will have to move the properties to the published section of the class so they are visible to the RTTI.
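A sketch of such a mapper, restricted to string properties for brevity (the procedure name and the lowercase field-name convention are my own choices), might look like this:

```delphi
uses
  System.TypInfo, System.JSON, System.SysUtils;

procedure MapJsonToObject(Source: TJSONObject; Target: TObject);
var
  PropList: PPropList;
  Count, I: Integer;
  JsonValue: TJSONValue;
begin
  // Ask the RTTI how many published properties the class has.
  Count := GetPropList(Target.ClassInfo, tkProperties, nil);
  GetMem(PropList, Count * SizeOf(PPropInfo));
  try
    GetPropList(Target.ClassInfo, tkProperties, PropList);
    for I := 0 to Count - 1 do
    begin
      // Look up the JSON field matching the property name (lowercased).
      JsonValue := Source.GetValue(LowerCase(string(PropList^[I]^.Name)));
      if (JsonValue <> nil) and
         (PropList^[I]^.PropType^.Kind in [tkString, tkLString, tkUString]) then
        SetStrProp(Target, PropList^[I], JsonValue.Value);
    end;
  finally
    FreeMem(PropList);
  end;
end;
```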

Now you know how to use the RTTI to read the list of published properties and set them to a specific value. These examples have been coded with Delphi 10.2 Tokyo and you can find part of the mapper in one of the projects I'm currently working on: COCAnalytics.

There are many libraries out there that do amazing things with JSON so it's up to you to explore them. At least now you know how to map using the RTTI.

Happy coding!

Jordi
Delphi MVP.

Tuesday, 25 July 2017

Writing quality code with NDepend v2017

The new version of NDepend v2017 has totally blown my mind. I can't stop exploring the new and enhanced Dashboard with features like Technical debt estimation, Quality Gates, Rules and Issues. In this post I will try to summarise what's new with NDepend v2017 and how to use these new features to write quality code.

I'm sure you have experienced the feeling when you start typing code and, a few weeks into the project, you don't really know if what you have designed and coded is actually good or bad (by bad I mean that it's in some way rigid, fragile or non-reusable). I'm an advocate of Continuous Integration, so I do have loads of metrics that help me easily identify broken windows or code smells in my code during the check-in stage. All these metrics cover areas such as Design, Globalisation, Interoperability, Mobility, Naming, Performance, Portability, Security, Usage and so on. But none of them gives a global rating that I could easily use to check whether my project is actually good or bad.

Enhanced Dashboard

This new version contains a new set of application metrics that really improves the overall quality of the product. I will start integrating it into my release procedure, as it gives a really good grasp of the status of the project from the coding side.

Here is a quick sneak peek of the new dashboard:


The aspects I'm most interested in and that I will delve into detail are the following ones:

Technical Debt Estimation

This new feature is a MUST for me. Just analyse your project with NDepend v2017 and let it give you the percentage of technical debt according to the rules you have configured for your project. After every analysis you can see the trend and act accordingly using this metric:

This section reflects the settings I have configured for my project. In this case the debt has increased from 7.75% to 7.93% due to the increased number of issues in the solution. It also determines that the time needed to reach band "A" is 3 hours and 32 minutes. The total amount of time needed to fix all the issues is the Debt (1 day and 1 hour).

To get values closer to reality, you have to configure your project to specify how long it would take you or any member of your team to fix an issue (most of the time I just specify half a day per issue as a rule). Here you can see the settings I have specified in my solutions as a rule of thumb, which you could consider for your projects:


These settings use the following considerations:

  • Your team will mostly code 6 hours a day. The rest of the time is spent on meetings, emails, research, etc.
  • The estimated effort to fix one issue is 4 hours. That's the minimum I would give as an average: some issues are fixed in 5 minutes and others might take quite a bit of time. Don't forget that this time also includes filling in the ticket details in your Scrum environment, documentation, etc.
  • Then, depending on the severity of the issue, there is also a threshold, as you can see in the figure above.

Another aspect to consider for a proper estimation is code coverage. If you configure coverage correctly in your solution, NDepend can pick up that data and use it to produce a more comprehensive estimation.

To configure code coverage for NDepend you can follow my steps below:


Configuring JetBrains DotCover.

Once you've run your initial analysis, NDepend will also ask you to configure Code Coverage to get more information about your project and some additional metrics.

Go to the NDepend project coverage settings under the Analysis tab; there you'll have to select the XML file generated by DotCover.


If you run your tests with ReSharper, you can select the coverage option, then go to the export button in that menu and select "Export to XML for NDepend". Leave this file in a known folder so you can automate this easily later on. The goal here is to configure everything manually first; afterwards you can do the work needed to trigger all this from your build agent and get the report at the end of the run.

Choose the exported file and run your analysis again:


Now with all these details if you run NDepend you should get something like this:


Now you can see the proper debt and the coverage. This is a little project I'm currently working on, and it works really well to demonstrate how good NDepend is. If you don't know what one of the terms means, you can just click on it and you'll be redirected to a panel with all the details about that specific metric and its description.


The following three additional panels help shape the technical debt information: Quality Gates, Rules and Issues. Below you'll find a quick introduction to each section and its relevance.


Quality Gates

Quality gates are based on rules, issues and coverage. Basically, this section defines certain criteria that your project should meet in order to pass the "quality" bar. For example: your project should contain a minimum percentage of code coverage, your project should not contain Blocker or Critical issues, etc.

Here are some of these gates, for your reference:


Rules

Rules are defined as project rules, and they check for violations in your code. They are similar to the rules defined by FxCop, and they provide real arguments as to why your code is breaking a rule or why it needs to be better. Once you've gone through several iterations of fixing these, your code will get cleaner and better (I promise you!). And most of all, you will understand the reasoning behind each rule!

Here are some of these rules:
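As an illustration, NDepend rules are written in CQLinq; a simple rule flagging overly long methods looks roughly like this (the name and the 30-line threshold are my own choices):

```
// <Name>Methods too big</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }
```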

If you think one of these rules does not apply to your project, you can just uncheck it and the framework will take care of it, so you don't have to worry about it anymore.


Issues

Issues are just a way of grouping rule violations so you can determine which ones are important to fix. You may violate a few rules, but those violations are categorised from Blocker down to Low. So even though the project is violating 18 rules, one of them is just Low. This gives you an understanding of what's important to fix and what can wait.

Each issue also has a clear estimate of the time it could take to fix:




Conclusion

To conclude, writing quality code is one of my main concerns nowadays. It's really easy to write code, and even code that works, but the difference between code that works and excellent code is quality, and NDepend has the solution for you.

I have been fiddling with tools like FxCop and NDepend for a while now, and I must say that NDepend is a must-have in my tool belt. It's really easy to use, and with just one click you can have real arguments about the issues that need to be fixed in your solution and how long the team should take to fix them.