Monday, 13 June 2016

A simple VSTS-based pipeline for Java Web Applications

I took the plunge last weekend and built a pipeline for a very simple Java application, and it was very, very easy to do.

The app in question is DeepSpace. It is written in Java and AngularJS, so it looked like it was perfect for my requirement. It is used in the VSTS Java demos, but I didn’t want to go down that route because of the deployment approach.

What I wanted to throw into the mix was Azure Resource Manager, of course – I am not going to use FTP and manual credentials from a .publishsettings file anymore! So the first thing I did was to create an ARM template for my website.
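As a sketch of what that template contains – a Web App plus its App Service plan – something along these lines would do; names, SKU and API versions here are assumptions, not the exact template I used:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "siteName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Web/serverfarms",
      "apiVersion": "2015-08-01",
      "name": "[concat(parameters('siteName'), '-plan')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "S1" },
      "properties": { }
    },
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "[parameters('siteName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', concat(parameters('siteName'), '-plan'))]"
      ],
      "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', concat(parameters('siteName'), '-plan'))]"
      }
    }
  ]
}
```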

What does it take to deploy such an application with VSTS? Well, I would say around ten minutes, tops. I later realised Donovan Brown had done the same thing – that would have saved me a bit of research!

Start with the build: VSTS has Maven running in the Hosted Build, so there is no setup cost you need to factor in for the build server:


The pom.xml file is kindly provided by DeepSpace, but it would not take long to have one. You can see I am packaging the application (so I would get a .war file, more to come later on the matter) and I am using JaCoCo for Code Coverage – again provided by the Hosted Build.
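The Maven task in VSTS can also wire JaCoCo up for you, but if you prefer to keep coverage in the pom, the relevant bits look roughly like this – group/artifact ids are the real JaCoCo ones, the plugin version is an assumption:

```xml
<!-- Package to a .war for Tomcat deployment -->
<packaging>war</packaging>

<build>
  <plugins>
    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <version>0.7.6.201602180812</version>
      <executions>
        <!-- Attach the JaCoCo agent to the test run... -->
        <execution>
          <goals>
            <goal>prepare-agent</goal>
          </goals>
        </execution>
        <!-- ...and produce the coverage report after the tests -->
        <execution>
          <id>report</id>
          <phase>test</phase>
          <goals>
            <goal>report</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```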

The next step is publishing the artifacts to VSTS. Nothing really fancy here – just push the .json and .war files.


So, we built our stuff. Now we want to push it to Azure I reckon. Release Management is definitely the right tool for this job.

I am using the Trackyon Advantage task like Donovan because I realised Tomcat is not exposed if you create a Java-based Azure Web Site with ARM, and you can’t change its configuration because it would be running under Program Files, where the user doesn’t have edit permissions.

By the way, if you want to have a look at what happens to your Azure Web Site, peek at what’s inside it or run a command against it, browse to the Kudu console for your site – Kudu provides a great amount of information, and you can actually browse and edit (where possible) what is there.

So, back to RM – I am going to change the format of the .war file to a .zip compatible with MSDeploy, so I can reuse the Azure Web App Deployment task and not fiddle with Tomcat (which means not getting near any credentials or custom file modifications, which in turn is very good for automation!). If instead you need, or are able, to access Tomcat directly, use the VSTS extension for this.

I am literally just providing paths here:
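Under the hood there is no magic in the file format itself – a .war is already a zip archive, and the task’s job is mostly repackaging its content the way MSDeploy expects. A scratch sketch, with hypothetical file names standing in for the real Maven output:

```shell
# Scratch files stand in for the real build artifacts; names are hypothetical.
mkdir -p target
touch target/deepspace.war
# A .war is a zip archive already, so a copy with a new extension is enough
# to prove the point (the real task also fixes the internal folder layout):
cp target/deepspace.war target/deepspace.zip
ls target
```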


Then I am going to deploy my ARM template as usual (it is made of a single Website and Azure App Service at the minute), and I am pushing my Web App as well:


That’s it! I wasn’t expecting it to be that easy – the only place where I stumbled was the war-to-zip conversion.

What I did was search the Marketplace for “war zip”; I had a look, and Trackyon Advantage was there among the handful of results. Its description did exactly what I was searching for. There is literally an extension for everything these days!

Of course the pipeline lacks stages, approvals and all the rest. But this is what I put together in around an hour, so it is a great starting point!

Friday, 10 June 2016

Moving a SonarQube installation to SQL Azure Database

There might be a ton of reasons behind it – you might simply want to take advantage of SonarQube’s support for SQL Azure Database, and that is totally fair enough.

There was a showstopper in the past if you were on 5.5 – this bug, fixed with the 5.6 release.

So let’s move! But upgrade to 5.6 first, on-premise, so you have a clear starting point.

The first thing you need to do is create a new SQL Azure Database in your subscription. Give it the same name as the one you have on-premise, and use the same collation (tip: remember it must be case- and accent-sensitive, CS_AS…) for peace of mind.

Then (unless you are using SQL Server 2016) run the SQL Azure Migration Wizard. This tool does everything on your behalf and migrates the database to the cloud.

If you get any connection error here, remember that SQL Azure is locked down for external access – you need to add the IP address for client connectivity to the Azure Firewall:




As you will be using SQL Server Authentication, you also need to create a SQL user for SonarQube. Even if you already had one on-premise, users are not migrated by the tool, so it is something you need to do anyway.

Eventually, change the SonarQube database connection string in the conf/sonar.properties file so it points to your new Azure database:
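The properties to touch look like this – server name, database name and credentials below are placeholders, and the JDBC url shape follows the SonarQube documentation for SQL Server:

```properties
# conf/sonar.properties
sonar.jdbc.url=jdbc:sqlserver://<your server>.database.windows.net:1433;databaseName=sonar
sonar.jdbc.username=<your SonarQube SQL user>
sonar.jdbc.password=<your password>
```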


Done! It is really easy, and if you are moving from a SQL Server Enterprise Edition it is also cheaper.

Wednesday, 8 June 2016

I updated SonarQube to 5.6 and nothing works any longer, should I panic?

SonarQube 5.6 is the new LTS release, hence there are lots of changes.

The first place where you might panic is when you launch it for the first time and find this:


ce is the Compute Engine, a process for data aggregation on the server.

Don’t worry, Process[ce] will eventually go up after you run http://<your SonarQube>/setup and migrate to 5.6.

Then, this at build time:


SonarQube 5.6 comes with no plugins out-of-the-box! Remember to bring yours over from the old installation and update them after the system check.
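Bringing the plugins over is just a file copy – they live in extensions/plugins under the SonarQube home. A sketch with scratch folders standing in for the two installations (paths and plugin name are hypothetical):

```shell
# Scratch layout standing in for the old and new SonarQube homes;
# the real path for plugins is <sonarqube home>/extensions/plugins.
mkdir -p sonarqube-5.5/extensions/plugins sonarqube-5.6/extensions/plugins
touch sonarqube-5.5/extensions/plugins/sonar-java-plugin.jar
# Bring your plugins across, then update them from the Update Center:
cp sonarqube-5.5/extensions/plugins/*.jar sonarqube-5.6/extensions/plugins/
```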

Wednesday, 25 May 2016

Getting started with LaunchDarkly and client-side feature flags

These days it is extremely easy to start using Feature Flags, especially with a service like LaunchDarkly.

In my case, I just wanted to set up a quick demo of client-side feature flags using only plain JavaScript and LaunchDarkly – I am pleased to say it is extremely easy even for a Web 0.9 chap like me! (Yep, I never really got into the Web 2.0 and WhateverJS frameworks craze of the last few years.)

Let’s start with a few assumptions:

  • I want to use LaunchDarkly to manage my Feature Flags (and there might be many reasons behind this choice)
  • I want to show features only for authenticated users

I am not going to authenticate users myself, so in this case I rely on LaunchDarkly acting as an authentication backend as well. This is totally done on purpose – when a feature goes public no authentication is required to use it; otherwise, a user is authenticated against LaunchDarkly.

So, I need to create the Feature Flags on the LaunchDarkly dashboard:


Each Feature Flag has a key (feature-* might be a good naming convention), and it is important to mark any feature as Available in the client-side snippet in order to access it via JavaScript.


Then add the JavaScript SDK as per the documentation:


Now, all I am doing is extremely easy: starting with the empty ASP.NET Application, I am going to remove the views statically referred by the Controller in the list on the navbar, and I am going to add an id attribute to this list:


Then I am going to add this series of scripts:


The EnableFlags() function retrieves the potentially authenticated user from Local Storage (I am using it as a means to save the authenticated user – there is no real backend in this application), authenticates this user against LaunchDarkly, cleans the list, and then checks whether any of my feature flags is turned on for that user. If so, the aforementioned list is dynamically populated.
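A sketch of what that function could look like, under a few stated assumptions: the user is stored in Local Storage under a "user" key, flag keys follow the feature-* convention, the navbar list has a hypothetical featureNav id, and LDClient comes from LaunchDarkly’s client-side JavaScript SDK ("CLIENT_SIDE_ID" is a placeholder for the environment’s client-side ID):

```javascript
// Pure helper: given a client and some flag keys, return the enabled ones.
function enabledFeatures(client, flagKeys) {
  return flagKeys.filter(function (key) {
    // false is the safe default if the flag cannot be evaluated
    return client.variation(key, false) === true;
  });
}

function enableFlags() {
  var stored = localStorage.getItem('user'); // null for anonymous visitors
  var user = stored
    ? JSON.parse(stored)
    : { key: 'anonymous-user', anonymous: true };

  // Identify the (possibly anonymous) user against LaunchDarkly.
  var client = LDClient.initialize('CLIENT_SIDE_ID', user);

  client.on('ready', function () {
    var nav = document.getElementById('featureNav');
    nav.innerHTML = ''; // clean the list before repopulating it
    enabledFeatures(client, ['feature-one', 'feature-two']).forEach(function (key) {
      var li = document.createElement('li');
      li.textContent = key;
      nav.appendChild(li);
    });
  });
}
```

The split into a pure helper keeps the flag-checking logic testable without a browser or a LaunchDarkly connection.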

Bear in mind that this also works for non-authenticated users: if a feature is available to them, it will show up.

The two other functions are Login() and Logout() – the first sets the user in Local Storage and the second deletes it. Again, this is not cool or production-grade JavaScript, but it is for demo purposes and it works.

What happens next is very nice: I can start rolling the feature out only to the users I want:


Once I am confident with my code, I can roll out the feature to all of my anonymous users, or to a percentage of them:


This is just a starting point; the coolest part about LaunchDarkly is that you can integrate it with VSTS so you can roll out a feature at release time:


Use Feature Flags – implement them with either OSS libraries or with LaunchDarkly. They make life so much easier when it comes to delivering value to your customers!

Sunday, 8 May 2016

Simple pipeline with UWP and HockeyApp

Everybody needs a starting point here and there, so this post is pretty much about what I did in a very similar situation – a very basic pipeline to push UWP builds to HockeyApp.

What I wanted was a carefree, easy way of pushing CI builds to HockeyApp so I could enable manual testing for users. Let me add another requirement to the mix – in my case we are talking about sideloaded apps, and only for x86 and x64. This doesn’t change the actual result though; we’ll talk about ARM at the end.


This is the pipeline I was talking about.

The first step is a PowerShell script – what it does is change the value of the last number in the appxmanifest, so that each build uploads a new version of the app to HockeyApp. The service identifies a new build by its build number, so all I did was replace the revision number with the BuildId of my build. Simple as that.
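The core of that step – swapping the revision part of the package Version for the BuildId – boils down to a single string replacement. My script is PowerShell, but the logic can be sketched as a plain function (the Version attribute shape comes from a standard appxmanifest):

```javascript
// Replace the fourth (revision) part of the package Version in an
// appxmanifest with the current build id, e.g. 1.0.0.0 -> 1.0.0.42.
function stampRevision(manifestXml, buildId) {
  return manifestXml.replace(
    /(Version=")(\d+)\.(\d+)\.(\d+)\.(\d+)(")/,
    function (match, pre, major, minor, build, revision, post) {
      return pre + [major, minor, build, buildId].join('.') + post;
    }
  );
}
```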

Then the build restores the NuGet packages and builds my app, first for x86 and then for x64. The reason I went down this route is that I wanted to keep things as simple as possible, with a very intuitive tree structure, preparing the folder for the Zip task I used next.

I decided to use a Zip Directories task from fellow MVP Peter Groenewegen. Building a task from scratch wasn’t in the cards because of time constraints in this case, and Peter’s did the job as required.


The task would create a zip file of the App_1.0.0.$(Build.BuildId)_Test folder. This zip file contains the build artifacts I am feeding to VSTS and HockeyApp.

It is a small shortcut, but it works well: basically that name comes out of the AppxBundle process, and the BuildId is in there because I changed the appxmanifest file with it, so it is nicely available everywhere in my build. It could be extracted into a build variable though.

After uploading the symbols, the build uploads the artifacts:


From the folder I used as a destination when building, the build engine picks up the zip file I created previously with the task. It is a Server artifact, meaning it is stored in VSTS.

Eventually, HockeyApp is fed with this file:



The connection comes from the Service Endpoint you need to create for HockeyApp. The App ID is the one you’ll find on HockeyApp, and the Binary File Path points at the zip file you need to upload. Simple as that.

In HockeyApp you’ll see the build as soon as the process is over:


The zip file the build uploads contains these files:


which is exactly what you would get from the Create Package wizard in Visual Studio. Running the PowerShell script installs the app on the target system.

I mentioned ARM at the beginning – to add ARM to this pipeline you’ll need another build step targeting ARM, and then to upload the .appx file generated by the build.

It is a slightly different process at the moment, and it requires different provisioning on HockeyApp as well: for now, the ARM app needs to be separate from the x86/x64 one so you can upload its build artifact.

Tuesday, 3 May 2016

Application Insights Live Metrics Stream with ASP.NET 5

The single feature I deeply loved from the old Visual Studio Online Application Insights (before it was handed over to the Azure Team) was the Developer Dashboard, a real-time overview of how your application was faring.

Improve your product by analysing real world usage data with Visual Studio Online Application Insights

There is a replacement though: Live Metrics Stream. It is very powerful, way more than the old Developer Dashboard:


The problem is that if you try to configure it with an out-of-the-box ASP.NET 5 Application you will never manage to make it work:


…even if you have the latest Application Insights SDK package installed.

The reason is that not all the features of Application Insights are supported out-of-the-box with ASP.NET 5 if you run it against .NET Core 5.0.

If you want to integrate LMS in an ASP.NET 5 application, you need to add this code snippet to your startup.cs file and remove dnxcore50 from your project.json file.
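After removing dnxcore50, the frameworks section of project.json ends up with only the full-framework target, along these lines (a trimmed sketch; property names follow the default ASP.NET 5 template):

```json
{
  "frameworks": {
    "dnx451": { }
  }
}
```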


I totally expect full support for all the Application Insights features to come soon, but for now, if you really need LMS in your application, you need to stick with DNX 4.5.1+ as the runtime.

Friday, 22 April 2016

A look at the new Work Item Tracking features in VSTS

I usually don’t do this, but the VSTS teams are overhauling this area at such a pace that a knowledge refresh is really needed.

Aside from the new cards layout from a few months ago, there are really compelling features being added to the platform. It isn’t easy to define what compelling means for Work Item Tracking, as it is more about user interaction scenarios than pure technical magic, but I find these features very, very interesting.

First of all, how many times during a planning meeting have you created a Work Item and then realised you chose the wrong type? Believe it or not, it happens all the time. Now you can just change the type from the UI, easy as that:




Another feature worth mentioning is the possibility of moving a Work Item between Team Projects. There might be a ton of reasons behind this need – I have even heard of a Team Project used for support and escalation requests across multiple products.

Anyway, it is just like this, and you can also change the type here:



You can also create a new branch from a Work Item (very handy for feature-based development):



This is a fantastic way of keeping the planning aligned with development, IMHO.

Eventually, you can now follow a Work Item.


This means you’ll get an email whenever this Work Item is updated by other members of the team – I can already see Product Owners’ hands clapping!