Friday, 19 May 2017

Review - Professional Git

At least twice every three months, someone asks me about a good book on Git.

Although the information available at https://git-scm.com/book is very comprehensive, there is still value in a published book to be consumed as a reference, especially when onboarding a new team member or when you finally want to get a firm grip on this (sometimes dreaded) Version Control System.

So I contacted Wrox to get a copy of Professional Git, which was published in December 2016 – hence it is not only a reference book but an up-to-date one.

You might wonder why this is so important to me. Well, it is fairly simple after all. Git became a mainstream tool a few years ago, but its history goes much further back. Documentation on its usage might be outdated, aimed at advanced users, or fragmented, so it is important to have a book that not only covers all the important topics, but crucially covers them from the right point of view.

Brent Laster did a brilliant job with this book. I felt the level of depth was perfect, neither too shallow nor too deep. Topics like Submodules – troublesome for many and quite challenging anyway – are covered with a clarity that makes the essential information memorable.

The book has companion exercises and labs to keep you busy if you are a total newbie. If you are a bit more experienced it is still very valuable, as it lays out even the trickiest of topics in plain language with straightforward samples.

Just remember that the syntax used for the examples is UNIX-style – hence lots of ls! But aside from that, I cannot refrain from suggesting it to anyone who wants to start with Git and is looking for a comprehensive guide.

Tuesday, 9 May 2017

Error 404 when uploading the SonarQube Analysis Report from the Build

This one can be hard to catch if you don't know where to look!

I spent some time on this issue – SonarQube’s End Analysis task fails, with either an unhandled exception (version 2.1.1 of the extension) or a 404 page in plain text in the log (with the older 2.0.0).

What was really odd was that the issue happened only on certain projects – some were fine, others were failing. And it happened regardless of the build server used by TFS – whatever the agent, the randomness was there.

At the end of the day, don't underestimate the logs collected by the web server you are using: I found a 404.13 error in there, meaning the Analysis Report was exceeding the size limit for the upload.
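
If it is IIS sitting in front of the upload endpoint (404.13 is an IIS request filtering substatus), the fix is raising the request size limit in web.config – a minimal sketch, with an illustrative 100 MB value:

    <configuration>
      <system.webServer>
        <security>
          <requestFiltering>
            <!-- maxAllowedContentLength is expressed in bytes; 104857600 = 100 MB -->
            <requestLimits maxAllowedContentLength="104857600" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>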

Wednesday, 3 May 2017

What to do with WMI errors and Team Foundation Server

Last week I spent some time trying to sort out WMI errors in a test environment. That was not fun, but at the end of the day there is something to learn out of it.
Everything started with a set of WMI errors on an AT-only installation.

Looking at the logs, you can see it is pretty bad stuff.

I tried the usual suspects (winmgmt /verifyrepository, setting the involved machines as standalone hosts, etc.), but they all ran fine. One of the suggestions you can find around is to reinstall the IIS 6 Management Tools on the Application Tier.

Still no luck. I tried connecting with wmimgmt.msc and got all sorts of errors. At the end of the day I temporarily reset the WMI repository on that machine, and then decided to move these test services to another VM. Why?

The WMI repository is not supposed to be rebuilt lightly. It should be the last resort, and I did not want to tie up this testing environment with a potentially problematic machine.
Don’t forget that the Application Tier is just a front-end for Team Foundation Server. You can replace it with another machine and scale out as needed.
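
For reference, the usual escalation path on the WMI side looks like this (standard winmgmt switches, run from an elevated prompt – and the reset really is the nuclear option):

    rem Check the WMI repository for consistency
    winmgmt /verifyrepository
    rem Rebuild the repository from its current contents if it is inconsistent
    winmgmt /salvagerepository
    rem Last resort: reset the repository to its initial state
    winmgmt /resetrepository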

Also, the errors above appear in both the Application Tier and Data Tier readiness check categories for the AT-only install, even though they only apply to the AT. That is because the machine could not communicate with anything beyond itself (not even with itself, I would add), so it would report that the Data Tier isn't reachable. Do not touch the database servers unless you really have to – and the logs are going to tell you if you do.

Tuesday, 18 April 2017

Quickly share query results with the Web Access

If you really like the Copy Query URL button because it opens a full-screen page with no other link to TFS or VSTS...

Then you also know that the query link expires after 90 days. Too bad. Is there a way of getting the same behaviour (leaving aside any ACL-specific configuration) without using that link?

Well yes - just add &fullScreen=True to the Web Access URL:
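
For example (the server, collection, project and query id below are hypothetical, and the exact URL shape varies between TFS and VSTS versions):

    http://myserver:8080/tfs/DefaultCollection/MyProject/_queries?id=<query-guid>&fullScreen=True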

Friday, 7 April 2017

“We don’t ship so often”: why? A reflection on Delivery hurdles.

The latest Stack Overflow Developer Survey results show that the more often a developer ships, the happier (s)he is.
Of course we see a huge number of people checking in multiple times a day, but also a large number of people checking in (so potentially building and deploying) much less often than that.

So, looking at the other side of the coin: why aren't you shipping often?

Reasons – as usual – are varied. There might be process constraints (certifications, etc.) or hard requirements, but I've often seen a heavy reliance on older deployment procedures which are considered too expensive to replace with automation. Don't touch what works, right?

Web applications are a stellar example of this. You might have the most complex web app in the world, but why should you manually move stuff around when you can pack everything into an MSDeploy package?

But that is for Azure and cloud technology and stuff!

Wrong answer! MSDeploy has been around since 2009 and it is well supported on-premises as well! So why aren't you using it for your existing application? It is, after all, the same concept Tomcat uses for its .war files.
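
A minimal sketch of such a deployment (the package, server, site and credential names here are all hypothetical):

    rem Sync a pre-built package to a remote IIS site via the Web Management Service
    msdeploy.exe -verb:sync ^
      -source:package="MyWebApp.zip" ^
      -dest:auto,computerName="https://webserver01:8172/msdeploy.axd?site=MyWebApp",userName="deploy",password="********",authType="Basic"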

This isn't about throwing years of valuable content down the drain. It is often a matter of trying to split the larger problem into smaller components and adopting different delivery vehicles. You can retain your existing application as-is, replacing only how you bring it into your production environments.

Sunday, 2 April 2017

How can I monitor my AlwaysOn synchronisation status?

As a Team Foundation Server administrator it is critical to know all the components involved in your deployment, and SQL Server takes the lion's share (of course).

As you know I am a huge fan of SQL Server AlwaysOn, a really brilliant High Availability solution. I was wondering if there is a way of getting an estimate of how far along the Database Engine is when you see the Synchronizing state in the AlwaysOn dashboard…

I found out there is a way, and it doesn't even require any SQL at all. All you need to do is add the Last Commit Time column to the Dashboard, so you will see the time of the last synchronised commit from the Primary Replica to the Secondary.
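
If you prefer a query to the Dashboard, the same figure should be exposed by the sys.dm_hadr_database_replica_states DMV – a minimal sketch:

    -- Last commit time per database and replica in the Availability Group
    SELECT ar.replica_server_name,
           DB_NAME(drs.database_id)        AS database_name,
           drs.synchronization_state_desc,
           drs.last_commit_time
    FROM sys.dm_hadr_database_replica_states AS drs
    JOIN sys.availability_replicas AS ar
        ON drs.replica_id = ar.replica_id
    ORDER BY ar.replica_server_name, database_name;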

Of course it is not an ETA, but it gives a rough idea of how much work is left for the synchronisation.

During this state Team Foundation Server is still available because it relies on the Primary Replica, but remember not to perform any failover, otherwise you are going to lose data! If you are in for a long synchronisation, I strongly suggest setting the Failover Mode to Manual: downtime is always a better trade-off than data loss.

Tuesday, 21 March 2017

Move TFS databases with no downtime, thanks to SQL Server AlwaysOn

If you follow this blog or my Twitter feed you should know I am a massive fan of SQL Server AlwaysOn.

Recently I restored and moved some TFS databases around, and one of them remained on temporary storage because of the massive size involved. After a while I managed to sort out the primary storage, so I could move this database (and its Transaction Log) back to it.

This is what I did – no warranties of course, but it worked on my machines!

First of all, you need to be aware that you will have limited availability during this period. It doesn't mean you are going to have an outage, but you cannot rely on the Secondary Replica while you work on it. Why? Because you need to disable the Automatic Failover and make any Secondary non-readable.
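
In T-SQL that step looks something like this (the Availability Group and replica names are hypothetical):

    -- Run on the Primary: no automatic failover while we work
    ALTER AVAILABILITY GROUP [TfsAG]
    MODIFY REPLICA ON 'SQLNODE2' WITH (FAILOVER_MODE = MANUAL);

    -- Make the Secondary non-readable for the duration of the move
    ALTER AVAILABILITY GROUP [TfsAG]
    MODIFY REPLICA ON 'SQLNODE2' WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = NO));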

Then suspend Data Movement from the Primary. This means your Primary Replica is not going to sync with the Secondary.
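
A one-liner, assuming a hypothetical collection database name:

    -- Run on the Primary: suspends data movement for this database
    ALTER DATABASE [Tfs_DefaultCollection] SET HADR SUSPEND;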

This puts the database you want to move in a non-synchronised state.

Now note down the logical names of the files you need to move. Use these in the following query; the path in the FILENAME clause is going to be the new destination.
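
A sketch of both steps (the database name, logical file names and destination paths are hypothetical):

    -- Find the logical file names and the current locations
    SELECT name, physical_name
    FROM sys.master_files
    WHERE database_id = DB_ID('Tfs_DefaultCollection');

    -- Point each file at its new destination
    ALTER DATABASE [Tfs_DefaultCollection]
    MODIFY FILE (NAME = 'Tfs_DefaultCollection', FILENAME = 'E:\Data\Tfs_DefaultCollection.mdf');

    ALTER DATABASE [Tfs_DefaultCollection]
    MODIFY FILE (NAME = 'Tfs_DefaultCollection_log', FILENAME = 'F:\Logs\Tfs_DefaultCollection_log.ldf');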

Run this on all servers. You might want to wait for the Secondary to be up and running, but don't forget to run it against the Primary too!

Copy all the files to the new destination; once done, restart SQL Server on the Secondary:
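
From an elevated prompt on the Secondary (a default instance is assumed here; for a named instance the service is MSSQL$InstanceName):

    net stop MSSQLSERVER
    net start MSSQLSERVER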

Now check that the Secondary is in a green state.

If the Secondary is green, resume Data Movement, and once the status is Synchronised again perform a manual Failover so that the roles are swapped. Then repeat all of the above on the new Secondary and you will be done.
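
The closing steps in T-SQL, again with hypothetical names – note that the FAILOVER command must be run on the Secondary you are promoting:

    -- On the Primary: resume data movement
    ALTER DATABASE [Tfs_DefaultCollection] SET HADR RESUME;

    -- Once the database is Synchronized again, on the Secondary: swap the roles
    ALTER AVAILABILITY GROUP [TfsAG] FAILOVER;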

Finally, don't forget to re-enable any configuration you disabled before performing this!