Thursday, 22 June 2017

A few nuggets from using TFS/VSTS and SonarQube in your builds

The cool thing about SonarQube is that once it is set up it works immediately and it provides a lot of value for your teams.

After a while you will notice there are things that can be refined or improved in how you have integrated the two tools; here are a few I feel are quite useful.


Bind variables between Team Build and SonarQube properties

I feel this is quite important – instead of manually entering Key, Project Name, Version and so on, you should be using your build variables. Try to reduce manual input to a minimum.
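
For instance, in the Begin Analysis task you can feed the fields with predefined build variables rather than literals – a minimal sketch, where the variable choices are just my preference:

    Project Key:     $(System.TeamProject)
    Project Name:    $(System.TeamProject)
    Project Version: $(Build.BuildNumber)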

Branch support

SonarQube supports branches with the sonar.branch property. This creates a separate SonarQube project per branch, which you can use for comparison with the other branches.
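
For example, you can set it in the Advanced → Additional Settings box of the Begin Analysis task and bind it to the build's source branch – a sketch, assuming the predefined variable suits your branch naming:

    /d:sonar.branch=$(Build.SourceBranchName)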

Analyse your solution once

Don’t be lazy and just add a Begin Analysis task at the top of the build and an End Analysis task at the bottom – you should scan one solution at a time and complete its analysis before starting the next. This also solves the typical Duplicate project GUID warning you get when multiple solutions end up in the same scan.

Exclude unnecessary files

It is so easy to add a sonar.exclusions pattern – do it, and avoid scanning files you are not interested in.
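
Again in the Additional Settings box, something like this – the patterns are purely illustrative:

    /d:sonar.exclusions=**/*.Designer.cs,**/obj/**,**/packages/**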

Wednesday, 14 June 2017

How a (home)lab helps a TFS admin

I’ve always been a fan of homelabs. It is common knowledge that I am a huge advocate of virtualisation technologies, and pretty much all my machines feature at least Hyper-V running on them.

If you are not familiar with the idea, a homelab is a set of machines you run at home which simulates a proper enterprise environment. This does not mean a 42U cabinet full of massive servers – even a decently-sized workstation acting as a VM host will do. The key word here is enterprise: AD DS, DNS, DHCP, the usual suspects indeed, plus your services.

What I am going to talk about is applicable to corporate labs as well, although those can be less fun :)

So, if you are a TFS administrator, what are the advantages of a lab?

Testing upgrades!

Yes, testing upgrades is one of the uses of a lab. But it is not just about testing the upgrade itself: a lab also helps you understand how long an upgrade will take and which crucial areas you need to be aware of.

In an ideal world you would have an exact copy of the production hardware, so you could produce a very accurate forecast. That helps, of course, but it can also hide which areas of your deployment are actually critical.

Let’s take TFS 2017 – one of the most expensive upgrade steps is the migration of all the test result data in a collection to a different schema.

This is a very intensive operation, and having a lab where you know every finer detail of your hardware inside-out really helps when it comes to planning the actual upgrade, especially on a large deployment.

Not to mention that, in case of failure, you are not ruining anybody’s day and you can work on your own schedule.

Also, you will find that sometimes you need to experiment with settings that require a service interruption. The lab is yours, so again you are not affecting anybody, and you can go straight to the known solution when it comes to the production environment.

All of that sounds reasonable, maybe even simplistic, but I have seen too many instances where there was no lab and the only strategy was test-and-revert-if-it-fails, given that Team Foundation Server is “just a DB and IIS” (yeah…).

Definitely not something you want to see or hear, trust me :)

Wednesday, 31 May 2017

What can you learn from The DevOps Handbook

I thought about reviewing The DevOps Handbook, but then I realised its real value is as a reference. Books like this, with multiple layers of usage, are invaluable.

This book is gold IMHO, and not just for its cover colour. But let’s rewind a bit.

I bought the book in January and read the first few chapters, then forgot it at my parents’. Fast forward a few months: my brother came to visit and brought the book back, but it was then left on the to-read pile for a while.

Eventually I managed to find time to read it and here I am writing this post.
So why isn’t this a regular review? What did I realise, all of a sudden?

Well, a book like this can easily be read in a few days, a couple of weeks tops. What happened instead is that each chapter I read provided insights into, or mirrored, situations I had experienced during those months.

Each chapter can be cherry-picked and adapted to your situation, because each starts from a real scenario with real requirements and targets. You will notice patterns running through the book: all the concepts are there, but applied in a tailored way to fit the problems one has to face.

I can see this book being a useful guide to refer to whenever you are tackling a problem: just pick the chapter most akin to the approach you are taking and you will feel guided.

Friday, 19 May 2017

Review - Professional Git

I get asked about a good book on Git at least a couple of times every three months.

Although the information available at https://git-scm.com/book is very comprehensive, there is value in a published book you can consume as a reference, especially when onboarding a new team member or when you finally want to get a firm grip on this (sometimes dreaded) Version Control System.

So I contacted Wrox to get a copy of Professional Git which, having been published in December 2016, is not only a reference book but an up-to-date one.

You might wonder why this is so important to me. Well, it is fairly simple after all. Git became a mainstream tool a few years ago, but its history goes much further back. Documentation on its usage can be dated, aimed at advanced users, or fragmented, so it is important to have a book that not only covers all the important topics, but crucially covers them from the right point of view.

Brent Laster did a brilliant job with this book. I felt the level of depth was perfect, neither too shallow nor too deep. Topics like Submodules – often troublesome for some and quite challenging anyway – are covered with a clarity that would make the essential information memorable.

The book has companion exercises and labs to keep you busy if you are a total newbie. If you are a bit more experienced it is still very valuable, putting down in plain language with straightforward samples even the trickiest of topics.

Just remember that the syntax used for the examples is UNIX-style – hence lots of ls! Aside from that, I cannot help recommending it to anyone who wants to get started with Git and is looking for a comprehensive guide.

Tuesday, 9 May 2017

Error 404 when uploading the SonarQube Analysis Report from the Build

That can be something hard to catch if you don’t know where to look!

I spent some time on this issue – SonarQube’s End Analysis task fails, with either an unhandled exception (version 2.1.1 of the extension) or a 404 page in plain text in the log (with the older 2.0.0).

What was really odd was that the issue happened only on certain projects – some were fine, others were failing – and it happened regardless of the build server used by TFS: whatever the agent, the randomness was there.

At the end of the day, don’t underestimate the logs collected by the web server in front of SonarQube: there I found a 404.13 error, meaning the Analysis Report was exceeding the upload size limit.
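
404.13 is the IIS request filtering substatus for a request body over the configured limit, so if your SonarQube instance sits behind IIS (as mine did) the fix is raising that limit. A sketch of the relevant web.config fragment – the value is just an example:

    <configuration>
      <system.webServer>
        <security>
          <requestFiltering>
            <!-- maxAllowedContentLength is expressed in bytes -->
            <requestLimits maxAllowedContentLength="209715200" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>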

Wednesday, 3 May 2017

What to do with WMI errors and Team Foundation Server

Last week I spent some time trying to sort out WMI errors in a test environment. It was not fun, but at the end of the day there is something to learn from it.
Everything started with a set of readiness check errors during an AT-only installation.

Looking at the logs made it clear this was pretty bad stuff.

I tried the usual suspects (winmgmt /verifyrepository, setting the involved machines as standalone hosts, etc.), but they all ran fine. One of the suggestions you can find around is to reinstall the IIS 6 Management Tools on the Application Tier:
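
If you prefer scripting it rather than clicking through Server Manager, a sketch in PowerShell – the feature names are my assumption of what the suggestion boils down to on Windows Server:

    # Re-add the IIS 6 compatibility tooling on the Application Tier
    Import-Module ServerManager
    Install-WindowsFeature Web-Metabase, Web-Lgcy-Mgmt-Console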

Still no luck. I tried connecting with wmimgmt.msc and I got all sorts of errors. At the end of the day I temporarily reset the WMI repository on that machine and I then decided to move these test services to another VM. Why?

The WMI repository is not supposed to be rebuilt lightly – it should be the last resort – and I did not want to tie this testing environment to a potentially problematic machine.
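
For reference, these are the usual last-resort commands – handle with care, the reset in particular:

    rem verify the repository first; rebuild it only as a last resort
    winmgmt /verifyrepository
    winmgmt /salvagerepository
    winmgmt /resetrepository
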
Don’t forget that the Application Tier is just a front-end for Team Foundation Server. You can replace it with another machine and scale out as needed.

Also, the errors above appear in both the Application Tier and Data Tier readiness check categories for the AT-only install, even though they only apply to the AT. That is because the machine could not communicate with anything beyond itself (not even with itself, I would add), so it reports that the Data Tier is unreachable. Do not touch the database servers unless you really have to – and the logs will tell you whether you have to.

Tuesday, 18 April 2017

Quickly share query results with the Web Access

If you really like the Copy Query URL button because it opens a full-screen page with no other link to TFS or VSTS...

You probably also know that such a link expires after 90 days. Too bad. Is there a way of getting the same behaviour (leaving aside any ACL-specific configuration) without using that link?

Well yes - just add &fullScreen=True to the Web Access URL:
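
Something along these lines – server, collection, project and query id are placeholders, and the exact query string may vary slightly across versions:

    http://yourtfs:8080/tfs/DefaultCollection/YourProject/_queries?id=<query id>&fullScreen=True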

Friday, 7 April 2017

“We don’t ship so often”: why? A reflection on Delivery hurdles.

The latest Stack Overflow Developer Survey results show that the more a developer ships, the happier (s)he is.
Of course we see a huge number of people checking in multiple times a day, but also a large number checking in (and so potentially building and deploying) much less often than that.

So, looking at the other side of the coin: why aren’t you shipping often?

Reasons – as usual – are varied. There might be process constraints (certifications, etc.), hard requirements, but I’ve often seen a heavy reliance on older deployment procedures which are considered too expensive to be replaced by automation. Don’t touch what works, right?

Web applications are a stellar example of this. You might have the most complex web app in the world, but why should you manually move stuff around when you can pack everything in an MSDeploy package?

But that is for Azure and cloud technology and stuff!

Wrong answer! MSDeploy has been around since 2009 and it is well supported on-premise as well! So why aren’t you using it for your existing application? It is, after all, the same concept Tomcat uses for its .war files.
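
To give an idea, a sketch with hypothetical names and paths: packaging an existing web application is a single msbuild invocation, and the generated .deploy.cmd can then push the package to an on-premise IIS server running the Web Management Service:

    rem create the MSDeploy package out of the existing project
    msbuild MyWebApp.csproj /p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageLocation="C:\drop\MyWebApp.zip"

    rem deploy the package to an on-premise IIS server
    C:\drop\MyWebApp.deploy.cmd /Y /M:https://webserver:8172/msdeploy.axd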


This isn’t about throwing years of valuable work down the sink. It is often a matter of splitting the larger problem into smaller components and adopting different delivery vehicles. You can retain your existing application as-is, replacing only how you bring it into your production environments.

Sunday, 2 April 2017

How can I monitor my AlwaysOn synchronisation status?

As a Team Foundation Server administrator it is critical to know all the components involved in your deployment, and SQL Server takes the lion’s share (of course).

As you know I am a huge fan of SQL Server AlwaysOn, a really brilliant High Availability solution. I was wondering whether there is a way of estimating how far along the Database Engine is when you see the Synchronizing state in the AlwaysOn dashboard…

I found out there is a way, and it doesn’t even require any SQL at all. All you need to do is add the Last Commit Time column to the Dashboard, so you will see the time of the last synchronised commit from the Primary Replica to the Secondary.
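
If you fancy a query anyway, the same information is exposed by the AlwaysOn DMVs – a rough sketch, run on the Primary:

    -- Last commit time per database and replica
    SELECT ar.replica_server_name,
           DB_NAME(drs.database_id) AS database_name,
           drs.synchronization_state_desc,
           drs.last_commit_time
    FROM sys.dm_hadr_database_replica_states AS drs
    JOIN sys.availability_replicas AS ar ON drs.replica_id = ar.replica_id;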

Of course it is not an ETA, but it gives a rough idea of how much work is left for the synchronisation.

During this state Team Foundation Server is still available because it relies on the Primary Replica, but remember not to perform any failover, otherwise you are going to lose data! If you are in for a long synchronisation I strongly suggest setting the Failover Mode to Manual: downtime is always a better trade-off than data loss.

Tuesday, 21 March 2017

Move TFS databases with no downtime, thanks to SQL Server AlwaysOn

If you follow this blog or my Twitter feed you should know I am a massive fan of SQL Server AlwaysOn.

Recently I restored and moved some TFS databases around, and one of them remained on temporary storage because of the massive size involved. After a while I managed to sort out the primary storage, so I could move this database (and its Transaction Log) back to it.

This is what I did – no warranties of course, but it worked on my machines!

First of all, be aware that you will have limited availability during this period. That doesn’t mean an outage, but you cannot rely on the Secondary Replica while you work on it. Why? Because you need to disable Automatic Failover and make any Secondary non-readable:
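
In T-SQL terms it boils down to something like this, run on the Primary – the Availability Group and replica names are of course made up:

    ALTER AVAILABILITY GROUP [TfsAG]
        MODIFY REPLICA ON N'SQLNODE2' WITH (FAILOVER_MODE = MANUAL);
    ALTER AVAILABILITY GROUP [TfsAG]
        MODIFY REPLICA ON N'SQLNODE2' WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = NO));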

Then suspend Data Movement from the Primary. This means your Primary Replica is not going to sync with the Secondary.
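
The T-SQL equivalent is a one-liner – the database name is hypothetical:

    -- Suspend data movement for the database you are about to move
    ALTER DATABASE [Tfs_DefaultCollection] SET HADR SUSPEND;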

The database you are moving will then show up in a non-synchronised state.

Now note down the logical names of the files you need to move. Use them in a query like the following, where the path in FILENAME is the new destination:
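
A sketch with made-up names and paths – the SELECT returns the logical names, the ALTERs point them at the new destination:

    SELECT name, physical_name
    FROM sys.master_files
    WHERE database_id = DB_ID(N'Tfs_DefaultCollection');

    ALTER DATABASE [Tfs_DefaultCollection]
        MODIFY FILE (NAME = N'Tfs_DefaultCollection', FILENAME = N'D:\Data\Tfs_DefaultCollection.mdf');
    ALTER DATABASE [Tfs_DefaultCollection]
        MODIFY FILE (NAME = N'Tfs_DefaultCollection_log', FILENAME = N'E:\Logs\Tfs_DefaultCollection_log.ldf');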

Run this on all servers. You might want to wait for the Secondary to be up-and-running, but don't forget to run it against the Primary too!

Copy all the files to the new destination; once done, restart SQL Server on the Secondary:
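
From an elevated PowerShell prompt on the Secondary, assuming a default instance:

    # Restart the SQL Server service so it picks up the new file locations
    Restart-Service -Name MSSQLSERVER -Force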

Now check that the Secondary is in a green state.

If the Secondary is green, resume Data Movement; once the status is Synchronised again, perform a manual failover so that the roles are swapped. Then repeat all of the above on the new Secondary and you are done.
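
Again as a T-SQL sketch, with the same made-up names:

    -- Resume data movement for the database
    ALTER DATABASE [Tfs_DefaultCollection] SET HADR RESUME;
    -- Then, on the Secondary, once the state is back to Synchronized:
    ALTER AVAILABILITY GROUP [TfsAG] FAILOVER;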

Finally, don't forget to re-enable any configuration you disabled before starting!

Friday, 17 March 2017

Settle your team's disputes with an EditorConfig file

OK, the title is a bit much :) but it is actually possible to settle some disputes about Visual Studio settings by leveraging an EditorConfig file.

EditorConfig is a broadly adopted open-source file format that lets you configure IDEs in a pre-defined way, so you get a consistent set of rules across tools. It is ideal for creating a standard set of settings and guidelines to be adopted across the team, and Visual Studio 2017 now supports the format!

Kasey Uhlenhuth wrote a brilliant description of what is supported in the IDE, and the setting area is very well done with an actual example of what you are setting up.

Then you can configure how to enforce your style rules – bear in mind that rules set to error severity are treated as real errors, so they will prevent a successful build!
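
A minimal .editorconfig sketch – the rules and severities are purely an example of the none/suggestion/warning/error scale:

    root = true

    # C# style rules with their enforcement severity
    [*.cs]
    indent_style = space
    indent_size = 4
    csharp_style_var_for_built_in_types = true:suggestion
    dotnet_style_qualification_for_field = false:error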

If found in a solution, the .editorconfig file overrides the default settings of the IDE – so to share a convention across the team, all you need to do is put the file in the root of the project folder. Job done!

Wednesday, 8 March 2017

The new connected marketplace experience: how to buy Package Management for TFS

It is not news that in a hybrid DevOps stack you might have some components on-premise and some in the cloud; Package Management is a prime example of this.

If a user in your organisation doesn’t have a Visual Studio Enterprise license, you need to buy a license for that user to access this service. Billing goes through Azure, even though TFS itself is on-premise and may well be disconnected from the internet.

You install it on-premise, but you still point the billing at an Azure subscription when buying it.

Once you are done you can install the extension. Remember: if you have an Enterprise license (either with or without MSDN) you are already entitled to Package Management so you can install the extension straight away.

Also, if you need to manage your users you can browse to the Users hub (<your TFS>/<your collection>/_admin/_userHub) and assign licenses to whoever requires them.

This is exactly like VSTS, but on your on-premise TFS.

Sunday, 26 February 2017

Handle your NuGet packages’ qualities with Release Views

Are you building NuGet packages for your tools, utilities and libraries? Check.

Are you using SemVer for versioning? Check.

Then you might want an easy way of offering your (internal) packages sorted by quality – Release and Prerelease, for example. Release Views are what you are looking for.

What is really brilliant is that you already have a baseline set: Release and Prerelease. You don’t have to configure anything, it is already there for you.

What makes lots of sense IMHO is to divide them into Release, Prerelease and CI.

That’s because even if we would all love a single feed where every package is available indiscriminately, it is highly likely that some consumers do not want a CI package but something more refined instead.
With a dedicated CI view it is clear that such a package is not even as mature as a beta, making it easier to section what you offer to your users. CI packages can be really bleeding edge, and I believe it is good to keep them separated from the other builds.
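
Consumers can then subscribe to the quality they want by appending the view to the feed name in the URL – server and feed names here are placeholders:

    http://yourtfs:8080/tfs/DefaultCollection/_packaging/MyFeed@Release/nuget/v3/index.json
    http://yourtfs:8080/tfs/DefaultCollection/_packaging/MyFeed@Prerelease/nuget/v3/index.json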

Tuesday, 14 February 2017

A note on TFSConfig OfflineDetach

I already mentioned the very useful TFSConfig OfflineDetach in the past. Today I used it once again, and I realised an important piece of information is missing.

What you need to remember is that the configuration database you point the command at must be the one containing the collection you want to detach – and it must be offline as well.

So in today’s situation (moving a collection across domains into a new instance) I had to restore the configuration database as well as the collection database.
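
For reference, the command line looks something like this – instance and database names are placeholders:

    TfsConfig offlineDetach /configurationDB:SQLINSTANCE;Tfs_Configuration /collectionDB:SQLINSTANCE;Tfs_YourCollection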

Monday, 6 February 2017

PRs and ‘Unable to queue Build’ with VSTS

This should never happen, but if it does…

Just go and check the PR branch policy settings to verify that there actually is a build definition linked!

Monday, 30 January 2017

Getting started with Delivery Plans in VSTS

Last week Microsoft released a very interesting extension for VSTS – Delivery Plans.

It is still a very early preview and it will be associated with a business model (so it is likely it won’t be free), but this feature represents a very important expansion of VSTS’ scope.

This extension brings a way of tracking the work undertaken by multiple teams at the same time, with the possibility of focusing only on a certain level of detail, and it enables delivery-forecast scenarios that were previously quite hard to achieve.

To easily get started, I suggest installing the Sample Data Widget extension and deploy the “SAFe with VSTS” package. I went for this package because it creates a nice set of Work Items, not because there is any relationship between Delivery Plans and SAFe.

Once this is done, customise the iteration dates and the teams as you like – I would go with two sub-teams that are part of a larger team – then assign what you feel is better suited to each team (pretty randomly, I reckon, given we are just using sample data :)) and create a Plan.

This is a plan designed to give you the full breadth of information, from the larger parts to the finer details. The result is brilliant.

In a single page you will get a timeline view of Epics and Features delivered per-sprint by the whole team, plus the User Stories delivered by each sub-team. This is obviously overkill for the real world – you will want two different plans depending on the level of detail you would like to provide – but it explains why this feature is so powerful and game-changing for me.

Delivery Plans will enable scenarios where stakeholders can easily understand the status of their value streams without using external reporting tools, and this is a crucial step in letting VSTS grow from a development-focused tool into a more general-purpose one in a company.

The documentation is already very comprehensive – take a look there, and I strongly suggest giving this extension a go, given the value it brings.

Monday, 23 January 2017

Do not forget: only leaves are shown on a board!

This is a classic question I get at least once a year, regardless of TFS, VSTS, versions, etc.

I cannot see my Product Backlog Item on the board anymore! Is it gone? Are we losing data?

No, we are not losing data, but you need to remember that if you link a PBI/User Story as a child of another one, the parent won’t be shown on the board – and that is by design.

This behaviour dates back a few years – Willy covered it in 2013, and it is an even older discussion topic – but it is always worth remembering :)

Saturday, 14 January 2017

Help! My PowerShell path in the prompt is gone!

If you have seen any of my talks you surely know I am a huge fan of posh-git, an extremely handy PowerShell module that displays repository information inline.

Today I installed it in a Virtual Machine I am going to use for demos at a conference next month and I noticed something odd at first sight:

Where is my path?!

Thanks to Antonio I realised that maybe it was a good idea to check the documentation. It was indeed…

The whole step 2 section explains how to customise the PowerShell prompt – so it was trivial to restore the full path, just the way I like it :)
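
For reference, a minimal sketch of such a prompt function in your PowerShell profile, loosely based on the documented example and assuming posh-git is already imported:

    function prompt {
        $origLastExitCode = $LASTEXITCODE
        Write-Host $pwd.ProviderPath -NoNewline   # the full path, not a shortened one
        Write-VcsStatus                           # posh-git repository status
        $global:LASTEXITCODE = $origLastExitCode
        "> "
    }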

Wednesday, 4 January 2017

Error AADSTS90093 with SonarQube and AAD, why?

It happens. And the reason is extremely simple and straightforward: if the AAD application registered for SonarQube requests any permission it does not actually need, users will be denied access :)

The documentation was updated in June, but this error popped up recently on an old instance that had been set up before then.