Saturday, 20 September 2014

Application Insights: what’s going on?

I guess it has been a little overlooked, but there is a lot going on around Application Insights…
The biggest thing is that with Visual Studio 2013 Update 3, Application Insights is moving towards version 2.0. It’s not a mere version change…
Application Insights is being moved to Microsoft Azure, and 2.0 is the first version of it. The move is not complete yet, so the 1.3.2 version – running on Visual Studio Online – still works and contains the full current feature set, but bear in mind that they are “rebuilding it from the ground up as part of Microsoft Azure”.
If you want to understand which version you are running, just check the ApplicationInsights.config file: if it contains a schemaVersion, then you are using the 2.0 release.
The Azure version lacks several features at the moment (Windows Store and Windows Phone apps monitoring, different APIs) and there are a couple of architectural changes, most notably the agent-free performance monitoring.
But that does not mean you are losing anything: it was in preview on VSO, it is in preview on Azure, and you can use both. If your application or service is configured to send data to the 1.3.2 version, nothing changes, as there is no automatic upgrade.
There is only one thing to consider: if you remove the 2.0 package and restore the 1.3.2 one, you cannot return to 2.0 without repairing the Visual Studio installation.

Thursday, 11 September 2014

Again, again and again on the backups

This is a topic I find coming back every now and then: Team Foundation Server backups.

Team Foundation Server is a SQL Server-based product – hence most of the backup work happens there. Full, Copy Only, Differential, Transaction Log: choose your flavour, as long as you are confident it’s good.

IMHO it is good practice to keep things simple: a daily Full Backup with hourly Transaction Log Backups provides a good level of protection without involving the (again, IMHO) more complicated Differential Backups.
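
To give an idea of how simple that is, the two SQL Agent steps boil down to something like the sketch below – database names and paths are only placeholders, not the actual TFS databases:

    -- Daily step: full backup of a collection database (names and paths are examples)
    BACKUP DATABASE [Tfs_DefaultCollection]
    TO DISK = N'E:\Backups\Tfs_DefaultCollection_Full.bak'
    WITH INIT, CHECKSUM, STATS = 10;

    -- Hourly step: transaction log backup (the database must use the FULL recovery model)
    BACKUP LOG [Tfs_DefaultCollection]
    TO DISK = N'E:\Backups\Tfs_DefaultCollection_Log.trn'
    WITH CHECKSUM, STATS = 10;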

If you can, use the OOB tool: it is mature enough to do its job without too many worries. But if you happen to need a manual backup, there are a couple of things to keep in mind…

In order to be supported by the Microsoft CSS your backups must be synchronized – no exceptions. Since this requires manual interaction with the TFS databases, the safest way of doing it is to follow this MSDN walkthrough. I introduced a slight modification – just a verification of the preferred backup replica – because I manage a big deployment which uses SQL Server AlwaysOn, but the core steps are the same.

The reason behind that is pretty simple: the Team Project Collection databases refer to objects (like IDs, or identities) stored in the TFS_Configuration database. If you restore a Team Project Collection database which contains something not aligned with the Configuration DB, it is going to end badly…
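
That is exactly what the marked transaction in the walkthrough is for: a single transaction touches every TFS database, so the very same mark lands in every transaction log and every database can later be restored to the same point. Conceptually it is nothing more than the sketch below – tbl_BackupMark is a placeholder name for the small table you create in each database, and the real table and procedure come from the walkthrough:

    -- One marked transaction => the same mark in the log of every TFS database
    BEGIN TRANSACTION TfsBackupMark WITH MARK 'TfsBackupMark';

        UPDATE Tfs_Configuration.dbo.tbl_BackupMark SET MarkTime = GETUTCDATE();
        UPDATE Tfs_DefaultCollection.dbo.tbl_BackupMark SET MarkTime = GETUTCDATE();

    COMMIT TRANSACTION;

Take the transaction log backups right after the mark and you have a consistent point to restore the whole set to.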

And remember to test the restore – otherwise you do not have a backup :)
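
A RESTORE VERIFYONLY is the bare minimum; what really proves the backup is a periodic test restore on a spare instance, rolled forward to the synchronization mark. Something along these lines – names, paths and logical file names are just examples, get the real ones with RESTORE FILELISTONLY:

    -- Quick sanity check of the backup media
    RESTORE VERIFYONLY
    FROM DISK = N'E:\Backups\Tfs_DefaultCollection_Full.bak'
    WITH CHECKSUM;

    -- The real test: restore to a throwaway database on a test instance
    RESTORE DATABASE [Tfs_DefaultCollection_Test]
    FROM DISK = N'E:\Backups\Tfs_DefaultCollection_Full.bak'
    WITH MOVE N'Tfs_DefaultCollection' TO N'E:\RestoreTest\Tfs_DefaultCollection_Test.mdf',
         MOVE N'Tfs_DefaultCollection_log' TO N'E:\RestoreTest\Tfs_DefaultCollection_Test.ldf',
         NORECOVERY, STATS = 10;

    -- Roll the log forward to the synchronization mark
    RESTORE LOG [Tfs_DefaultCollection_Test]
    FROM DISK = N'E:\Backups\Tfs_DefaultCollection_Log.trn'
    WITH STOPATMARK = 'TfsBackupMark', RECOVERY;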

Thursday, 28 August 2014

TFS Transaction Marking on SQL Server AlwaysOn Data Tier

If you need to manually back up the Team Foundation Server – you might have several reasons for not using the OOB tool – you need to follow this walkthrough on MSDN.

What I’d like to share is a small script you might use when you have to back up a Team Foundation Server running on an AlwaysOn-backed Data Tier.

I created an hourly job on both nodes, running one minute before the Transaction Log Backup job.

In our case we back up the Primary Replica, so before initiating the marked transaction I check the role of the local replica: if it is 1 it is the primary, otherwise it is a secondary (2) or it is resolving (0) – both cases where my job must not run.
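
Those values match the role column of sys.dm_hadr_availability_replica_states, so the guard prepended to the marked-transaction step boils down to something like this simplified sketch (it assumes a single Availability Group on the instance):

    -- role of the local replica: 1 = PRIMARY, 2 = SECONDARY, 0 = RESOLVING
    DECLARE @role TINYINT;

    SELECT @role = ars.role
    FROM sys.dm_hadr_availability_replica_states AS ars
    WHERE ars.is_local = 1;

    IF ISNULL(@role, 0) <> 1
    BEGIN
        -- Fail the step so the job stops here on a secondary or resolving replica
        RAISERROR('Not the primary replica - skipping the marked transaction.', 16, 1);
        RETURN;
    END;

    -- ...otherwise carry on with the marked transaction from the walkthrough...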

It might be a little bit overzealous, because if you run the very same job on a replica which is not the primary (the secondary, in our case) you get an execution error anyway, stating the databases are read-only – but better safe than sorry!

Wednesday, 20 August 2014

Why is my Incremental Analysis Database Sync going on forever?


Sometimes it happens: the Incremental Analysis Database Sync job keeps running for hours and hours – and when it finally shows as stopped, it is only because I stopped it myself. Why does it happen?
 
The reason is pretty simple: if the job is running but you have a network problem – an outage, as happened to me – the TFS Job Agent might not report the job state, and the job may appear to go on for hours even though it has already released all its resource locks.
 
You can safely stop the job by invoking the Web Service on the TFS Application Tier – you’ll need to call SetAnalysisJobEnabledState with FullyDisabled first and then with Enabled, so that processing restarts with the next scheduled job.
 
And remember – do NOT process the TFS_Analysis OLAP cube with SSMS, as it is not supported by the Microsoft CSS.

Wednesday, 13 August 2014

How I learned to get on well with Git

Those who know me certainly know I am not the biggest…err…fan of Git :)

Thanks to Gian Maria and his continuous support I managed to understand how Git works and why it is so powerful. I am not saying it is “better than” something else – it is different, and it has its own pros and cons.

So, it’s distributed. Distributed does not mean anarchic – it means distributed. If you want some sort of centralisation, go for a Remote: you can use it as a shared repository, like a central depot, without losing any advantage of the DVCS concept.

Committing something is not the same as pushing something: a commit is local, a push sends your commits to a Remote.

A Git Fetch gets all the objects from the Remote which are not in your local repository. A Git Pull does more: it also merges those changes into your local branch, much like a Get Latest Version.

Finally – install SourceTree. It’s an amazing GUI client with a fantastic branch visualisation.

Monday, 4 August 2014

Can’t refresh the TfsOlapReport connection? Have a look at the Trusted Data Providers…

You open the SharePoint Dashboard and you suddenly see this error:

An error occurred during an attempt to establish a connection to the external data source. The following connections failed to refresh. TfsOlapReport

Fair enough, something must have happened to the Excel Services. Did it? Actually not – if you open that specific report in the local Excel client, it refreshes the data and works as usual.

What happened?

In our case that specific error was just a generic refresh error. I went back and forth over all the usual suspects – SSAS permissions, the SSS token in the file, SharePoint settings, even firewall ports – but nothing changed.

Then I noticed that some reports were working (the Burndown, for example) while this one (Active Bugs by Priority) wasn’t. So what?

Looking at the Connection String, I saw the Burndown report used MSOLAP.3 as a provider, while the broken report was using MSOLAP.5.

A quick double check on the SharePoint server (Manage Excel Services Application –> Trusted Data Providers) led to the solution: MSOLAP.5 was not listed as a Trusted Data Provider.

Once I added MSOLAP.5 to the list, everything worked as expected again and the reports were showing correctly.

Tuesday, 15 July 2014

Demystifying the Scrum of Scrums

The Scrum of Scrums is often seen as something ‘which grew out of control’, ‘just for Scrum Masters’ or something suited only to very large organizations.

It isn’t, actually…and it’s not rocket science, either.

A Scrum of Scrums is the best possible way of clearing doubts and questions raised among teams. It must not be merged with, or confused with, a bigger standup meeting (as I’ve heard it described…) because it is something run by the teams’ representatives – the Scrum Masters.

Its purpose is to get a clear understanding of the problem domain and provide a solution – after all, the Scrum Master is there to remove impediments.

And yes, a Scrum of Scrums can have its own backlog: Jeff Sutherland defines the Scrum of Scrums as “…an operational delivery mechanism”, so having a backlog is perfectly reasonable.