Saturday, 17 January 2015

Continuous-Integration-as-Infrastructure and Git Flow

This came up at Friday’s meeting of the London ALM User Group – can I set up an infrastructure CI build for my whole Git project, configured with Git Flow?

I think Git Flow is a great tool for non-enthusiasts, or anyway for people with limited interest in understanding Git. It provides a naming-convention skeleton for your branching strategy, and with SourceTree you can let these users work on their stuff with limited supervision.

gitflow0 gitflow1

So, thinking about it – can Team Build help? Of course it can!

If you select the appropriate Include filter for the branches you’d like in the Build Definition, it is literally one click away:

image

Doing so, you are telling Team Build to run a build (a CI build in this case) on the feature/* branches, where of course the star is a wildcard. Every branch you create under that prefix will benefit from it without any effort.

So you commit whatever you need, and CI is already working on every branch you wish – provided the mapping covers it.

gitflow2 gitflow3
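The feature/* filter follows ordinary glob semantics. As a quick sketch of how the wildcard picks up branches (hypothetical branch names; Python’s fnmatch is used here only as a stand-in for Team Build’s matching):

```python
from fnmatch import fnmatch  # glob-style matching, a stand-in for the Include filter

pattern = "feature/*"
branches = ["feature/login", "feature/search-ui", "develop", "release/1.0"]

# Only the branches under the feature/ prefix would trigger the CI build definition.
matching = [b for b in branches if fnmatch(b, pattern)]
print(matching)  # → ['feature/login', 'feature/search-ui']
```

Any new branch created with the Git Flow conventions (feature/…, release/…, hotfix/…) falls under one of these patterns, which is why a single filter covers them all.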

Wednesday, 14 January 2015

Visual Studio Release Management as a Service – a quickstart

Let’s face it – cloud computing is utterly cool (pay-as-you-use service, near-100% availability, no infrastructure costs) but it is not for everyone. I myself keep maintaining my own lab even though I use cloud services, both for learning and production purposes. But there are tools whose infrastructure and setup costs are not for everyone – think about Exchange.

IMHO Visual Studio Release Management is amazing, but its infrastructure and setup/maintenance costs might not be to every taste. That’s why having it as a service in Visual Studio Online makes it even better!

If you install the Release Management Client on your machine, you just need to connect to Visual Studio Online instead of an on-premises server:

image

After you log on with your MSA, you will notice the URL automatically changing – that is VSRMaaS!

image

This is the only setting you have to care about. All the rest is managed by Visual Studio Online.

So, how can you use it? There are a couple of limits for now: it is a beta, and it supports only vNext environments. A vNext environment is an agent-less deployment environment which leverages PowerShell DSC, and it can live either in the cloud or on-premises. After that, it’s all the usual VSRM stuff…

image

The caveat is that you need to use DSC – hence there are no pre-built activities, just your own scripts:

image

These scripts feed the Release Pipeline, exactly like agent-based VSRM deployments.

If you need a jumpstart though, use the integrated menu in Visual Studio: it is going to create stages, components and templates on your behalf!

image

Thursday, 8 January 2015

Useful SQL queries for the TFS Administrator

Right, right – I know:

image

image

I would actually replace can with will, but people might think I am too harsh. Remember – the TFS databases must not be modified by anybody but Microsoft; otherwise you won’t get support, you might experience unknown (and untested!) issues and – most importantly – you would lose any upgrade path, as the schemas are checked by the TFS installer.

Anyway, I won’t be talking about modifying the databases; instead I want to share some useful queries I found during my almost ten years of experience with Team Foundation Server.

SQL Server is a deterministic system, but sometimes it doesn’t seem to be. For instance, when AlwaysOn is synchronising data after you have messed up a Team Project Collection used for testing purposes, it might look stuck while it is actually doing something.

The AlwaysOn Dashboard doesn’t show anything but a Synchronizing status, so how can I tell?

Run this query:

SELECT dmv_2.login_name AS Invoker,
       dmv_1.session_id AS SPID,
       dmv_1.command AS 'Instruction Type',
       a.text AS Query,
       dmv_1.start_time AS 'Initiated at',
       dmv_1.percent_complete AS Percentage,
       DATEADD(second, dmv_1.estimated_completion_time / 1000, GETDATE()) AS ETA
FROM sys.dm_exec_requests dmv_1
CROSS APPLY sys.dm_exec_sql_text(dmv_1.sql_handle) a
INNER JOIN sys.dm_exec_sessions dmv_2
    ON dmv_1.session_id = dmv_2.session_id

This is what you’ll get:

image

Excluding the line where I am the Invoker – of course – I can see all the activity at 11:08am:

  • NT AUTHORITY\SYSTEM running sp_server_diagnostics
  • The TFS and SQL service account in this testing environment running an INSERT query – that is AlwaysOn!

That query leverages SQL Server’s DMVs (Dynamic Management Views), and it is very handy for checking everything happening in the Database Engine.
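One detail worth calling out: sys.dm_exec_requests reports estimated_completion_time in milliseconds, which is why the query divides by 1000 before handing it to DATEADD. The same arithmetic, sketched in Python purely for illustration:

```python
from datetime import datetime, timedelta

def eta(estimated_completion_time_ms: int, now: datetime) -> datetime:
    # Mirrors DATEADD(second, estimated_completion_time / 1000, GETDATE()):
    # milliseconds are converted to whole seconds, then added to the current time.
    return now + timedelta(seconds=estimated_completion_time_ms // 1000)

print(eta(90_000, datetime(2015, 1, 8, 11, 8)))  # → 2015-01-08 11:09:30
```

Forget the division and your ETA ends up a thousand times further in the future, which is a classic head-scratcher with this DMV.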

Another one is about Transaction Log files – what about their physical status?

SELECT Name,
database_id,
log_reuse_wait,
log_reuse_wait_desc
FROM sys.databases

Querying sys.databases gives you the status of your Transaction Log files. I needed that because I was running some extreme tests in borderline conditions, hence I had to monitor their status. sys.databases is very handy for other information as well.
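The numeric log_reuse_wait codes map to the log_reuse_wait_desc strings. A small lookup table (based on the sys.databases documentation for the SQL Server 2012/2014 era – double-check against your version) can make a monitoring script more readable:

```python
# Common log_reuse_wait codes from sys.databases; verify against the
# documentation for your exact SQL Server version before relying on them.
LOG_REUSE_WAIT = {
    0: "NOTHING",
    1: "CHECKPOINT",
    2: "LOG_BACKUP",
    3: "ACTIVE_BACKUP_OR_RESTORE",
    4: "ACTIVE_TRANSACTION",
    5: "DATABASE_MIRRORING",
    6: "REPLICATION",
    7: "DATABASE_SNAPSHOT_CREATION",
    8: "LOG_SCAN",
    9: "AVAILABILITY_REPLICA",
}

def describe(code: int) -> str:
    """Return the human-readable reason the log space cannot be reused."""
    return LOG_REUSE_WAIT.get(code, "OTHER")

print(describe(2))  # → LOG_BACKUP
```

LOG_BACKUP, for instance, means the log is waiting for a log backup before truncating – exactly the kind of state you want flagged during extreme tests.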

Finally, as Grant suggests (and he is always right!), running DBCC CHECKDB regularly is a must. And please, keep a test server around so you can compare behaviours, if needed.

Monday, 22 December 2014

Troubleshoot a Team Project deletion

A colleague of mine once said “it’s never a stupid question if you don’t know the answer.” So this post might sound stupid, but I have had people asking about it, hence… here it goes!

You might need to delete a Team Project, and it is a matter of seconds, isn’t it?

image

Unfortunately, that is not always the case. But you can do a lot to understand what went south, just by using the TFS Admin Console.

Firstly, when you have a DeleteProject job running, you can actually check what it is doing. It is not very intuitive, but if you double-click it, you can access this:

image

Ok, the job fails. You know what? If you double-click the failed job you can get a very detailed log:

image

and digging down there you will surely find the reason why the job fails:

image

In that specific case, well… just size your testing environment accordingly, ok? :)

Tuesday, 9 December 2014

Reducing Technical Debt with Smart Unit Tests

One of the reasons behind Technical Debt is the lack of an appropriate test suite around a certain feature. Especially when implementing something new, tests are critical in shaping a robust, quality solution. Often, if you have something in the works and you are not strictly practising TDD, tests are behind where they should be.

Visual Studio 2015 introduced Smart Unit Tests, which are nothing but the former MSR Pex project, rebranded and productised. What Pex/Smart Unit Tests does is analyse your code and create a basic suite of unit tests covering the basic and border scenarios. Here is an example:

image

Right click on the method, Smart Unit Tests

image

and here is the result:

image

Of course – this is a really, really basic scenario. What is interesting IMHO is how it is doing it behind the scenes:

image

As mentioned, it is a full-fledged unit test. Very basic, but still a good starting point, saving time while the feature is in the works. And if you save it, the Smart Unit Tests engine automatically creates a new Test Project with the aforementioned tests in it. Again, it is not meant to remain as-is (“Sum748” is not a great test name, for instance…) but it is still better, IMHO, than doing everything on my own.

Let’s make things a bit harder now:

image

That is very crappy code on a small scale. No exception management at all, just the plain, down-to-the-bone feature, potentially still in development. I can hear people screaming, but it happens extremely often in every organisation. This is the output of Smart Unit Tests in this scenario:

image

It seems I need to spend some time on handling DivideByZeroExceptions and OverflowExceptions, to begin with…
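For the curious, those two failures are exactly the classic 32-bit integer division edge cases. A sketch in Python (checked_divide is a hypothetical helper emulating C# int division, not the code in the screenshots) shows why Pex flags them:

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def checked_divide(a: int, b: int) -> int:
    """Emulate C# 32-bit integer division, surfacing the edge cases Pex hunts for."""
    if b == 0:
        raise ZeroDivisionError("division by zero")          # DivideByZeroException
    q = abs(a) // abs(b)                                     # C# truncates toward zero
    if (a < 0) != (b < 0):
        q = -q
    if not INT32_MIN <= q <= INT32_MAX:
        raise OverflowError("result does not fit in Int32")  # OverflowException
    return q

print(checked_divide(7, -2))  # → -3
# checked_divide(1, 0)           raises ZeroDivisionError
# checked_divide(INT32_MIN, -1)  raises OverflowError, since 2**31 > INT32_MAX
```

The int.MinValue / -1 case is the sneaky one: it is the only division whose result does not fit back into a 32-bit integer, and a tool exploring border values finds it immediately.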

Monday, 1 December 2014

Lab Management and Environments – what to remember

Lab Management’s SCVMM environments are nothing more than a bunch of Virtual Machines running somewhere in a datacentre. Really. I do not understand the reluctance (almost fear!) I see when I mention them.

Let’s start with Network Isolation. It is an extremely handy feature, allowing a side-by-side deployment of multiple instances of an environment with the same properties (machine names, IP addresses – basically everything which should not be duplicated on a network). It is very cool.

And guess what, there is a clear, step-by-step guide on how to create a Domain Controller VM to be used as a template for a Network Isolated environment. Basically, once you have installed ADDS you need to clean up the DNS.

Once you have the VMs ready, I would suggest composing some environments to be reused, so you don’t have to search for the VMs every time. To then enable Network Isolation, you need to tick this checkbox in the Advanced tab of the wizard:

image

That is all you need to do. SCVMM will then add a secondary network card to each VM to enable this feature, but that is nothing you should worry about.

Also remember that unless you set auto-provisioning, your VMs won’t be automatically shared among the Team Projects in a Collection. You can import them anyway from the library you used to store the template.

image

One last thing to remember about VM templates – always enable the File and Printer Sharing firewall exception, otherwise the deployment will fail and you won’t be able to connect to the VMs via the MTM Environment Viewer, for instance.

If you want an all-in-one reference, have a look at this appendix from Testing for Continuous Delivery with Visual Studio 2012 – even though it covers the older version, everything is still relevant. The whole book is actually on the matter, so I suggest having a look at it.

Another misunderstood topic seems to be Test Settings. We all have seen the fantastic demos with screen and audio recording, but then all of a sudden you cannot set it up in your lab.

To enable that feature, you need to install the Desktop Experience Feature on your Windows Server VMs:

image

and then select the Screen and Voice Recorder diagnostic data adapter from the Test Setting you want to use:

image

Each DDA (Diagnostic Data Adapter) can be configured to better suit your usage; in this case, just bear in mind that you are storing big binary files inside the Team Project Collection database, so its size might grow very quickly if you use this a lot. Moreover, there are a number of useful settings you might use:

image

You can copy specific files (not tied to Version Control or the build output) to the VMs, run pre- and post-test execution scripts, or even force 32 or 64-bit execution in case you need it:

image 

Unfortunately the number of resources here is not immense – MSDN is extremely useful as usual, together with the aforementioned eBook, the Visual Studio ALM Rangers Lab Management Guide and the Pro Team Foundation Server 2013 book.

But again, this is not rocket science so you should be good with them.

Monday, 17 November 2014

How to configure Visual Studio Team Lab Management 2013, once and for all

Every time I go to a conference or user group where Lab Management is mentioned, I hear someone saying “Lab Management? I never understood how it sticks together…”, “Wow, it must be an adventure to set it up!” and so on…

Well, after all Visual Studio Team Lab Management (yes, fancy name) is not rocket science at all! It is just a clever mix of many different components, each doing a different job, enabling the “Virtual Test Fabric” scenario. Nothing more, nothing less.

To begin with, you would need System Center Virtual Machine Manager (2012 R2), at least one Hyper-V host, Team Foundation Server (2013.4 in this case), a Build Controller and a Test Controller.

Assuming SCVMM is installed and configured (how: install the SQL Server Database Engine, install SCVMM pointing at it, add a Hyper-V host), you need to install the SCVMM Console on the Team Foundation Server Application Tier. Now you can configure Lab Management!

image

You just need to enter your SCVMM FQDN:

image

and – if you wish to use it – an IP Block and a DNS Suffix for your Network Isolated machines:

image

This is the core infrastructure configuration. You are going to see that something is missing though…

image

You have just configured the infrastructure for the whole Lab Management deployment; what’s missing is the configuration for each Team Project Collection you want to enable.

The two settings you need are:

  • A Library Share (a normal SMB share) containing the SCVMM templates used by VSTLM to create your VMs
    image
  • A Host Group (it’s actually optional, as SCVMM creates a default “All Hosts” Host Group, which in your case is enough, as we are assuming you are starting with a single Hyper-V host server)
    image

As mentioned, the Auto Provision flag enables all the Team Projects contained in your Collection.

Now the only missing piece is a Test Controller bound to Lab Management. In fact, if you launch Test Manager and try to create a new environment, it will complain:

image

So, let’s install the Test Controller and configure it:

image

If you need it, configure a Lab Service Account as well. This is helpful in cases where you need to resort to Shadow Accounts (or you can’t add the service account to the Local Administrators group), but let’s keep it simple and skip it for now. Just keep it in mind:

image

That’s all! This is the whole Lab Management configuration! Is it still rocket science? In another post we will look at the environments’ configuration and at some useful tips from the real world.