Monday 31 July 2017

My take on build pool optimisation for larger deployments

If you have a large pool of Build Agents it is easy to end up with a terrible headache: plenty of hardware resources to manage, capabilities, pools, queues, and so on.

Bearing this in mind, having a single default pool is the last thing you want IMHO:

[screenshot: the default agent pool]

There are exceptions to this of course – for example if you work on a single system (loosely defined: a single product or a single suite of applications) or if you have a massive, horizontal team that spans the company.

Otherwise, pulling all the resources together can be a bit of a nightmare, especially if company policy gets in the way – what if each development group or product team needs to provide the hardware for its own pools?

Breaking this down means you can create pools based on corporate organisation (build/division/team/whatever), on products (one pool per product or service) or on geography.
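
If you end up scripting this split, the agent pool REST API can do the heavy lifting. Here is a minimal PowerShell sketch, assuming a personal access token with the right scope – the account URL, pool names and api-version are placeholders you would adapt:

    # Minimal sketch: create one agent pool per team through the REST API.
    # Account URL, PAT, pool names and api-version are placeholders/assumptions.
    $account = "https://fabrikam.visualstudio.com"
    $pat     = "<personal-access-token>"
    $headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

    foreach ($team in "Payments", "Mobile", "DataPlatform") {
        $body = @{ name = "$team-Pool" } | ConvertTo-Json
        Invoke-RestMethod -Method Post `
            -Uri "$account/_apis/distributedtask/pools?api-version=3.0" `
            -Headers $headers -ContentType "application/json" -Body $body
    }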

Performance should be taken into account in any case: you can add custom capabilities to mark anything special about your machines:

[screenshot: custom agent capabilities]

Do you need a CUDA-enabled build agent for some SDKs you are using? Add a capability. Is your codebase so legacy or massive that it benefits from fast NVMe SSDs? Add a capability. You get the gist of it after a while.

That becomes very nice, because with capabilities you can define your ideal build requirements when you queue a build, and the system will pick an agent that has everything you need – saving you the hassle of finding the right machine manually.
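
As an aside, the same capabilities are queryable from outside the portal. A rough PowerShell sketch – pool id, capability name and account details below are made up – to list the agents in a pool that expose a given user capability:

    # Minimal sketch: list the agents in a pool exposing a given user capability.
    # Pool id (2), capability name ("CUDA") and account details are examples only.
    $account = "https://fabrikam.visualstudio.com"
    $pat     = "<personal-access-token>"
    $headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

    $agents = Invoke-RestMethod -Method Get `
        -Uri "$account/_apis/distributedtask/pools/2/agents?includeCapabilities=true&api-version=3.0" `
        -Headers $headers

    $agents.value |
        Where-Object { $_.userCapabilities.PSObject.Properties.Name -contains "CUDA" } |
        Select-Object name, status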

Maintaining these Build Agents is also important – that is why a Maintenance Job can be scheduled to clean up the _work folder on the agent:

[screenshot: agent pool maintenance job settings]

This can have an impact on your pools – that is why you can specify that only a certain percentage of agents undergoes the job at once. Everything is also audited, in case you need to track down something going south.
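
If you want to see what such a clean-up boils down to, here is a rough PowerShell equivalent you could run on an agent by hand – the agent path and the thirty-day cut-off are assumptions, and -WhatIf keeps it harmless until you are happy with the filter:

    # Rough equivalent of the maintenance clean-up: remove numbered working
    # directories under _work that have not been touched for a while.
    # The agent root path and the cut-off are assumptions.
    $workFolder = "C:\agent\_work"
    $cutoff     = (Get-Date).AddDays(-30)

    Get-ChildItem -Path $workFolder -Directory |
        Where-Object { $_.Name -match '^\d+$' -and $_.LastWriteTime -lt $cutoff } |
        Remove-Item -Recurse -Force -WhatIf   # drop -WhatIf once the list looks right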

Wednesday 26 July 2017

So many things I like in the new Release Editor!

Change is hard to swallow – it is human nature and we cannot do much about it :) – so, like every change, the new Release Editor can come as a surprise for some.

[screenshot: the new Release Editor]

To be fair, I think it is a major step forward, for a few reasons. Usability is at the top of the pile of course, as I can get a high-level overview of what my pipeline does without digging into the technical details of the process.

Then if you look at the Artifacts section, you will see how many sources you can choose from:

[screenshot: artifact source types]

VSTS being a truly interoperable DevOps platform, you are spoilt for choice – I really appreciate having Package Management in such a prominent place, because it enables all sorts of consumption scenarios for NuGet packages as a build output, including a cross-organisation open model.
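
To give an idea of that open model, this is roughly how a build output ends up in a feed from a console – the feed name, URL, package path and credentials below are placeholders:

    # Minimal sketch: push a build output to a Package Management feed with nuget.exe.
    # Feed name, URL, package path and the PAT are placeholders.
    $feedUrl = "https://fabrikam.pkgs.visualstudio.com/_packaging/TeamFeed/nuget/v3/index.json"

    # Register the feed once; a personal access token works as the password.
    & nuget.exe sources Add -Name "TeamFeed" -Source $feedUrl `
        -Username "vsts" -Password "<personal-access-token>"

    # Push the package produced by the build (the ApiKey value is not validated here).
    & nuget.exe push ".\drop\MyLibrary.1.0.0.nupkg" -Source "TeamFeed" -ApiKey VSTS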

Then, in the Environments section, the templates provided cover lots of scenarios, and not only with cloud technologies. One that is going to be really appreciated in hybrid DevOps situations is the IIS Website and SQL Database Deployment template.

[screenshot: the IIS Website and SQL Database Deployment template]

This template creates a two-phase deployment that serves as a starting point for most on-premises deployments with IIS and SQL Server.

The Web App Deployment phase supports XML transformations and variable substitutions by default:

[screenshot: Web App Deployment settings]

The data side of the story is really interesting IMHO, as it uses DACPACs by default, alongside options for running a .sql file or inline SQL:

[screenshot: SQL Database Deployment settings]
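
This is roughly the same kind of deployment you could do by hand with SqlPackage.exe; a minimal sketch, where the SqlPackage path, file names, server and database names are placeholders:

    # Minimal sketch: publish a DACPAC by hand with SqlPackage.exe.
    # The SqlPackage path, file names, server and database names are placeholders.
    $sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe"

    & $sqlPackage /Action:Publish `
        /SourceFile:".\drop\MyDatabase.dacpac" `
        /TargetServerName:"sqlserver01" `
        /TargetDatabaseName:"MyDatabase" `
        /p:BlockOnPossibleDataLoss=True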

I think it is clear that I really like it :)

Tuesday 11 July 2017

Git, TFS and the Credential Manager

A colleague rang up saying he could not clone anything from the command line, but everything was fine in Visual Studio. All he got from PowerShell was an error stating he was not authorised to access the project.

He did not want to set up a PAT or SSH keys, and this behaviour was quite odd to say the least. There was also a VPN in the mix.

At the end of the day, the easiest way to get around this was to use the Windows Credential Manager:

[screenshot: Windows Credential Manager]
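
For the record, this is more or less the kind of thing that sorted it out – host and repository names below are made up:

    # Rough sketch of the fix: make sure Git uses the Windows credential helper,
    # then clear the stale cached entry so the next clone prompts again.
    # Host and repository names are placeholders.
    git config --global credential.helper manager

    # Find and remove the stale entry stored by the Credential Manager.
    cmdkey /list | Select-String "git:"
    cmdkey /delete:git:https://fabrikam.visualstudio.com

    # The next clone asks for credentials and stores the new ones.
    git clone https://fabrikam.visualstudio.com/DefaultCollection/_git/MyProject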