Monday 22 April 2019

Make your life easier with Group Rules in Azure DevOps Services

This is something that comes in very handy - licence management isn't the simplest thing to implement and police. There is always someone who is going to try and sneak around the process in place, in order to quickly get access.

Group rules make this really simple. It's basically a templating model applied to the Azure DevOps licensing system:

In this example I created a rule for a fictitious Workshop - all my users will have a Basic licence (as they might not be Visual Studio subscribers) but they will be automatically added to the Project Administrators group.

This might feel simplistic, but you can add as many projects as you like to a rule, which means that creating these rules can help shape the organisation's management, especially in large companies - you can have Visual Studio subscribers automatically added to all projects, and Basic users (who might be contractors, for example) scoped to their business unit, where costs are usually accounted for and split.

If you are an Organisation Administrator, this will save you plenty of time 😁

Thursday 11 April 2019

Do you want to move to YAML pipelines? Here is how I would do it.

YAML pipelines can be daunting, no question about it - especially if you come from a background with a nice UI like the one for Build Pipelines and land on a plain text file describing the pipeline itself.

Well, I find it daunting anyway :-)

But continuous improvement means leaving the comfort zone to experiment, and that is how I felt when approaching YAML pipelines. It's a bit like Git, if you like...

So, first of all don't try to mix and match the two - always start afresh. It is double the effort to try to apply something like this in a brownfield situation. Be confident with it first, and then apply it to something else.

Once you remember that the file needs to be named azure-pipelines.yml, my best friend in this process is the YAML schema reference: it contains lots of useful examples to help you structure the pipeline, plus the references for Bash, Script and PowerShell, so you are not completely lost when you approach the task catalogue.
The documentation is now YAML-first, so it is easy to follow. Start by replicating a simple .NET Framework build from the UI in YAML, then try again with a small variation. You will quickly get the hang of it.
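To give an idea of what to aim for, a minimal .NET Framework build in YAML could look roughly like this - task versions and the pool image are only indicative, so adjust them to whatever your organisation uses:

trigger:
- master

pool:
  vmImage: 'windows-2019'

steps:
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    restoreSolution: '**/*.sln'

- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
    configuration: 'Release'
    platform: 'Any CPU'

- task: VSTest@2
  inputs:
    configuration: 'Release'
    platform: 'Any CPU'

Commit it to the root of the repository and the pipeline will pick it up from there.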

But it can be long, tedious and time consuming, which is why Microsoft created the YAML assistant, a brilliant feature that merges YAML pipelines and the old UI-based designer.

If you need to add a task and you are not sure what to do, show the assistant and select the task from there:

And you will be able to fill all the parameters like you would in a UI-based pipeline:

Once this is done, add it to the pipeline and you are done with it:

This is a very smart way of handling it, and over time you will find yourself reaching for it more and more.

Wednesday 20 March 2019

Use Azure DevOps Release Gates to check for website availability and automate stage flows

Modern deployment patterns rely on automation, everywhere. A common request in this space is to automatically verify if a web resource is up and running before proceeding with the deployment.

Instead of having a script that runs within a Stage, why not leverage Release Gates? At the end of the day, they are designed with automation in mind.

To make this example generic enough, I created an Azure Function (code here) based on the HttpTrigger sample that checks a URL's availability. If the status code is not 4xx or 5xx, it returns a simple JSON output:
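The linked repository has the actual code; purely as a sketch of the idea (assuming the stock C# HttpTrigger template, a url query string parameter, and IsUp/Code as the output property names), it goes something like this:

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class CheckUrl
{
    private static readonly HttpClient Client = new HttpClient();

    [FunctionName("CheckUrl")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // the URL to check comes in as a query string parameter
        string url = req.Query["url"];

        bool isUp;
        int code;
        try
        {
            var response = await Client.GetAsync(url);
            code = (int)response.StatusCode;
            // anything below 400 means the site answered successfully
            isUp = code < 400;
        }
        catch (HttpRequestException)
        {
            // DNS failure, connection refused, and so on
            code = 0;
            isUp = false;
        }

        return new OkObjectResult(new { IsUp = isUp, Code = code });
    }
}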

It is not the coolest or cleanest code, but it works 😁 Now, once this Function is up and running, we can leverage it within a Release Gate in a Pre-deployment condition:

Set the Completion event to ApiResponse, and check the value of the output as a Success criteria. If IsUp is true, then the website is up and running:

You can actually make it even smarter by checking for both IsUp and the code:
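For illustration, assuming the function returns IsUp and Code as above, the Success criteria expression would look something along these lines (root refers to the JSON body returned by the Completion event):

and(eq(root['IsUp'], 'true'), eq(root['Code'], 200))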

This would be enough to have the check in place, and automate the gatekeeping of your stage flow.

Thursday 14 March 2019

Did you know? Changing default and comparison branch in Git from Azure DevOps

This was one of those things you never realise until you actually look at it:

Have you ever noticed that many people just leave these at the default setting (both Default and Compare on master) without giving it much thought?

Well, it is easy to change for both.

Compare:

Right click on any other branch, and select Set as compare branch.

Default:

From the Repository settings, right click on any other branch and select Set as default branch.

Now, you might wonder why you might want to change these settings.

The Compare branch is a per-user setting - you might want to use it to set your development branch as the baseline, to see how far master is ahead of or behind develop, for example.
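The portal does the counting for you, but the same ahead/behind question can be answered locally with something like:

git rev-list --left-right --count master...develop

which prints how many commits are only on master and how many are only on develop.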

On the other hand, the Default branch is for Pull Requests - it identifies the default branch for merging code into when creating a new Pull Request.

Friday 8 March 2019

A quick reflection on git reset

This is fairly quick, but I hadn't realised how important it is until now - a friend made such a mess of his repository, adding and changing files before committing, that he wanted to delete the folder and start from scratch.

Is it worth it? Not really, you can move around Git's history with this command:

git reset --hard <SHA of the target commit>

Why do I say it is an important thing to keep in mind? Think of it this way: for a user coming from TFVC, this is the equivalent of a local Undo All, of Get Specific Version, and of Get Latest Version (with HEAD instead of the SHA).
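In practice it looks like this (the SHA is made up, pick yours from git log):

# list recent commits to pick a target SHA
git log --oneline -5

# "Undo All": throw away every local change and go back to the last commit
git reset --hard HEAD

# "Get Specific Version": move the branch back to a given commit
git reset --hard 1a2b3c4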

Sunday 17 February 2019

Using the Basic Process Template in Azure DevOps to make support management easier

I love the introduction of a Basic Process Template in Azure DevOps, and I will tell you why – it provides a very simple structure for projects that do not want or require a more complex set of prescriptions, but it still draws on the best practices and the consensus of Agile methodologies, so you are not left totally in the dark or on your own.

An example is support management (as well as escalation management – for the sake of this post I will only refer to support, but many of these concepts apply to both). While there are tools like ServiceNow which provide an end-to-end lifecycle for support tickets, for certain teams or organisations it can be like hammering a picture nail into the wall with a sledgehammer. Doable, but at what cost? Let's use this scenario as an example.

For starters, you don't really need Iterations here. While you could use them – nothing wrong with iterations within a support team – you can do equally well without them; it is up to the team and how it is organised. Areas are what you might need: at least one area per product. Don't be tempted to add many sub-areas under each product – you can easily leverage tags and be way tidier that way.

Also – don’t be scared if you see this error message:

It is perfectly normal: you don't have Bugs in this project, you have Issues instead. So disregard it.
On to your backlog now: you can start by clicking the button to create a new Work Item – which is going to be an Issue:

You can go on for a while. I also really like the fact that you can define where the new Work Item goes: top, bottom or at selection. Really neat IMHO.

Once you have a backlog, people start working on it – Azure Boards come in very handy here, and they are already preset in a way that many people will find intuitive:

If you think about it, support management is the perfect example of a Lean project in practice. Stuff comes in, it’s worked on for a while and it comes out on the other side with a status. That’s really it, and the Basic Process Template is a perfect starting point for this kind of approach.

And let me stress that – it is a starting point. You can customise it like any other Process Template and change everything about it, but IMHO its real value lies in the simplified angle it takes, making life easier for teams and situations where other structures might be overkill.

Sunday 10 February 2019

The continuous quest for automation in DevOps, and the Azure ML Studio example

When you think about it, most of the tasks you carry out in a DevOps environment revolve around automation.

Infrastructure as Code? It’s automation, right? Cool. Testing? As automated as possible. Integration with 3rd party systems (SonarQube, WhiteSource, Azure DevOps, Jira, anything you retrieve data from or send data to) – it is automated.

You might think I live in a DevOps bubble. Let's expand then – SRE? Data processing? How many of the things we rely on in modern development have automation at their core? Excellent, you got your answer.

Mind you, it is not a Cloud exclusive. At the end of the day it's technology we are talking about – hence my mantra: "A technology problem is never really a problem. Technology can be bent at will."

Now, I am working with a client at the moment who has many efforts going on, and I was asked if I knew of a way to automate the deployment of Azure Machine Learning Experiments from Azure ML Studio.

Being completely new to the technology I spent some time on it, and despite being a machine learning tool it is quite WYSIWYG – hence no automation whatsoever, at the moment. Being a cloud-based product, there isn't a shadow of a doubt that it will eventually be implemented (given enough demand, obviously), but it is manual today. And I need to do this today, so there was no option other than going down a custom route.

Let's pretend for a minute that I live under a rock and don't know how to use a search engine. The first step here would be to fire up Fiddler and see what happens when you press the buttons in the tool's UI.

It is a modern web-based tool so there is communication between the UI and some API layer. There will be some sort of authentication involved. You can work your way around it. Eventually, you will be able to replicate this interaction with a script, and put it in your pipeline.

Given I do not live under a rock and I do know how to use a browser, the first search result is the excellent Azure ML PS. Using it in a set of PowerShell scripts, stored in a Git repository and then consumed by an Azure PowerShell task in Azure DevOps Pipelines, is really trivial. Again, this is a valid proposition for both on-premises and cloud-based systems.
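In pipeline terms that boils down to very little: something like the snippet below, where the script path, the service connection name and the task version are placeholders to adapt to your own setup:

steps:
- task: AzurePowerShell@4
  inputs:
    azureSubscription: 'my-azure-service-connection'
    ScriptType: 'FilePath'
    ScriptPath: 'scripts/Deploy-Experiment.ps1'
    azurePowerShellVersion: 'LatestVersion'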

Pipelines are brilliant for this – and I mean it. If you have something ready, or something that cannot use a task, just throw it into a PowerShell or a Bash script and you are in business. Use a Windows agent or a Unix agent, I don't really care to be fair – as long as you can interact with the target system with a script, everything is good in my view.

Sure, there will be situations where automating gets really difficult. Sometimes you can't avoid doing manual things; other times you will be better off rewriting the whole thing. But usually we are on a quest for continuous automation, and it is what keeps us going. Keep automating!