Naeem Sarfraz

Blogging about Enterprise Architecture, ALM, DevOps & happy times coding in .Net

The Promise of Windows Containers

Done. You've been working hard to build that new feature and can finally move your backlog item to done. You're happy with the level of unit testing, and after some functional testing you're confident you've built something that works. But wait, you're not actually done because it's not yet in production.

Done Done. Now starts the laborious job of packaging up components, setting the correct configuration settings, and deploying. Right, files have been copied, so you begin testing only to discover something is wrong, and immediately you're thinking "but it worked on my machine." Which file did we forget to copy or remove? Was there a dependency that should have been installed? More often than not it's a human mistake.

Maybe you're using a Continuous Integration server to build, test and package your application binaries, striving to deploy the same set of binaries to each of your environments (if you're doing an xcopy-style deployment, manually copying files to a server, then you're doing it wrong). Containers can help. Simply put, a container is a package of your application files and their dependencies which will run almost anywhere, consistently.

The value proposition for Containers is driven by the reliability of packaging and deploying a known quantity.

A Container is made up of your application files, dependent libraries and the configuration of its operating environment. These three things are packaged using a layered file-system and stored as an image in a public or private registry. To run our application we create an instance of our Container using the registry image and we're in business.
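As a concrete sketch of those three things (the base image name and folder paths here are illustrative assumptions, not taken from any real project), packaging a classic ASP.NET site into a Windows Container image can be as short as a two-line Dockerfile:

```dockerfile
# Illustrative sketch only: image tag and paths are assumptions.
# Start from Microsoft's IIS + ASP.NET base image; this supplies
# the operating environment as ready-made file-system layers.
FROM microsoft/aspnet

# Add our application files as a new layer on top.
COPY ./MyWebApp /inetpub/wwwroot
```

Running `docker build` produces the layered image, `docker push` stores it in a registry, and `docker run` creates a running instance of the Container from that image on any compatible host.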

Some of the benefits of using Containers you can look forward to include:

  • Package your application and its dependencies into one logical unit making running and deploying a whole lot easier
  • Better utilisation of your hardware and host operating system due to a higher density of running applications
  • Ready for scale-on-demand, horizontal\cloud scaling, by simply running multiple instances of your container

Over the coming weeks I'd like to share with you how to get started with Windows Containers and the types of common patterns you'll employ when shifting your .NET applications to this paradigm.

A flavour of the topics will include:

  • Building & debugging inside a Container
  • Container Patterns: Service, Task, Tools
  • Composing applications using Docker Compose
  • Setting up a CI\CD pipeline

I hope you'll join me and if there are any particular topics you'd like to see please feel free to leave a comment below.

Release Management 2015: Copying Failed for Robocopy, Component Name Must Match Artifact Name

Overnight we upgraded our on-premise TFS 2015 installation, applying Update 3, and this morning I came in to find all our overnight builds had failed. Our overnight CI builds trigger a release in Release Management, deploying the latest version of our applications to our dev environment. The problem lay with the Release Management upgrade (Release Management Client for Visual Studio 2015 Update 3).

Release Management reported the following error.

System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
---> Microsoft.TeamFoundation.Release.Common.Helpers.OperationFailedException: Deployment started on
target machine...

System.AggregateException: Failed to execute the powershell script. Consult the logs below for
details of the error.
Copying failed. Consult the robocopy logs for more details. ---> System.Management.Automation.RuntimeException:
Copying failed. Consult the robocopy logs for more details. ---> System.Management.Automation.RuntimeException:
Copying failed. Consult the robocopy logs for more details.

2016/07/01 05:03:53 ERROR 2 (0x00000002) Accessing Source Directory \\SERVER\Builds\APPLICATION\1683\Web\Web\
The system cannot find the file specified.

Back to Basics

We have a convention for our TFS builds where using the Publish Artifact Task we’ll organise the contents of the build Artifacts in folders: Web, SQL or App.


Each folder in the TFS Build Artifacts will have a one-to-one mapping to a Release Management component similar to this.


The Path to package normally points to the root of the TFS Build Artifacts folder, and we append the word Web so that the component will look for the deployable bits in this folder. This was the only bit I could link back to the error, as I saw robocopy was attempting to copy from a folder path that ended with ..\Web\Web. I quickly set up a new component and played around until I came across this error.

ERROR: 0 artifact(s) found corresponding to the name 'vNext-Deploy-DEBUG' for BuildId: 1693.
Rename the component such that it matches uniquely with any of the available artifacts of the
build : SQL, Web.


Aha. The last error pointed to the fact that the name of the component defined in Release Management must match your published artifact (the top-level folders in your TFS Build Artifacts). In fact I found that it doesn't have to match the name exactly, but the component name should contain the name of the artifact. A typical example of the name we use is vNext-DeploySalesApp-WebApp.

The word Web in the Path to package field would explain why it tried to copy files from ..\Web\Web. This box cannot be left empty, so we replaced it with .\ and normal service was resumed.

P.S. Very soon we’ll be glad to see the back of Release Management in its WPF client form and move all our releases to the new version which runs out of the TFS site. Can’t wait.

Netflix Purchase from Apple Store Scam

Over the past couple of weeks a couple of family members have been caught out by a very good phishing email purporting to be from the Apple Store. Wikipedia describes phishing in the following terms.

Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details (and sometimes, indirectly, money), often for malicious reasons, by masquerading as a trustworthy entity in an electronic communication.

Here is a copy of the email.


Illusion of Authenticity

You’ll have to admit the email looks convincing, and to the untrained eye you could easily fall for it. The colours play on Apple’s use of grey tones, although the Apple logo is clearly missing. On receiving this email you’re immediately thinking: what purchase? Let me investigate.

All of the links except one point to the Apple website, again adding to the impression this has come from a legitimate source. But if you’ve received this email out of the blue there’s only one thing you’re interested in and that’s cancelling the subscription. Has someone purchased a subscription from my account without me knowing? Did I forget to cancel my renewal? There’s only one way to answer these questions. Follow the Manage\Cancel Subscription link.

Spotting a Scam

Hovering over the Manage\Cancel Subscription link we see it points to a domain registered in Poland (PLEASE do not enter your Apple ID if you follow this link). We know this because the domain ends with Poland’s country code, and a WHOIS search confirms it. Clearly not the Apple site. In fact if you go to the root domain you’ll see it appears to be the website for a hotel in Poland, something which a Google search also confirms.

So why is a Polish hotel aiding the scammers? They’re not. Their website is a WordPress site and it appears to have been hacked. Those who hacked the site placed some code that redirects the visitor to a site designed to look like the Apple login page, but more on that later. Why would they do this? They’re piggybacking off a reputable website and know access to it will not be blocked.

An Apple Login

Following the hacked link takes you to this page.


Again, it looks pretty convincing. Many of the links on the site point to real links on the Apple website; however, the page does not belong to Apple, and here’s how you can tell.

  1. Take a good look at the URL. It may contain the word apple, however look at the root domain name: it should be Apple’s own domain, and in this case it isn’t.
  2. Look for the padlock. The actual Apple login page uses SSL, which ensures that as your username and password are passed to Apple they remain encrypted and therefore private. In Google Chrome look out for the green padlock as shown below.
  3. A spelling mistake. Everyone can make spelling mistakes, and this is no sure way of spotting a scam, but I noticed the password field was spelt incorrectly: Passwort, ending with a t instead of a d.

Once the victim has used their real Apple username and password, the site asks them for their billing address, credit card details, account number + sort code and other personal information. Enough information now to impersonate them and attempt fraudulent transactions.

In this particular scam the family members logged in using their actual Apple username and password. With these details the scammers attempted to make a £1000 purchase at Debenhams, which the bank fortunately picked up, blocked and notified the account holder about.

I’ve Fallen For This So What Can I Do?

  1. Change your Apple password immediately.
  2. If you’re using the same password on other websites go and change those passwords too.
  3. If you’re not using a password manager then consider doing so now. In this case it would not have filled out the login form, as the URL does not match the Apple website URL. 1Password is highly recommended.
  4. Inform your bank that you think your card details may have been obtained fraudulently. I would consider actually cancelling the card itself and getting a new one.
  5. Report the scam to the Police on the Action Fraud website. Action Fraud is the UK’s national reporting centre for fraud and cyber crime where you should report fraud if you have been scammed, defrauded or experienced cyber crime.
  6. Report the phishing email to your email provider. For example here are instructions from Gmail on how to do that.

One final piece of advice, just remember that looks can be deceiving.

Random Puzzles in Scratch

Using the Super Scratch Programming Adventure examples I set our Code Club students recreating a memory game using the Mona Lisa puzzle game from the book. The completed version “hard codes” the sequence you need to memorise, so one of the students set about creating a truly random sequence generator. I want to document one way of doing this in Scratch here, for his benefit and for anyone else who might find it useful.

The sequence is presented to the game player by a series of sequential blocks like so:


And judging the game player’s response is even more unwieldy, as there is lots of duplication and it is very difficult to modify if you want to extend the sequence. Go to the project hosted on the Scratch site for the full code and click “See Inside.”


Generating a Random Sequence

The numbers from which we want to generate our sequence represent the options available to the game player, e.g. 1 to 4. To avoid the “hard coding” we’re going to need a list variable to hold the random sequence. We can now fill this list using this block of code:


Sometimes this will generate a sequence where a number repeats, like “2 2 1 4”. For the game player this isn’t a nice experience, so I’m going to add a check so that it will only add the random number to the list if it doesn’t match the last random number.
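Outside Scratch, the same idea can be sketched in a few lines of Python (an illustrative translation of the blocks, not the student’s actual code): keep picking a random number, and only append it to the list when it differs from the last number added.

```python
import random

def generate_sequence(length, options=4):
    """Build a random sequence of button numbers (1..options),
    never repeating the same number twice in a row."""
    sequence = []
    while len(sequence) < length:
        candidate = random.randint(1, options)
        # Only append if the list is empty or the candidate
        # differs from the last number added.
        if not sequence or candidate != sequence[-1]:
            sequence.append(candidate)
    return sequence
```

The check against `sequence[-1]` is exactly the guard added to the Scratch script: a rejected candidate is simply thrown away and another random number is drawn.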


Judging the Last Move

Working out whether the game player entered the correct move is really easy now that we have a list of random numbers. I chose to solve this by:

  1. Saving the game player’s chosen move to a lastkey variable
  2. Checking that the value of the lastkey variable and the last number added to the list are the same
  3. If so, the game player got it right
    1. Let them know they got it right using the say block, and
    2. Remove the last item from the list, as we don’t need to check this number again
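The steps above can be sketched in Python too (the name lastkey mirrors the Scratch variable; this is an illustrative translation, not the project’s code):

```python
def judge_move(lastkey, sequence):
    """Compare the player's chosen move (lastkey) against the last
    number in the sequence list; on a match, remove that number so
    it is not checked again."""
    if sequence and lastkey == sequence[-1]:
        sequence.pop()  # step 3.2: done with this number
        return True     # step 3.1: the player got it right
    return False
```

Because each correct move pops the last item, the list shrinks as the player works through the sequence, and an empty list means the whole sequence was entered correctly.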


The Final Solution

Here is the new version.


A few things to note with this solution:

  • I haven’t implemented “lives” for the game player, which would limit you to three attempts at getting the sequence correct
  • When you start you get the option of choosing a level. This simply extends the number of steps you need to memorise

Security Changes in ASP.NET Core from NDC London 2016

Workshop: ASP.NET 5 (now Core) Authorization Lab

Following my previous post (this one had been sat in my drafts and I forgot about it), here are the new .NET security workshops and presentations from NDC London. Note! All of the code developed was against .NET Core RC2 from the daily builds.

What’s new in Security in ASP.NET 5 and MVC 6 - Dominick Baier

A run around the new ASP.NET Data Protection & Authorization Stacks - Barry Dorrans

ASP.NET Core from NDC London 2016

Following my recent post on adopting .NET Core I wanted to share some talks and workshop material from NDC London.

Workshop: What’s new in ASP.NET 5 and MVC 6

Two days with Damian Edwards and David Fowler was a fantastic opportunity to dive into the deep end and get a taste of things to come. All of the labs can be found in this repository but for ease I’ve listed them below. Note! All of the code developed was against .NET Core RC2 from the daily builds.

A brief history of ASP.NET: From 1.0 to 5.0 - Damian Edwards and David Fowler

Saying “Goodbye” to DNX and “Hello!” to the .NET Core CLI - Damian Edwards & David Fowler

Middleware Tips and Tricks for ASP.NET 5 - Scott Allen

ASP.NET 5 on Docker

.NET Core: Should I stay or should I go?

.NET is undergoing a major reboot, and if you work in the ecosystem you simply cannot ignore it. Originally presented as an upgrade carrying the name vNext, and then ASP.NET 5, it has been sensibly renamed to ASP.NET Core and .NET Core to reflect the new offering.

What is it all about?

The .NET framework has grown over the past decade into a feature-rich platform underpinning many of Microsoft’s server products, such as Dynamics CRM & SharePoint. As developers we’ve also used this framework to build incredible web applications and sites using the goodies each version increment brought. Starting with WebForms, we quickly adopted AJAX, WPF, WCF, MVC & SignalR, not to forget language enhancements including generics, LINQ and async\await. Microsoft looked after us well.

The landscape has changed and is changing, and the .NET team have reacted. .NET Core is their answer: a fork of the .NET framework stripped down into NuGet packages that you consume as you require. Your application can be deployed in an XCOPY fashion, removing the dependency on any server-installed component. It’s cross-platform, modular (most features delivered as NuGet packages), faster (using a new web server called Kestrel), and is being developed in the open (over at GitHub).

The journey to this point wasn’t easy, with several “beta” drops causing pain (listen to Rick Strahl on DotNetRocks), late renaming of the new product, more renaming of the tools and a pushed-back RTM release. What we’ll have when it does RTM will not be on par with the current full-framework offering either; it is still being defined.

Enter the Cloud & Open-Source

The rise and success of Microsoft Azure is transforming the way Microsoft delivers services, with faster releases and shorter deployment times. With a noticeable number of Azure machines running Linux, Microsoft will want to expand and grow this part of their offering. So what does .NET have to do with this?

.NET is not a friendly player in this space, being the huge monolith that it is; there is a price to pay for all of the good stuff it brings to lighten the load of a developer. Microsoft need more from .NET. First of all it needs to be cloud-friendly, which means scalable, even if this isn’t a requirement for non-Azure customers. It needs to be cross-platform, able to run not only on Windows servers but on Linux too. There’s a big win here from a customer’s perspective, as a Linux server hosted in Azure costs less to run than a Windows server.

And why open-source? By opening up the code for all to see they are inviting the support of the community to build an even better product, and it’s not all one-way. Microsoft teams themselves are contributing to open-source projects to encourage wider adoption of their products and services, which is a win for everyone. For .NET Core to succeed as a viable cross-platform framework, contributions from the community are essential.

Developing in the Enterprise

Whilst most Dark Matter developers don’t exclusively exist in the Enterprise, I’ve met many that do, especially where they are working exclusively in a Microsoft stack. These developers are getting things done and building most of the business software out there with the tools and libraries that a vendor like Microsoft supplies. They’ll use open-source software but rarely contribute code back. They’re looking to be hand-held through a solution using a tried and tested, production-ready framework that will make life easier. .NET Core doesn’t fit this model.

What should you do?

If you’re an Enterprise team I don’t believe .NET Core will be right for you right now, maybe never. The ecosystem (tools and third-party libraries) has some way to go before it’s ready to give you the productivity you desire. Keep an eye on its development as it will RTM sometime this year (Q2 2016?), but continue to use the full framework and be prepared to review this in 2017.

Ask yourself: how does .NET Core add value to the work I currently do? If you’re working exclusively in the Microsoft stack then it’s not going to add much. Another way of looking at it: what problems are we solving with .NET Core? Again, if you’re working in the Microsoft stack, not the right ones.

Developing .NET Core was necessary for Microsoft in this new world of cloud computing; I just don’t think it’s right for Enterprise teams, but at least we now have a choice, which can only be a good thing. .NET Core will enable new scenarios and that’s where the fun begins.

Trigger an Agent Release from TFS 2013 Build

You’re using TFS 2013 to build your application continuously, on a schedule, or both, giving you feedback on how well your team is doing at integrating their work with one another. Using Release Management (RM) you can deploy your application into your Dev environment, then UAT and finally Production, all at the click of a button. This post describes how you can trigger that release from a TFS 2013 build.

How could you use this and why? A product team wants to deploy to a Dev environment on every commit to the central repository. This environment is used by the developers to test the integration of all of the different components of a product before releasing into UAT. This can be described as Continuous Deployment. If the product has many components you may not want to deploy on every commit; instead you could deploy daily as the result of a scheduled build, e.g. a nightly build.

You can do this by modifying the build process template used by your build definition, of which there are several:

  • DefaultTemplate.xaml
  • GitTemplate.xaml
  • TfvcTemplate.xaml
  • UpgradeTemplate.xaml

Installing the new build process template

Install RM, and new versions of the above templates are installed on your server for you to use. On the server where you have installed RM Server, go to C:\Program Files (x86)\Microsoft Visual Studio 12.0\ReleaseManagement\bin\ where you’ll find the following files:

  • ReleaseDefaultTemplate.xaml – TFS 2010
  • ReleaseDefaultTemplate.11.1.xaml – TFS 2012
  • ReleaseGitTemplate.12.xaml – TFS 2013
  • ReleaseTfvcTemplate.12.xaml – TFS 2013
  • ReleaseUpgradeTemplate.xaml – TFS 2010

You can also find these files on a machine where you have installed the RM client application; the location is C:\Program Files (x86)\Microsoft Visual Studio 12.0\Release Management\Client\bin (just be aware that some have reported the templates in the RM Client directory might not work). The RM client can be downloaded from your MSDN subscription or from here as a limited trial version. Now add these files to source control by creating a folder called BuildProcessTemplates at the root of your Team Project repository, drop the files in there and commit (or check in), so the path looks something like (for TFVC) $/MyTeamProject/BuildProcessTemplates/ReleaseTfvcTemplate.12.xaml.


Configure your build

Create your new build and select one of the new templates:


You’ll want to take note of the following RM-specific properties:


  • Configuration to Release – String array, normally set to Any CPU|Release
  • Release Build – Boolean value (true or false), true if you want to trigger the release
  • Release Target Stage – String value; set this to the name of one of your RM stages, e.g. DEV or PROD, if you want to deploy into that environment only, otherwise leave blank

The new release process templates trigger RM by making a call to ReleaseManagementBuild.exe, which you get when you install the RM client. This means that wherever you have a TFS Build agent installed, you will need the RM client installed on that machine too.

Trigger a new build and confirm that ReleaseManagementBuild.exe was called and a release was created and triggered. Using the RM client you should see the new release.

Modifying an existing build process template

You may have modified one of the default templates or created your own version of it in which case you will want to add the RM capabilities to it. There is a post over at MSDN on the Visual Studio ALM blog that details exactly how to do this.

Keeping up with the Times and in particular Technology

As an Architect, keeping up with frameworks, best-practice patterns and emerging technologies can be quite a challenge, and over the past year I’ve used a few productivity tools which have been invaluable. So I’d like to share a few things I do to keep on top of it all.

  • Engaging with the community via Twitter
    Noise. Great as Twitter is, watch out, as it can be a terrible distraction to your flow. I often wonder how much work some people actually do given the amount they tweet. However it’s useful for keeping up with current developments and engaging with leaders in your field of interest.
    And here’s a tip: “the best way to get the right answer on the Internet is not to ask a question, it's to post the wrong answer.” Cunningham’s law.
  • Keeping track of blogs with Feedly
    There are an infinite number of blogs out there with posts to fill your day, and at some point you’ll actually want\need to do some work. Currently I am tracking around 50 blogs and spend around 30 – 60 minutes a day catching up with them, using the following tools to keep track of what I’ve read, what I intend to read and notifications of new posts when they become available. Some posts I glance over whilst others require more concentration and sometimes a quiet room.
    • Feedly – a blog aggregator which has a nice plugin for Chrome allowing me to quickly mark a post as read when it doesn’t interest me. There are mobile apps available too.
    • OneTab – OneTab keeps my browser looking sane, because a screen full of open tabs is no strategy!
      If I can’t read an article within a minute then it gets added to OneTab, my read-later list, much like Scott Hanselman talked about.
  • Listening to Podcasts
    Commuting to work, stepping out in the car between errands or travelling on a long journey to a conference are great times to catch up on my podcast collection. Here are the current work-related ones I’m following:
  • Track a subject of interest using StackOverflow tags
    This is a great way to learn from other people’s experiences if the subject you are tracking is new to you and has a narrow focus. I wouldn’t suggest tracking something as broad as jQuery or C#, but I’m finding Domain-Driven Design much more manageable with a daily digest email containing 3 or 4 new questions, most of which have answers.

Sometimes I can’t help but feel overwhelmed by the amount of information out there, which is why I devised this strategy. Over time I have started to reduce the number of people I follow on Twitter and the number of blogs I track.

I hope you’ll find this useful, and please share your productivity tips in the comments below.

Understanding TFS & Release Management Integration

We’re in a bit of a transition phase for these two products as the integration gets tighter and tighter. The story began with the acquisition of Release Management from InCycle in 2013, and they will soon become one product with little trace of the WPF client app which is used to configure and manage deployment workflows.

TFS Build has the continuous integration story covered. Now that you want to deploy your application, you need to switch over to Release Management and trigger a release. Triggering a release was a manual action in earlier versions of the product, but it can now be triggered automatically, opening up continuous deployment strategies to you.

The information below outlines a series of how-to articles I’ll be posting over the next month covering what I’ve learnt whilst integrating the two products. Not everyone has the luxury of upgrading to the latest versions of Visual Studio or TFS, so I’m hoping this will be useful for someone looking for a consolidated source on the subject.

Using a subset of the complete options Jakob Ehn wrote about in his post Triggering Releases in Visual Studio Release Management, I’m focusing on the on-premise options available to you.



| # | Build | Release Management | Deployment Type | Trigger Mechanism | Post (links will be added as I publish posts) |
|---|-------|--------------------|-----------------|-------------------|-----------------------------------------------|
| 1 | TFS 2013 Build | RM 2013 Update 2 | Agent-Based | ReleaseManagementBuild.exe via ReleaseBuildTemplate for Xaml build | Trigger an Agent Release from TFS 2013 Build |
| 2 | TFS 2013 Build | RM 2013 Update 2 | Agent-Based | ReleaseManagementBuild.exe via post-build PowerShell script for Xaml build | Trigger an Agent Release from TFS 2013 Build |
| 3 | TFS 2013 Build | RM 2013 Update 3 | vNext | REST API via post-build PowerShell script for Xaml build | Trigger a vNext Release from TFS 2013 Build |
| 4 | TFS 2015 Build | RM 2013 Update 2 | Agent-Based | ReleaseManagementBuild.exe via PowerShell task | Trigger an Agent Release from TFS 2015 Build |
| 5 | TFS 2015 Build | RM 2013 Update 3 | vNext | REST API via PowerShell task | Trigger a vNext Release from TFS 2015 Build |



Some of the terminology can be confusing, so here is an explanation of the top-level functions in Release Management.

  • Agent Release – Deployment template built from pre-defined functional building blocks, with Windows Workflow managing the workflow. Requires a deployment agent to be installed on the target server.
  • vNext Release – Deployment template using PowerShell, with Windows Workflow managing the workflow. Does not require a deployment agent, however it will require an account with local admin permissions on the target server.
    • PowerShell – Scripts you will have authored yourself for deploying components, such as an xcopy job, web deploy or a dacpac package.
    • PowerShell DSC – Scripts utilising Desired State Configuration.

The future – PowerShell all the things!

The Release Management Service, now available in public preview in Visual Studio Team Services, will replace the current WPF client app and related architecture. RMS has basically been re-built and works in a similar way to the new build system in TFS 2015. A summary of the new features can be found here, and this is what it looks like.

This means Agent-based deployments will be a thing of the past, as the way forward is paved with PowerShell scripts. If you’re currently using Agent-based deployments then the best thing you can do is start using vNext templates now and begin deploying using PowerShell. These PowerShell scripts can then be re-used in the Release Management Service when it becomes available on-premise. Interestingly, there is a tool created by the ALM Rangers which will attempt to migrate your Agent-based deployment artifacts over to vNext templates.