Moving resources with ARM

Resource Groups allow you to manage multiple resources within Azure as a logical group. Resource Groups are enabled via the Azure Resource Manager (ARM), which is the new mechanism for managing and securing resources in Azure.

If you have created resources via the old Azure Portal or Service Management APIs, then you may notice that they belong to Default-XXXX Resource Groups when viewed in the new Azure Portal.

Current State

As you can see from the following view of one of my subscriptions in the new Azure Portal, I have a Storage Account that is part of a Resource Group named Default-Storage-WestUS. I originally created this Storage Account in the old Azure Portal but have re-used it for the pbubuntu VM. The VM and its associated Cloud Service were created via the new Azure Portal and were placed into a Resource Group named pbubuntu.

Azure Resources - Before

I’ve been cleaning up a number of orphaned Default-XXXX Resource Groups this week. I did however want to keep the portalvhds2knfjr67c7f3q Storage Account, since it held the VHD for my VM. I was looking for a way to move a resource from one Resource Group to another. I wanted to move the portalvhds2knfjr67c7f3q Storage Account into my pbubuntu Resource Group.

Trying to move the resource

I discovered the Azure PowerShell cmdlet Move-AzureResource, and the Move a resource section in the Using Azure PowerShell with Azure Resource Manager blog post described exactly what I was trying to do. But when I ran the cmdlet, it returned immediately with no error and had no effect on my resources.

PS C:\> Get-AzureResource -ResourceId /subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/Default-Storage-WestUS/providers/Microsoft.ClassicStorage/storageAccounts/portalvhds2knfjr67c7f3q | Move-AzureResource -DestinationResourceGroupName pbubuntu

When I involved Fiddler in the debugging effort I noticed that there was no traffic being sent across the wire. No REST API calls were being made. Then I discovered this issue on GitHub – #379 Moving Azure resources doesn’t do anything. This was related to version 0.9.1 of the Azure PowerShell cmdlets – which is what I was running.

No problem I thought, I’ll use the Azure xplat cli, but quickly found that the move feature was not yet implemented on the resource command …

Ok – back to basics then.

I’d look up the REST API endpoint and payloads and use raw REST calls. I opened up the Azure Resource Manager REST API Reference on MSDN and tried to find the REST API for moving a resource. Nothing … This was not going well.

Diving into cmdlet source code

My next step was to download the Azure PowerShell code from GitHub and attempt to discover the REST API endpoint and payload from the Move-AzureResource cmdlet source code. I built the code and ran the MoveResourceTest test to start debugging the cmdlet.


I managed to extract the REST API endpoint (managementUri) and infer the payload structure from the ResourceBatchMoveParameters class. I confirmed my findings with Ilya Grebnov, the Architect and Lead Engineer of Azure Resource Manager.

Thanks to Ilya for getting back to me so quickly and for correcting my initial attempt at the payload.

Moving that resource !

ARMClient is a simple command line tool to invoke the Azure Resource Manager API and can be found on GitHub. It is fairly low level and allows you to interact with the raw REST API directly. ARMClient is great in that it manages the Azure authentication tokens for you. Have a look at the ARMClient: a command line tool for the Azure API blog post to understand this tool better.

ARMClient is available via chocolatey and I installed it as follows:

choco install armclient

I then created the payload in a file named move-resource.json. The targetResourceGroup is the full path to the Resource Group I’d like to move the resource to. In this case it’s the pbubuntu Resource Group in my subscription. The resources collection contains the full path to the resources I’d like to move. My collection consists solely of my portalvhds2knfjr67c7f3q Storage Account.

{
  "targetResourceGroup": "/subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/pbubuntu",
  "resources": [
    "/subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/Default-Storage-WestUS/providers/Microsoft.ClassicStorage/storageAccounts/portalvhds2knfjr67c7f3q"
  ]
}

You will be required to authenticate yourself to use the ARMClient. You can do that as follows:

PS C:\> armclient login

Then I issued a POST to the REST API endpoint for moving a resource out of my Default-Storage-WestUS Resource Group. I included a reference to my move-resource.json file (you need to include the @ prefix) and switched on verbose mode to get as much detail as possible.

PS C:\> armclient post /subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/Default-Storage-WestUS/moveResources?api-version=2015-01-01 @C:\Dev\move-resource.json -verbose
---------- Request -----------------------

POST /subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/Default-Storage-WestUS/moveResources?api-version=2015-01-01 HTTP/1.1
Authorization: Bearer eyJ0eXAiOiJKV...
User-Agent: ARMClient/
Accept: application/json
x-ms-request-id: 586648f9-d5ad-43de-b281-e0269ec3cd48

{
  "targetResourceGroup": "/subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/pbubuntu",
  "resources": [
    "/subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/Default-Storage-WestUS/providers/Microsoft.ClassicStorage/storageAccounts/portalvhds2knfjr67c7f3q"
  ]
}
---------- Response (5333 ms) ------------

HTTP/1.1 202 Accepted
Pragma: no-cache
Retry-After: 15
x-ms-ratelimit-remaining-subscription-writes: 1199
x-ms-request-id: acc728c3-7086-4774-9665-e30d15bf6084
x-ms-correlation-request-id: acc728c3-7086-4774-9665-e30d15bf6084
x-ms-routing-request-id: AUSTRALIAEAST:20150526T044311Z:acc728c3-7086-4774-9665-e30d15bf6084
Strict-Transport-Security: max-age=31536000; includeSubDomains
Cache-Control: no-cache
Date: Tue, 26 May 2015 04:43:11 GMT

After a few seconds, a quick check with the Get-AzureResource cmdlet (filtered to the pbubuntu Resource Group) showed that my Storage Account had been moved.

PS C:\> Get-AzureResource -ResourceGroupName pbubuntu

Name              : pbubuntu
ResourceId        : /subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/pbubuntu/providers/Microsoft.ClassicCompute/domainNames/pbubuntu
ResourceName      : pbubuntu
ResourceType      : Microsoft.ClassicCompute/domainNames
ResourceGroupName : pbubuntu
Location          : westus
SubscriptionId    : [MY_SUBSCRIPTION_ID]

Name              : pbubuntu
ResourceId        : /subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/pbubuntu/providers/Microsoft.ClassicCompute/virtualMachines/pbubuntu
ResourceName      : pbubuntu
ResourceType      : Microsoft.ClassicCompute/virtualMachines
ResourceGroupName : pbubuntu
Location          : westus
SubscriptionId    : [MY_SUBSCRIPTION_ID]

Name              : portalvhds2knfjr67c7f3q
ResourceId        : /subscriptions/[MY_SUBSCRIPTION_ID]/resourceGroups/pbubuntu/providers/Microsoft.ClassicStorage/storageAccounts/portalvhds2knfjr67c7f3q
ResourceName      : portalvhds2knfjr67c7f3q
ResourceType      : Microsoft.ClassicStorage/storageAccounts
ResourceGroupName : pbubuntu
Location          : westus
SubscriptionId    : [MY_SUBSCRIPTION_ID]

End State

And checking the new Azure Portal also confirms my portalvhds2knfjr67c7f3q Storage Account has indeed been moved into my pbubuntu Resource Group.

Azure Resources - After

I’m hoping that the next release of the Azure PowerShell cmdlets will fix the Move-AzureResource cmdlet and that the Azure xplat cli will implement this functionality in the near future. In the interim, you can use ARMClient and the endpoint and payload as described in this post.
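If you'd rather script the interim workaround than use ARMClient interactively, the same request is easy to assemble in Python. This is a sketch with a helper name of my own (build_move_request is not an SDK API) – it only builds the URL and payload from the trace above; to execute it you would still POST with a valid AAD bearer token, e.g. one obtained via armclient token.

```python
import json

# Endpoint and payload shape as captured in the trace above.
ARM_BASE = "https://management.azure.com"
API_VERSION = "2015-01-01"

def build_move_request(subscription_id, source_group, target_group, resource_ids):
    """Return (url, json_body) for the ARM moveResources operation."""
    url = (f"{ARM_BASE}/subscriptions/{subscription_id}"
           f"/resourceGroups/{source_group}/moveResources"
           f"?api-version={API_VERSION}")
    body = json.dumps({
        "targetResourceGroup":
            f"/subscriptions/{subscription_id}/resourceGroups/{target_group}",
        "resources": list(resource_ids),
    }, indent=2)
    return url, body

url, body = build_move_request(
    "MY_SUBSCRIPTION_ID", "Default-Storage-WestUS", "pbubuntu",
    ["/subscriptions/MY_SUBSCRIPTION_ID/resourceGroups/Default-Storage-WestUS"
     "/providers/Microsoft.ClassicStorage/storageAccounts/portalvhds2knfjr67c7f3q"])
print(url)
```

POST the body to the URL with an Authorization: Bearer header and you have the same call ARMClient makes.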

Calling Python from PHP in Azure Websites

I was recently asked for help by an ISV. They needed to generate a SAS (Shared Access Signature) to grant time-limited access to resources that they were storing in Azure Blob Storage. They had developed their solution using PHP and deployed it to Azure Websites. I had generated SAS before using C# and .NET but never using PHP. No problem I thought … we have an Azure SDK for PHP and it’s available on GitHub !

Azure SDK for PHP

But after poring over the code I came to the realisation that there is no functionality to generate a SAS for Blob Storage in the PHP SDK. It seems this was first opened as an issue in 2012 but has never been resolved. So what to do?

I didn’t want to implement a solution that would place the burden of keeping up to date with Azure changes on the ISV. I had a quick look at the Python SDK and the weight lifted off my shoulders when I found the SAS functionality there !

Azure SDK for Python

I verified my approach with the Azure SDK team and got the thumbs up. I was good to go and I could sense the solution would be a matter of minutes away – little did I know …

Test Azure Website

I created a test Website in Azure called callingpythonfromphp and ensured that PHP was enabled (it is by default). I didn’t need to change the Python version setting since Python is installed by default (as are all the other languages). The switches you can see below enable http handlers for the respective languages.

Site Settings for callingpythonfromphp Website

You can check that Python is installed via the Debug Console in Kudu. Here you can see that Python 2.7 is available in D:\Python27.

Kudu - console

I added two files to the D:\home\site\wwwroot folder of my Website – fakesas.py and test-fakesas.php.

Listing of wwwroot

The test-fakesas.php file simply called Python and passed it the fakesas.py script to execute.

<?php echo ">> "; system('D:\Python27\python.exe fakesas.py'); echo " <<"; ?>

The fakesas.py script was so named since all it does is return the string “SAS” rather than actually generating a SAS. This output will appear in the PHP page; in the final version it can be assigned to a PHP variable and utilised.

print ("SAS")

This was a quick test to check whether or not I could call a Python script from PHP in an Azure Website. The >> and << in the PHP script would allow me to visually confirm that the output of the Python script had been inserted where I expected.
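The same round trip is easy to sanity-check locally, with Python's subprocess standing in for PHP's system() call. This sketch writes a temporary stand-in for the fake-SAS script, runs it as a child process, and wraps its stdout in the >> << markers:

```python
import os
import subprocess
import sys
import tempfile

# Write a throwaway script that plays the role of the fake-SAS script.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write('print("SAS")\n')
    script = f.name

try:
    # Run it as a child process and capture its stdout,
    # just as PHP's system() captures the child's output.
    out = subprocess.run([sys.executable, script], capture_output=True,
                         text=True, check=True).stdout.strip()
finally:
    os.remove(script)

print(f">> {out} <<")
```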

Hitting the test-fakesas.php script via the Debug Console in Kudu gave me the expected result. I could see the SAS being output. I had successfully called into Python from the PHP script.

Execute PHP script

I then hit the test-fakesas.php script from the browser. Hmm – this was not good. The SAS string was nowhere to be seen.

Calling php script from Website

Looking at the php_errors.log file in D:\home\LogFiles I found the following:

PHP Warning: system(): Unable to fork [D:\Python27\python.exe] in D:\home\site\wwwroot\test-fakesas.php on line 1

Errors in php_errors.log

The Fix

So there was a difference in behaviour between the Debug Console and the manner in which php-cgi was being launched. After a few emails back and forth with the Azure Websites team, they discovered that the source of the issue was the fastcgi.impersonate PHP setting. It is set to 1 by default and needed to be switched off (set to 0). This led to the following solution on the Kudu Xdt Transform Samples wiki page.

Custom php.ini

This solution allows you to deploy a custom php.ini file for your Azure Website and override settings. When you do this, ensure that ALL instances of fastcgi.impersonate=1 are changed to fastcgi.impersonate=0 and are NOT commented out.

Note that the applicationhost.xdt and php.ini file should be deployed to your D:\home\site folder.

Solution deployed

And now the call works from both the Debug Console in Kudu and the browser.

Finally - it works !

Ok – so that’s all great. But I actually needed to generate a SAS.

Generating a genuine SAS

DISCLAIMER – I’m going to preface this entire section by saying that I am not a Python guru.

The recommended mechanism for installing Python packages is pip but when I ran pip via the Debug Console in Kudu to install the Azure SDK …

D:\Python27\Scripts\pip.exe install azure

I got the following error.

error: could not create 'D:\Python27\Lib\site-packages\azure': Access is denied

I found that the only way I could install the Python Azure SDK via pip into the Azure Website was by starting off with Python’s virtualenv. I found a great blog post that got me up and running quickly. I created a new development environment called myapp.

D:\Python27\Scripts\virtualenv --no-site-packages myapp

Create development environment

This creates an entire new and isolated Python environment and copies utilities and the Python executables into this environment.

Next activate the development environment.


Activate development environment

And then install the Azure SDK.

pip install azure

Install Azure SDK
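As an aside, on modern Pythons (3.3+) the stdlib venv module gives you the same isolation that virtualenv provided here, and the setup can itself be scripted. A sketch (not what the Azure Websites Python 2.7 of the time offered):

```python
import os
import tempfile
import venv

# Create an isolated environment named "myapp", mirroring the virtualenv
# step above. with_pip=False just keeps this demo fast; with pip included
# you could then run `pip install azure` inside the environment.
env_dir = os.path.join(tempfile.mkdtemp(), "myapp")
venv.create(env_dir, with_pip=False)

# The environment gets its own interpreter under Scripts\ (Windows)
# or bin/ (everywhere else).
bin_dir = "Scripts" if os.name == "nt" else "bin"
exe = "python.exe" if os.name == "nt" else "python"
env_python = os.path.join(env_dir, bin_dir, exe)
print(os.path.exists(env_python))
```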

I added the test-sas.php file to the D:\home\site\wwwroot folder of my Website and the sas.py file to the D:\home\site\wwwroot\myapp folder.

Python and PHP scripts

The test-sas.php file simply called Python and passed it the sas.py script to execute. You can see that it is using the Python executable in the myapp development environment. This means it will also have access to the Azure SDK I installed.

<?php echo ">> "; system('myapp\Scripts\python myapp\sas.py'); echo " <<"; ?>

The sas.py script was based on an example from StackOverflow. It generates a SAS granting read-only access to the images/flower.png blob within my imagesstoragepb storage account. This SAS will be output in the PHP page.

from azure.storage import *
accss_plcy = AccessPolicy()
accss_plcy.start = '2015-01-08'
accss_plcy.expiry = '2015-01-10'
accss_plcy.permission = 'r'
sap = SharedAccessPolicy(accss_plcy)
sas = SharedAccessSignature('imagesstoragepb', 'DPnYqQIKTbNn6iC+nH03wjbvHcpm9htIYVZYkGQJEaWhUbEaGj6koypC0R8AW5Zc6L/g8Sj1tmTPMQlWY8m+NQ==')
qry_str = sas.generate_signed_query_string('images/flower.png','b', sap)
print (sas._convert_query_string(qry_str))

Running this in the browser led to the successful generation of the expected SAS !!

Generated a SAS from PHP via Python in Azure
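For the curious, what the SDK is doing under the hood is not magic: a SAS signature is an HMAC-SHA256 over a newline-joined “string to sign”, keyed with the base64-decoded storage account key. The exact fields and their order depend on the storage service version, so treat the following as an illustrative sketch (the key and the string-to-sign layout are made up, not the 2015-era wire format) and use the SDK for real tokens:

```python
import base64
import hashlib
import hmac

def sign(account_key_b64: str, string_to_sign: str) -> str:
    """HMAC-SHA256 the string-to-sign with the decoded account key,
    returning the base64 signature that goes into the sig= parameter."""
    key = base64.b64decode(account_key_b64)
    mac = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(mac.digest()).decode("utf-8")

# Illustrative field layout; real versions add/remove fields.
string_to_sign = "\n".join([
    "r",                                    # permissions
    "2015-01-08",                           # start
    "2015-01-10",                           # expiry
    "/imagesstoragepb/images/flower.png",   # canonicalized resource
    "",                                     # signed identifier (none)
])

fake_key = base64.b64encode(b"not-a-real-account-key").decode("utf-8")
sig = sign(fake_key, string_to_sign)
print(sig)
```

The service recomputes the same HMAC with its copy of the key, which is why a SAS grants access without ever revealing the account key itself.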

So don’t be put off if a feature you need is not available in the PHP SDK. Check if you can leverage the Python SDK. And if you need to override the default PHP settings in Azure Websites you also now have a mechanism to do that.

Love at first site – scriptcs and WAML at Brisbane Azure User Group

I gave a talk at the Brisbane Azure User Group on the 13th November 2013 titled Love at first site – scriptcs and WAML. Here is my slide deck.

scriptcs is putting C# on a diet and decoupling your favourite language from Visual Studio. The Windows Azure Management Library (WAML) is a C# library that wraps the Windows Azure Management REST APIs. These two belong together !

In this talk I introduced scriptcs and the Windows Azure Management Library, before showing how to combine these two awesome resources to script the management of your Windows Azure assets with the full power of C#.

First Look at Built-in Autoscaling and Alerting at Brisbane Azure User Group

I gave a talk at the Brisbane Azure User Group on the 21st August 2013 titled First Look at Built-in Autoscaling and Alerting. Here is my slide deck.


Autoscaling has finally been built into Windows Azure via Microsoft’s acquisition of MetricsHub. The Autoscaling functionality from MetricsHub has been rolled directly into the Windows Azure platform, along with other MetricsHub features such as Availability Monitoring and Alerting.

Win an Aston Martin with your MSDN Subscription!

Most Microsoft developers have an MSDN subscription, yet not many know that it includes up to $150 of Windows Azure credit per month. Activate the Windows Azure benefits included with your MSDN subscription and you could win an Aston Martin V8 Vantage !

Simply activate your Windows Azure MSDN benefit before 30 September 2013 and deploy at least one Web Site or Virtual Machine. What are you waiting for ?

Activate your Windows Azure MSDN Benefit !

Still need convincing ?

You no longer need to provide credit card details when activating your Windows Azure MSDN benefit !

MSDN Professional Subscribers receive $50/month worth of Windows Azure monetary credits, MSDN Premium Subscribers $100/month, and MSDN Ultimate Subscribers $150/month. These credits can be applied towards any Windows Azure resource being used for Dev/Test purposes. It is up to you to decide how you would like to use them.

Here are some examples of how a MSDN Premium Subscriber could use their monthly monetary credits.


Get started now ! Activate your Windows Azure MSDN benefit.

Why you’ll love Windows Azure SDK 2.0 at the Brisbane Azure User Group

I gave a talk at the Brisbane Azure User Group on the 15th May 2013 titled Why you’ll love Windows Azure SDK 2.0. Here is my slide deck.

Why you'll love the Windows Azure SDK 2.0

This major refresh of the Windows Azure SDK was released on 30th April with some really great new features and enhancements.  In the talk I explored the new capabilities in version 2.0 using Scott Guthrie’s post and Damir Dobric’s excellent Service Bus 2.0 post and sample code as guidance.

We looked at the following:

Web Sites
Improved Visual Studio Tooling around Publishing
Management Support within the Visual Studio Server Explorer
Streaming Diagnostic Logs

Cloud Services
Support for new High Memory VM Instances
Faster Deployment Support with Simultaneous Update Option
Visual Studio Tooling for configuring, managing and viewing Diagnostics Data

Storage
Storage Client 2.0 included in New Projects
Visual Studio Table Explorer

Service Bus
Updated Client Library
Message Browse Support
New Message Pump Programming Model
Auto-delete for Idle Messaging Entities

PowerShell 3.0
PowerShell Remoting
New Automation Commands for Web Sites, Cloud Services, Virtual Machines, Service Bus, Windows Azure Store, Storage, Scaffolding cmdlets for Web/Worker Role

Understanding the benefits of Windows Azure geo-redundancy in Australia

The Windows Azure family has been extended with deployment regions in Australia. The deployment regions are paired for geo-redundancy and in Australia are located in the New South Wales and Victoria sub-regions.

Windows Azure data centres

Why is geo-redundancy important ?

Windows Azure Storage is an important service that underpins a number of other Windows Azure services – Blob Storage, Table Storage, and Virtual Machine disks (OS and data volumes).

Geo-replication is enabled by default in Windows Azure Storage and provides the highest level of storage durability. Data is asynchronously replicated from your primary location to a secondary location within the same region. These locations are guaranteed to be at least 400 km apart to ensure data durability across catastrophic events.

The following picture shows the New South Wales and Victoria sub-regions within the Australia geographical region. Here the NSW deployment region is the primary location and is asynchronously replicating data to the Victoria deployment region which is the secondary location.


In the event of a catastrophic event in the primary location where the primary location cannot be restored, all traffic will be failed over to the geo-replicated secondary location. This ensures business continuity.

What about redundancy within the deployment region ?

Within each deployment region there is an additional layer of redundancy. All data is synchronously replicated to 3 different storage nodes across 3 separate fault and upgrade domains. This allows each deployment region to recover from common issues such as disk, node or rack failure.

The following picture demonstrates the 3 copies across 3 separate racks and the creation of a new copy after the failure of a storage node.


When geo-replication is enabled, you effectively have 6 copies of your data distributed across 2 geographically dispersed deployment regions. These multiple layers of redundancy ensure highly durable data and provide business continuity across a number of scenarios.

How do these new deployment regions affect Australian businesses ?

The most obvious answers would be reduced latency and data storage within Australia.

Even though a Windows Azure CDN has been available out of Sydney for a while, it has only offered lower latency on Blob Storage for very specific read-only workloads. With an Australian region available now, lower latency is available to a wider range of Windows Azure services and this will benefit Australian businesses utilising the Windows Azure platform.

Content in Blob Storage that is produced and consumed within the local Australian market will benefit from the lower latency. A more compelling case can now also be made for cloud integrated storage solutions such as StorSimple. Those customers within Australia that have regulatory pressures preventing them from storing data outside of Australia will also be heartened, as this announcement should remove that hurdle.

Australian businesses utilising Virtual Machines within the Windows Azure Infrastructure Services will also enjoy lower latency when connecting via Remote Desktop or SSH.

This is a truly exciting announcement that will hopefully see more Australian companies begin their journey into the cloud.