Autoscaling Azure with WASABi – Part 6

I gave an Autoscaling Azure talk at the Brisbane Azure User Group (BAUG) on the 18th April 2012. This series of posts will walk through the demo I put together for the talk using the Autoscaling Application Block (WASABi).

What are we doing ?

All of our configuration is now complete. We will be running the ConsoleAutoscaler console application and observing how the rules we configured and the queue length of the workerqueue storage account queue affect the number of instances of our Queue Manager web application running in Azure.

Initial State

Ensure that the workerqueue queue is empty in our storage account, either by using Azure Storage Explorer to remove any messages or by clicking the remove button in our Queue Manager web application until the queue length is zero.

Queue Manager - queue length of zero

Ensure that there is only a single instance of the Queue Manager web application running.
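
If you prefer to empty the queue from PowerShell instead of the web application, here is a minimal sketch that reuses the same StorageClient assembly and connection details as the DataCollector.ps1 script shown later in this post (see Part 4 for configuring PowerShell for your Azure account); CloudQueue.Clear() removes every message in one call.

[Reflection.Assembly]::LoadFrom('C:\Projects\WasabiDemo\WebApplication\packages\WindowsAzure.Storage.1.7.0.0\lib\net35-full\Microsoft.WindowsAzure.StorageClient.dll') | Out-Null

$storageKey = (Get-AzureStorageKey -StorageAccountName baugautoscalingapp).Primary
$connectionString = "DefaultEndpointsProtocol=https;AccountName=baugautoscalingapp;AccountKey={0}" -f $storageKey
$storageAccount = [Microsoft.WindowsAzure.CloudStorageAccount]::Parse($connectionString)
$queueClient = [Microsoft.WindowsAzure.StorageClient.CloudStorageAccountStorageClientExtensions]::CreateCloudQueueClient($storageAccount)

# Delete every message in the workerqueue queue
$queue = $queueClient.GetQueueReference("workerqueue")
$queue.Clear()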

Run the Autoscaler Console Application

Make sure Visual Studio is open with the ConsoleAutoscaler application we wrote in Part 2. Ensure that the Autoscaling Block has been configured as per Part 3 and that the Service Information Store and Rules Store have been configured as per Part 4 and Part 5.

Hit F5 to run the ConsoleAutoscaler application.

A Picture is Worth a Thousand Words !

The graph below shows a 32-minute run of the ConsoleAutoscaler application. The Message Count (queue length) is shown in red and the resulting Instance Count in green. The value of the QueueLength_Avg_5m operand defined for our reactive rules is shown in purple.

The queue length and the instance count share the vertical axis. The horizontal axis is time in minutes.

Autoscaling Graph

Increase the Queue Length

I increased the number of messages in the queue, via the Queue Manager web application, to 4 as seen at point 1 in the graph and again to 8 as seen at point 2.

Queue Manager - queue length of 8
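
(The demo drives the queue length through the web application's add and remove buttons, but purely for illustration you could also push test messages from PowerShell, assuming the $queue reference has been set up as in the queue-clearing sketch above.)

# Add four test messages to the workerqueue queue
1..4 | ForEach-Object {
	$queue.AddMessage((New-Object Microsoft.WindowsAzure.StorageClient.CloudQueueMessage("test message $_")))
}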

Since the QueueLength_Avg_5m operand is aggregated over a 5-minute window and we are still within the initial 5 minutes, the graph shows that its value has not yet exceeded our configured threshold of 5.
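
For example (purely illustrative, assuming roughly one queue length sample per minute), a queue that sat at 0, 0, 4, 4 and 8 over its first five minutes averages (0 + 0 + 4 + 4 + 8) / 5 = 3.2, which is still below the threshold of 5 even though the latest reading is 8.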

Our rules are evaluated every minute by the Autoscaling Block, and diagnostic information is sent to the console and a log file as per our configuration. As can be seen from the diagnostics for this period, the Autoscaling Block is taking no action. The number of instances of our Queue Manager web application is still at 1.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3012 : Some instance count changes will be ignored.
Autoscaling Updates Verbose: 3004 : There are no configuration changes to submit for the hosted service.

At around the 6-minute mark, the value of the QueueLength_Avg_5m operand crosses the queue length threshold of 5. As can be seen from the diagnostics for this period, the reactive rule Heavy Load (Increase) has been matched and the Autoscaling Block submits a scaling (up) request for the WasabiDemoWebRole. The result of the scaling request can be seen at point 3 in the graph. The number of instances of our Queue Manager web application is now at 2.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3003 : Role scaling requests for hosted service about to be submitted.
[BEGIN DATA]
… "MatchingRules":"Default, Heavy Load (Increase)"
… "InstanceChanges":{"WasabiDemoWebRole":{"CurrentValue":1,"DesiredValue":2}}
Autoscaling Updates Information: 3002 : Role configuration changes for deployment were submitted.
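
To confirm the change outside the management portal, the same Get-AzureDeployment call that the data-collection script at the end of this post uses will report the current instance count:

(Get-AzureDeployment -ServiceName baugautoscalingapp -Slot Production).RoleInstanceList.Count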

Decrease the Queue Length

I decreased the number of messages in the queue, via the Queue Manager web application, to 6 as seen at point 4 in the graph.

Queue Manager - queue length of 6

The value of the QueueLength_Avg_5m operand is still above 5, and this results in the reactive rule matching again. However, the 2nd instance of the Queue Manager web application is still being brought online, so the Autoscaling Block cannot submit the scaling request. This can be seen in the diagnostics for this period.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Warning: 3005 : The deployment is not in the running status, cannot submit a scaling request now.
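
You can watch this transition yourself; a quick (hedged) check, assuming the deployment object returned by Get-AzureDeployment exposes a Status property, is:

# Reports e.g. Running once the new instance has finished provisioning
(Get-AzureDeployment -ServiceName baugautoscalingapp -Slot Production).Status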

Finally, the 2nd instance finishes spinning up and the matching reactive rule can submit another scaling (up) request for the WasabiDemoWebRole. The result of the scaling request can be seen at point 5 in the graph. The number of instances of our Queue Manager web application is now at 3.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3003 : Role scaling requests for hosted service about to be submitted.
[BEGIN DATA]
… "MatchingRules":"Default, Heavy Load (Increase)"
… "InstanceChanges":{"WasabiDemoWebRole":{"CurrentValue":2,"DesiredValue":3}}
Autoscaling Updates Information: 3002 : Role configuration changes for deployment were submitted.

No Queue Length Changes

During the period between the 15- and 22-minute marks I issued no queue length changes via the Queue Manager web application. The value of the QueueLength_Avg_5m operand was still above 5, but the constraint rules were now protecting us from spinning up too many instances. Our default constraint rule ensures that we cannot spin up more than 3 instances: in the diagnostics below, the reactive rule asks for 4 instances (DesiredInstanceCount) while the constraint caps the target at 3 (TargetInstanceCount), so the change is ignored.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3012 : Some instance count changes will be ignored.
[BEGIN DATA]
… "InstanceChanges":{"WasabiDemoWebRole":{"DesiredInstanceCount":4,"TargetInstanceCount":3}}}
Autoscaling Updates Verbose: 3004 : There are no configuration changes to submit for the hosted service.

Flush the Queue

At around the 22-minute mark I flushed the queue, which resulted in a queue length of zero. This can be seen at point 6 in the graph.

Queue Manager - queue length of zero

As a result, the value of the QueueLength_Avg_5m operand fell below the threshold of 5 at around the 23-minute mark. This resulted in the Heavy Load (Decrease) reactive rule matching at both points 7 and 8 in the graph and the Autoscaling Block submitting scaling (down) requests for the WasabiDemoWebRole. This can be seen in the diagnostics for this period. The number of instances of our Queue Manager web application drops to 2 and then to 1.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3003 : Role scaling requests for hosted service about to be submitted.
[BEGIN DATA]
… "MatchingRules":"Default, Heavy Load (Decrease)"
… "InstanceChanges":{"WasabiDemoWebRole":{"CurrentValue":3,"DesiredValue":2}}
Autoscaling Updates Information: 3002 : Role configuration changes for deployment were submitted.
Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3003 : Role scaling requests for hosted service about to be submitted.
[BEGIN DATA]
… "MatchingRules":"Default, Heavy Load (Decrease)"
… "InstanceChanges":{"WasabiDemoWebRole":{"CurrentValue":2,"DesiredValue":1}}
Autoscaling Updates Information: 3002 : Role configuration changes for deployment were submitted.

The value of the QueueLength_Avg_5m operand is still below 5 after the 30-minute mark, but the constraint rules are now protecting us from spinning down too many instances. Our default constraint rule ensures that we are always running at least one instance: in the diagnostics below, the reactive rule asks for 0 instances (DesiredInstanceCount) while the constraint keeps the target at 1 (TargetInstanceCount), so the change is ignored.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3012 : Some instance count changes will be ignored.
[BEGIN DATA]
… "InstanceChanges":{"WasabiDemoWebRole":{"DesiredInstanceCount":0,"TargetInstanceCount":1}}}
Autoscaling Updates Verbose: 3004 : There are no configuration changes to submit for the hosted service.

How did I capture the data for the graph ?

I wrote a simple PowerShell script to poll the queue length of my workerqueue storage account queue and the number of instances of my Queue Manager web application. Every 30 seconds the script would write out the values. I redirected the output to a csv file which I then opened and manipulated in Excel.

.\DataCollector.ps1 > .\DataCollector.csv

Here is the DataCollector.ps1 script. Refer to Part 4 for a refresher on how to configure PowerShell for use with your Azure accounts.

# Load the Azure StorageClient assembly so we can query the queue directly
[Reflection.Assembly]::LoadFrom('C:\Projects\WasabiDemo\WebApplication\packages\WindowsAzure.Storage.1.7.0.0\lib\net35-full\Microsoft.WindowsAzure.StorageClient.dll') | Out-Null

# Build a connection string for the baugautoscalingapp storage account
$storageKey = (Get-AzureStorageKey -StorageAccountName baugautoscalingapp).Primary
$connectionString = "DefaultEndpointsProtocol=https;AccountName=baugautoscalingapp;AccountKey={0}" -f $storageKey
$queueName = "workerqueue"

# Get a reference to the workerqueue queue
$storageAccount = [Microsoft.WindowsAzure.CloudStorageAccount]::Parse($connectionString)
$queueClient = [Microsoft.WindowsAzure.StorageClient.CloudStorageAccountStorageClientExtensions]::CreateCloudQueueClient($storageAccount)
$queue = $queueClient.GetQueueReference($queueName)

$index = 1
$interval = 30 * 1000   # poll every 30 seconds

# Write a tab-separated header row
"{0}`t{1}`t{2}`t{3}" -f "TimeStamp", "Index", "MessageCount", "InstanceCount"

while ($true) 
{
	# Approximate queue length and current instance count of the Production deployment
	$messageCount = $queue.RetrieveApproximateMessageCount()
	$instanceCount = (Get-AzureDeployment -ServiceName baugautoscalingapp -Slot Production).RoleInstanceList.Count
	$timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"

	"{0}`t{1}`t{2}`t{3}" -f $timestamp, $index, $messageCount, $instanceCount

	$index = $index + 1
	[System.Threading.Thread]::Sleep($interval)
}
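
One note on the output: despite the .csv extension the columns are tab-separated, so use Excel's Text Import Wizard with a tab delimiter, or pull the data back into PowerShell with Import-Csv. A small usage sketch (not part of the original demo):

$data = Import-Csv -Path .\DataCollector.csv -Delimiter "`t"
# Peak instance count over the run (values import as strings, so cast to [int])
($data | ForEach-Object { [int]$_.InstanceCount } | Measure-Object -Maximum).Maximum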

Conclusion

This simple demo has hopefully provided you with sufficient insight into the Autoscaling Application Block and some understanding of how it operates.


13 thoughts on “Autoscaling Azure with WASABi – Part 6”

  1. Great tutorial!
    Just one problem I'm running into:
    when I run it, the console appears and shows an error on this line,
    with the message "configuration system failed to initialize":
    var scaler = EnterpriseLibraryContainer.Current.GetInstance();

    Do you have any idea?
    What am I doing wrong…
    Thank you

      1. Sounds like there is an issue in the configuration within the app.config. Give it a look and see if there is anything that looks misconfigured.

      2. I found the solution 🙂
        Here it is:
        The configSections node must be placed before the system.diagnostics node in the app.config file, and then everything works.

        Thank you for your quick answer,
        Rgds,
        Kenny.

  2. Hi Paul,

    It’s me again 😛
    Just want to say that I followed your tutorial and it works well.
    But now I have to deploy it on a PaaS like Microsoft's Azure.

    Do you have any idea about that ?
    Because every website I've seen explains how to use WASABi, but not how to deploy it on a PaaS.

    Thank you in advance,
    Kenny.

    1. I'm laughing, I keep answering my own questions 😛

      But for this one I don't have an answer yet.
      I created a worker role to deploy WASABi on the cloud, but I get a silly error:

      “Autoscaling General Error: 4101 : Cannot access contents”

      Thank you

      1. Thank you very much.

        I am looking for this link,
        If I have another question, can I ask you on this blog again, or directly at your email address?

        Rgds,
        Kenny

      2. One more thought:

        Don't I have to deploy my XML files (rules, service information) to blob storage?
        I read this article: Autoscaling Application Block and Transient Fault Handling Application Block Reference.pdf

        What do you think?

      3. I just tried to run the worker role, but locally I get the same problem.
        I think I have a mistake in my app.config; that's the last place left to check.

        Thanks

      4. Everything runs well now, but I still get a couple of small errors:

        – Could not retrieve the instance count for hosted service with DNS prefix ‘xxxxx’.
        – Autoscaling Updates Error: 3010 : Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceManagement.ServiceManagementClientException: The service configuration could not be retrieved from Windows Azure for hosted service with DNS prefix ‘wasabiautoscalingappacc’ in subscription id ‘58250cb6-b1e0-41b6-aaf6-5d836ce01075’ and deployment slot ‘Production’. —> Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Security.CertificateException: The certificate with thumbprint ‘5A4081821ADA9A4B8D8E78991532F3152025CECB’ in store name ‘My’ and store location ‘LocalMachine’ could not be found.
