ASafaWeb, Excessive Headers and Windows Azure

ASafaWeb - Automated Security Analyser for ASP.NET Websites

Troy Hunt is a Microsoft MVP for Developer Security and regularly blogs and tweets about security principles. He is the creator of ASafaWeb, the Automated Security Analyser for ASP.NET Websites. Troy has observed that around 67% of ASP.NET websites have serious configuration-related security vulnerabilities. ASafaWeb makes scanning for these vulnerabilities dead easy and provides both remediation advice and additional reading around the resolution.

Troy blogged about excessive response headers in Shhh… don’t let your response headers talk too loudly.

By default, an ASP.NET application running on IIS returns a lot of information about the server and framework via the headers. This information can be used to help exploit vulnerabilities that are present in the technologies the headers identify. If any of the following response headers are returned, ASafaWeb will report a warning for this scan:

  • Server
  • X-Powered-By
  • X-AspNet-Version
  • X-AspNetMvc-Version
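
A quick way to see which of these headers a site returns is to issue a request and dump the response headers. A minimal sketch (the URL is a placeholder for your own site):

using System;
using System.Net;

class HeaderCheck
{
  static void Main()
  {
    var request = (HttpWebRequest)WebRequest.Create("http://example.org/");

    using (var response = (HttpWebResponse)request.GetResponse())
    {
      // Print only the headers ASafaWeb warns about.
      foreach (var name in new[] { "Server", "X-Powered-By", "X-AspNet-Version", "X-AspNetMvc-Version" })
      {
        Console.WriteLine("{0}: {1}", name, response.Headers[name] ?? "(not present)");
      }
    }
  }
}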

I set off to discover how to remove this configuration vulnerability from an ASP.NET Web API application running on Windows Azure.

Create a simple ASP.NET Web API application

Start Visual Studio 2012 as Administrator (we’ll need elevated privileges later when running the Windows Azure Emulator).

Create a new project with File > New > Project and select ASP.NET MVC 4 Web Application. Give the project a name of WebAPI and the solution a name of ExcessiveResponseHeaders. Click OK. Select the Web API template and click OK.

Right click on the WebAPI project in the Solution Explorer and select Add Windows Azure Cloud Service Project. A WebAPI.Azure project will be created with a WebAPI Role associated with the WebAPI Project. The WebAPI.Azure project will be configured as the startup project.

ExcessiveResponseHeaders project - Solution Explorer

The default Web API project created by Visual Studio will contain a simple Values Controller. Throughout this post I’ll be using the GET api/values REST endpoint which will simply return a list of 2 strings [ "value1", "value2" ].

ExcessiveResponseHeaders project - Values Controller GET method
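
For reference, the scaffolded controller looks roughly like this (trimmed to the GET action):

using System.Collections.Generic;
using System.Web.Http;

namespace WebAPI.Controllers
{
  public class ValuesController : ApiController
  {
    // GET api/values
    public IEnumerable<string> Get()
    {
      return new string[] { "value1", "value2" };
    }
  }
}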

Hit F5 and start up the project in the Windows Azure emulator.

Removing the Headers

Using Fiddler and issuing a GET api/values to the WebAPI application running on the Windows Azure emulator we can observe the following response headers:

  • Server: Microsoft-IIS/8.0
  • X-AspNet-Version: 4.0.30319
  • X-Powered-By: ASP.NET

Fiddler - Output for GET

No X-AspNetMvc-Version response header is observed since the api/values route is being managed by the WebAPI framework. If you issued a GET to the following URL: http://127.0.0.1:81/a/a you would observe the MVC header as the MVC framework routing would be brought into play:

  • X-AspNetMvc-Version: 4.0

To remove the Server header we add the following method and code to the Global.asax.cs file in the WebAPI project:

 
protected void Application_PreSendRequestHeaders(object sender, EventArgs e) 
{ 
  // Strip the Server header just before the response headers are sent.
  HttpContext.Current.Response.Headers.Remove("Server"); 
} 

To remove the X-AspNetMvc-Version header we add the following code to the Application_Start() method in the Global.asax.cs file in the WebAPI project:

 
MvcHandler.DisableMvcResponseHeader = true; 
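
In context, the template's Application_Start() ends up looking roughly like this (the other registration calls are the MVC 4 template defaults):

protected void Application_Start()
{
  AreaRegistration.RegisterAllAreas();

  // Suppress the X-AspNetMvc-Version header for all MVC responses.
  MvcHandler.DisableMvcResponseHeader = true;

  WebApiConfig.Register(GlobalConfiguration.Configuration);
  FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
  RouteConfig.RegisterRoutes(RouteTable.Routes);
  BundleConfig.RegisterBundles(BundleTable.Bundles);
}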

Add code to Global.asax.cs

To remove the X-AspNet-Version header ensure that the Web.config has the following xml:

 
<configuration>
  <system.web>
    <httpRuntime enableVersionHeader="false" />
  </system.web>
</configuration>

To remove the X-Powered-By header ensure that the Web.config has the following xml:

 
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

Add config to web config

Hit F5 in Visual Studio again to start up the application with our changes. Using Fiddler and submitting a GET api/values to the WebAPI application now shows that the headers have been suppressed.

Fiddler - No Headers

Deploying this code to Windows Azure (http://pbwebapi.cloudapp.net) confirms that this solution works there too.

In Azure running the default Web API solution (before suppressing the headers):

Headers not suppressed in Azure

In Azure running the modified Web API solution (after suppressing the headers):

Headers suppressed in Azure

Is there a NuGet package for that ?

There always is … And its name is NWebsec.

Revert all the changes made previously in the Global.asax.cs and Web.config files. Right click on the WebAPI project and select Manage NuGet Packages … Search for NWebsec and install.

NWebsec NuGet Package

The installation of the NuGet package modifies the Web.config in the WebAPI project by adding a nwebsec sectionGroup and an entry in modules.

NWebsec web config updates

The X-Powered-By header is added by IIS and cannot be removed in the processing pipeline. Therefore NWebsec cannot remove it. To remove the X-Powered-By header ensure that the Web.config has the following xml:

 
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

To suppress all the other headers ensure that the Web.config contains the following xml:

 
<configuration>
  <nwebsec>
    <httpHeaderModule>
      <suppressVersionHttpHeaders enabled="true" />
    </httpHeaderModule>
  </nwebsec>
</configuration>

NWebsec configuration in web config

That was much better. We only had to add some configuration to the Web.config. There was no code involved.

The NWebsec package does not suppress the Server header but defaults it to a value of Webserver 1.0 – this value can be configured in the Web.config. All other headers are suppressed as can be seen via Fiddler.

Headers suppressed locally

Once the update is deployed to Azure (http://pbwebapi.cloudapp.net) it is confirmed that this solution works there too.

Headers suppressed in Azure

Scanning with ASafaWeb

Scanning the api/values REST endpoint of our default Web API application via the ASafaWeb analyser returns a warning about the same headers we observed via Fiddler.

ASafaWeb - Headers not suppressed

Scanning the api/values REST endpoint of our NWebsec-enabled Web API application via the ASafaWeb analyser returns a warning about the Server header – the same header we observed via Fiddler.

ASafaWeb - NWebSec

Scanning the api/values REST endpoint of our manually configured Web API application via the ASafaWeb analyser returns a warning about a Server: Microsoft-IIS/8.0 header. This is not what we observed via Fiddler.

ASafaWeb - Manual

But wait … What is that URL that is being scanned ?
http://pbwebapi.cloudapp.net/api/values/elmah.axd ?

Sneaky ! That URL resulted in a 404 raised by IIS since the file did not exist. This bypassed the processing pipeline and our attempts to suppress the headers.
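
You can reproduce this locally with a request for any file that does not exist on disk. A minimal sketch (the URL is illustrative):

using System;
using System.Net;

class NotFoundProbe
{
  static void Main()
  {
    var request = (HttpWebRequest)WebRequest.Create("http://127.0.0.1:81/api/values/elmah.axd");
    try
    {
      request.GetResponse();
    }
    catch (WebException ex)
    {
      // The 404 response still carries the headers added outside our pipeline.
      var response = (HttpWebResponse)ex.Response;
      Console.WriteLine("{0} {1}", (int)response.StatusCode, response.StatusDescription);
      foreach (string name in response.Headers)
      {
        Console.WriteLine("{0}: {1}", name, response.Headers[name]);
      }
    }
  }
}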

Closing the 404 – Not Found hole

We’ll be continuing with the manual version of our header-suppression code – I’m aiming to suppress all headers and the NWebsec package still sends down a Server header.

Again there is a NuGet package that comes to the rescue. NotFoundMVC automatically installs itself during web application start-up. It handles all the different ways a 404 HttpException is usually thrown by ASP.NET MVC. This includes a missing controller, action and route.

Right click on the WebAPI project and select Manage NuGet Packages … Search for NotFoundMVC and install.

NotFoundMVC NuGet Package Install

The NotFoundMVC package will make the following modifications to the Web.config file in the WebAPI project. The Error.cshtml placed under Views\Shared may be modified to provide customised 404 pages.

NotFoundMVC updates to Web Config

Deploying to Azure and scanning the api/values REST endpoint of our manually configured Web API application via the ASafaWeb analyser returns a PASS.

ASafaWeb - Pass

But wait – there’s more …

You’d think I would be happy with the PASS ! But I had seen something in one of the URLs attempted by the ASafaWeb analyser. It had thrown an illegal character into the URL. So I tried the following URL locally:

HTTP.SYS Bad Request

Why is that Server header back ? And why is it now reporting Microsoft-HTTPAPI/2.0 ?

When you see a Server header value containing Microsoft-HTTPAPI instead of the IIS version it means the HTTP.SYS driver handled the request and it did not traverse all the way to the IIS user mode process.
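
You can see this for yourself without a scanner by pushing a raw, malformed request at the site. HttpWebRequest will refuse to send an illegal URL, so drop down to a socket. A minimal sketch, assuming the emulator is still listening on port 81 and using '<' as the illegal character:

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class BadRequestProbe
{
  static void Main()
  {
    using (var client = new TcpClient("127.0.0.1", 81))
    using (var stream = client.GetStream())
    {
      // HTTP.SYS rejects the unescaped '<' itself, so the 400 Bad Request
      // below never reaches IIS or the ASP.NET pipeline.
      var request = "GET /a<b HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n";
      var bytes = Encoding.ASCII.GetBytes(request);
      stream.Write(bytes, 0, bytes.Length);

      using (var reader = new StreamReader(stream, Encoding.ASCII))
      {
        Console.WriteLine(reader.ReadToEnd()); // look for Server: Microsoft-HTTPAPI/2.0
      }
    }
  }
}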

So how do we get rid of that header now ? This header is being added right down at the kernel level. Time to break out the trusty regedit utility. Add a DisableServerHeader REG_DWORD with a value of 1 under the following key:

  • HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters

HTTP.SYS Registry Editor

This requires the HTTP.SYS service to be restarted. I battled with service dependencies but a reboot of my machine ensured the restart. Once that was done, the illegal URL no longer exposed the Server header.

HTTP.SYS Bad Request with suppressed header

And finally – modifying HTTP.SYS in Windows Azure

Trying the URL with the illegal character produced the same result in Windows Azure.

HTTP.SYS Server Header - Azure

To edit the registry on the web role in our cloud service in Windows Azure we’ll need to write a batch file. Right click the WebAPI project and click Add > New Item … Add a Text File with name Configure_HTTP.SYS.cmd and populate with the following:

 
@echo off
setlocal
set regpath=HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\HTTP\Parameters

rem Only add the value (and trigger a reboot) if it is not already present.
reg query "%regpath%" /v "DisableServerHeader"
if errorlevel 1 (
   reg add "%regpath%" /v DisableServerHeader /t REG_DWORD /d 00000001
   shutdown /r /t 0
)

This will add the registry entry if it does not exist and reboot the instance. The add and reboot will only occur again when the instance is re-provisioned, since on a fresh instance the registry entry will no longer be set.

Make sure that the Build Action is set to None and the Copy to Output Directory is set to Copy always for the batch file.

Add command to project

Ensure that the batch file is configured as a startup task and is run with elevated privileges. Add the following xml to the ServiceDefinition.csdef file in the WebAPI.Azure project:

 
<Startup>
  <Task commandLine="Configure_HTTP.SYS.cmd" executionContext="elevated" />
</Startup>

Add to ServiceDefinition.csdef

Deploying to Azure and trying the URL with the illegal character now results in all headers being suppressed.

HTTP.SYS Suppressed Server Header - Azure

Conclusion

With a bit of work you can suppress all the headers in your ASP.NET application running in Windows Azure.

Be mindful, though, that removing these excessive headers is not a silver bullet that will magically make your web application immune to attack. But by adding these additional layers you can at least exclude the attackers who look at specific response headers to determine whether or not you are an interesting target.

StreamInsight at Brisbane Azure User Group

As I have mentioned before, I am a big fan of StreamInsight. I gave a talk at the Brisbane Azure User Group on the 17th October 2012 titled Extracting Realtime Business Value with StreamInsight. Here are my slides.

Slides - Extracting Realtime Business Value With StreamInsight

Paul presenting

I also strongly recommend looking at the Hitchhiker’s Guide to StreamInsight 2.1 Queries from the StreamInsight Team. It includes the PDF Guide and a Visual Studio Solution containing all the examples. It is an excellent tool to familiarise yourself with some of the concepts in StreamInsight.

I also got to demo StreamInsight in Azure (codename Austin) to the user group. The discussions around the possibilities with StreamInsight and Complex Event Processing definitely got the creative juices flowing.

StreamInsight on Azure

For the 2 days prior to TechEd Australia 2012 I experienced my first Mexia Code Camp down the Gold Coast, where I got to enjoy geeking out with the rest of the Mexia team. We stayed in luxury beachfront apartments with amazing views of the beach and the Gold Coast coastline. The Code Camp was all about exploring new technology that excited us. This was definitely the coolest environment I’d ever written code in !

La Grande Oceanfront Luxury Apartments

Mexia’s Ben Simmonds and I are both big fans of StreamInsight, Microsoft’s Complex Event Processing engine, and are lucky enough to be part of Microsoft’s StreamInsight Advisory Group, which has afforded us early access to StreamInsight in Azure (codename Austin). I’m really excited by the possibilities of StreamInsight and complex event processing and really enjoyed exploring the technology at the Code Camp.

StreamInsight covers the Velocity dimension of Gartner’s 3Vs of Big Data – Volume, Velocity and Variety. Its beauty lies in its ability to extract relevant knowledge from one or more large streams of data. Big data can be analysed in real time and events can be raised when something relevant is detected within the large stream of information. For example, if a monitoring heartbeat on some resource was not detected, this would constitute a relevant signal we are interested in amid the irrelevant noise of the regular heartbeat.

Big Data: Velocity and Volume dimensions (source: http://blogs.msdn.com/b/microsoft_business_intelligence1/archive/2012/02/22/big-data-hadoop-and-streaminsight.aspx)

The project that I tackled involved the following:

  1. Integrating with Ben‘s StreamInsight Austin instance to host a custom IObservable that monitored a heartbeat running against a Windows Azure Service Bus subscription.
  2. Implementing a LINQ query to find missing heartbeats and drop an event into Ben’s Windows Azure SQL Database sink.
  3. Implementing an ASP.NET MVC application to display the events dropped into the SQL Database sink by the queries bound to it – Ben’s maximum and average latency events and my missing heartbeat events.

First up I created an observable that generated point events for the regular heartbeat coming from some code checking whether or not a Windows Azure Service Bus subscription was available. This was turned into a stream and then I created a LINQ query over the stream that filtered on events where the subscription was not available. This filtered set of events was then passed through a tumbling window which was 10 seconds long and grouped to get a count of missed heartbeats within that window.
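
The heartbeat source itself isn’t shown here; a minimal sketch of what it might look like (HeartbeatPayload, ProbeSubscription and the subscription name are illustrative, not from the original code):

// Every 5 seconds, probe the Service Bus subscription and emit a point
// event carrying the result (1 = heartbeat seen, 0 = missed).
// Uses Rx's Observable.Interval and StreamInsight's PointEvent factory.
var serviceBusInputPointObservable = Observable
  .Interval(TimeSpan.FromSeconds(5))
  .Select(_ => PointEvent.CreateInsert(
    DateTimeOffset.UtcNow,
    new HeartbeatPayload
    {
      Subscription = "demo-subscription",
      IsAvailable = ProbeSubscription("demo-subscription") ? 1 : 0
    }));

The query below then only has to filter, group and window these events.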

var serviceBusHeartbeatMissingStream = myApp.DefineStreamable(() => serviceBusInputPointObservable); 

// Keep only the heartbeats that reported the subscription as unavailable,
// then count them per subscription over a 10 second tumbling window.
var serviceBusHeartbeatWindow = from e in serviceBusHeartbeatMissingStream 
		where e.IsAvailable == 0 
		group e by e.Subscription into gs 
		from win in gs.TumblingWindow(TimeSpan.FromSeconds(10)) 
		select new ServiceBusDown 
		{ 
			Id = null, 
			Subscription = gs.Key, 
			DateTime = DateTime.UtcNow, 
			Count = win.Count() 
		}; 

This small block of code is doing something amazing. Every 10 seconds it will aggregate the last 10 seconds of events that have passed through the engine and output an event of type ServiceBusDown if that aggregate contains any events where the heartbeat was missed. Extracting the relevant information from a torrent of data – it’s a beautiful thing !

These events were bound to the Windows Azure SQL Database sink.

serviceBusHeartbeatWindow.Bind(sqlSinkForServiceBusDown(targetSqlConnectionString, false)).Run();

This meant that they were available for reporting against in a dashboard.

A simple ASP.NET MVC application returned data from the Windows Azure SQL Database sink and displayed the values of these business events in real time:

  • Maximum Latency (in ms)
  • Average Latency (in ms)
  • Service Bus Up/Down

Simple dashboard
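
A sketch of how the dashboard might pull the latest missing-heartbeat event out of the sink (the table and column names are assumptions based on the ServiceBusDown type above, not the actual schema):

using System.Data.SqlClient;

public class ServiceBusStatusReader
{
  public bool IsServiceBusUp(string connectionString)
  {
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
      "SELECT TOP 1 [Count] FROM ServiceBusDown ORDER BY [DateTime] DESC", connection))
    {
      connection.Open();

      // ExecuteScalar returns null when no rows exist; table/column names are assumed.
      var missedCount = command.ExecuteScalar();
      return missedCount == null || (int)missedCount == 0;
    }
  }
}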

Even though these were really simple examples of what is possible with StreamInsight, I am blown away by the technology. As the sheer volume of data we process increases, it becomes critical to have the capability to extract business value from the latent information filling up our data stores or washing over our systems.

I’m presenting a session at the Brisbane Azure User Group this month (October 2012) on StreamInsight titled Extracting Realtime Business Value with StreamInsight.

Windows Azure Training Workshop

I work for Mexia as a consultant and we are currently partnering with Microsoft in Australia to deliver the FREE 1 Day Windows Azure Dev Camp and the 3 Day Windows Azure Foundation Training Workshop. Last week I delivered the 3 Day Workshop in Melbourne with Mexia’s Ben Simmonds. This took the form of an instructor led training workshop and was based on the August 2012 Refresh of the Windows Azure Training Kit.

Windows Azure Training Kit - August 2012 REFRESH

There was plenty of content to get foundational concepts across to beginners and a number of excellent hands-on labs for cementing these concepts.

Windows Azure Training - Melbourne

Make sure to check the Mexia website for date confirmations – we’ll be delivering the 1 Day Windows Azure Dev Camp and the 3 Day Windows Azure Foundation Training Workshop in Sydney and Brisbane in Nov/Dec this year.

Autoscaling Azure with WASABi – Part 6

I gave an Autoscaling Azure talk at the Brisbane Azure User Group (BAUG) on the 18th April 2012. This series of posts will walk through the demo I put together for the talk using the Autoscaling Application Block (WASABi).

What are we doing ?

All of our configuration is now complete. We will be running the ConsoleAutoscaler console application and observing how the rules we configured and the queue length of the workerqueue storage account queue affect the number of instances of our Queue Manager web application running in Azure.

Initial State

Ensure that the workerqueue queue is empty in our storage account by using either Azure Storage Explorer to remove any messages or clicking the remove button in our Queue Manager web application until the queue length is zero.

Queue Manager - queue length of zero

Ensure that there is only a single instance of the Queue Manager web application running.

Run the Autoscaler Console Application

Make sure Visual Studio is open with the ConsoleAutoscaler application we wrote in Part 2. Ensure that the Autoscaling Block has been configured as per Part 3 and that the Service Information Store and Rules Store have been configured as per Part 4 and Part 5.

Hit F5 to run the ConsoleAutoscaler application.

A Picture is worth a 1000 Words !

The graph below shows a 32 minute run of the ConsoleAutoscaler application. The Message Count (queue length) is shown in red and the resulting Instance Count is shown in green. The result of our QueueLength_Avg_5m operand defined for our reactive rules is shown in purple.

The queue length and the instance count share the vertical axis. The horizontal axis is time in minutes.

Autoscaling Graph

Increase the Queue Length

I increased the number of messages in the queue, via the Queue Manager web application, to 4 as seen at point 1 in the graph and again to 8 as seen at point 2.

Queue Manager - queue length of 8

Since the QueueLength_Avg_5m operand is aggregated over a 5 min window and we are in the initial 5 minutes, it can be seen in the graph that its value has not yet exceeded our configured target of 5.

Our rules are evaluated every minute by the Autoscaling Block, and diagnostic information is sent to the console and a log file as per our configuration. As can be seen from the diagnostics for this period, the Autoscaling Block is taking no action. The number of instances of our Queue Manager web application is still at 1.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3012 : Some instance count changes will be ignored.
Autoscaling Updates Verbose: 3004 : There are no configuration changes to submit for the hosted service.

At around the 6 minute mark, the value of the QueueLength_Avg_5m operand crosses the queue length threshold of 5. As can be seen from the diagnostics for this period,  the reactive rule Heavy Load (Increase) has been matched and the Autoscaling Block submits a scaling (up) request for the WasabiDemoWebRole. The result of the scaling request can be seen at point 3 in the graph. The number of instances of our Queue Manager web application is now at 2.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3003 : Role scaling requests for hosted service about to be submitted.
[BEGIN DATA]
… "MatchingRules":"Default, Heavy Load (Increase)"
… "InstanceChanges":{"WasabiDemoWebRole":{"CurrentValue":1,"DesiredValue":2}}
Autoscaling Updates Information: 3002 : Role configuration changes for deployment were submitted.

Decrease the Queue Length

I decreased the number of messages in the queue, via the Queue Manager web application, to 6 as seen at point 4 in the graph.

Queue Manager - queue length of 6

The value of the QueueLength_Avg_5m operand is still above 5 and this results in the reactive rule matching again. The 2nd instance of the Queue Manager web application is still being brought online, however, so the Autoscaling Block cannot submit the scaling request. This can be seen in the diagnostics for this period.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Warning: 3005 : The deployment is not in the running status, cannot submit a scaling request now.

Finally the 2nd instance is spun up and the matching reactive rule can submit another scaling (up) request for the WasabiDemoWebRole. The result of the scaling request can be seen at point 5 in the graph. The number of instances of our Queue Manager web application is now at 3.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3003 : Role scaling requests for hosted service about to be submitted.
[BEGIN DATA]
… "MatchingRules":"Default, Heavy Load (Increase)"
… "InstanceChanges":{"WasabiDemoWebRole":{"CurrentValue":2,"DesiredValue":3}}
Autoscaling Updates Information: 3002 : Role configuration changes for deployment were submitted.

No Queue Length Changes

During the period between 15 min and 22 min I issued no queue length changes via the Queue Manager web application. The value of the QueueLength_Avg_5m operand was still above 5, but the constraint rules were now protecting us from spinning up too many instances. Our default constraint rule ensures that we cannot spin up more than 3 instances. This can be seen in the diagnostics for this period.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3012 : Some instance count changes will be ignored.
[BEGIN DATA]
… "InstanceChanges":{"WasabiDemoWebRole":{"DesiredInstanceCount":4,"TargetInstanceCount":3}}}
Autoscaling Updates Verbose: 3004 : There are no configuration changes to submit for the hosted service.

Flush the Queue

At around the 22 min mark I flushed the queue which resulted in a queue length of zero. This can be seen at point 6 in the graph.

Queue Manager - queue length of zero

As a result the value of the QueueLength_Avg_5m operand fell below the threshold of 5 at around the 23 min mark. This resulted in the Heavy Load (Decrease) reactive rule matching at both points 7 and 8 in the graph and the Autoscaling Block submitting scaling (down) requests for the WasabiDemoWebRole. This can be seen in the diagnostics for this period. The number of instances of our Queue Manager web application drops to 2 and then to 1.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3003 : Role scaling requests for hosted service about to be submitted.
[BEGIN DATA]
… "MatchingRules":"Default, Heavy Load (Decrease)"
… "InstanceChanges":{"WasabiDemoWebRole":{"CurrentValue":3,"DesiredValue":2}}
Autoscaling Updates Information: 3002 : Role configuration changes for deployment were submitted.
Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3003 : Role scaling requests for hosted service about to be submitted.
[BEGIN DATA]
… "MatchingRules":"Default, Heavy Load (Decrease)"
… "InstanceChanges":{"WasabiDemoWebRole":{"CurrentValue":2,"DesiredValue":1}}
Autoscaling Updates Information: 3002 : Role configuration changes for deployment were submitted.

The value of the QueueLength_Avg_5m operand is still below 5 after the 30 min mark, but the constraint rules are now protecting us from spinning down too many instances. Our default constraint rule ensures that we always are running at least one instance. This can be seen in the diagnostics for this period.

Autoscaling General Verbose: 1002 : Rule match.
Autoscaling Updates Verbose: 3001 : The current deployment configuration for a hosted service is about to be checked to determine if a change is required (for role scaling or changes to settings).
Autoscaling Updates Verbose: 3012 : Some instance count changes will be ignored.
[BEGIN DATA]
… "InstanceChanges":{"WasabiDemoWebRole":{"DesiredInstanceCount":0,"TargetInstanceCount":1}}}
Autoscaling Updates Verbose: 3004 : There are no configuration changes to submit for the hosted service.

How did I capture the data for the graph ?

I wrote a simple PowerShell script to poll the queue length of my workerqueue storage account queue and the number of instances of my Queue Manager web application. Every 30 seconds the script would write out the values. I redirected the output to a csv file which I then opened and manipulated in Excel.

.\DataCollector.ps1 > .\DataCollector.csv

Here is the DataCollector.ps1 script. Refer to Part 4 for a refresher on how to configure PowerShell for use with your Azure accounts.

[Reflection.Assembly]::LoadFrom('C:\Projects\WasabiDemo\WebApplication\packages\WindowsAzure.Storage.1.7.0.0\lib\net35-full\Microsoft.WindowsAzure.StorageClient.dll') | Out-Null

$storageKey = (Get-AzureStorageKey -StorageAccountName baugautoscalingapp).Primary 
$connectionString = "DefaultEndpointsProtocol=https;AccountName=baugautoscalingapp;AccountKey={0}" -f $storageKey
$queueName = "workerqueue"

$storageAccount = [Microsoft.WindowsAzure.CloudStorageAccount]::Parse($connectionString)
$queueClient = [Microsoft.WindowsAzure.StorageClient.CloudStorageAccountStorageClientExtensions]::CreateCloudQueueClient($storageAccount)
$queue = $queueClient.GetQueueReference($queueName)

$index = 1;
$interval = 30 * 1000

"{0}`t{1}`t{2}`t{3}" -f "TimeStamp", "Index", "MessageCount", "InstanceCount"

while ($true) 
{
	$messageCount = $queue.RetrieveApproximateMessageCount()
	$instanceCount = (Get-AzureDeployment -ServiceName baugautoscalingapp -Slot Production).RoleInstanceList.Count
	$timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"  # HH = 24-hour clock

	"{0}`t{1}`t{2}`t{3}" -f $timestamp, $index, $messageCount, $instanceCount

	$index = $index + 1
	[System.Threading.Thread]::Sleep($interval)
}

Conclusion

This simple demo has hopefully provided you with sufficient insight into the Autoscaling Application Block and some understanding of how it operates.

Autoscaling Azure with WASABi – Part 5

I gave an Autoscaling Azure talk at the Brisbane Azure User Group (BAUG) on the 18th April 2012. This series of posts will walk through the demo I put together for the talk using the Autoscaling Application Block (WASABi).

What are we configuring?

We will be configuring the rules in the file based rules store. This store holds the rules that the autoscaler will utilise in scaling our service.

Add Rules Store

Right click on the ConsoleAutoscaler project and select Add New Item > XML File. Name the newly created xml file local-rules-store.xml. Right click on the xml file and select Properties. Ensure that the Copy to Output Directory property has a value of Copy if newer.

Solution Explorer - local-rules-store.xml

Open the local-rules-store.xml file, right click in the content window and select Properties. In the XML Document properties window select the Schemas property and then click on the ellipsis to select the schema. Select the AutoscalingRules.xsd schema from the resulting list and in the Use column set the property value to Use this schema.

XML Schema selection

Clicking OK should result in the Solution Explorer showing the AutoscalingRules.xsd schema as linked to the local-rules-store.xml file. This will also now provide intellisense when editing this file.

Solution Explorer - schema linked

Paste the xml below into your local-rules-store.xml file.

 
<?xml version="1.0" encoding="utf-8" ?>
<rules xmlns="http://schemas.microsoft.com/practices/2011/entlib/autoscaling/rules" enabled="true">

  <constraintRules>

    <rule name="Default" enabled="true" rank="1">
      <actions>
        <range target="WasabiDemoWebRole" min="1" max="3" />
      </actions>
    </rule>

  </constraintRules>

  <reactiveRules>

    <rule name="Heavy Load (Increase)" enabled="true">
      <actions>
        <scale target="WasabiDemoWebRole" by="1" />
      </actions>
      <when>
        <greaterOrEqual operand="QueueLength_Avg_5m" than="5" />
      </when>
    </rule>

    <rule name="Heavy Load (Decrease)" enabled="true">
      <actions>
        <scale target="WasabiDemoWebRole" by="-1" />
      </actions>
      <when>
        <less operand="QueueLength_Avg_5m" than="5" />
      </when>
    </rule>

  </reactiveRules>

  <operands>
    <queueLength alias="QueueLength_Avg_5m" aggregate="Average" queue="workerqueue" timespan="00:05:00" />
  </operands>

</rules>	

The constraintRules section contains the constraint rules. These rules take precedence over the reactive rules. The autoscaling block expects at least one constraint rule to be active when rules are evaluated. If not, no scaling operation will be performed. The upper bound of a constraint rule guards your budget and the lower bound guards your SLA. It is good practice to always include a default constraint rule.

I have added a simple constraint rule that has a rank of 1 (the highest – so effectively the catch-all rule). The rule constrains the instance range of my WasabiDemoWebRole to between 1 and 3 instances. You can also add a timetable to your constraint rule. For example, during your known busy periods you can pro-actively scale up instances and outside of this busy period scale back down.

The reactiveRules section contains the reactive rules. These rules allow you to react to load or demand. The reactive rules, out of the box, can monitor the value of performance counters, Windows Azure queue lengths and instance counts. You can also create custom-defined business metrics to scale the application when those values exceed specified thresholds.

The reactive rules allow you to scale instances or to switch your application into different operating states if you have written your application to cater for application throttling. Application throttling is managed via application settings in the configuration file.

I have added two simple reactive rules: one that scales my WasabiDemoWebRole up by one, and a paired rule that performs the scaling down by one. It is good practice to have a pair of reactive rules – one for scaling up and one for scaling back down. My reactive rules both monitor the QueueLength_Avg_5m operand. This operand is defined in the operands element as the average queue length of the workerqueue storage account queue over a 5 min window. If this value is greater than or equal to 5 the rules scale the WasabiDemoWebRole up by one instance, and if this value is less than 5 the rules scale the WasabiDemoWebRole down by one instance.

The Rules Store is now configured. Next up we will be running the autoscaler and observing how the queue length of the workerqueue storage account queue and the rules affect the number of instances of the WasabiDemoWebRole.

Autoscaling Azure with WASABi – Part 4

I gave an Autoscaling Azure talk at the Brisbane Azure User Group (BAUG) on the 18th April 2012. This series of posts will walk through the demo I put together for the talk using the Autoscaling Application Block (WASABi).

What are we configuring?

We will be configuring the service information in the file based service information store. This store holds the information for the service that we will be autoscaling.

Add Service Information Store

Right click on the ConsoleAutoscaler project and select Add New Item > XML File. Name the newly created xml file local-serviceinformation-store.xml. Right click on the xml file and select Properties. Ensure that the Copy to Output Directory property has a value of Copy if newer.

Solution Explorer - local-serviceinformation-store.xml

Open the local-serviceinformation-store.xml file, right click in the content window and select Properties. In the XML Document properties window select the Schemas property and then click on the ellipsis to select the schema. Select the AutoscalingServiceModel.xsd schema from the resulting list and in the Use column set the property value to Use this schema.

XML Schema selection

Clicking OK should result in the Solution Explorer showing the AutoscalingServiceModel.xsd schema as linked to the local-serviceinformation-store.xml file. This will also now provide intellisense when editing this file.

Solution Explorer - schema linked

Configure Service Information Store

To configure the store we’re going to need subscription information, management certificate keys and storage account keys from our Azure subscription. I’ve previously shown how to obtain this information from the Windows Azure preview portal, but will now show you how to do this from PowerShell. There are PowerShell cmdlets made available as part of the Windows Azure SDK for .NET. You can start PowerShell via the Windows Azure PowerShell link under the Windows Azure folder in the Windows 7 start menu. I start my PowerShell via Console2. It’s an awesome environment to host all your consoles.

I start my PowerShell console with the following command. This will load all system modules and the Windows Azure modules into my shell.

 C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -NoExit -Command "Get-ChildItem -Recurse -Include *.psd1 -Path 'C:\Windows\SysWOW64\WindowsPowerShell\v1.0\Modules', 'C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\' |% { Import-Module $_ }"

Now import the publishsettings file into your PowerShell environment. This caches your subscription and management certificate information so that you can interact with Azure without continually having to set the subscription information. My publishsettings file was the same one I downloaded in the first post.

 
Import-AzurePublishSettingsFile -PublishSettingsFile 'C:\Users\Paul\Downloads\Windows Azure MSDN - Visual Studio Premium-Paul Bouwer-8-18-2012-credentials.publishsettings'

Import your publishsettings file

If you run into issues with this Import-AzurePublishSettingsFile step, you can manually correct the cache by looking at the C:\Users\[YOUR USERNAME]\AppData\Roaming\Windows Azure Powershell folder and correcting the information in the files there. Thanks to Michael Washam in this post for helping me troubleshoot some caching issues I ran into.

Now onto why we actually fired up PowerShell !

Run the following command to get a list of Ids and Names for your subscriptions.

 Get-AzureSubscription |% { "{0} : {1}" -f $_.SubscriptionId, $_.SubscriptionName}

Use the Subscription Name and the following command to obtain your management certificate thumbprint.

 (Get-AzureSubscription -SubscriptionName '[YOUR SUBSCRIPTION NAME]').Certificate 

Finally use the Storage Account Name and the following command to obtain the key for the storage account used by the application you will be scaling.

 (Get-AzureStorageKey -StorageAccountName [YOUR STORAGE ACCOUNT NAME]).Primary 

The output from running these commands can be seen below. And no – these are not my real subscription ids, certificates and storage account keys – they are random bits courtesy of my Photoshop skills !

All the PowerShell commands and their output

Place all of these values and the xml below into your local-serviceinformation-store.xml file. My service name and storage account name values were baugautoscalingapp.

 
<?xml version="1.0" encoding="utf-8" ?>
<serviceModel xmlns="http://schemas.microsoft.com/practices/2011/entlib/autoscaling/serviceModel">
	<subscriptions>
		<subscription name="[YOUR SUBSCRIPTION NAME]"
		              subscriptionId="[YOUR SUBSCRIPTION ID]"
		              certificateThumbprint="[YOUR CERT THUMBPRINT]"
		              certificateStoreName="My"
		              certificateStoreLocation="CurrentUser">
		  <services>
		    <service dnsPrefix="[YOUR SERVICE NAME]" slot="Production" scalingMode="Scale">
		      <roles>
		        <role alias="WasabiDemoWebRole" 
		              roleName="WasabiDemoWebRole" 
		              wadStorageAccountName="[YOUR STORAGE ACCOUNT NAME]"/>
		      </roles>
		    </service>
		  </services>
		  <storageAccounts>
		    <storageAccount alias="[YOUR STORAGE ACCOUNT]"
		connectionString="DefaultEndpointsProtocol=https;AccountName=[YOUR STORAGE ACCOUNT NAME];AccountKey=[YOUR STORAGE ACCOUNT KEY]">
		      <queues>
		        <queue alias="workerqueue" queueName="workerqueue" />
		      </queues>          
		    </storageAccount>
		  </storageAccounts>
		</subscription>
	</subscriptions>
	<stabilizer scaleUpCooldown="00:01:00"
	        scaleDownCooldown="00:01:00"
	        scaleUpOnlyInFirstMinutesOfHour="0"
	        scaleDownOnlyInLastMinutesOfHour="0" />
</serviceModel>

The subscription element’s attributes contain the subscription and management certificate information. The nested services element contains the service information for the service we will be autoscaling.

On the service element we can see a scalingMode attribute which is set to Scale. This, as the name suggests, means that the action taken by the autoscaler when rule conditions are met is to scale the service instances. This attribute can also have a value of Notify, which will not scale but instead notify a list of email addresses that a scaling action should be taken. The third and final value, ScaleAndNotify, is a composition of these two actions.

In the queues element we have defined an alias for the queue in our storage account which we will be using as a metric for the rules.

In the stabilizer element we have configured the Stabilizer, which helps to prevent rapid scale up and down oscillations and helps to optimize costs by limiting scale-up operations to the beginning of the hour and scale-down operations to the end of the hour.

The Service Information Store is now configured. Next up we will be configuring the Rules Store.

Autoscaling Azure with WASABi – Part 3

I gave an Autoscaling Azure talk at the Brisbane Azure User Group (BAUG) on the 18th April 2012. This series of posts will walk through the demo I put together for the talk using the Autoscaling Application Block (WASABi).

What are we configuring?

We have built the autoscaler as a console application but without configuration it cannot do much. This post will see us configuring the autoscaler so that it knows where to find its data points store, where to find its autoscaling rules and where to find the service information for the services it will be autoscaling.

Visual Studio Extensions

The Enterprise Library Configuration Console will make the configuration of the Autoscaling Application Block a little easier. Click on Tools > Extension Manager … in Visual Studio and search for enterprise library config in the Online Gallery. Install the EnterpriseLibrary.Config extension.

Enterprise Library Configuration Console - Extension Manager

Configuration

Right click on the app.config file in the solution and select the Enterprise Library Configuration Console’s edit configuration file option.

Solution Explorer

Select Add Autoscaling Settings from the Blocks menu.

Add Autoscaling Settings

Go to the Windows Azure preview portal and select the primary access key from your Data Points Store storage account. Mine is baugautoscalingdata. We will need this key for the data points store storage account configuration.

Storage account - manage access keys

Expand the Autoscaling Settings section in the Enterprise Library Configuration Console and configure the following elements:

  • Data Points Store Storage Account
  • Rule Evaluation Rate

Place your storage account connection string into the Data Points Store Storage Account element. It will be in the following format:

 DefaultEndpointsProtocol=https;AccountName=[YOUR STORAGE ACCOUNT NAME];AccountKey=[YOUR STORAGE ACCOUNT KEY]

Modify the Rule Evaluation Rate down to 1 min (00:01:00) from the default. This is a demo after all and we want to see results quickly.

Autoscaling Settings

The Rules Store and Service Information Store default to storage account storage. Since we will be hosting these stores locally, change both of the stores to use local file stores. Click on the + next to each store and set the store to use local file storage.

Use Local File Rules Store

Configure the stores with the file names of the files that will host the stores:

  • Rules Store – local-rules-store.xml
  • Service Information Store – local-serviceinformation-store.xml

Ensure that the Logger is set to System Diagnostics Logger. Save and close.

Configure Rules Store

Open the app.config file and paste the following system.diagnostics xml as a child node under the <configuration> node. This will log all output to the console in addition to a file at C:\Logs\ConsoleAutoscaler.log. Ensure that your C:\Logs folder is created.

 
<system.diagnostics>
  <trace autoflush="true"/>
  <sources>
    <source name="Autoscaling General"
            switchName="SourceSwitch"
            switchType="System.Diagnostics.SourceSwitch" >
      <listeners>
        <add name="console" />
        <add name="file" />
        <remove name ="Default" />
      </listeners>
    </source>
    <source name="Autoscaling Updates"
            switchName="SourceSwitch"
            switchType="System.Diagnostics.SourceSwitch" >
      <listeners>
        <add name="console" />
        <add name="file" />
        <remove name="Default" />
      </listeners>
    </source>
  </sources>
  <sharedListeners>
    <add name="console" type="System.Diagnostics.ConsoleTraceListener">
      <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose"/>
    </add>
    <add name="file" type="System.Diagnostics.TextWriterTraceListener" initializeData="C:\Logs\ConsoleAutoscaler.log">
      <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose"/>
    </add>
  </sharedListeners>
  <switches>
    <add name="SourceSwitch"
        value="Verbose, Information, Warning, Error, Critical" />
  </switches>
</system.diagnostics>

The autoscaler is now configured. Next up will be configuring the Service Information Store.

Autoscaling Azure with WASABi – Part 2

I gave an Autoscaling Azure talk at the Brisbane Azure User Group (BAUG) on the 18th April 2012. This series of posts will walk through the demo I put together for the talk using the Autoscaling Application Block (WASABi).

What are we building?

We will be building the application that performs the autoscaling using the Autoscaling Application Block (WASABi) from the Enterprise Library 5.0 Integration Pack for Windows Azure. I decided to build a simple console application to host the autoscaler locally for the purposes of the demo. For simplicity I also used a local service information store and local rules store. More on these stores in the next parts of the series.

In a real world situation you would probably want to host the autoscaler in an Azure cloud service worker role and the stores in an Azure storage account.
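
Hosting the block in a worker role is only a small step from the console host we’re about to build; a minimal sketch (assuming the same app.config-driven configuration used below):

using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;
using Microsoft.WindowsAzure.ServiceRuntime;

public class AutoscalerWorkerRole : RoleEntryPoint
{
  private Autoscaler autoscaler;

  public override bool OnStart()
  {
    // Resolve and start the autoscaler when the role instance starts.
    autoscaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
    autoscaler.Start();
    return base.OnStart();
  }

  public override void OnStop()
  {
    autoscaler.Stop();
    base.OnStop();
  }
}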

Create Storage Account

Didn’t we just create a storage account in part 1? Yes we did, but that was for the web application that we will be scaling. We now need to create a storage account that will hold the data points that the autoscaler will use to evaluate its scaling rules.

The data points store must be hosted in an Azure storage account as the Autoscaling Application Block uses the upsert feature of the Azure table storage that is not supported by the local emulator.

Log in to the Windows Azure preview portal and click on the NEW button. Select Storage > Quick Create and fill in the details for your storage account. I have selected baugautoscalingdata as my url and Southeast Asia as my region since I live in Australia. The subscription I am using is the subscription linked to my MSDN. I have also disabled geo-replication for the demo storage account. Click on Create Storage Account.

baugautoscalingdata - create storage account

Wait for the storage creation to complete successfully. You should be rewarded with an Online status.

baugautoscalingdata - Storage online !

Create Console Application

Now on to the actual console application that will host the autoscaler. Click on New Project in Visual Studio and select Visual C# > Windows > Console Application. I have a folder C:\Projects\WasabiDemo where I am creating the solution with the name, ConsoleAutoscaler. Click OK to create the project.

Console Application Project

Right click on the ConsoleAutoscaler project and select Properties. Ensure that the Target Framework is set to .NET Framework 4 and not the default .NET Framework 4 Client Profile. Save.

Right click on the ConsoleAutoscaler project and select Manage NuGet Packages … Select the Online tab on the left of the dialog and then in the search box type the word wasabi. Hit Enter.

Select the Enterprise Library 5.0 – Autoscaling Application Block package and click Install.

NuGet: Enterprise Library 5.0 – Autoscaling Application Block

A number of dependencies are installed into the project in addition to schemas to manage the service information and local rules stores.

Solution Explorer

Open the Program.cs file and update with the following code:

 
using System;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;

namespace ConsoleAutoscaler
{
  class Program
  {
    static void Main(string[] args)
    {
      try
      {
        // Resolve the Autoscaler from the Enterprise Library container.
        // It is configured entirely via app.config (see the next post).
        var scaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
        scaler.Start();

        // Keep the console host alive while the autoscaler evaluates
        // its rules in the background.
        while (true)
        {
          System.Threading.Thread.Sleep(10000);
          Console.WriteLine("{0} - running", DateTime.Now);
        }
      }
      catch (Exception exp)
      {
        Console.WriteLine(exp.Message);
        Console.Write(exp.StackTrace);
      }
      Console.ReadKey();
    }
  }
}

That is all there is to setting up the autoscaler using the Autoscaling Application Block. The next steps will be to configure the autoscaler and the service information and local rules stores.

Autoscaling Azure with WASABi – Part 1

I gave an Autoscaling Azure talk at the Brisbane Azure User Group (BAUG) on the 18th April 2012. This series of posts will walk through the demo I put together for the talk using the Autoscaling Application Block (WASABi).

What do we need to get started?

You’ll need Visual Studio 2010 and the Windows Azure SDK for .NET. You’ll also need a Windows Azure account – you can sign up for a 90-day free trial or use your MSDN Subscription.

What are we building and deploying?

We will be building and deploying an Azure application that will be the target of the autoscaling. I decided to build a simple ASP.NET MVC 3 web application that also allowed me to manipulate some simple metric that WASABi could monitor for its autoscaling rules. WASABi can monitor a number of metrics but for the demo I decided on the queue length of a queue on the Windows Azure Queue Storage Service as this was the simplest to get up and running.

Create Storage Account

Once you have a Windows Azure account log in to the Windows Azure preview portal and click on the NEW button. We need a storage account to hold the queue we’ll be using in the demo. Select Storage > Quick Create and fill in the details for your storage account. I have selected baugautoscalingapp as my url and Southeast Asia as my region since I live in Australia. The subscription I am using is the subscription linked to my MSDN. I have also disabled geo-replication for the demo storage account. Click on Create Storage Account.

baugautoscalingapp - create storage account

Wait for the storage creation to complete successfully. You should be rewarded with an Online status.

baugautoscalingapp - Storage online !

The easiest way to interact with your storage account is to use Azure Storage Explorer 5. You will need your access key to interact with your storage account so click on Manage Keys in the portal and copy the Primary Access Key to the clipboard.

Storage account - manage access keys

Click on Add Account in the Azure Storage Explorer and populate the details. I have pasted my access key copied from the portal into the storage account key field. Click Add Storage Account.

Azure Storage Explorer - configure

Confirm that you can connect to your storage account. You can see that we have not created any queues in our storage account yet.

Azure Storage Explorer - connected

Create Web Application

Now on to the actual web application. Click on New Project in Visual Studio and select Visual C# > Cloud > Windows Azure Cloud Service. I have a folder C:\Projects\WasabiDemo where I am creating the solution with the name, WebApplication. Click OK to create the project.

Windows Azure Cloud Service Project

Select Windows Azure Tools – June 2012 on the next dialog. Select the ASP.NET MVC3 Web Role and click on the pencil to rename the role to WasabiDemoWebRole. Click OK.

ASP.NET MVC 3 Web Role

Select the empty ASP.NET MVC 3 project template as we want this project as lean as possible. Click OK.

Empty template

Open the WasabiDemoWebRole in the WebApplication Azure project from the Solution Explorer. Change the VM size to Extra small and Save. This is useful in maximising the hours allocated to your subscription. You can get 6 Extra small instances for the cost of a Small instance.

VM size - Extra small

Right click on the Models folder in the WasabiDemoWebRole project and select Add > Class. Create a class named QueueManager.

 
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace WasabiDemoWebRole.Models
{
  public class QueueManager
  {
    private string StorageConnectionString
    {
      get { return "DefaultEndpointsProtocol=https;AccountName=[REPLACE WITH YOUR STORAGE ACCOUNT NAME];AccountKey=[REPLACE WITH YOUR STORAGE ACCOUNT KEY]"; }
    }

    private string QueueName
    {
      get { return "workerqueue"; }
    }

    public int MessageCount
    {
      get { return RetrieveMessageCount(); }
    }

    private CloudQueue GetQueue()
    {
      var storageAccount = CloudStorageAccount.Parse(StorageConnectionString);
      var queueClient = storageAccount.CreateCloudQueueClient();
      var queue = queueClient.GetQueueReference(QueueName);
      queue.CreateIfNotExist();

      return queue;
    }

    public void AddMessage()
    {
      var queue = GetQueue();
      var message = new CloudQueueMessage("Brisbane Azure User Group - Wasabi Demo Test Message");
      queue.AddMessage(message);
    }

    public void RemoveMessage()
    {
      var queue = GetQueue();
      var retrievedMessage = queue.GetMessage();

      // GetMessage returns null when the queue is empty, so guard the delete.
      if (retrievedMessage != null)
      {
        queue.DeleteMessage(retrievedMessage);
      }
    }

    public int RetrieveMessageCount()
    {
      var queue = GetQueue();
      return queue.RetrieveApproximateMessageCount();
    }
  }
}

You will need to replace the [REPLACE WITH YOUR STORAGE ACCOUNT NAME] and [REPLACE WITH YOUR STORAGE ACCOUNT KEY] tokens in the StorageConnectionString property with your storage account name and the primary access key value from your storage account.
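
Hard-coding the connection string keeps the demo simple. In a real role you would more likely read it from the service configuration so it can be changed without redeploying – a sketch, assuming a StorageConnectionString setting has been added to the role’s configuration:

private string StorageConnectionString
{
  // RoleEnvironment lives in Microsoft.WindowsAzure.ServiceRuntime;
  // "StorageConnectionString" is an assumed setting name.
  get { return RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"); }
}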

Right click on the Controllers folder in the WasabiDemoWebRole project and select Add > Controller. Create a controller named HomeController and use the Empty controller template.

 
using System.Web.Mvc;
using WasabiDemoWebRole.Models;

namespace WasabiDemoWebRole.Controllers
{
  public class HomeController : Controller
  {
    public ActionResult Index()
    {
      ViewBag.Message = "Queue Manager";

      var queueManager = new QueueManager();
      ViewBag.MessageCount = queueManager.MessageCount;

      return View();
    }

    [HttpPost]
    public ActionResult Manage()
    {
      ViewBag.Message = "Queue Manager";
      var queueAction = Request["QueueAction"];

      var queueManager = new QueueManager();

      if (queueAction.Equals("Add"))
      {
        queueManager.AddMessage();
      }
      else if (queueAction.Equals("Remove"))
      {
        if (queueManager.MessageCount > 0) { queueManager.RemoveMessage(); }
      }

      ViewBag.MessageCount = queueManager.MessageCount;

      return View("Index");
    }
  }
}

Right click on View() in the Index() method in HomeController and select Add View … Click the Add button in the dialog.

 
@{
    ViewBag.Title = "Brisbane Azure User Group - Wasabi Demo";
}

<h2>@ViewBag.Message</h2>
<p>Current Count of Messages in Queue Waiting to be Processed: @ViewBag.MessageCount</p>
<p>
  <form action="/Home/Manage" method="POST">
    <button type="submit" name="QueueAction" value="Add">Add</button>
    <button type="submit" name="QueueAction" value="Remove">Remove</button>
  </form>
</p>

Publish to Azure

Right click on the WebApplication Azure project in the Solution Explorer and click on Publish. Click on the Sign in to download credentials link in the Publish Windows Azure Application dialog. A publishsettings file containing publish credentials for your Azure account will be downloaded.

Download publishsettings file

Click on the Import button in the Publish Windows Azure Application dialog and import the publishsettings file you have just downloaded. Select your subscription from the drop down once it has been populated.

Import publishsettings file and select subscription

If you have not already provisioned a cloud service through the portal you will be prompted to create one. This is required to host your web role. I have selected baugautoscalingapp as the name of my cloud service and have selected Southeast Asia as my location.

Create cloud service

The cloud service will be created.

Cloud service created

This can be confirmed in the portal. The baugautoscalingapp cloud service has a service status of Created.

Confirmed in portal

Change the deployment label in the Advanced Settings tab to something meaningful. Click Next.

Deployment label

Confirm the settings in the Summary and then click Publish.

Summary

You will see the details of the deployment in the Windows Azure Activity Log in Visual Studio.

Windows Azure Activity Log in Visual Studio

The Deploying status will be visible on the cloud service via the portal.

Cloud service status in portal

The deployment is shown as complete in the Windows Azure Activity Log in Visual Studio.

Visual Studio says deployment complete

The cloud service has a service status of Created and a Production status of Running.

Cloud service status in portal

The web application is running in Azure. If we open the cloud service url in a browser and go to the Home/Manage route we can view our simple application. Clicking on the Add button will add messages to the queue and the Remove button will remove them. Below I have added 5 messages to the queue.

Application running in Azure !

If we open up Azure Storage Explorer and click on the queues link we can see that the workerqueue is now available and contains 5 messages. The message contains the text from our QueueManager class.

Confirm queue size and contents

We now have a running web application in Azure that can be scaled, and control over the queue that the autoscaler will use for its metrics.

Certificates

This is a section with some additional details around what that Publish Windows Azure Application dialog did behind the scenes. All interactions with Azure are managed via Management Certificates. This ensures that only authorised requests can be made to deploy, configure or scale resources within Azure.

If you look at your Certificate Store via the Certificates snap-in in MMC you will notice a certificate under Personal > Certificates that matches the name of the publishsettings file. The dialog automagically created a management certificate for you and added it to your store.

Certificate Store

The Management Certificates cannot be viewed in the new preview portal so you have to take a trip down memory lane and open up the old portal – https://windows.azure.com/. Click on the Management Certificates section and you’ll see the related Management Certificates associated with your subscription. Each time a publishsettings file is generated a management certificate is added under your subscription. No additional certificates however are added to your local certificate store.

Old portal - Management Certificates