Windows Azure Storage Emulator or Service?

Depending on your development process and what you do on the Windows Azure Platform, today I’m going to announce [again, for the very first time] one of the best-kept secrets of Windows Azure development when utilizing the Emulator tools.

Secret you say, how secret is it?

It’s such a secret that even the person that originally disclosed this juicy tip doesn’t even remember saying it!


Ryan Dunn originally released this information on Episode 18 of the Cloud Cover Show.

The suspense is killing me, what is this about?

Sometimes when using the Storage Emulator while creating a Windows Azure application, you will run into a scenario where you attempt to read an entity from storage before actually creating an entity of that kind.

This is a problem due to the way the Storage Emulator is structured. The schema for a particular entity is stored in an instance of SQL Express (by default); because of this, an entity must first be stored to generate the schema before the Emulator is aware of what you are querying for.

Using this workaround will allow you to pre-check whether the storage account the application is pointing at is the Storage Emulator or the live Storage Service. Once you’ve determined local storage is being used, you can create a fake entity, save it to Storage (which creates the schema), then promptly delete it before doing your query.

Recently, I worked on a project which used multiple storage accounts for separating data into logical containers. Due to this separation I found myself creating storage connections dynamically. In order to get these accounts to work well locally this trick came in handy.

Enough already, Show us the Codez!

It is a relatively simple fix: it uses the IsLoopback property of the Uri class. One slight caveat would be if [in the future, or if you’ve done some hacking] the Storage Emulator were located on another computer on the local network, as the MSDN documentation describes IsLoopback as:

A Boolean value that is true if this Uri references the local host; otherwise, false.

But this works great as long as your storage stays local to your dev machine.
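A minimal sketch of the check (the helper class and method names are mine, not from the original post; CloudStorageAccount comes from the StorageClient library in the Windows Azure SDK):

```csharp
using System;
using Microsoft.WindowsAzure;

public static class StorageEnvironment
{
    // True when the account's endpoints reference the local host,
    // which is the case for the Storage Emulator
    // (e.g. http://127.0.0.1:10002/devstoreaccount1).
    public static bool IsEmulator(CloudStorageAccount account)
    {
        return account.TableEndpoint.IsLoopback;
    }
}
```

With this in place you only need to run the create-save-delete schema trick when IsEmulator returns true.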

Happy Clouding!

Cloud Aware Configuration Settings


This code has been updated to be more fluent in its construction.


In this post we will look at how to write a piece of code that will allow your application to be environment-aware and change where it receives its connection string when hosted on Windows Azure.

Are you looking to get started with Windows Azure? If so, you may want to read the post “Get your very own Cloud Playground” to find out about a special offer on Windows Azure Deployments.

State of Configuration

In a typical ASP.NET application the obvious location to store a connection string is the Web.config file. However, when you deploy to Windows Azure, the web.config gets packaged within the cspkg file and is unavailable for configuration changes without redeploying your application.

Windows Azure does supply an accessible configuration file to store configuration settings such as a connectionString. This file is the cscfg file and is required to upload a Service to Windows Azure. The cscfg file is definitely where you will want to place the majority of your configuration settings that need to be modified over the lifetime of your application.

I know you’re probably asking yourself, what if I want to architect my application to work both On-Premise and in the Cloud on one Codebase? Surprisingly, this is possible and I will focus on a technique to allow a Cloud focused application to be deployed on a Shared Hosting or On-Premise Server without the need to make a number of Code Changes.

Obviously this solution does fit within a limited scope, and you may also want to consider architecting your solution for cost as well as portability. When building a solution on Windows Azure, look into leveraging the Storage Services as part of a more cost effective solution.

Cloud Aware Database Connection String

One of the most common Configuration Settings you would like to be “Cloud-Aware” is your Database Connection String. This is easily accomplished in your application code by making a class that can resolve your connection string based on where the code is Deployed.

How do we know where the code is deployed, you ask? That’s rather simple: Windows Azure provides a static RoleEnvironment class which exposes a property, IsAvailable, that only returns true if the application is running in either Windows Azure itself or the Windows Azure Compute Emulator.

Here is a code snippet that will give you a rather good idea:
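Here is a sketch of such a class. The setting name “ApplicationData” is the one the walkthrough below refers to; the class name, and the assumption that the web.config connection string shares that name, are mine:

```csharp
using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class CloudConfiguration
{
    private static string connectionString;

    public static string ConnectionString
    {
        get
        {
            if (connectionString == null)
            {
                // In the Cloud (or Compute Emulator), read the cscfg setting;
                // otherwise fall back to the web.config connection string.
                connectionString = RoleEnvironment.IsAvailable
                    ? RoleEnvironment.GetConfigurationSettingValue("ApplicationData")
                    : ConfigurationManager.ConnectionStrings["ApplicationData"].ConnectionString;
            }

            return connectionString;
        }
    }
}
```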

Let’s take a moment to step through this code to get a better understanding of what the class is doing.

As you can see, the class is declared as static and exposes one static property; this property grabs the configuration setting required for the particular environment the application is deployed on.

If the connectionString variable has not been previously set, a conditional statement is evaluated on the RoleEnvironment.IsAvailable property. If the condition is true, the value of connectionString is retrieved from the CSCFG file by calling a static method of the RoleEnvironment class, GetConfigurationSettingValue, which searches the Cloud Service Configuration file for a value on a setting with the name “ApplicationData”.

If the RoleEnvironment.IsAvailable property evaluates false, the application is not hosted in the Cloud and the connection string will be collected from the web.config file by using the System.Configuration.ConfigurationManager class.

The same technique can be used to resolve AppSettings by accessing the AppSettings NameValueCollection from the ConfigurationManager class.
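For instance, an AppSettings version of the same check might look like this (the “SmtpServer” setting name is purely illustrative, not from the original post):

```csharp
using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class AppSettingResolver
{
    // "SmtpServer" is a hypothetical setting name used for illustration.
    public static string GetSmtpServer()
    {
        return RoleEnvironment.IsAvailable
            ? RoleEnvironment.GetConfigurationSettingValue("SmtpServer")
            : ConfigurationManager.AppSettings["SmtpServer"];
    }
}
```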

Beyond The Basics

There are a few other things you may come across when creating your Cloud Aware application.

Providers, Providers, Providers

ASP.NET also contains a powerful Provider model which is responsible for such things as Membership (Users, Roles, Profiles). Typically these settings are configured using string look-ups that are done within the web.config file. This is problematic because we don’t have the ability to change the web.config without a redeployment of our Application.

It is possible to use the RoleEntryPoint OnStart method to execute code that programmatically re-configures the Web.config, but that can be both a lengthy process and a very error-prone way to set your configuration settings.

To handle these scenarios you will want to create a custom provider and provide a few [well documented] configuration settings within your Cloud Service Configuration file that are used by Convention.

One thing to note when using providers: you are able to register multiple providers, however you can only provide so much information in the web.config file. In order to make your application cloud-aware you will need to wrap the use of the provider objects in your code with a check on RoleEnvironment.IsAvailable so you can substitute the proper provider for the current deployment.

Something to Consider

Up until now we’ve been trying to [or more accurately, managed to] avoid the need to recompile our project to deploy to the Cloud. It is possible to package your application into a Cloud Service Package without recompiling; however, if you’re building your solution in Visual Studio there is a good chance the application will get re-compiled before it is packaged for a Cloud deployment.

With this knowledge under your belt, a unique opportunity opens up: you can remove a large amount of conditional logic that would otherwise execute at runtime by handing that logic off to the compiler.

Preprocessor Directives are a handy feature that don’t get leveraged very often but are a very useful tool. You can create a Cloud Deployment build configuration which supplies a “Cloud” compilation symbol. Leveraging preprocessor conditional logic with this compilation symbol to wrap the logic that switches configuration values or providers can reduce the amount of code executed when serving the application to a user, as only the appropriate code will be compiled into the DLL. To learn more about Preprocessor Directives, see the first programming article I had written.
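As a sketch, assuming a build configuration that defines a “Cloud” compilation symbol (the class name is mine; the setting names mirror the earlier connection string discussion):

```csharp
using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class CompiledConfiguration
{
    public static string ConnectionString
    {
        get
        {
#if Cloud
            // Compiled only when the Cloud symbol is defined by the build configuration.
            return RoleEnvironment.GetConfigurationSettingValue("ApplicationData");
#else
            // Compiled for on-premise / shared hosting builds.
            return ConfigurationManager.ConnectionStrings["ApplicationData"].ConnectionString;
#endif
        }
    }
}
```

The runtime RoleEnvironment.IsAvailable check disappears entirely; the decision is baked in at compile time.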


With a little bit of planning and understanding of the application you are going to be building, some decisions can be made really early to plan for an eventual cloud deployment, without an abundance of code being written during the regular development cycle and without the need to re-write a large portion of your application once it is ready for Cloud deployment. With that said, there are still obvious improvements to be gained by leveraging the Cloud Platform to its full potential. Windows Azure has a solid SDK which is constantly and consistently iterated on to provide developers with a rich development API.

If you want to leverage more of Windows Azure’s offerings, it is a good idea to create a wrapper around Microsoft’s SDK so you will be able to create a pluggable architecture for your application, allowing for maximum portability from On-Premise to the Cloud and ultimately between different cloud providers.

Happy Clouding!

The Ultimate Windows Azure Development VM

As Software Developers we have many options when it comes to the tools we put in our tool belts. One thing I’ve found exceptionally useful over the years is Virtual Machines; not just Virtual Machines, but environments tailored to what you’re developing.

With my focus on the Cloud, I thought it would be useful to continue the trend of building out a virtualized environment tailored to the work I’m doing on the Windows Azure Platform. I’ve compiled a list of the tools and SDKs that I have found the most useful while working on projects for Windows Azure.

Operating System

  • Windows 7 [SP1]
  • Windows Server 2008 R2 [SP1]

Note: Windows Server 2008 R2 is a handy OS to have on a Virtual Machine within your environment if you expect to use the VM Role in Windows Azure.

Desktop Backgrounds

Windows Add-Ons

Development Environment




Assessment & Optimization

Management & Debugging



Code Samples


Database Safe Values, Using Generics, and Extension Methods

In my early years as a Developer I was using DotNetNuke and often had large Data Access Layers (DALs) to abstract the database away from my application. I quickly found myself checking to ensure that a value returning from the database wasn’t NULL, so I created a helper class that had a number of methods, one for each data type I interacted with. That class was written similar to the code below:

 public static double ConvertNullDouble(object field)
 {
       if (field != DBNull.Value)
           return Convert.ToDouble(field);
       return 0; // or, if my application had Defaults, Global.DefaultDouble
 }

This seemed like a great solution at the time, but it is quite a lot of code, as you are writing a method for every…single…datatype that your application interacts with. Being in a small business, you didn’t really get the ability to refactor code that often, so once it was written you stuck by it.

Moving along in my career I stepped into environments that already had full DALs in place, so I would mimic what the previous developer did in order to keep the code base easily maintainable; there is never a lack of fun in trying to understand more than one person’s point of view across a piece of code.

Tonight I was scanning through Twitter and noticed Ben Alabaster [@BenAlabaster] asking quite a unique question. So I piped in with my two cents, and then the conversation sparked beyond Twitter to MSN, then eventually to Skype. Here is the unique single helper method we came up with:

For .NET 2.0, a static method:

public static T ConvertDBNullToSafeOrDefaultTypeValue<T>(object field)
{
      if (field != DBNull.Value)
          return (T)Convert.ChangeType(field, typeof(T));
      return default(T);
}

For .NET 3.5, an Extension method:

public static T ConvertDBNullToSafeOrDefaultTypeValue<T>(this object field)
{
      if (field != DBNull.Value)
           return (T)Convert.ChangeType(field, typeof(T));
      return default(T);
}

This solution provides a source-code-friendlier way of returning values from a database and validating the data against your business rules [default(T) is a generic way of handling your application’s default values, and you can replace it with your own defaults].
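A quick sketch of calling the .NET 3.5 extension method; the objects here are stand-ins for values coming back from a data reader, and the extension class is repeated so the sample compiles standalone:

```csharp
using System;

static class DbValueExtensions
{
    // The extension method from above, repeated so this sample is self-contained.
    public static T ConvertDBNullToSafeOrDefaultTypeValue<T>(this object field)
    {
        if (field != DBNull.Value)
            return (T)Convert.ChangeType(field, typeof(T));
        return default(T);
    }
}

class Sample
{
    static void Main()
    {
        object nullFromDb = DBNull.Value; // stand-in for reader["Price"] when the column is NULL
        object priceFromDb = 19.99;       // stand-in for a real value

        Console.WriteLine(nullFromDb.ConvertDBNullToSafeOrDefaultTypeValue<double>());  // 0
        Console.WriteLine(priceFromDb.ConvertDBNullToSafeOrDefaultTypeValue<double>()); // 19.99
    }
}
```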

Happy Coding!

The Next Generation of Defensive Coding

If you’re a Software Developer, hopefully you understand the concept of Defensive Coding. If you’re not familiar with the term here is a quick example to explain the concept.

public string SomeMethod(string prefix, string rootWord, string suffix)
{
	// Ensure Parameters contain values.
	if (prefix == null)
		throw new ArgumentNullException("prefix", "Prefix cannot be null");

	if (rootWord == null)
		throw new ArgumentNullException("rootWord", "RootWord cannot be null");

	if (suffix == null)
		throw new ArgumentNullException("suffix", "Suffix cannot be null");

	return string.Format("{0}{1}{2}", prefix, rootWord, suffix);
}

Defensive coding gives the benefit of ensuring that your method is being used properly by whoever is implementing your code; if the proper requirements aren’t met, the code throws an exception and warns the developer what the particular parameters expect in order for the function to complete properly.

This concept has been used for years and has suited its purpose well. However there are certain things that this method doesn’t provide. Wouldn’t it be nice if these conditions could be validated by the IDE, before compiling your project? Enter Code Contracts.

Code Contracts were added to .NET 4, but are available to be used in previous versions of the .NET Framework by installing them from the Microsoft DevLabs project site. To use Code Contracts you will have to add a using directive for the System.Diagnostics.Contracts namespace.


Static Checking, the feature of Code Contracts that works without explicitly compiling your code [Visual Studio background compilation is necessary], is unfortunately only available in Visual Studio 2010. [Aside: I use the term unfortunately here lightly; you really should upgrade to Visual Studio 2010. Microsoft has done an amazing job, and you won’t be sorry.]

Here is the code I’ve shown above, mimicked, this time leveraging Code Contracts:

public string SomeMethod(string prefix, string rootWord, string suffix)
{
	// Ensure Parameters contain values.
	Contract.Requires(prefix != null);
	Contract.Requires(rootWord != null);
	Contract.Requires(suffix != null);

	return string.Format("{0}{1}{2}", prefix, rootWord, suffix);
}

As you can see, the implementation is much neater and easier to read than the blocks of if statements. This is not the only functionality of Code Contracts, either; you can let a value pass through by using the Assume method, which assumes that the value is valid. Other advantages include code-based documentation [outlining what is expected by the method in code, because no one likes making XML Comments], business rule validation, and evaluation on TFS gated check-in.

Once I start using Code Contracts in more depth, I’ll be sure to start giving you more real life implementation scenarios. As always be sure to check back!

Until then, Happy Coding!

Preprocessor Directives – Design Practice

What is a Preprocessor Directive?

A preprocessor directive is a piece of code that is meant explicitly for the compiler.  This allows a programmer to focus compilation on a specific environment at compile time instead of runtime.

How do I identify a Preprocessor Directive?


In C#, preprocessor directives can be identified by a hash (#) in front of a word or statement. C# Preprocessor Directives [only bolded directives are covered in this wiki]:

  • #if
  • #else
  • #elif
  • #endif
  • #define
  • #undef
  • #warning
  • #error
  • #line
  • #region
  • #endregion
  • #pragma
  • #pragma warning
  • #pragma checksum


In VB preprocessor directives can be identified by a hash (#) in front of a word or statement. VB.NET Preprocessor Directives [only bolded directives are covered in this wiki]:

  • #Const
  • #ExternalSource
  • #If…Then
  • #ElseIf
  • #EndIf
  • #Region
  • #EndRegion

What’s the advantage of using Preprocessor Directives?

Depending on the Preprocessor Directive there are many advantages to using directives over in code conditional statements.

C# – Conditional Directives 

Pairing the #define, #if, #else, and #endif Conditional statements allows you to set variables to target your Testing vs. Production environments.

 Figure 1 [C#]
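The figure is a screenshot; the idea it shows can be sketched like this (the email addresses are placeholders, not the originals from the figure):

```csharp
#define TEST   // comment this line out to build for production

using System;

class Notifier
{
#if TEST
    const string ToEmail = "test@example.com";        // hypothetical test address
#else
    const string ToEmail = "production@example.com";  // hypothetical production address
#endif

    static void Main()
    {
        Console.WriteLine("Sending report to " + ToEmail);
    }
}
```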

VB.NET – Conditional Directives

Pairing the #Const, #If…Then, #Else, and #End If conditional statements allows you to set variables to target your Testing vs. Production environments.

Figure 1 [VB]


This is how you declare a preprocessor directive. I’ve defined a TEST variable to be able to switch between my Test environment and Production environment.


Using the conditional directive, I check to see if TEST is defined; seeing how it is currently defined, the constant _toEmail gets set to ‘’. However, if I were to comment out the #define TEST, the _toEmail constant would be set to ‘’.


You can also use the not (!) comparison operator to declare that code is only run if TEST is undefined.  As you’ll see here, I have wrapped my preprocessor directive around my try/catch block; this way I will be able to get an understanding of what types of errors will be thrown without the try/catch there.  This will allow me to catch the appropriate Exceptions, as seen in 1.e.


This scenario is just to show that member variables can also be changed by a preprocessor condition.  This can come in handy if you pair the condition to change the subject line to match a rule you have set up in Outlook for filtering test email.


Now that I’ve run through some testing and have seen the Exceptions that my program is throwing, I can catch the appropriate Exceptions and handle them.  Once you have a good idea of what exceptions you need to handle, you can remove the preprocessor condition, and the try/catch block can be tested to show the appropriate user-friendly messages.


Hopefully now that you have seen an example you can understand some of the advantages of using preprocessor directives. If you had made an in-code switch in 1.b, you would have had to declare an extra boolean, and the conditional statement would be taking up valuable runtime processing time, plus memory for the (unnecessary) variable.  When using preprocessor directives in Visual Studio, you will notice there is a visual cue as to which branch is currently active. This makes your application more readable, for you and future developers working on your code, as well as saving countless read-throughs of your code to understand which version of your code is running.

C# – Organizational Directives

Using the #region and #endregion directives you can outline sections of your code and logically organize them within a descriptive code region.

Figure 2 [C#]
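The figure shows a collapsed region in the editor; in code, the directives look like this (the class and member names are illustrative):

```csharp
class CustomerRepository
{
    #region Validation Helpers
    // Everything between #region and #endregion can be collapsed in the IDE
    // under the label "Validation Helpers".
    static bool HasValue(string input)
    {
        return input != null && input.Trim().Length > 0;
    }
    #endregion
}
```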

VB.NET – Organizational Directives

Using #Region and #End Region  directives you can outline sections of your code and logically organize them within a descriptive code region.

Figure 2 [VB.NET]


Hopefully you can see the usefulness of #regions (#Regions in VB.NET).  Not only can you label sections of code, but in the Visual Studio IDE these tags allow you to collapse the code contained in the directive, making your code smaller and easier to read, maintain, and manage.


Hopefully over the course of this article you have seen some of the benefits of using Preprocessor Directives.  Remember: Preprocessor Directives are evaluated at compile time, eliminating the bottlenecks of conditional statements in your code that switch between Testing and Production code, and freeing up resources for your actual application.