Windows Azure Performance Gotchas #1: raising the throughput, reducing the headache

    Published on June 23rd, 2010, 7:34 am under Emerging Technology


    Lately, we’ve been working on a Windows Azure project with a huge load and really high peaks. Along the way we hit a number of “gotcha moments” that I’ll summarize in this post and in a series of posts I expect to write.

    If you are using WCF, you must tweak it

    On iServiceOriented.com there’s a post about tweaking WCF that is still valid today. WCF is broken by default, and if you plan to use it on Windows Azure (or even on your own servers) you must apply all the performance optimizations, unless you’re fine with a lousy 10 requests per second.

    Gotcha #1: Follow the performance recommendations in “WCF Gotchas 3: Configuring Performance Options”
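    To make this concrete, here’s a minimal sketch (in C#, self-hosted for brevity) of the kind of throttling knobs that post covers; the echo service and the limit values are illustrative assumptions, not the article’s exact numbers.

        using System;
        using System.Net;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        [ServiceContract]
        public interface IEchoService
        {
            [OperationContract]
            string Echo(string text);
        }

        public class EchoService : IEchoService
        {
            public string Echo(string text) { return text; }
        }

        class Program
        {
            static void Main()
            {
                // Outbound HTTP connections default to 2 per host; raise it.
                ServicePointManager.DefaultConnectionLimit = 48;

                var host = new ServiceHost(typeof(EchoService),
                    new Uri("http://localhost:8080/echo"));

                // The .NET 3.5 defaults (16 concurrent calls, 10 sessions)
                // choke under any serious load.
                var throttle = host.Description.Behaviors
                    .Find<ServiceThrottlingBehavior>();
                if (throttle == null)
                {
                    throttle = new ServiceThrottlingBehavior();
                    host.Description.Behaviors.Add(throttle);
                }
                throttle.MaxConcurrentCalls = 256;     // illustrative values;
                throttle.MaxConcurrentSessions = 256;  // tune them against
                throttle.MaxConcurrentInstances = 256; // your own load tests

                host.AddServiceEndpoint(typeof(IEchoService),
                    new BasicHttpBinding(), "");
                host.Open();

                Console.WriteLine("Listening; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }

    The same limits can of course live in the serviceThrottling element of your web.config instead of code.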

    RetryPolicy can be evil

    RetryPolicy is the mechanism the Windows Azure Storage Client uses to shield users from failures of the service itself. As much as the idea rocks, the implementation wasn’t necessarily designed with your scalability needs in mind.

    When writing high-traffic services you want to keep the number of threads to a minimum, or at least have all of them identified. The built-in RetryPolicy hides the complexity of retrying when the service fails, but it also hides the thread usage from you, which at this scale is critical.

    Gotcha #2: Disable the built-in RetryPolicy

    By using RetryPolicies.NoRetry you prevent your app from creating threads just to ensure that an action has executed; and if you need the app to retry on an eventual service availability issue, write your own policy.
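    Here’s a minimal sketch of what that looks like with the v1.x StorageClient library (development storage is used purely for illustration):

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class Program
        {
            static void Main()
            {
                // Development storage for illustration; use your real account.
                var account = CloudStorageAccount.DevelopmentStorageAccount;
                var blobClient = account.CreateCloudBlobClient();

                // Opt out of the hidden retry loops: failures surface
                // immediately and you decide if, when, and on which
                // thread to retry.
                blobClient.RetryPolicy = RetryPolicies.NoRetry();
            }
        }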

    If you need inspiration, check these snippets for RetryPolicy. Another thing worth considering when going down this path is adding an extension method to identify whether an exception is “retry-able” or not.
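    For reference, in the v1.x StorageClient a policy is just a delegate factory, so rolling your own is straightforward. The IsRetryable extension method below is a hypothetical sketch of the idea, not the linked snippet:

        using System;
        using Microsoft.WindowsAzure.StorageClient;

        static class ExceptionExtensions
        {
            // Hypothetical helper: treat server-side storage errors and
            // timeouts as transient ("retry-able"); anything else fails fast.
            public static bool IsRetryable(this Exception ex)
            {
                return ex is StorageServerException || ex is TimeoutException;
            }
        }

        static class CustomRetryPolicies
        {
            // RetryPolicy is a delegate that returns a ShouldRetry delegate,
            // so a custom policy is just a factory method.
            public static RetryPolicy LinearRetry(int maxRetries, TimeSpan delay)
            {
                return () =>
                {
                    ShouldRetry shouldRetry = (int retryCount,
                        Exception lastException, out TimeSpan retryInterval) =>
                    {
                        retryInterval = delay;
                        return retryCount < maxRetries
                            && lastException.IsRetryable();
                    };
                    return shouldRetry;
                };
            }
        }

    Wiring it up is one line: blobClient.RetryPolicy = CustomRetryPolicies.LinearRetry(3, TimeSpan.FromSeconds(1));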

    Let Windows Azure Storage handle it

    “Cloud Computing” took computing to a whole new level: nowadays developers can tell how much a design decision costs. This is stunning; all those performance optimizations you always wanted but never had the chance to implement now have a strong economic justification (or not).

    Whenever you expose data to a client through a (web) service in Windows Azure, you’re paying for I/O and compute hours. Those costs break down as transfer between Storage and the service, and between the service and the client.

    Now consider the following: the reference data of your application (data that is pretty much the same for all users) can be consumed by the client straight from Storage instead. The cost distribution then changes radically to just I/O from Windows Azure Storage to the client; no more compute time, no more scalability headaches.

    Gotcha #3: Deflect load to Windows Azure Storage as much as you can

    Redirecting the load to Windows Azure Storage not only saves you some bucks on compute power and I/O, it also takes away the pain of having to scale up your services, since Microsoft is responsible for scaling Storage.

    When doing this, remember that Windows Azure Storage is RESTful, so all the optimizations that can be performed at the transport level (caching, expiration, GZip, etc.) fit perfectly here. And if it’s a JavaScript client, take a look at JSON(P); JSON is much more efficient than XML.
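    As an illustration, here’s a sketch of publishing reference data as a public, cacheable JSON blob that clients fetch straight from Storage (v1.x StorageClient again; the container and blob names are made up):

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class Program
        {
            static void Main()
            {
                var account = CloudStorageAccount.DevelopmentStorageAccount;
                var client = account.CreateCloudBlobClient();

                // A public-read container lets clients hit Storage directly,
                // with no service hop in between.
                var container = client.GetContainerReference("refdata");
                container.CreateIfNotExist();
                container.SetPermissions(new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                });

                // Upload the shared reference data as JSON with transport-level
                // hints: a content type and an hour of client-side caching.
                var blob = container.GetBlobReference("countries.json");
                blob.Properties.ContentType = "application/json";
                blob.Properties.CacheControl = "public, max-age=3600";
                blob.UploadText("[{\"code\":\"AR\",\"name\":\"Argentina\"}]");

                // Clients can now GET this URL straight from Storage.
                Console.WriteLine(blob.Uri);
            }
        }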

    Livin’ on the edge

    Experience has proven that if you stay on the latest VM image shipped by the Windows Azure team, your performance and stability will improve as new images come out.

    Gotcha #4: Configure your Hosted Service deployment to use the latest available Virtual Machine image

    Unless your code has compatibility issues with .NET 4, or issues with .NET 3.5 SP(x), you should always live on the latest VM image. This can be configured using the Windows Azure MMC by setting the Virtual Machine Image Version to “*” (star).
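    Under the hood that MMC setting ends up as the osVersion attribute of your ServiceConfiguration.cscfg; a minimal sketch (the service and role names are placeholders):

        <!-- "*" means "always run on the latest available OS image" -->
        <ServiceConfiguration serviceName="MyService" osVersion="*"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="WebRole1">
            <Instances count="2" />
          </Role>
        </ServiceConfiguration>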

    I expect to continue writing about the different patterns, gotchas and stuff we figured out while working on Windows Azure, so stay tuned!

    thanks,
    ~johnny