Thoughts on Windows Azure Pricing…

There are a LOT of posts out there talking about Azure pricing.  There’s the Azure TCO Calculator, and some good practices scattered around that demystify things.  Some of these bear repeating here, but I also wanted to take you through my math on expenses – how you design your app can have serious consequences for your pricing.  So let’s get the basic pricing out of the way first (just for the Azure platform, not AppFabric or SQL Azure):

  • Compute = $0.12 / hour
  • Storage = $0.15 / GB stored / month
  • Storage transactions = $0.01 / 10K
  • Data transfers = $0.10 in / $0.15 out / GB - ($0.30 in / $0.45 out / GB in Asia)

Myth #1:  If I shut down my application, I won’t be charged.

Fact:  You will be charged for all deployed applications, even if they aren’t running.  This is because the resources are allocated on deployment, not when the app is started.  Therefore, always be sure to remove deployments that aren’t running (unless you have a good reason to keep them there).

Myth #2:  If my application is less CPU intensive or idle, I will be charged less.

Fact:  For compute hours, you are charged the same whether your app is at 100% CPU or idle.  There’s some confusion (and I was surprised by this, too) because Azure and cloud provisioning are often referred to as “consumption based” and (in this case, incorrectly) compared to a utility like electricity.  A better analogy is a hotel room: an Azure deployment reserves a set of resources, and like the hotel room, whether you use it or not doesn’t change the rate.

On the plus side, compute hours are a fairly easy thing to calculate.  It’s the number of instances across all of your roles * $0.12/hour for small VM instances.  A medium instance (2 cores) is $0.24/hour, and so on.
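For example, here’s that math as a quick sketch in C# (the instance counts below are hypothetical, purely to illustrate the calculation):

```csharp
// Rough monthly compute estimate using the rates quoted above.
// Instance counts are hypothetical, for illustration only.
double smallRatePerHour   = 0.12;     // 1-core small instance
int smallWebInstances     = 2;        // e.g. two small web role instances
int mediumWorkerInstances = 1;        // e.g. one medium (2-core) worker role
double hoursPerMonth      = 24 * 30;

double monthlyCompute =
    (smallWebInstances * smallRatePerHour +
     mediumWorkerInstances * smallRatePerHour * 2) * hoursPerMonth;
// (2 * 0.12 + 1 * 0.24) * 720 = $345.60/month – busy or idle, the bill is the same.
```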

Myth #3:  There’s no difference between a single medium instance and two small instances.

Fact:  While there is no difference in compute price, there is a significant difference: two small instances offer better redundancy and scalability.  It’s the difference between scaling up and scaling out.  The ideal scenario is an application that can add additional instances on demand, but the reality is that applications need to be written to support this.

In Azure, requests are load balanced across all instances of a given web role, which complicates session and state management.  Many organizations implement what is called sticky persistence (or sticky sessions) when building their own load balancing solutions: once a user visits the site, they continue to hit the same server for the rest of their session.  The downside of this approach is that if the server goes down, the user is redirected to another server and loses all state information.  It’s a viable solution in many scenarios, but it’s not one that Azure load balancing offers.

Scaling up is done by increasing your VM size to medium (2 cores), large (4 cores), or XL (8 cores), with more RAM allocated at each level.  The single instance becomes much more powerful, but you’re limited by the hardware of a single machine.

In Azure, the machine keys are synchronized among instances, so there is no problem with cookies and authentication tokens, such as those in the ASP.NET membership providers.  If you need session state information, this is where things get more complicated.  I will probably get zinged for saying this, but there is currently no good Azure-based session management solution.  The ASP providers sample contained in the SDK does include a Table Storage session state demo, but the performance isn’t ideal.  There are a few other solutions out there, but currently the best bet is to not rely on session state and instead use cookies whenever possible.
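As a minimal sketch of the cookie approach (inside an ASP.NET page or controller; the cookie name and value here are hypothetical), the point is simply that any instance behind the load balancer can read the state back:

```csharp
// Keep a small piece of per-user state in a cookie instead of session state,
// so it doesn't matter which instance serves the next request.
// "lastMapId" is a hypothetical cookie, for illustration only.
var cookie = new System.Web.HttpCookie("lastMapId", mapId.ToString())
{
    Expires = DateTime.UtcNow.AddDays(7)
};
Response.Cookies.Add(cookie);

// On a later request (possibly handled by a different instance):
var previous = Request.Cookies["lastMapId"];
if (previous != null)
{
    int lastMapId = int.Parse(previous.Value);
    // ...use it...
}
```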

Now, having said all this, the original purpose of the post:  I wanted to make sure folks understood transaction costs with Azure Storage.  Any time your application so much as thinks about Storage, it’s a transaction.  Let’s use my Worldmaps site as an example.  This is not how it works today, but it very easily could have been.  A user visits a blog that pulls an image from Worldmaps.  Let’s follow that through:

Step | Action | Transaction #
-----|--------|--------------
1 | User’s browser requests image. | (none)
2 | Worker role checks queue. (empty) | 1
3 | If first hit for map (not in cache), stats/data pulled from Storage. | 2
4 | Application enqueues hit to Azure Queue. | 3
5 | Application redirects user to Blob Storage for map file. | 4
6 | Worker dequeues hit. | 5
7 | Worker deletes message from queue. | 6
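To make the “every touch is a transaction” point concrete, here’s a minimal sketch of the queue side of that flow, assuming the StorageClient library that ships with the Azure SDK (the connection string and queue name are hypothetical).  Every one of these calls is a billable storage transaction:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical account and queue name, for illustration only.
var account = CloudStorageAccount.Parse(connectionString);
var queue = account.CreateCloudQueueClient().GetQueueReference("maphits");

// Web role (step 4): enqueueing the hit is one transaction.
queue.AddMessage(new CloudQueueMessage(mapId.ToString()));

// Worker role (steps 2 and 6): each GetMessage call is a transaction,
// whether or not a message actually comes back.
CloudQueueMessage hit = queue.GetMessage();
if (hit != null)
{
    // ...record the hit...

    // Step 7: deleting the message is yet another transaction.
    queue.DeleteMessage(hit);
}
```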

While step 3 happens only on the first hit for a given map, there are other transactions going on behind the scenes, and if you are using the Table Storage session state provider… well, that’s another transaction per hit (possibly two, if session data is changed and needs to be written back to storage).

If Worldmaps does 200,000 map hits per day (not beyond the realm of possibility, but currently a bit high), then 200,000 * 6 = 1,200,000 storage transactions.  Transactions are sold in blocks of 10,000 for $0.01, so that’s 120 “units” or $1.20 per day.  Multiply that by 30 days, and that’s about $36/month for storage transactions alone – not counting the bandwidth or compute time.

I realized this early on and as a result I significantly changed the way the application works.  Tips to save money:

  1. If you don’t need durability, don’t use Azure queues.  Worldmaps switches between in-memory queues and Azure queues based on load, configuration, and task.  Since every queue operation is a REST call (and a billable transaction), you could also make a WCF call directly to another role instead.
  2. Consider scaling down worker roles by multithreading, particularly for I/O-heavy roles.  Also, a web role’s Run method (when not overridden) simply calls Thread.Sleep(-1), so why not override it to do processing?  See the sketch after this list.  More on this soon…
  3. SQL Azure may be cheaper, depending on what you’re doing.  And potentially faster because of connection pooling.
  4. If you aren’t interested in CDN, use Azure Storage only for dynamic content.
  5. Don’t forget about LocalStorage.  While it’s volatile, you can use it as a cache to serve items from the role, instead of storage.
  6. Nifty backoff algorithms are great, but remember they only save on transaction costs – they won’t affect your compute charge.
  7. Take advantage of the many programs out there, such as hours included in MSDN subscriptions, etc.
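Regarding tip #2, here’s a minimal sketch of overriding Run in a web role, assuming the standard RoleEntryPoint class from the Azure SDK; the work loop itself is a hypothetical placeholder:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    // By default, Run() just sleeps forever.  Overriding it lets the web role
    // do background work on compute hours you're already paying for.
    public override void Run()
    {
        while (true)
        {
            // Hypothetical background work: drain an in-memory queue,
            // roll up stats, refresh a local cache, etc.
            DoBackgroundWork();

            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    private void DoBackgroundWork()
    {
        // Placeholder for illustration only.
    }
}
```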

Next up will be some tips on scaling down and maximizing the compute power of each instance.
