Funniest Slide Header I’ve Seen in a Long Time…

Credit to a colleague for this slide; those who follow Microsoft’s cloud platform might get the reference:

Just One More Week To Enter The Rock Paper Azure Fall Sweepstakes!

Week #3 of the Rock Paper Azure Challenge ended at 6 p.m. EST on 12/9/2011. That means another five contestants just won $50 Best Buy gift cards! Congratulations to the following players for having the Top 5 bots for Week #3: AmpaT, choi, Protist, RockMeister, and porterhouse.

Just a reminder to folks in the contest: be sure to catch Scott Guthrie, Dave Campbell, and Mark Russinovich live online next Tuesday, 12/13/2011, for the Learn Windows Azure event!

Does your bot have what it takes to win? There is one more week to find out, now through December 16th, 2011. Visit the Rock Paper Azure Challenge site to learn more about the contest and get started. Remember, there are two ways to win:

Sweepstakes: To enter the sweepstakes, all you have to do is enter a bot – any bot, even the pre-coded ones we provide – into the game between now and 6 p.m. ET on Dec. 16th. No ninja coding skills needed – heck, you don’t even need Visual Studio or a Windows machine to participate! At 6 p.m. ET on Friday, December 16, 2011, the "Fall Sweepstakes" round will be closed and no new entries will be accepted. Shortly thereafter, four bots will be drawn at random for the Grand Prize (trip to Cancun, Mexico), First Prize (Acer Aspire S3 laptop), Second Prize (Windows Phone), and Third Prize (Xbox w/Kinect bundle).

Competition: For the type-A folks, we’re keen on making this a competitive effort as well, so each week – beginning Nov. 25th and ending Dec. 16th – the top FIVE bots on the leaderboard will win a $50 Best Buy gift card. If your bot is good enough to be in the top five on successive weeks, you’ll take home a gift card each of those weeks too. Of course, since you’ve entered a bot, you’re automatically in the sweepstakes as well!

Note: As with past iterations of the challenge, even though you can iterate and upload updated bots for the competition, you will only be entered into the sweepstakes one time. You know what they say… you gotta be in it to win it!
Good luck to all players in week #4!

Azure Camps Coming Soon!

Jim, Peter, and I are gearing up for another road trip to spread the goodness that is Windows Azure! The Windows Azure DevCamp series launched recently with a two-day event in Silicon Valley, and we’re jumping on the bandwagon for the East Region. We have five stops planned in December, and we’re doing things a bit differently this go-round.

Most of the events will begin at 2 p.m. and end at 9 p.m. – with dinner in between, of course. The first part will be a traditional presentation format, and then we’re bringing back RockPaperAzure for some “hands-on” time during the second half of the event. We’re hoping you can join us the whole time, but if classes or your work schedule get in the way, definitely stop by for the evening hackathon (or vice versa). By the way, it wouldn’t be RockPaperAzure without some loot to give away, so stay “Kinected” to our blogs for further details on what’s at stake!

Here’s the event schedule – be sure to register quickly, as some venues are very constrained on space. You’ll want to have your very own account to participate, so there’s no time like the present to sign up for the Trial Offer, which will give you plenty of FREE usage of Windows Azure services for the event as well as beyond.

Location (Registration Link) | Date | Time
NCSU, Raleigh NC | Mon., Dec. 5th, 2011 | 2 – 9 p.m.
Microsoft, Farmington CT | Wed., Dec. 7th, 2011 | 2 – 9 p.m.
Microsoft, New York City | Thur., Dec. 8th, 2011 | 9 a.m. – 5 p.m.
Microsoft, Malvern PA | Mon., Dec. 12th, 2011 | 2 – 9 p.m.
Microsoft, Chevy Chase MD | Wed., Dec. 14th, 2011 | 2 – 9 p.m.

Rock, Paper, Azure is back…

Rock, Paper, Azure (RPA) is back! For those of you who have played before, be sure to get back in the game! If you haven’t heard of RPA, check out my past posts on the subject. In short, RPA is a game that we built in Windows Azure. You build a bot that plays a modified version of rock, paper, scissors on your behalf, and you try to outsmart the competition. Over the summer, we ran a Grand Tournament where the first place prize was $5,000!

This time, we’ve decided to change things a bit and do both a competition and a sweepstakes. The game, of course, is a competition because you’re trying to win. But we heard from many who didn’t want to get in the game because the competition was a bit fierce.

Competition: from Nov. 25 through Dec. 16, each Friday we’ll give the top 5 bots a $50 Best Buy gift card. If you’re in the top 5 each Friday, you’ll get a $50 gift card each Friday.

Sweepstakes: for all bots in the game on Dec. 16th, we’ll run the final round and then select a winner at random to win a trip to Cancun. We’re also giving away an Acer Aspire S3 laptop, a Windows Phone, and an Xbox Kinect bundle. Perfect timing for the holidays!

Check it out at the Rock Paper Azure Challenge site!

Geo-Load Balancing with the Azure Traffic Manager

One of the great new features of the Windows Azure platform is the Azure Traffic Manager, a geo load balancer and durability solution for your cloud solutions. For any large website, managing traffic globally is critical to the architecture for both disaster recovery and load balancing.

When you deploy a typical web role in Azure, each instance is automatically load balanced at the datacenter level. The Azure Fabric Controller manages upgrades and maintenance of those instances to ensure uptime. But what if you want to have a web solution closer to where your users are? Or automatically direct traffic to another location in the event of an outage? This is where the Azure Traffic Manager comes in, and I have to say, it is so easy to set up – it boggles my mind that in this day and age, individuals can prop up large, redundant, durable, distributed applications in seconds that would rival the infrastructure of the largest websites.

From within the Azure portal, the first step is to click the Virtual Network menu item. On the Virtual Network page, we can set up a number of things, including the Traffic Manager. Essentially, the goal of this first step is to define which Azure deployments we’d like to add to our policy, what type of load balancing we’ll use, and finally a DNS entry that we’ll use as a CNAME.

We can route traffic for performance (best response time based on where the user is located), failover (traffic sent to the primary, and only to the secondary/tertiary if the primary is offline), or round robin (traffic is equally distributed). In all cases, the Traffic Manager monitors endpoints and will not send traffic to endpoints that are offline.

I had someone ask me why you’d use round robin over routing based on performance – there’s one big case where that may be desirable: if your users are very geographically concentrated (or inclined to hit your site at a specific time), you’d likely see patterns where one deployment gets maxed out while another does not.
To ease the traffic spikes to one deployment, round robin would be the way to go. Of course, an even better solution is to combine traffic shaping based on performance with Azure scaling to meet demand.

In the above image, let’s say I want to create a failover for the Rock Paper Azure botlab (a fairly silly example, but it works). I first added my main botlab (deployed to South Central) to the DNS names, and then added my instance deployed to North Central. From the bottom of the larger image above, you can see I’m picking a DNS name to use as the public URL. What I’d typically do at this point is go into my DNS records and add a CNAME mapping my custom domain name to that Traffic Manager URL.

In my case, I want this to be a failover policy, so users only get sent to my North Central datacenter in the event the South Central instance is offline. To simulate that, I took my South Central instance offline, and from the Traffic Manager policy report, you’d see something like this:

To test, we’ll fetch the main page in IE… and we’re served from North Central. Of course, the user doesn’t know (short of a traceroute) where they are going, and that’s the general idea. There’s nothing stopping you from deploying completely different instances – except, of course, for the potential end-user confusion!

But what about database synchronization? That’s a topic for another post…
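For the record, the CNAME step described above is just ordinary DNS. A hypothetical zone-file fragment might look like this (both names here are placeholders, not the actual deployment from the post):

```
; point a friendly hostname at the Traffic Manager endpoint;
; Traffic Manager then resolves it to whichever deployment the policy selects
botlab.example.com.    IN    CNAME    mybotlab.trafficmanager.net.
```

Because clients only ever see the friendly name, the failover from South Central to North Central is completely invisible to them, which is exactly the point.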

Use the Windows Azure CDN for Assets

The most common response to running stuff in the cloud (Azure, and Amazon in particular) is that it’s too expensive for the little guy. And generally, hosting VMs when a small shared site or something similar will suffice is a tough argument.

There are aspects to Azure, though, that are very cost effective because they do “micro-scale” very well. A good example of this is the Azure CDN, or more simply, Azure Blob Storage. It’s effective for exchanging files, it’s effective at fast delivery, and it even offers lightweight security using shared access signatures (links that essentially only work for a period of time). It’s durable: not just redundant internally, but externally as well, automatically creating a backup in another datacenter.

For MSDN subscribers, you already have Azure benefits, but even going out of pocket on Blob storage isn’t likely to set you back much: $0.15/GB of storage per month, $0.01/10,000 transactions, and $0.15/GB outbound bandwidth ($0.20 in Asia; all inbound traffic is free). A transaction is essentially a “hit” on a resource, so each time someone downloads, say, an image file, it’s bandwidth + 1 transaction. Because these are micro transactions, for small apps, personal use, etc., it’s quite economical … often adding up to pennies per month. A few typical examples are using storage to host files for a website, serving content to mobile devices, and simply offloading resources (images/JS files) from website code.

Depending on usage, the Azure Content Delivery Network (CDN) can be a great way to enhance the user experience. It may not always be the case (and I’ll explain why), but essentially, the CDN has dozens of edge servers around the world. While your storage account is served from a single datacenter, having the data on the edge servers greatly enhances speed. Suppose an app on a phone is retrieving documents/text for a worldwide audience … enabling the CDN puts the content much closer.
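To see how those rates add up, here’s a quick back-of-the-envelope sketch using the 2011 prices quoted above (current pricing differs; the example workload figures are my own illustration):

```python
# Estimate a monthly Azure Blob Storage bill from the rates quoted above.
STORAGE_PER_GB = 0.15        # $ per GB stored, per month
PER_10K_TRANSACTIONS = 0.01  # $ per 10,000 transactions ("hits")
EGRESS_PER_GB = 0.15         # $ per GB outbound (inbound is free)

def monthly_cost(storage_gb, transactions, egress_gb):
    """Total monthly charge in dollars for a given workload."""
    return (storage_gb * STORAGE_PER_GB
            + transactions / 10_000 * PER_10K_TRANSACTIONS
            + egress_gb * EGRESS_PER_GB)

# A small site: 200 MB of assets, a million hits, 5 GB of outbound transfer
print(round(monthly_cost(0.2, 1_000_000, 5), 2))  # → 1.78
```

Even a million transactions a month lands under two dollars, which is the “micro-scale” economy the post is describing.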
I created a test storage account in North Europe (one of the Azure datacenters) to test this, using a small graphic from RPA. Here’s the same element via the CDN (we could be using custom DNS names, but for demo purposes we’re not). Here’s a trace to the storage account in the datacenter – from North Carolina, really not that bad, all things considered: you can see we’re routed to NY, then on across the pond, for a total latency of about 116ms. And now the CDN: MUCH faster, chosen not only by physical distance but also by network congestion. Of course, I won’t notice a 100ms difference between the two, but if you’re serving up large documents/images, multiple images, or streaming content, the difference will be noticeable.

If you’re new to Azure and have an account, creating a storage account from the dashboard is easy. You’d just click on your storage accounts and enter a name/location – typically someplace close to you or to where most of your users are. To enable the CDN, you’d just click the CDN link on the left nav and enable it. Once complete, you’ll see it on the main account screen with the HTTP endpoint.

So why wouldn’t you do this? Well, it’s all about cacheability. If an asset is frequently changing or infrequently used, it’s not a good candidate for caching. If there is a cache miss at a CDN endpoint, the CDN will retrieve the asset from the base storage account. This incurs an additional transaction, but more importantly, it’s slower than if the user had just gone straight to the storage account. So depending on usage, it may or may not be beneficial.

Azure and Phone … Better Together

We had an excellent time presenting today’s Windows Phone Camp in Charlotte. Thank you to everyone who attended. Here are some resources and the major points of today’s “To the cloud…” session. First, here is the slide deck for the presentation: To The Cloud...

Next, download the Windows Azure Toolkit for Windows Phone. This contains both the sending-notifications sample and the Babelcam application. Note that there are quite a few setup steps – using the Web Platform Installer is a great way to make all of this easier.

The key takeaway that I really wanted to convey: while the cloud is most often demonstrated in massive-scale scenarios, it’s also incredibly efficient at micro scale. The first scenario we looked at was using Azure Blob Storage as a simple (yet robust) way to host assets. Think of Blob Storage as a scalable file system with optional built-in CDN support. Regardless of where your applications are hosted (shared host, dedicated hosting, or your own datacenter), and regardless of the type of application (client, phone, web, etc.), the CDN offers a tremendously valuable way to distribute those resources. For MSDN subscribers, you already have access, so there’s no excuse not to use this benefit. But even if you had to go out of pocket, hosting assets in Azure is $0.15/GB per month, + $0.01/10,000 transactions, + $0.15/GB outbound bandwidth (inbound is free). For small applications, it’s almost free. Obviously you need to do the math for your app, but consider hosting 200MB in assets (images, JS files, XAPs, etc.) with a million transactions a month and several GB of data transfers: it’s very economical, at a cost of a few dollars a month.

In the second demo, we looked at using Azure Queues to enhance the push notification service on the phone. The idea is that we’ll queue failed notifications and retry them for some specified period of time. For the demo, I only modified the raw notifications.
In PushNotificationsController.cs (in the toolkit demo above), I modified SendMicrosoftRaw slightly:

[HttpPost]
public ActionResult SendMicrosoftRaw(string userId, string message)
{
    if (string.IsNullOrWhiteSpace(message))
    {
        return this.Json("The notification message cannot be null, empty nor white space.",
            JsonRequestBehavior.AllowGet);
    }

    var resultList = new List<MessageSendResultLight>();
    var uris = this.pushUserEndpointsRepository.GetPushUsersByName(userId).Select(u => u.ChannelUri);
    var pushUserEndpoint = this.pushUserEndpointsRepository.GetPushUsersByName(userId).FirstOrDefault();

    var raw = new RawPushNotificationMessage
    {
        SendPriority = MessageSendPriority.High,
        RawData = Encoding.UTF8.GetBytes(message)
    };

    foreach (var uri in uris)
    {
        var messageResult = raw.SendAndHandleErrors(new Uri(uri));
        resultList.Add(messageResult);

        if (messageResult.Status.Equals(MessageSendResultLight.Error))
        {
            this.QueueError(pushUserEndpoint, message);
        }
    }

    return this.Json(resultList, JsonRequestBehavior.AllowGet);
}

Really, the only major change is that if the messageResult comes back with an error, we log it. QueueError looks like this:

private void QueueError(PushUserEndpoint pushUser, string message)
{
    var queue = this.cloudQueueClient.GetQueueReference("notificationerror");
    queue.CreateIfNotExist();
    queue.AddMessage(new CloudQueueMessage(
        string.Format("{0}|{1}", pushUser.ChannelUri.ToString(), message)));
}

We’re simply placing the message on the queue with the data we want: you need to get used to string parsing with queues. In this case, we’ll delimit the data (the channel URI and the message of the notification) with a pipe character. While the channel URI is not likely to change, a better approach would be to store the username (not the URI) in the message and look up the current URI before sending (much like the top of SendMicrosoftRaw does), but for the purposes of the demo this is fine.
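One practical caveat with that pipe-delimited format: if the notification text itself can contain a pipe, split only on the first delimiter so the message survives intact. A quick sketch of the idea (in Python for brevity; the C# equivalent would be a two-element Split, e.g. message.Split(new[] { '|' }, 2)):

```python
def encode(channel_uri, message):
    # channel URI first, then the free-form notification text
    return f"{channel_uri}|{message}"

def decode(payload):
    # split on the FIRST pipe only, so pipes inside the message are preserved
    uri, message = payload.split("|", 1)
    return uri, message

# a message that itself contains a pipe round-trips correctly
payload = encode("http://push.example.com/channel/123", "score: 3|2")
print(decode(payload))  # → ('http://push.example.com/channel/123', 'score: 3|2')
```

The demo code above splits unconditionally, which is fine as long as notification text never contains the delimiter.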
When we try sending a raw notification while the application isn’t running, we’ll get the following error. Typically, without a queue, you’re stuck. Using a tool like Cloud Storage Studio, we can see the notification is written to the failure queue, including the channel URI and the message.

So now we need a simple mechanism to poll for messages in the queue and try to send them again. Because this is an Azure web role, there’s a way to get a “free” thread to do some processing. I say free because it’s invoked by the Azure runtime automatically, so it’s a perfect place to do some background processing outside of the main site. In WebRole.cs, you’ll see there is no Run() method. The base WebRole Run() method does nothing (it does an indefinite sleep), but we can override that. The caveat is, we never want to exit this method. If an exception bubbles out of this method, or we forget to loop, the role will recycle when the method exits:

public override void Run()
{
    this.cloudQueueClient = cloudQueueClient ??
        GetStorageAccountFromConfigurationSetting().CreateCloudQueueClient();

    var queue = this.cloudQueueClient.GetQueueReference("notificationerror");
    queue.CreateIfNotExist();

    while (true)
    {
        Thread.Sleep(200);

        CloudQueueMessage message = queue.GetMessage(TimeSpan.FromSeconds(60));
        if (message == null) continue;

        if (message.DequeueCount > 60)
        {
            queue.DeleteMessage(message);
            continue;
        }

        string[] messageParameters = message.AsString.Split('|');
        var raw = new RawPushNotificationMessage
        {
            SendPriority = MessageSendPriority.High,
            RawData = Encoding.UTF8.GetBytes(messageParameters[1])
        };

        var messageResult = raw.SendAndHandleErrors(new Uri(messageParameters[0]));
        if (messageResult.Status.Equals(MessageSendResultLight.Success))
        {
            queue.DeleteMessage(message);
        }
    }
}

What this code does, every 200 milliseconds, is look for a message on the failure queue. Messages are retrieved with a 60-second visibility timeout – this acts as our “retry” window.
Also, if we’ve tried to send the message more than 60 times, we quit trying – got to stop somewhere, right? We grab the message from the queue, parse it on the pipe character we put in there, and send another raw notification to that channel URI. If the message is sent successfully, we delete it from the queue; otherwise, we do nothing and it will reappear in 60 seconds.

While this code is running in an Azure web role, it’s just as easy to run in a client app, service app, or anything else. Pretty straightforward, right? No database calls, stored procedures, locking, or flags to update. Download the completed project (which is the base solution in the toolkit plus these changes) here (note: you’ll still need the toolkit): VS2010 Solution

The final demo put it all together using the Babelcam demo – Azure queues, tables, notifications, and ACS. Questions or comments? Let me know.

RockPaperAzure Grand Tournament

We’re back – this time with an International Grand Tournament in Rock, Paper, Azure. So what’s new?

First, we heard many folks loud and clear that they weren’t happy the contest was open to U.S. residents only. So, now’s your chance – we’ve opened up the tournament to Canada, the UK, Sweden, New Zealand, Germany, China, and of course the USA. We’ve also included country flags in the leaderboard.

Next, we’ve changed some of the rules. Specifically, players are now “blind” when they play in the GT. What does that mean? It means that your bot will not know the team name of its opponent. While playing, the name of the opponent is a “?”, and this is also reflected in the game history and log file. Why this change? Primarily, we felt it made the game a little more interesting, as it focuses on algorithms as opposed to brute force. We’ve created a GT Practice Round that is not blind, so if you wish, you can tinker in this round to get some exposure and fine-tune your logic. Of course, playing in the practice round is optional.

Next, players will break down into heats during the GT. After the round closes, we’ll segment players into a number of heats (as I write this, I can’t quite recall if we agreed on a random 25% in each heat, or 25 players per heat). The idea is that this creates a ladder approach to get to the top and adds a bit of excitement to see how far up the ladder your bot can go. It also scales more nicely, since we’re assuming higher involvement in the competition.

Finally, we decided to give away something a little sweeter than an Xbox. This time, we’ve got $5,000 riding on first place! Additionally, we’ve decided to spread out the winnings a bit more, so second place receives $1,000, and the next ten players (3rd–12th place) all receive $250. So, why this prize structure?
Well, during an in-person event in our original six-week competition, I heard someone remark that it would be too difficult to place in the top 3 to get a prize, much less win the Xbox. I can understand that because, indeed, some of the bots we saw were really phenomenal. What we wanted to do was make sure there were enough prizes to reward “pretty good play” for those (like myself) who are interested in playing a little, but not in spending a hundred hours coding a bot. With the new prize structure plus blind playing, it’s really anyone’s game with a little clever code. We hope you think so, too… and have fun playing! Questions or comments? Feel free to ping us either here on my blog or through the website.

Azure Tech Jam

You’ve heard about cloud computing and already know it’s the greatest thing since sliced bread – and maybe you’ve already attended a Microsoft Azure Boot Camp or another event introducing you to the cloud and detailing the various parts of the Windows Azure platform. Well, we’ll do that too… in the first half hour! The rest of the time, we’ll have a bit of fun with Azure by taking a look at some cool demos and poking under the hood. We’ll then take a look at some of the innovative uses of cloud computing that Windows Azure customers have already produced.

After lunch, we’ll introduce the genesis and creation of the Rock Paper Azure Challenge… AND run our very own Challenge on-site, exclusive to attendees, complete with prizes like an Xbox 360/Kinect bundle, a stand-alone Kinect, and a $50 gift certificate. This is an interactive way to learn about developing and deploying to the cloud, with a little friendly competition thrown in for fun. So bring your laptop, your Windows Azure account credentials, and a sense of adventure, and join us for this FREE, full-day event as Peter, Jim, and I take you “to the cloud!”

Prerequisites:
· Windows Azure account – don’t have one? We’re offering a free Windows Azure 30-day pass for all attendees. Apply for yours now, as it can take 3 days to receive. Use code AZEVENT.
· Laptop with the Azure Tools and SDK installed.

Want a leg up on the competition? Visit the Rock Paper Azure Challenge web site and begin coding your winning bot today.

Location | Date
Charlotte, NC | June 2
Malvern, PA | June 7
Pittsburgh, PA | June 9
Ft. Lauderdale, FL | June 14
Tampa, FL | June 16

Due to the hands-on nature of this event, seating is limited. Reserve your spot by registering today!

Rock, Paper, Azure Deep Dive: Part 2

In part 1, I detailed some of the specifics of getting Rock, Paper, Azure (RPA) up and running in Windows Azure. In this post, I’ll start detailing some of the other considerations in the project – in many ways, this was a very real migration scenario for a reasonably complex application. (This post doesn’t contain any helpful info on playing the game, but if you’re interested in scalability or migration, read on!)

The first issue we had with the application was scalability. Every time players are added to the game, the scalability requirements of course increase. The original purpose of the engine wasn’t to be some big open-ended game played on the internet; I imagine the idea was to host small matches (10 or fewer players). While the game worked fine for fewer than 10 players, we started to hit some brick walls as we climbed to 15, and then some dead ends around 20 or so. This is not a failing of the original app design, because it was doing what it was intended to do.

In my past presentations on scalability and performance, the golden rule I always discuss is: you have to be able to benchmark and measure your performance. Whether it is 10 concurrent users or a million, there should always be some baseline metric for the application (requests/sec., load, etc.). In this case, we wanted to be able to quickly run (within a few minutes) a 100-player round, with capacity to handle 500 players.

The problem with reaching these numbers is that as the number of players goes up, the number of games played goes up drastically (N * (N-1) / 2). Even for just 50 players, that’s 1,225 matches – now imagine 100 or 500 players!

The first step in increasing the scale was to pinpoint the two main problem areas we identified in the app. The primary one was the threading model around making a move. In an even match against another player, roughly 2,000 games will be played. The original code would spin up a thread for each move, for each game in the match.
That means that for a single match, a total of 4,000 threads are created; in a 100-player round, 4,950 matches means 19,800,000 threads! For 500 players, that number swells to 499,000,000. The advantage of that model, though, is that should a player go off into the weeds, the system can abort the thread and spin up a new one for the next game.

What we decided to do instead is create a single thread per player (instead of a thread per move). By implementing two wait handles in the class (specifically a ManualResetEvent and an AutoResetEvent), we can accomplish the same thing as the previous method. (You can see this implementation in the Player.cs file, in the DecisionClock class.) The obvious advantage here is that we go from nearly 20 million threads in a 100-player round to around 9,900 – still a lot, but significantly fewer. In the first tests, 5- to 10-player matches would take around 5+ minutes to complete. Extrapolated (we didn’t want to wait), a 100-player round would have taken well over a day. In the new model, it’s significantly faster – a 100-player round typically completes within a few minutes.

The next issue was multithreading the game loop itself. In the original implementation, games were played in a loop that matched all players against each other, blocking on each iteration. Our first thought was to use the Parallel Extensions (PFx) libraries built into .NET 4, kicking off each game as a Task. This did indeed work, but the problem was that games are so CPU intensive that creating more than one thread per processor is a bad idea. If the system decided to context-switch when it was your move, it could create a timing problem, and we did see a few timeouts from time to time. Since modifying the underlying thread pool’s thread count is generally a bad idea, we decided to implement a smart thread pool like the one here on The Code Project. With this, we have the ability to scale the threads dynamically based on a number of conditions.
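The arithmetic above is easy to sanity-check. A quick sketch (the figure of roughly 2,000 games per match comes from the post; everything else follows from N * (N-1) / 2):

```python
def matches(players):
    # every player meets every other player exactly once
    return players * (players - 1) // 2

GAMES_PER_MATCH = 2_000  # roughly, for an even match (from the post)

def threads_per_move_model(players):
    # original model: one thread per move, two moves per game
    return matches(players) * GAMES_PER_MATCH * 2

def threads_per_player_model(players):
    # revised model: one thread per player, per match
    return matches(players) * 2

print(matches(100))                   # → 4950
print(threads_per_move_model(100))    # → 19800000
print(threads_per_move_model(500))    # → 499000000
print(threads_per_player_model(100))  # → 9900
```

The quadratic growth in matches is unavoidable – it’s the thread count per match that the redesign collapses, from 4,000 down to 2.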
The final issue was memory management, and this one was solved by design. The issue was that the original engine (and Bot Lab) don’t store any results until the round is over. This means that all the log files really start to eat up RAM … again, not a problem for 10 or 20 players, but with 100–200+ players the RAM just bogs everything down. The number of players in the Bot Lab is small enough that this wasn’t a concern there, and the game server handles it by design, using SQL Azure to record results as the games are played.

Next time in the deep dive series, we’ll look at a few other segments of the game. Until next time!

My Apps

Dark Skies Astrophotography Journal Vol 1 Explore The Moon
Mars Explorer Moons of Jupiter Messier Object Explorer
Brew Finder Earthquake Explorer Venus Explorer  
