Getting a Windows Azure account for Rock, Paper, Azure

If you’re interested in getting a Windows Azure account to play in Rock, Paper, Azure (RPA), there are a few options available to you, from least painful to most painful (in my opinion, anyway):

Method 1 – Windows Azure Pass

The way most people are getting an account is through the Windows Azure Pass program (using code PLAYRPA). More details can be found on the Get Started page under step 1. But this certainly isn’t the only way to get an account, and – for a few of you – might not be possible. The Azure Pass is limited to one Live ID, so if you got an account through the Azure Pass program, say, 6 months ago, you can’t get another one. (I won’t say anything if you just sign up for another Live ID.)

Method 2 – Windows Azure Trial Account

Sign up for the Windows Azure Free Trial. This gives you 750 hours of an extra small compute instance and 25 hours of a small compute instance. You do need a credit card to cover overages. Note: the Bot Lab project is set up as a small compute instance by default. If you go this route, I highly recommend you change the Bot Lab to an extra small instance. You can do this by double-clicking the role and changing the VM size.

Method 3 – MSDN Subscriptions

Have an MSDN Premium or Ultimate subscription? You already have account hours you can use. Log into your MSDN account for more information. This option does require a credit card (or other billing arrangement) to handle overages, but you are not billed as long as you stay within plan. As of this writing, Extra Small compute instances are in beta and not included in the MSDN hours – so be sure to stick with a small instance. As usual, we recommend taking down deployments once you’re done to avoid wasting compute time.

Method 4 – Pay as You Go Specials

Check out the current offers. There are a few different options based on your needs (and some are available specifically for partners).
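If you prefer editing the project directly, the VM size also lives in the role’s ServiceDefinition.csdef. A sketch of what the relevant attribute looks like – the role and endpoint names here are my placeholders, not the actual Bot Lab project’s:

```xml
<ServiceDefinition name="BotLab"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <!-- vmsize="ExtraSmall" keeps the role on the cheapest instance size -->
  <WebRole name="BotLab.Web" vmsize="ExtraSmall">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```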
The introductory special is the best way to get started, but if you’re using Windows Azure consistently, the Windows Azure Core offers a great value. If you’re just interested in playing the game and willing to pay, or aren’t able to receive other offers for some reason, deploying the Bot Lab as an Extra Small instance costs $0.05 per hour. If you were to play during the week and leave the Bot Lab deployed around the clock, you’d be looking at roughly $5. (If you only code in the evenings for a few hours, pulling down the deployment overnight and when not in use will bring that down substantially.) See you on the battlefield!

We’re Ready to Rock…

… but there are some changes to Rock, Paper, Azure we need to discuss based on our test week. First things first: I highly recommend you download the latest Bot Lab and MyBot project here. For the most part, old bots will continue to work, but since this was a test round anyway, I highly recommend you upgrade – there are a few edge cases that might break bots, or cases where a bot doesn’t do what you think it would. Let’s discuss all of the changes we’ve made. I’ll save the best for last.

CanBeat and LosesTo – Don’t Use

We re-scoped the CanBeat and LosesTo methods on the Move class from public to internal. Most people weren’t using them, they weren’t really intended for outside consumption, and we received a couple of questions about their behavior. The reason for the change is this – and some of it is historical: the engine goes through a series of steps to determine a winner. It will first look at whether a move can beat another move:

    internal bool CanBeat(Move move)
    {
        if (!ValidMoves.Contains(GetType().Name))
            return false;

        if (!ValidMoves.Contains(move.GetType().Name))
            return true;

        return CanBeatLegalMove(move);
    }

So the RockMove implementation of CanBeatLegalMove, defined in the RockMove class, returns true if the opposing move is scissors. The original game had moves in different assemblies for multiple rounds, so the win determination path had to waterfall through and examine the rules of the game – such as whether or not you have any dynamite remaining. The short answer is that there are cases where these methods return unexpected results, so it’s best to just not expose them. Performance-wise, it’s better to have your own implementation anyway. If your bot is currently using these methods, sorry to say this is a breaking change.

Game Summary

At the top of the log file, you’ll now see a game summary that is helpful for info at a glance.
It looks like this:

    Game Summary         bhitney      JohnSmith
    Points               953 (L)      1000 (W)
    Dynamite Used        45           100
    Time Deciding (s)    0.04885      0.06507

Time deciding shows how much time your bot has used for the entire round. This is interesting for seeing how your bot compares to others, but it’s also useful for timeouts, covered below.

Timeout

The server has some strict timeouts – we’re always tweaking the exact logic, and we’re hesitant to give too much information because, in a multithreaded environment, the time slice each bot has isn’t exclusive. But your bot has a maximum of 1 second (subject to change) per match. Typically that is no problem, as you can see in the game summary. Once your bot crosses the 1 second timeout, it will stop making moves (essentially forfeiting the rest of the moves).

Server Messaging

In the log file, you’ll now see clearer messaging when something happened – for example, if a player threw dynamite but was out of dynamite, the log will indicate this next to the score.

ExceptionMove

We standardized the error moves to “ExceptionMove.” If a player times out or throws an exception, their last move will be an ExceptionMove. This is visible in the log, and detectable like so:

    if (opponent.LastMove is ExceptionMove)
    {
        // player threw an exception or timed out
    }

This is a good time to mention that the opponent’s LastMove is null on the first move. Also, if a player throws “illegal dynamite” (that is, throws dynamite when he/she is out), their last move will be Dynamite. It’s up to you to figure out that they were out of dynamite!

Fresh Upgrade

One of Azure’s strengths is flexibility in deployment – when deploying, you have an option for an in-place upgrade, or you can deploy to staging and do a VIP (virtual IP) swap that allows you to seamlessly move a new deployment into production.
Because of the significant changes to our Bot Lab and the new features, we recommend deleting the old deployment first and then deploying the new package, rather than doing an in-place upgrade.

Player Log

And now the best part! There’s a private player log you can write to. This information will be appended to the bottom of the game log – but it’s only visible to you. We’re still testing this feature, but it should be helpful in isolating issues with your bot. To write to your log, you can do something like:

    if (!you.HasDynamite)
    {
        you.Log.AppendLine("I'm out of dynamite!");

        // opponent's log is always null!
        if (opponent.Log == null) { }
    }

Note: your opponent’s log is always null, so don’t try to use it! Also, logs are limited to a total of 300k – this should be more than enough, but once the log passes this mark, the engine won’t write to it (no exception is raised, though).
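Putting the pieces above together, a minimal bot might look something like the sketch below. The ExceptionMove type and the LastMove, HasDynamite, and Log members come from the snippets in this post; the MakeMove signature and the Player and Rock class names are my assumptions for illustration, not the actual MyBot API.

```
// Hypothetical bot sketch – names beyond those shown in this post
// (MakeMove, Player, Rock) are assumptions, not the real API.
public class MyBot
{
    public Move MakeMove(Player you, Player opponent)
    {
        // First move of the match: LastMove is null, so just open with rock.
        if (opponent.LastMove == null)
            return new Rock();

        // Opponent timed out or threw an exception on their last move.
        if (opponent.LastMove is ExceptionMove)
            return new Rock();

        // If we're out of dynamite, note it in the private player log.
        if (!you.HasDynamite)
            you.Log.AppendLine("Out of dynamite - falling back to rock.");

        return new Rock();
    }
}
```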

RPA: Why is my Bot Rejected?

We’ve had a few people ask about the following message when submitting bots:

    Bot contains invalid calls. Please review the official rules for more information.

In short, this message means the bot didn’t pass the static analysis tests. On the About the Challenge page, you’ll see this:

    While this list is subject to change, we do not allow any classes/methods from System.IO, System.Net, System.Reflection, System.Threading, System.Diagnostics, or System.GC.

That list is subject to change, and there are a few other red flags, like P/Invoke or unsafe code. Sometimes these are triggered by what would otherwise seem innocuous, such as calling GetType on an object – but that’s a reflection call, so it trips the static analysis. Another gotcha: the analysis looks for the presence of such code, not whether it’s reachable. In one case, the offending code was never called but was still present in the bot. So, if you’re getting the above message, look for those items, and if you have any questions, leave a comment here or send us a note through the site!
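To illustrate the GetType case, here is a sketch of the kind of code that looks harmless but would be flagged – the DynamiteMove class name is my placeholder for whatever the dynamite move type is actually called:

```
// Looks innocuous, but GetType() is a reflection call, so static
// analysis rejects the bot:
if (opponent.LastMove != null &&
    opponent.LastMove.GetType().Name == "DynamiteMove")  // rejected
{
    // react to dynamite...
}

// The language's type-test operator expresses the same check without
// an explicit reflection call:
if (opponent.LastMove is DynamiteMove)
{
    // react to dynamite...
}
```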

Rock, Paper, Azure: Why US Only?

Over the past few days, we’ve gotten many requests from those outside the United States who would like to play Rock, Paper, Azure. Some have even said, “I don’t care about the prizes, I just want to play!” So what’s the deal?

If we could simply enable the contest worldwide, that would be awesome! But when prizes are involved, it’s not a simple matter. There isn’t an easy mechanism by which to say, “it’s ok to enter, you just can’t win.” Just to provide a little transparency: there are legal hurdles, and frankly, the team that put this together is only a few people working on it part time. Because we’re in the U.S. subsidiary, our main focus (naturally) is on the U.S. – not that we want to exclude anyone, of course.

So what’s next? First, we’re working with our colleagues in other subsidiaries to enable this in more countries, if at all possible. It won’t happen in week 1, but hopefully we’ll get there over time. If you know your local DPE evangelism team, be sure to contact them and let them know you’d like to play – if you don’t know who that is, leave a comment here on my blog.

Rock, Paper, Azure Deep Dive: Part 1

If you’re not sure what Rock, Paper, Azure (RPA) is all about, check out the website or look over some of my recent posts. In this series of posts, I want to go into some of the technical nuts and bolts of the project.

First, you can download Aaron’s original project on GitHub (here and here). The first project is the Compete framework, an extensible framework designed to host games like Rock, Paper, Scissors Pro! (the second project). The idea, of course, is that other games can be created to work within the framework.

Aaron and the other contributors to the project (I remember Aaron telling me some others had helped with various pieces, but I don’t recall who did what) did a great job in assembling the solution. When moving it to Windows Azure, we had a number of issues – the bottom line is, our core requirements were a bit different from what was in the original solution. When I describe some of these changes in this and other posts, don’t mistake it for me being critical of Aaron’s project. Obviously, having used it at code camps and as the basis for RPA shows I have a high regard for the concept, and the implementation, in many parts, was quite impressive.

So, if you download those two projects from GitHub, the first challenge is getting them up and running. You’ll see in a few locations there are references to a local path – by default, I believe this is “c:\compete”. This is the local scratch folder for bots, games, the db4o database, and the log file. Getting this to work in Windows Azure was actually pretty straightforward. A Windows Azure project has several storage mechanisms; when it comes to NTFS disk I/O, you have two options: Local Storage, or Azure Drives. Azure Drives are VHD files stored in Azure Blob Storage that can be mounted by a VM. For our purposes, this was a little overkill, because we only needed the disk space as a scratch medium: the players and results were being stored in SQL Azure.
The first thing we needed to do to get local storage configured was add a local storage resource. In this case, we just created a local storage area called compete, 4 GB in size, set to clean itself if the role recycles. The next step was to remove any path references; in Compete.Site.Models, for example, you’ll see hard-coded directory references. Because there’s so much disk I/O going on, we created an AzureHelper project to help with the abstraction, with a simple GetLocalScratchFolder method that resolves the right place to put files. We then inject that call wherever a directory is needed (about a half dozen or so places, if memory serves).

The next major change was deciding: to Spark, or not to Spark? If you look at the project references (and in the views themselves, of course), you’ll see the Spark view engine is used. I’m no expert on Spark, but having worked with it some, I grew to like its simplicity. The problem is, getting Spark to work in .NET 4.0 with MVC 2 was, at the time, difficult. That doesn’t appear to be the case today, as Spark has been revived a bit on their web page, but we started this a few weeks earlier (before this existed), and while we recompiled the engine and got it working, we ultimately decided to stick with what we knew best.

The end result is the Bot Lab project. While we’re using RPA with the idea that it can help others learn about Azure while having fun, it’s also a great example of why to use Windows Azure. The Bot Lab project is around 1 MB in size, and the Bot Lab itself can be up and running in no time (open solution, hit F5). Imagine if you wanted to host an RPS-style competition at a code camp. If you have a deployment package, you could take the package and host it locally if you wanted, or upload it to Windows Azure – hosting an extra small instance for 6 hours at a code camp would cost $0.30.
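For reference, a hedged sketch of the local storage setup described above – the LocalStorage element follows the standard ServiceDefinition.csdef schema, but the exact shape of GetLocalScratchFolder is my guess, not the actual AzureHelper code:

```
// ServiceDefinition.csdef declares the scratch area:
//   <LocalStorage name="compete" sizeInMB="4096" cleanOnRoleRecycle="true" />
//
// A GetLocalScratchFolder along these lines resolves it at runtime.
using Microsoft.WindowsAzure.ServiceRuntime;

public static class AzureHelper
{
    public static string GetLocalScratchFolder()
    {
        // RoleEnvironment only exists inside an Azure role; fall back to
        // the original local default when running outside Azure.
        if (RoleEnvironment.IsAvailable)
            return RoleEnvironment.GetLocalResource("compete").RootPath;

        return @"c:\compete";
    }
}
```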
Best of all, there’s no configuration that needs to be done (except for what the application dictates, like a username or password). This, if you ask me, is one of the greatest strengths of platform as a service.

RockPaperAzure Coding Challenge

I’m pleased to announce that we’re finally launching our Rock, Paper, Azure challenge! For the past couple of months, I’ve been working with Jim O’Neil and Peter Laudati on a new Azure event/game called Rock, Paper, Azure. The concept is this: you (hopefully) code a “bot” that plays rock, paper, scissors against the other players in the game. Simple, right?

Here’s where it gets interesting. Rock, paper, scissors by itself isn’t all that interesting (after all, you can’t really beat random in a computer game – assuming you can figure out a good random generator!), so there are two additional moves in the game. The first is dynamite, which beats rock, paper, and scissors. Sounds very powerful – and it is – however, you only have a few per match, so you need to decide when to use them. The other move is a water balloon. The water balloon beats dynamite, but it loses to everything else. You have unlimited water balloons.

With these additional rules, it becomes a challenge to craft an effective strategy. We do what we call “continuous integration” on the leaderboard – as soon as your bot enters, it’s an all-out slugfest, and you see where you are in near real time. In fact, just a few minutes ago, a few of us playing a test round were constantly tweaking our bots to defeat each other – it was a lot of fun trying to outthink each other.

Starting next week, we’ve got some great prizes on the line – including Xbox systems, Kinect, and gift cards – so be sure to check it out! The project homepage is here: See you in the game!

Creating a Quick Progress Visual like UpdateProgress

On Thursday, we’re going to go live with our RockPaperAzure coding challenge – so it’s time to brain-dump some of the lessons learned while building out the solution – some small (like this one), some large.

When developing the RPA website, I chose to use ASP.NET Web Forms and ASP.NET Ajax. The reason for this is simple: I’m a big fan of ASP.NET MVC, but given the very short time frame, Web Forms was the fastest route to completion. (For those familiar with Aaron’s base project on GitHub, I thought about doing it all client side via jQuery, but I wasn’t impressed with the perf, so I decided to go server side.)

ASP.NET Ajax has a nifty UpdateProgress control that is useful during async postbacks to show activity, and while it has a display delay property, it doesn’t have a minimum display time property. This is problematic because if you want a small spinning icon/wait cursor, displaying it too briefly is just an annoyance. (Of course, brief is good, because it means your page is snappy.)

One of the solutions to this (as I read on Stack Overflow) is to simply put a Thread.Sleep in the server postback method, causing the round trip to take longer and thus display your animation longer. While this will work, it will crush scalability. Depending on how many concurrent users you have, a Thread.Sleep on an ASP.NET thread should be avoided at all costs, and this wasn’t something I was willing to do. There are a few commercial controls that will do this, and indeed, using the Ajax toolkit, you can develop a control that accomplishes the same thing. But I wanted something I could develop in 5 minutes – basically, in less time than it takes me to write this blog post.
I already have the spinning icon, so I wrapped it in a div or span (whatever is appropriate):

    <span id="ProgressTemplate" style="visibility: hidden;">
        <img src="~/images/rpadie.gif" clientidmode="Static" runat="server"
            id="imganim" height="18" width="18" />
    </span>

Then we just plug into the request model to show/hide the control. It’s a little hack-ish – for example, I reset the visibility to visible in EndRequest because the span, with a default of hidden, will disappear on its own. This can be fixed by making it a server control and enabling viewstate. Still, not too bad for 5 minutes:

    <script language="javascript" type="text/javascript">
    <!--
        var prm = Sys.WebForms.PageRequestManager.getInstance();

        prm.add_initializeRequest(InitializeRequest);
        prm.add_endRequest(EndRequest);
        var postBackElement;

        function InitializeRequest(sender, args) {
            postBackElement = args.get_postBackElement();
            $get('ProgressTemplate').style.visibility = 'visible';
        }

        function EndRequest(sender, args) {
            $get('ProgressTemplate').style.visibility = 'visible';
            setTimeout('hideProgress()', 1000);
        }

        function hideProgress() {
            $get('ProgressTemplate').style.visibility = 'hidden';
        }
    // -->
    </script>

When the Ajax request comes back, it will keep the span visible for 1 second. Ideally, a nice modification would use an elapsed-time mechanism, so if the request actually took 1 second for some reason, it would hide the span without delay. This isn’t too hard to implement, but it broke the 5 minute goal.
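For what it’s worth, the elapsed-time variation mentioned above can be sketched in a few lines: record when the request started, and only delay the hide for whatever portion of the minimum display time remains. The function names here are mine, not from the original page:

```javascript
// Returns how many more milliseconds the progress indicator should stay
// visible so that it is shown for at least minVisibleMs in total.
function remainingDisplayTime(requestStartMs, nowMs, minVisibleMs) {
    var elapsed = nowMs - requestStartMs;
    return Math.max(0, minVisibleMs - elapsed);
}

// Hooked up to the handlers above, it would look something like:
//   In InitializeRequest:  requestStart = new Date().getTime();
//   In EndRequest:
//     setTimeout(hideProgress,
//         remainingDisplayTime(requestStart, new Date().getTime(), 1000));
```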

Storing Data in Azure: SQL, Tables, or Blobs?

While building the back end to host our “Rock, Paper, Scissors in the cloud” game, we faced the question of where and how to store the log files for the games that are played. In my last post, I explained a bit about the idea; in the game, log files are essential for tuning your bot to play effectively. To give a quick example of what the top of a log file might look like: in this match, I (bhitney) was playing a house team (HouseTeam4). Each match is made up of potentially thousands of games, with one game per line. From the game’s perspective, we only care about the outcome of the entire match, not the individual games within the match – but we need to store the log for the user.

There’s no right or wrong answer for storing data – but like everything else, understanding the pros and cons is the key.

Azure Tables

We immediately ruled out storing a whole log in Azure Tables, simply because the entity size is too big. But what if we stored each game (each line of the log) in an Azure Table? After all, Azure Tables shine at large, unstructured data. This would be ideal because we could ask specific questions of the data – such as, “show me all games where…”. Additionally, size is really not a problem we’d face – tables can scale to TBs.

But storing individual games isn’t a realistic option. The number of matches played in a 100-player round is 4,950. Each match has around 2,000 games, so that means we’d be looking at 9,900,000 rows per round. At a few hundred milliseconds per insert, it would take almost a month to insert that much data. Even if we could get latency down to a blazing 10 ms, it would still take over a day. Cost-wise, it wouldn’t be too bad: about $10 per round in transaction costs.

Blob Storage

Blob storage is a good choice as a file repository. Latency-wise, we’d still be looking at around 15 minutes per round.
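The back-of-the-envelope Azure Tables numbers above follow from the round structure – every pair of players meets once – and are easy to check:

```
int players = 100;
int matches = players * (players - 1) / 2;   // 4,950 pairings per round
int gamesPerMatch = 2000;
int rows = matches * gamesPerMatch;          // 9,900,000 rows per round

double secondsAt200ms = rows * 0.200;        // ~1,980,000 s, almost a month
double secondsAt10ms  = rows * 0.010;        // ~99,000 s, over a day
```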
We almost went this route, but since we’re using SQL Azure anyway for players and bots, it seemed excessive to insert metadata into SQL Azure and then the log files into blob storage. If we were playing with tens of thousands of people, that kind of scalability would be really important. But what about Azure Drives? We ruled drives out because we wanted the flexibility of multiple concurrent writers.

SQL Azure

Storing binary data in a database (even if that binary data is a text file) typically falls under the “guilty until proven innocent” rule. Meaning: assume it’s a bad idea. Still, this is the option we decided to pursue. By using gzip compression on the text, the resulting binary was quite small and didn’t add significant overhead to the query already used to insert the match results. Additionally, connection pooling makes those base inserts incredibly fast – much, much faster than blob/table storage.

One other side benefit of this approach is that we can serve the gzip stream without decompressing it. This saves processing power on the web server, and it also takes a 100–200k log file down to typically less than 10k, saving a great deal of latency and bandwidth cost.

Here’s a simple way to take some text (in our case, the log file) and get a byte array of the compressed data.
This can then be inserted into a varbinary(max) (or deprecated image) column in a SQL database:

    using System.IO;
    using System.IO.Compression;
    using System.Text;

    public static byte[] Compress(string text)
    {
        byte[] data = Encoding.UTF8.GetBytes(text);
        var stream = new MemoryStream();
        using (Stream ds = new GZipStream(stream, CompressionMode.Compress))
        {
            ds.Write(data, 0, data.Length);
        }

        // the GZipStream must be closed before reading the buffer,
        // or the final block won't be flushed
        byte[] compressed = stream.ToArray();

        return compressed;
    }

And to get that string back:

    public static string Decompress(byte[] compressedText)
    {
        try
        {
            if (compressedText.Length == 0)
            {
                return string.Empty;
            }

            using (var input = new MemoryStream(compressedText))
            using (var zip = new GZipStream(input, CompressionMode.Decompress))
            using (var output = new MemoryStream())
            {
                // decompress the whole stream; the gzip format tracks its
                // own length, so no length prefix is needed
                zip.CopyTo(output);
                return Encoding.UTF8.GetString(output.ToArray());
            }
        }
        catch
        {
            return string.Empty;
        }
    }

In our case, though, we don’t really need to decompress the log file, because we can let the client browser do that! We have an HTTP handler for this, and quite simply, it looks like:

    context.Response.AddHeader("Content-Encoding", "gzip");
    context.Response.ContentType = "text/plain";
    context.Response.BinaryWrite(data.LogFileRaw); // the byte array
    context.Response.End();

Naturally, the downside of this approach is that we don’t handle a browser that doesn’t accept gzip encoding gracefully. Fortunately, it’s not 1993 anymore, so that’s not a major concern.

Getting Ready to Rock (Paper and Scissors)

We’re gearing up for something that I think will be truly exciting – but I’m getting ahead of myself. This is likely going to be a long series of posts, so let me start from the beginning.

About a year or so ago, at the Raleigh Code Camp, I stumbled into a coding competition run by James Avery and Nate Kohari during a few hours in the middle of the day. The concept was simple: write a program that plays “Rock, Paper, Scissors.” You would take your code, compiled as a DLL, and upload it to their machine via a website, and the site would run your “bot” against everyone else. Thus, a coding competition!

I was intrigued. During the first round, I didn’t quite get the competition aspect, since returning a random move of rock, paper, or scissors seems to be about the best strategy. Still, though, you start thinking, “What if my opponent is even lazier and just throws rock all the time?” So you build in a little logic to detect that.

During round 2, though, things started getting interesting. In addition to the normal RPS moves, a new move called Dynamite was introduced. Dynamite can beat rock, paper, and scissors, but you only have a few to use per match. (In this case, a match is when two players square off – the first player to 1,000 points wins. You win a point by beating the other player in a single ‘throw.’ Each player has 100 dynamite per match.) Clearly, your logic now is fairly important. Do you throw dynamite right away to gain the upper hand, or is that too predictable? Do you throw dynamite after you tie? All of a sudden, it’s no longer a game of chance.

Now enter round 3. In round 3, a new move, Water Balloon, is introduced. Water Balloon can defeat dynamite, but it loses to everything else. So, if you can predict when your opponent is likely to throw dynamite, you can throw a water balloon and steal the point – albeit with a little risk.
This was a lot of fun, but I was also intrigued by the back end that supported all of this – and, from a devious viewpoint, the security considerations, which are enormous. James pointed me to the base of the project, available on GitHub from Aaron Jensen, the author of the Compete framework (the underlying engine) and of the Rock, Paper, Scissors game that uses it. You can download the project today and play around. At a couple of code camps in the months that followed, I ran the same competition, and it went pretty well overall.

So, what does this have to do with anything, particularly Azure? Two things. First and foremost, I feel that Azure is a great platform for projects like this. If you download the code, you’ll realize there’s a little setup work involved. I admit it took me some time to get it working, dealing with IIS, paths, etc. If I wanted to run this for a code camp again, it would be far easier to take an Azure package at around 5 MB, click deploy, and direct people to the site. Leaving a small instance up for the day would be cheap. I like no hassle.

The other potential is using the project as a learning tool on Azure. You might remember that my colleague Jim and I did something similar last year with our Azure @Home series – we used Azure to contribute back to Stanford’s Folding@home project. It was a great way to do something fun, useful, and educational.

In the coming weeks, we’re rolling out a coding competition in Azure that plays RPS – the idea is that, as a participant, you can host your own bots in a sandbox for testing, and the main game engine can take those bots and continually play them in the background. I’m hoping it’s a lot of fun, slightly competitive, and educational at the same time. We’ve invested a bit more into polishing this than we did with @home, and I’m getting excited about unveiling it. Over the next few posts, I’ll talk more about what we did, what we’ve learned, and how the project is progressing!

Connected Show: Migrating To Azure

I recently sat down with Peter Laudati, my cloud colleague up in the NY/NJ area, to discuss Worldmaps and the migration to the cloud on Peter’s and Dmitry’s Connected Show podcast. Thanks, guys, for the opportunity!

Connected Show – Episode #40 – Migrating World Maps to Azure

A new year, a new episode. This time, the Connected Show hits 40! In this episode, guest Brian Hitney joins Peter to discuss how he migrated the My World Maps application to Windows Azure. Fresh off his Azure Firestarter tour through the eastern US, Brian talks about migration issues, scalability challenges, and blowing up shared hosting. Also, Dmitry and Peter rap about Dancing with the Stars, the Xbox 360 Kinect, Dmitry’s TWiT application for Windows Phone 7, and Dmitry’s outdoor adventures at ‘Camp Gowannas’. Show Link:
