Storing Data in Azure: SQL, Tables, or Blobs?

While building the back end to host our “Rock, Paper, Scissors in the cloud” game, we faced a decision about where and how to store the log files for the games that are played. In my last post, I explained a bit about the idea; in the game, log files are essential for tuning your bot to play effectively. To give a quick example of what the top of a log file might look like: in this match, I (bhitney) was playing a house team (HouseTeam4) … each match is made up of potentially thousands of games, with one game per line. From the game’s perspective, we only care about the outcome of the entire match, not the individual games within the match – but we need to store the log for the user. There’s no right or wrong answer for storing data – but like everything else, understanding the pros and cons is the key.

Azure Tables

We immediately ruled out storing each log file as a single Azure Table entity, simply because the logs are too big to fit within the entity size limits. But what if we stored each game (each line of the log) in an Azure Table? After all, Azure Tables shine at handling large amounts of loosely structured data. This would be ideal because we could ask specific questions of the data – such as, “show me all games where…”. Additionally, size is really not a problem we’d face – tables can scale to terabytes.

But storing individual games isn’t a realistic option. In a 100-player round, every player plays every other player, so the number of matches is 100 × 99 / 2 = 4,950. Each match has around 2,000 games, which means we’d be looking at 9,900,000 rows per round. At a few hundred milliseconds per insert, it would take almost a month to insert that much data. Even if we could get latency down to a blazing 10 ms, it would still take over a day. Cost-wise, it wouldn’t be too bad: about $10 per round in transaction costs.

Blob Storage

Blob storage is a good choice as a file repository. Latency-wise, we’d still be looking at around 15 minutes per round. We almost went this route, but since we’re using SQL Azure anyway for players and bots, it seemed excessive to insert metadata into SQL Azure and then put the log files into Blob Storage. If we were playing with tens of thousands of people, that kind of scalability would be really important. But what about Azure Drives? We ruled drives out because we wanted the flexibility of multiple concurrent writers.

SQL Azure

Storing binary data in a database (even if that binary data is a text file) typically falls under the “guilty until proven innocent” rule – meaning: assume it’s a bad idea. Still, this is the option we decided to pursue. By using gzip compression on the text, the resulting binary is quite small and doesn’t add significant overhead to the query we already use to insert the match results. Additionally, connection pooling makes those inserts incredibly fast – much, much faster than blob or table storage.

One other side benefit of this approach is that we can serve the GZip stream without decompressing it. This saves processing power on the web server, and it also takes a 100–200 KB log file down to typically less than 10 KB, saving a great deal of latency and bandwidth cost.

Here’s a simple way to take some text (in our case, the log file) and get a byte array of the compressed data.
This can then be inserted into a varbinary(max) column (or the deprecated image type) in a SQL database:

// Requires: using System.IO; using System.IO.Compression; using System.Text;

public static byte[] Compress(string text)
{
    byte[] data = Encoding.UTF8.GetBytes(text);

    using (var stream = new MemoryStream())
    {
        using (Stream ds = new GZipStream(stream, CompressionMode.Compress))
        {
            ds.Write(data, 0, data.Length);
        }

        // The GZipStream must be disposed before ToArray is called,
        // so that the compressed data is fully flushed to the buffer.
        return stream.ToArray();
    }
}

And to get that string back:

public static string Decompress(byte[] compressedText)
{
    try
    {
        if (compressedText == null || compressedText.Length == 0)
        {
            return string.Empty;
        }

        // Decompress the GZip bytes back into UTF-8 text.
        using (var input = new MemoryStream(compressedText))
        using (var zip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            zip.CopyTo(output);
            return Encoding.UTF8.GetString(output.ToArray());
        }
    }
    catch
    {
        return string.Empty;
    }
}

In our case, though, we don’t really need to decompress the log file on the server, because we can let the client browser do that! We have an HTTP handler that serves the compressed bytes directly, and quite simply it looks like:

context.Response.AddHeader("Content-Encoding", "gzip");
context.Response.ContentType = "text/plain";
context.Response.BinaryWrite(data.LogFileRaw); // the byte array
context.Response.End();

Naturally, the downside of this approach is that if a browser doesn’t accept GZip encoding, we don’t handle that gracefully. Fortunately it’s not 1993 anymore, so that’s not a major concern.
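If we ever did want to handle a client that doesn’t advertise gzip support, the handler could check the Accept-Encoding request header and fall back to decompressing on the server. The sketch below is just that – a sketch, not what we actually shipped. The WriteLogFile method and the way the compressed bytes are handed in are placeholders, and Decompress refers to the helper shown above.

using System;
using System.Web;

public static class LogFileResponder
{
    // Serves a stored GZip byte array, falling back to server-side
    // decompression when the client doesn't advertise gzip support.
    public static void WriteLogFile(HttpContext context, byte[] compressedLog)
    {
        string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? string.Empty;

        context.Response.ContentType = "text/plain";

        if (acceptEncoding.IndexOf("gzip", StringComparison.OrdinalIgnoreCase) >= 0)
        {
            // Normal case: hand the compressed bytes straight to the browser.
            context.Response.AddHeader("Content-Encoding", "gzip");
            context.Response.BinaryWrite(compressedLog);
        }
        else
        {
            // Rare case: decompress on the server (the Decompress helper above)
            // and send plain text.
            context.Response.Write(Decompress(compressedLog));
        }

        context.Response.End();
    }
}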

Getting Ready to Rock (Paper and Scissors)

We’re gearing up for something that I think will be truly exciting – but I’m getting ahead of myself. This is likely going to be a long series of posts, so let me start from the beginning.

About a year or so ago, at the Raleigh Code Camp, I stumbled into a coding competition run by James Avery and Nate Kohari during a few hours in the middle of the day. The concept was simple: write a program that plays “Rock, Paper, Scissors.” You compile your code as a DLL, upload it to their machine via a website, and the site runs your “bot” against everyone else’s. Thus, a coding competition!

I was intrigued. During the first round, I didn’t quite get the competition aspect, since returning a random move of rock, paper, or scissors seems to be about the best strategy. Still, you start thinking, “What if my opponent is even lazier and just throws rock all the time?” So you build in a little logic to detect that (there’s a rough sketch of that idea below).

During round 2, though, things started getting interesting. In addition to the normal RPS moves, a new move called Dynamite was introduced. Dynamite beats rock, paper, and scissors, but you only have a few to use per match. (In this case, a match is when two players square off – the first player to 1,000 points wins. You win a point by beating the other player in a single ‘throw.’ Each player has 100 dynamite per match.) Clearly, your logic now matters. Do you throw dynamite right away to gain the upper hand, or is that too predictable? Do you throw dynamite after a tie? All of a sudden, it’s no longer a game of chance.

Now enter round 3. In round 3, a new move, Water Balloon, is introduced. Water Balloon defeats dynamite but loses to everything else. So, if you can predict when your opponent is likely to throw dynamite, you can throw a water balloon and steal the point – albeit with a little risk.

This was a lot of fun, but I was intrigued by the back end that supported all of this – and, from a devious viewpoint, the security considerations, which are enormous. James pointed me to the project, available on GitHub from Aaron Jensen, the author of both the Compete framework (the underlying engine) and the Rock, Paper, Scissors game that uses it. You can download the project today and play around. At a couple of code camps in the months that followed, I ran the same competition, and it went pretty well overall.

So, what does this have to do with anything, particularly Azure? Two things. First and foremost, I feel that Azure is a great platform for projects like this. If you download the code, you’ll realize there’s a little setup work involved. I admit it took me some time to get it working, dealing with IIS, paths, etc. If I wanted to run this for a code camp again, it would be far easier to take an Azure package of around 5 MB, click deploy, and point people to the site. Leaving a small instance up for the day would be cheap. I like no hassle.

The other potential is using the project as a learning tool on Azure. You might remember that my colleague Jim and I did something similar last year with our Azure @home series – we used Azure to contribute back to Stanford’s Folding@home project. It was a great way to do something fun, useful, and educational.
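To make that “detect the lazy opponent” idea concrete, here’s a rough sketch of the kind of logic I mean. The Move enum and the bot’s shape below are invented purely for illustration – the real Compete framework and RPS game define their own bot contract – so treat this as strategy pseudocode that happens to compile:

using System;

// Illustrative only: the actual Compete/RPS project defines its own bot interface.
public enum Move { Rock, Paper, Scissors }

public class CounterRockBot
{
    private readonly Random random = new Random();
    private int opponentMoves;
    private int opponentRocks;

    // Call after every throw with whatever the opponent just played.
    public void RecordOpponentMove(Move move)
    {
        opponentMoves++;
        if (move == Move.Rock)
        {
            opponentRocks++;
        }
    }

    public Move GetNextMove()
    {
        // If the opponent has thrown rock the vast majority of the time,
        // stop being random and punish it with paper.
        if (opponentMoves >= 10 && opponentRocks > opponentMoves * 0.8)
        {
            return Move.Paper;
        }

        // Otherwise fall back to the baseline strategy: a random throw.
        return (Move)random.Next(3);
    }
}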
In the coming weeks, we’re rolling out a coding competition in Azure that plays RPS. The idea is that as a participant, you can host your own bots in a sandbox for testing, and the main game engine can take those bots and continually play them in the background. I’m hoping it’s a lot of fun, slightly competitive, and educational at the same time. We’ve invested a bit more into polishing this than we did with @home, and I’m getting excited about unveiling it. Over the next few posts, I’ll talk more about what we did, what we’ve learned, and how the project is progressing!

Connected Show: Migrating To Azure

I recently sat down with Peter Laudati, my cloud colleague up in the NY/NJ area, to discuss Worldmaps and the migration to the cloud on Peter and Dmitry’s Connected Show podcast. Thanks, guys, for the opportunity!

Connected Show – Episode #40 – Migrating World Maps to Azure

A new year, a new episode. This time, the Connected Show hits 40! In this episode, guest Brian Hitney joins Peter to discuss how he migrated the My World Maps application to Windows Azure. Fresh off his Azure Firestarter tour through the eastern US, Brian talks about migration issues, scalability challenges, and blowing up shared hosting.

Also, Dmitry and Peter rap about Dancing with the Stars, the Xbox 360 Kinect, Dmitry’s TWiT application for Windows Phone 7, and Dmitry’s outdoor adventures at ‘Camp Gowannas’.

Show Link: http://bit.ly/dVrIXM

Folding@home SMP Client

Wouldn’t you know it! As soon as we get admin rights in Azure in the form of Startup Tasks and the VM Role, the fine folks at Stanford release a new SMP client that doesn’t require administrative rights. This is great news, but let me provide a little background on the problem and why this is good for our @home project.

In the @home project, we leverage Stanford’s console client in the worker roles that run the Folding@home application (there’s a simplified sketch of that pattern at the end of this post). The application, however, is single threaded. During our @home webcasts, where we’ve built these clients, we’ve walked through how to select the appropriate VM size – anywhere from a single-core (small) instance up to an 8-core (XL) instance. For our purposes, using a small, single-core instance is best. Because the costs are linear (two single-core instances cost the same as one dual-core instance), we might as well just launch one small VM for each worker instance we need. The extra processors wouldn’t be utilized, and it didn’t matter whether we had one quad-core machine running 4 folding clients or 4 small VMs each running their own.

The downside to this approach is that the work units assigned to our single-core VMs are relatively small, and consequently the points received are very small. In addition, bonus points are awarded based on how fast work is completed, which means our single-core machines won’t be earning bonus points. Indeed, if you look at the number of Work Units our team has done, it’s a pretty impressive number compared to our peers, but our score isn’t all that great. We’ve processed some 180,000 WUs – that would take one of our small VMs, working alone, some 450 years to complete! Points-wise, though, it’s somewhat ho-hum.

Stanford has begun putting together some high-performance clients that make use of multiple cores; until now, however, they were difficult to install in Windows Azure. With the VM Role and admin startup tasks just announced at PDC, we could now accomplish this inside Azure, but it turns out Stanford (a few months back, actually) put together a drop-in replacement that is multicore capable. Read their install guide here. This is referred to as the SMP (symmetric multiprocessing) client. The end result is that instead of having, for example, 8 single-core clients running the folding app, we can have one 8-core machine. While it will crunch fewer Work Units, the power and point value are far superior.

To test this, I set up a new account with the username bhitney-test. After a couple of days, here is the result (everyone else is using the non-SMP client): 36 Work Units processed for roughly 97,000 points – an average of about 2,716 points per WU. That’s significantly higher than the single-core client, which pulls in about 100 points per WU. The 2,716 average is also quite a bit lower than what the SMP client is doing right now, because bonus points don’t kick in for roughly the first dozen work units. Had we been able to use the SMP client from the beginning, we’d be sitting pretty at a much higher rating – but that’s OK, it’s not about the points. :)
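For anyone curious how a worker role “runs” the console client at all, the pattern is essentially to copy the client to local storage and shell out to it from the role’s run loop. Below is a heavily simplified sketch of that idea – it is not the actual @home project code, and the executable path and arguments are placeholders you’d fill in for whichever Folding@home client you deploy:

using System.Diagnostics;
using System.IO;

public static class FoldingClientRunner
{
    // Launches the Folding@home console client as a child process and blocks
    // until it exits. The path and arguments are placeholders.
    public static void RunClient(string clientExePath, string arguments)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = clientExePath,
            Arguments = arguments,
            WorkingDirectory = Path.GetDirectoryName(clientExePath),
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (Process client = Process.Start(startInfo))
        {
            client.WaitForExit();
        }
    }
}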
