Folding@home SMP Client

Wouldn’t you know it!  As soon as we get admin rights in Azure in the form of Startup Tasks and VM Role, the fine folks at Stanford have released a new SMP client that doesn’t require administrative rights.  This is great news, but let me provide a little background on the problem and why this is good for our @home project.

In the @home project, we leverage Stanford’s console client in the worker roles that run their Folding@home application.  The application, however, is single threaded.  During our @home webcasts where we’ve built these clients, we’ve walked through how to select the appropriate VM size – for example, a single core (small) instance, all the way up to an 8 core (XL) instance.  For our purposes, using a small, single core instance is best.  Because the costs are linear (2 single-core instances cost the same as a single dual-core instance), we might as well just launch 1 small VM for each worker role we need.  The extra processors wouldn’t be utilized, and it didn’t matter if we had 1 quad core running 4 instances, or 4 small VMs each with their own instance.

The downside to this approach is that the work units assigned to our single core VMs were relatively small, and consequently the points received were very small.  In addition, bonus points are offered based on how fast work is done, which means that for single core machines, we won’t be earning bonus points.  Indeed, if you look at the number of Work Units our team has done, it’s a pretty impressive number compared to our peers, but our score isn’t all that great.  As you can see, we’ve processed some 180,000 WUs – that would take one of our small VMs, working alone, some 450 years to complete!  Points-wise, though, it’s somewhat ho-hum.

Stanford has begun putting together some High Performance Clients that make use of multiple cores; until now, however, they were difficult to install in Windows Azure.  With VM Role and admin startup tasks just announced at PDC, we could now accomplish these tasks inside of Azure, but it turns out Stanford (a few months back, actually) put together a drop-in replacement that is multicore capable.  Read their install guide here.  This is referred to as the SMP (symmetric multiprocessing) client.  The end result is that instead of having (for example) 8 single-core clients running the folding app, we can instead have one 8-core machine.  While it will crunch fewer Work Units, the power and point value are far superior.

To test this, I set up a new account with a username of bhitney-test.  After a couple of days, this is the result (everyone else is using the non-SMP client): 36 Work Units processed for 97k points is an average of about 2,716 points per WU.  That’s significantly higher than the single core client, which pulls in about 100 points per WU.  The 2,716 average is quite a bit lower than what it is doing right now, because bonus points don’t kick in for about the first dozen items.  Had we been able to use the SMP client from the beginning, we’d be sitting pretty at a much higher rating – but that’s ok, it’s not about the points. :)

Distributed Cache in Azure: Part III

It has been a while since my last post on this … but here is the completed project – just in time for distributed cache in the Azure AppFabric!  : )  In all seriousness, even with the AppFabric Cache, this is still a viable solution for smaller scale applications.

To recap, the premise here is that we have multiple website instances all running independently, but we want to be able to sync cache between them.  Each instance will maintain its own copy of an object in the cache, but in the event that one instance sees a reason to update the cache (for example, a user modifies their account or some other non-predictable action occurs), the other instances can pick up on this change.  It’s possible to pass objects across the WCF service; however, because each instance knows how to get the objects, it’s a bit cleaner to broadcast ‘flush’ commands.  So that’s what we’ll do.  While it’s completely possible to run this outside of Azure, one of the nice benefits is that the Azure fabric controller maintains the RoleEnvironment class, so all of your instances can (if Internal Endpoints are enabled) be aware of each other, even if you spin up new instances or reduce instance counts.

You can download this sample project here:

When you run the project, you’ll see a simple screen like so: Specifically, pay attention to the Date column.  Here we have 3 webrole instances running.  Notice that the Date matches – this is the last updated date/time of some fictitious settings we’re storing in Azure Storage.  If we add a new setting or just click Save New Settings, the webrole updates the settings in Storage, and then broadcasts a ‘flush’ command to the other instances.  After clicking, notice the date on all three instances changes to reflect the update.

In a typical cache situation, you’d have no way to easily update the other instances to handle ad-hoc updates.  Even if you don’t need a distributed cache solution, the code base here may be helpful for getting started with WCF services for inter-role communication.  Enjoy!
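If you’re curious what the broadcast itself might look like, here’s a minimal sketch (not the project source verbatim) of pushing a one-way FlushCache call to every other instance over an internal endpoint, using the INotificationService contract shown in Part II.  The endpoint name "NotificationService" and the binding choice are assumptions:

    using System.ServiceModel;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class CacheNotifier
    {
        public static void BroadcastFlush()
        {
            foreach (var instance in RoleEnvironment.CurrentRoleInstance.Role.Instances)
            {
                // skip ourselves; we've already flushed locally
                if (instance.Id == RoleEnvironment.CurrentRoleInstance.Id) continue;

                var endpoint = instance.InstanceEndpoints["NotificationService"];
                var address = new EndpointAddress(
                    string.Format("net.tcp://{0}", endpoint.IPEndpoint));

                // internal endpoints aren't secured, so security is off here
                var factory = new ChannelFactory<INotificationService>(
                    new NetTcpBinding(SecurityMode.None), address);

                var channel = factory.CreateChannel();
                channel.FlushCache();   // one-way call defined in the WCF contract
                ((IClientChannel)channel).Close();
            }
        }
    }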

Distributed Cache in Azure: Part II

In my last post, I talked about creating a simple distributed cache in Azure.  In reality, we aren’t creating a true distributed cache – what we’re going to do is allow each server to manage its own cache, but we’ll use WCF and inter-role communication to keep them in sync.  The downside of this approach is that we’re wasting n times more RAM, because each server has to maintain its own copy of the cached item.  The upside is: this is easy to do.

So let’s get the obvious thing out of the way.  Using the built-in ASP.NET Cache, you can add something to the cache like so, inserting an object that expires in 30 minutes:

    Cache.Insert("some key",
        someObj,
        null,
        DateTime.Now.AddMinutes(30),
        System.Web.Caching.Cache.NoSlidingExpiration);

With this project, you certainly could use the ASP.NET Cache, but I decided to use the Patterns and Practices Caching Block.  The reason for this: it works in Azure worker roles.  Even though we’re only looking at the web roles in this example, it’s flexible enough to go into worker roles, too.  To get started with the caching block, you can download it here on CodePlex.  The documentation is pretty straightforward, but what I did was just set up the caching configuration in the web.config file.  For worker roles, you’d use an app.config:

    <cachingConfiguration defaultCacheManager="Default Cache Manager">
      <backingStores>
        <add name="inMemory"
             type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching" />
      </backingStores>

      <cacheManagers>
        <add name="Default Cache Manager"
             type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching"
             expirationPollFrequencyInSeconds="60"
             maximumElementsInCacheBeforeScavenging="250"
             numberToRemoveWhenScavenging="10"
             backingStoreName="inMemory" />
      </cacheManagers>
    </cachingConfiguration>

The next step was creating a cache wrapper in the application.  Essentially, it’s a simple static class that wraps all the inserts/deletes/etc. on the underlying cache.  It doesn’t really matter what the underlying cache is.  The wrapper is also responsible for notifying other roles about a cache change.  If you’re a purist, you’re going to see that this shouldn’t be a wrapper, but instead be implemented as a full-fledged cache provider, since it isn’t just wrapping the functionality.  That’s true, but again, I’m going for simplicity here – as in, I want this up and running _today_, not in a week.

Remember that the specific web app dealing with this request knows whether or not to flush the cache.  For example, it could be that a customer updates their profile, or some other action occurs that only this server knows about.  So when adding or removing items from the cache, we send a notify flag that instructs all other services to get notified…

    public static void Add(string key, object value, CacheItemPriority priority,
        DateTime expirationDate, bool notifyRoles)
    {
        _Cache.Add(key,
            value,
            priority,
            null,
            new AbsoluteTime(expirationDate)
            );

        if (notifyRoles)
        {
            // we add locally, but tell the other instances to drop their stale copy
            NotificationService.BroadcastCacheRemove(key);
        }
    }

    public static void Remove(string key, bool notifyRoles)
    {
        _Cache.Remove(key);

        if (notifyRoles)
        {
            Trace.TraceWarning(string.Format("Removed key '{0}'.", key));
            NotificationService.BroadcastCacheRemove(key);
        }
    }

The Notification Service is surprisingly simple, and this is the cool part about the Windows Azure platform.
Within the ServiceDefinition file (or through the properties page) we can simply define an internal endpoint (a sketch of what this looks like appears at the end of this post).  This allows all of our instances to communicate with one another.  Even better, this is all maintained by the static RoleEnvironment class.  So, as we add or remove instances in our app, everything magically works.  A simple WCF contract to test this prototype looked like so:

    [ServiceContract]
    public interface INotificationService
    {
        [OperationContract(IsOneWay = true)]
        void RemoveFromCache(string key);

        [OperationContract(IsOneWay = true)]
        void FlushCache();

        [OperationContract(IsOneWay = false)]
        int GetCacheItemCount();

        [OperationContract(IsOneWay = false)]
        DateTime GetSettingsDate();
    }

In this case, I want to be able to tell another service to remove an item from its cache, to flush everything in its cache, and to give me the number of items in its cache as well as the ‘settings date’ – the last time the settings were updated.  This is largely for prototyping, to make sure everything is in sync.

We’ll complete this in the next post, where I’ll attach the project you can run yourself, but the next steps are creating the service and a test app to use it.  Check back soon!
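As referenced above, here’s a minimal sketch of what the internal endpoint definition in ServiceDefinition.csdef might look like; the endpoint name is an assumption that matches the WCF sketches in this series:

    <!-- inside the <WebRole> (or <WorkerRole>) element of ServiceDefinition.csdef -->
    <Endpoints>
      <!-- tcp suits the net.tcp WCF binding used for inter-role calls -->
      <InternalEndpoint name="NotificationService" protocol="tcp" />
    </Endpoints>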

Azure Firestarter – coming soon!

I’m excited to announce that our Azure Firestarter series is getting ready to roll!  Registration details are at the bottom.  Basically, I’m teaming up with my colleagues Peter Laudati and Jim O’Neil and we’ll be travelling around.  We’ve intentionally been waiting to do these events until after PDC – this means we’ll include the new stuff announced at PDC!  Also, for those wondering what was going on with the @home series, we’ll be doing that here, too, with some revamped ideas…

The Agenda

Is cloud computing still a foggy concept for you? Have you heard of Windows Azure, but aren’t quite sure how it applies to you and the projects you’re working on? Join your Microsoft Developer Evangelists for this free, all-day event combining presentations and hands-on exercises to demystify the latest disruptive (and over-hyped!) technology and to provide some clarity as to where the cloud and Windows Azure can take you.

8:00 a.m. – Registration

8:30 a.m. – Morning Sessions

Getting Your Head into the Cloud: Ask ten people to define “Cloud Computing,” and you’ll get a dozen responses. To establish some common ground, we’ll kick off the event by delving into what cloud computing means, not just by presenting an array of acronyms like SaaS and IaaS, but by focusing on the scenarios that cloud computing enables and the opportunities it provides. We’ll use this session to introduce the building blocks of the Windows Azure Platform and set the stage for the two questions most pertinent to you: “how do I take my existing applications to the cloud?” and “how do I design specifically for the cloud?”

Migrating Applications to Windows Azure: How difficult is it to migrate your applications to the cloud? What about designing your applications to be flexible inside and outside of cloud environments? These are common questions, and in this session we’ll focus specifically on migration strategies and adapting your applications to be “cloud ready.” We’ll examine how Azure VMs differ from a typical server – covering everything from CPU and memory to profiling performance, load balancing considerations, and deployment strategies such as dealing with breaking changes in schemas and contracts. We’ll also cover SQL Azure migration strategies and how the forthcoming VM and Admin Roles can aid in migrating to the cloud.

Creating Applications for Windows Azure: Windows Azure enables you to leverage a great deal of your Visual Studio and .NET expertise on an ‘infinitely scalable’ platform, but it’s important to realize the cloud is a different environment from traditional on-premises or hosted applications. Windows Azure provides new capabilities and features – like Azure storage and the AppFabric – that differentiate an application translated to Azure from one built for Azure. We’ll look at many of these platform features and examine tradeoffs in complexity, performance, and costs.

12:15 p.m. – Lunch

1:00 p.m. – Cloud Play: Enough talk! Bring your laptop or pair with a friend, as we spend the afternoon with our heads (and laptops) in the cloud. Each attendee will receive a two-week “unlimited” Azure account to use during (and after) our instructor-led hands-on lab. During the lab you’ll reinforce the very concepts we discussed in the morning as you develop and deploy a compelling distributed computing application to Windows Azure.

4:00 p.m. – The Silver Lining: Evaluations and Giveaways

Registration & Details

Use the links below to register for the Windows Azure Firestarter in the city closest to you.
Tampa, FL – November 8 – REGISTER HERE!
Alpharetta, GA – November 10 – REGISTER HERE!
Charlotte, NC – November 11 – REGISTER HERE!
Rochester, NY – November 16 – REGISTER HERE!
Waltham, MA – November 30 – REGISTER HERE!
New York, NY – December 1 – REGISTER HERE!
Malvern, PA – December 7 – REGISTER HERE!
Chevy Chase, MD – December 9 – REGISTER HERE!

Hope to see you there!

Creating a Poor Man’s Distributed Cache in Azure

If you’ve read up on the Windows Server AppFabric (which contains Velocity, the distributed caching project), you’re likely familiar with the concepts of distributed cache.  Distributed caching isn’t strictly limited to web environments, but for this post (or if I ramble on and it becomes a series) we’ll act like it is.

In a web environment, session state is one of the more problematic server-side features to deal with in multiple-server applications.  You are likely already familiar with all of this, but for those who aren’t: the challenge in storing session state is handling situations where a user’s first request goes to one server in the farm, and the next request goes to another.  If session state is being relied upon, there are only a few options: 1) store session state off-server (for example, in a common SQL Server shared by all web servers), or 2) use “sticky” sessions so that a user’s entire session is served from the same server (the load balancer typically handles this).  Each method has pros and cons.

Caching is similar.  In typical web applications, you cache expensive objects in the web server’s RAM.  In very complex applications, you can create a caching tier – this is exactly the situation Velocity/AppFabric solves really well.  But it’s often overkill for more basic applications.  The general rules of thumb with caching are: 1) caching should always be considered volatile – if an item isn’t in the cache for any reason, the application should be able to reconstruct it seamlessly; and 2) an item in the cache should expire such that no stale data is retained.  (The SqlCacheDependency helps in many of these situations, but generally doesn’t apply in the cloud.)

The last part about stale data is pretty tricky in some situations.  Consider this situation: suppose your web app has 4 servers and, on the home page, a stock ticker for the company’s stock.  This is fetched from a web service, but cached for a period of time (say, 60 minutes) to increase performance.  These values will quickly get out of sync across the servers – it might not be that important, but it illustrates the point about keeping cache in sync.  A very simple way to deal with this situation is to expire the cache at an absolute time, such as the top of the hour.  (But this, too, has some downsides.)

As soon as you move into a more complicated scenario, things get a bit trickier.  Suppose you want to expire items from a web app if they go out of stock.  Depending on how fast this happens, you might expire them based on the number in stock – if the number gets really low, you could expire them in seconds, or even not cache them at all.  But what if you aren’t sure when an item might expire?  Take Worldmaps … storing aggregated data is ideal in the cache (in fact, there are 2 levels of cache in Worldmaps).  In general, handling ‘when’ the data expires is predictable.  Based on age and volume, a map will redraw itself (and stats get updated) between 2 and 24 hours.  I also have a tool that lets me click a button to force a redraw.  When one server gets this request, it can flush its own cache, but the other servers know nothing about this.  In situations, then, when user interaction can cause cache expiration, it’s very difficult to cache effectively, and often the result is just not caching at all.
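As an aside, here’s a minimal sketch of the absolute-expiration trick mentioned above – synchronizing expiration to the top of the hour with the ASP.NET Cache.  The key name and helper are made up for illustration:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class TickerCache
    {
        // hypothetical helper: cache the quote until the top of the next hour,
        // so all servers drop the entry at the same moment and stay loosely in sync
        public static void CacheUntilNextHour(object quote)
        {
            DateTime now = DateTime.Now;
            DateTime topOfNextHour = new DateTime(now.Year, now.Month, now.Day,
                                                  now.Hour, 0, 0).AddHours(1);

            HttpRuntime.Cache.Insert("stockQuote", quote, null,
                                     topOfNextHour, Cache.NoSlidingExpiration);
        }
    }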
With all of this background out of the way, even though technologies like the SqlCacheDependency currently don’t exist in Azure, there are a few ways we can effectively create a distributed cache in Azure – or perhaps more appropriately, sync the cache in a Windows Azure project. In the next post, we’ll get technical and I’ll show how to use the RoleEnvironment class and WCF to sync caches across different web roles.  Stay tuned!

@home: The Beginning

This post is part of a series diving into the implementation of the @home With Windows Azure project, which formed the basis of a webcast series by Developer Evangelists Brian Hitney and Jim O’Neil.  Be sure to read the introductory post for the context of this and subsequent articles in the series.

To give even more background than in the first post … way back in late March (possibly early April), Jim had the idea to start something we originally called “Azure Across America” … not to be confused with AAA :).  If you put yourself in our shoes, Azure is a very difficult technology for us to evangelize.  It reminds me a little of what it was like to explain the value prop of “WPF/e” back when the first bits were made available, long before it took the name Silverlight.  Azure is obviously a pay-for-use model, so what would be an interesting app to build in a webcast series?  Preferably something that helps illustrate cloud computing, and not just a “hello world” application.

While we debated what this would look like (I’ll spare the details), the corp team solidified a trial account program that enabled us to get free trial accounts for attendees of the series.  This changed the game completely, because now we weren’t hindered by signup costs, deployment costs, etc.  In fact, the biggest challenge was doing something interesting enough to be worth your time to deploy.

That’s when we had the idea of a distributed computing project.  Contributing back to a well-known distributed computing project would be interesting, useful, demonstrate cloud computing, and not be hindered by the constant fluctuation of apps going online and offline.  So now that we had the idea, which project would we choose?  We also had a number of limitations in the Azure platform.  Don’t get me wrong: Azure offers a number of strengths as a fully managed PaaS … but we don’t have administrator rights or the ability to remote desktop into the VMs.  In essence, we needed whatever we deployed to not require admin access, and to be xcopy deployable.  Stanford’s Folding@home project was perfect for this.  It’s a great cause, and the console version is easy to work with.  What we wanted to do was put together a site that would, in addition to providing the details, how-to’s, and recordings, show stats to track the current progress …

In the next posts, I’ll go over the site and some of the issues we faced when developing the app.

@home with Windows Azure: Behind the Scenes

As over a thousand of you know (because you signed up for our webcast series during May and June), my colleague Jim O’Neil and I have been working on a Windows Azure project – known as @home With Windows Azure – to demonstrate a number of features of the platform, learn a bit ourselves, and contribute to a medical research project to boot.  During the series, it quickly became clear (…like after the first session) that two hours was barely enough time to scratch the surface, and while we hope the series was a useful exercise in introducing you to Windows Azure and allowing you to deploy perhaps your first application to the cloud, we wanted (and had intended) to dive much deeper.

So enter not one but two blog series.  This introductory post appears on both of our blogs, but from here on out we’re going to divide and conquer, each of us focusing on one of the two primary aspects of the project.  Jim will cover the application you might deploy (and did, if you attended the series), and I will cover the website, which also resides in Azure and serves as the ‘mothership’ for @home with Windows Azure.  Source code for the project is available, so you’ll be able to crack open the solutions and follow along – and perhaps even add to or improve our design.

You are responsible for monitoring your own Azure account utilization.  This project, in particular, can amass significant costs for CPU utilization.  We recommend your self-study be limited to using the Azure development fabric on your local machine, unless you have a limited-time trial account or other consumption plan that will cover the costs of execution.

So let’s get started.  In this initial post, we’ll cover a few items:

- Project history
- Folding@home overview
- @home with Windows Azure high-level architecture
- Prerequisites to follow along

Project history

Jim and I have both been intrigued by Windows Azure and cloud computing in general, but we realized it’s a fairly disruptive technology and can often seem unapproachable for many of you who are focused on your own (typically on-premises) application development projects and just trying to catch up on the latest topical developments in WPF, Silverlight, Entity Framework, WCF, and a host of other technologies that flow from the fire hose at Redmond.  Walking through the steps to deploy “Hello World Cloud” to Windows Azure was an obvious choice (and in fact we did that during our webcast), but we wanted an example that’s a bit more interesting in terms of domain, as well as something that wasn’t gratuitously leveraging the power of the cloud.

Originally, we’d considered just doing a blog series, but then our colleague John McClelland had a great idea – run a webcast series (over and over… and over again x9) so we could reach a crop of 100 new Azure-curious viewers each week.  With the serendipitous availability of ‘unlimited’ use, two-week Windows Azure trial accounts for the webcast series, we knew we could do something impactful that wouldn’t break anyone’s individual pocketbook – something along the lines of a distributed computing project, such as SETI.  SETI may be the most well-known of the efforts, but there are numerous others, and we settled on one (Folding@home, sponsored by Stanford University) based on its mission, longevity, and low barrier to entry (read: no login required and minimal software download).
Once we decided on the project, it was just a matter of building up something in Windows Azure that would not only harness the computing power of Microsoft’s data centers but also showcase a number of the core concepts of Windows Azure and indeed cloud computing in general.  We weren’t quite sure what to expect in terms of interest in the webcast series, but via the efforts of our amazing marketing team (thank you, Jana Underwood and Susan Wisowaty), we ‘sold out’ each of the webcasts, including the last two, at which we were able to double the registrants – and then some!

For those of you who attended, we thank you.  For those who didn’t, each of our presentations was recorded and is available for viewing.  As we mentioned at the start of this blog post, the two hours we’d allotted seemed like a lot of time during the planning stages, but in practice we rarely got the chance to look at code or explain some of the application constructs in our implementation.  Many of you, too, commented that you’d like to have seen us go deeper, and that’s, of course, where we’re headed with this post and others that will be forthcoming in our blogs.

Overview of Stanford’s Folding@Home (FAH) project

Stanford’s Folding@home project was launched by the Pande lab at the Departments of Chemistry and Structural Biology at Stanford University on October 1, 2000, with a goal “to understand protein folding, protein aggregation, and related diseases” – diseases that include Alzheimer’s, cystic fibrosis, BSE (Mad Cow disease), and several cancers.  The project is funded by both the National Institutes of Health and the National Science Foundation, and has enjoyed significant corporate sponsorship as well over the last decade.  To date, over 5 million CPUs have contributed to the project (310,000 CPUs are currently active), and the effort has spawned over 70 academic research papers and a number of awards.

The project’s Executive Summary answers perhaps the three most frequently asked questions (a more extensive FAQ is also available):

What are proteins and why do they “fold”?  Proteins are biology’s workhorses – its “nanomachines.”  Before proteins can carry out their biochemical function, they remarkably assemble themselves, or “fold.”  The process of protein folding, while critical and fundamental to virtually all of biology, remains a mystery.  Moreover, perhaps not surprisingly, when proteins do not fold correctly (i.e. “misfold”), there can be serious effects, including many well known diseases, such as Alzheimer’s, Mad Cow (BSE), CJD, ALS, and Parkinson’s disease.

What does Folding@Home do?  Folding@Home is a distributed computing project which studies protein folding, misfolding, aggregation, and related diseases.  We use novel computational methods and large scale distributed computing to simulate timescales thousands to millions of times longer than previously achieved.  This has allowed us to simulate folding for the first time, and to now direct our approach to examine folding-related disease.

How can you help?  You can help our project by downloading and running our client software.  Our algorithms are designed such that for every computer that joins the project, we get a commensurate increase in simulation speed.

FAH client applications are available for the Macintosh, PC, and Linux, and GPU and SMP clients are also available.
In fact, Sony has developed a FAH client for its PlayStation 3 consoles (it’s included with system version 1.6 and later, and downloadable otherwise) to leverage its CELL microprocessor to provide performance at a 20 GigaFLOP scale.  As you’ll note in the architecture overview below, the @home with Windows Azure project specifically leverages the FAH Windows console client.

@home with Windows Azure high-level architecture

The @home with Windows Azure project comprises two distinct Azure applications: the site (on the right in the diagram below) and the application you deploy to your own account via the source code we’ve provided (shown on the left).  We’ll call the latter the Azure@home application from here on out.

The site has three main purposes:

- Serve as the ‘go-to’ site for this effort, with download instructions, webcast recordings, and links to other Azure resources.
- Log and reflect the progress made by each of the individual contributors to the project (including the cool Silverlight map depicted below).
- Contribute itself to the effort by spawning off Folding@home work units.

I’ll focus mostly on this backend piece and other related bits and pieces, design choices, etc.  The other Azure service in play is the one you can download (in either VS2008 or VS2010 format) – the one we’re referring to as Azure@home.  This cloud application contains a web front end and a worker role implementation that wraps the console client downloaded from the site.  When you deploy this application, you will be setting up a small web site including a default page (image to the left below) with a Bing Maps UI and a few text fields to kick off the folding activity.  Worker roles deployed with the Azure service are responsible for spawning the Folding@home console client application – within a VM in Azure – and reporting the progress to both your local account’s Azure storage and the main application (via a WCF service call).

Via your own service’s website you can keep tabs on the contribution your deployment is making to the effort (image to the right above), and via the main site you can view the overall standing – as I’m writing this, the project is ranked 583 out of over 184,000 teams; that’s roughly the top 0.3% after a little over two months – not bad!  Jim will be exploring the design and implementation of the Azure@home piece via upcoming posts on his blog.

Prerequisites to follow along

Everything you need to know about getting started with @home with Windows Azure is available at the site, but here’s a summary:

Operating system:
- Windows 7
- Windows Server 2008 R2
- Windows Server 2008
- Windows Vista

Visual Studio development environment:
- Visual Studio 2008 SP1 (Standard or above), or Visual Web Developer 2008 Express Edition with SP1, or
- Visual Studio 2010 (Professional, Premium, or Ultimate – trial download), or Visual Web Developer 2010 Express

Windows Azure Tools for Visual Studio (which includes the SDK), with the following prerequisites:
- IIS 7 with WCF HTTP Activation enabled
- SQL Server 2005 Express Edition (or higher) – you can install SQL Server Express with Visual Studio or download it separately

Azure@home source code:
- Visual Studio 2008 version
- Visual Studio 2010 version

Folding@home console client:
- For Windows XP/2003/Vista (from Stanford’s site)

In addition to the core pieces listed above, feel free to view one of the webcast recordings or my screencast to learn how to deploy the application.
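To give a flavor of that worker-role wrapper before the deep dives, here’s a rough sketch (not the actual Azure@home source) of spawning the console client from a worker role; the executable name is an assumption about how the client is packaged with the role:

    using System.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class FoldingClientRole : RoleEntryPoint
    {
        public override void Run()
        {
            // launch the FAH console client that was deployed alongside the role;
            // the file name here is illustrative, not the project's exact value
            var startInfo = new ProcessStartInfo
            {
                FileName = @"Folding@home-Win32-x86.exe",
                UseShellExecute = false,
                CreateNoWindow = true
            };

            using (Process client = Process.Start(startInfo))
            {
                // the real implementation periodically parses the client's log,
                // writes progress to Azure storage, and reports it to the main
                // @home site via a WCF call; here we simply wait
                client.WaitForExit();
            }
        }
    }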
We won’t be focusing so much on the deployment in the upcoming blog series, but more on the underlying implementation of the constituent Azure services.

Lastly, we want to reiterate that the Azure@home application requires a minimum of two Azure roles.  That’s tallied as two CPU hours for each hour of wall-clock time in terms of Azure consumption, and therefore results in a default charge of $0.24/hour; add to that a much smaller amount of Azure storage charges, and you’ll find that if it’s left running 24x7, your monthly charge will be around $172!  There are various Azure platform offers available, including an introductory special; however, the introductory special includes only 25 compute hours per month (equating to 12 hours of running the smallest version of Azure@home possible).  Most of the units of work assigned by the Folding@home project require at least 24 hours of computation time to complete, so it’s unlikely you can make a substantial contribution to the Stanford effort without leveraging idle CPUs within a paid account or having free access to Azure via a limited-time trial account.  You can, of course, utilize the development fabric on your local machine to run and analyze the application, and theoretically run the Folding@home client application locally to contribute to the project on a smaller scale.

That’s it for now.  I’ll be following up with the next post within a few days or so; until then, keep your head in the clouds, and your eye on your code.

@home: Most Common Problems #1

Jim and I are nearly done with the @home with Azure series, but we wanted to document some of the biggest issues we see every week.  As we go through the online workshop, many users are deploying an Azure application for the first time after installing the tools and SDK.  In some cases, attendees are installing the tools and SDK at the beginning of the workshop.

When installing the tools and SDK, it’s important to make sure all the prerequisites are installed (they’re listed on the download page).  The biggest roadblock is typically IIS7 – which basically rules out Windows XP and similar pre-IIS7 operating systems.  IIS7 also needs to be installed (by default, it isn’t), which can be verified by going into Control Panel / Programs and Features.

The first time you hit F5 on an Azure project, development storage and the development fabric are initialized, so this is typically the second hurdle to cross.  Development storage relies on SQL Server to house the data for the local development storage simulation.  If you have SQL Express installed, this should just work out of the box.  If you have SQL Server Standard (or another edition), or a non-default instance of SQL Server, you’ll likely receive an error to the effect of “unable to initialize development storage.”

The Azure SDK includes a tool called DSINIT that can be used to configure development storage for these cases.  Using the DSINIT tool, you can configure development storage to use a default or named instance of SQL Server.  With these steps complete, you should be up and running!
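For example, run from the SDK’s bin directory (the instance name below is a placeholder for your own; /sqlinstance is the relevant switch in the 1.x SDKs, if memory serves):

    REM point development storage at a named SQL Server instance
    dsinit /sqlinstance:MyInstanceName

    REM or target the default (unnamed) local instance
    dsinit /sqlinstance:.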

Windows Azure Guest OS

In a Windows Azure project, you can specify the Guest OS version for your VM.  This is done by setting the osVersion property inside the ServiceConfiguration file (a sketch appears at the end of this post).  If you don’t specify a version explicitly, the latest Guest OS is chosen for you.  For production applications, it’s probably best to always provide an explicit value, and I have a real-world lesson that demonstrates this!

MSDN currently has a Guest OS Version and SDK Compatibility Matrix page that is extremely helpful if you’re trying to figure out which versions offer what features.  I recently ran into a problem when examining some of my performance counters – they were all zero (and shouldn’t have been)!  Curious to find out why, I did some digging (which means someone internal told me what I was doing wrong).

In short, I had specified a performance counter to monitor like so: "\ASP.NET Applications(__Total__)\Requests/Sec".  This had worked fine, but when I next redeployed some weeks later, the new Guest OS (which includes .NET Fx 4.0) took this to mean the .NET 4.0 version of the counter, because I didn’t specify a version.  So, I was getting zero requests/sec because my app was running on the earlier runtime.  This was fixed by changing the performance counter to "\ASP.NET Apps v2.0.50727(__Total__)\Requests/Sec".  For more information on this, check out this article on MSDN.  And thanks to the guys in the forum for getting me up and running so quickly!
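As referenced above, here’s a minimal sketch of pinning the Guest OS in ServiceConfiguration.cscfg; the version string shown is illustrative only – pull the right one from the compatibility matrix:

    <?xml version="1.0"?>
    <!-- osVersion pins the Guest OS; omit it and the latest is chosen automatically -->
    <ServiceConfiguration serviceName="MyService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
        osVersion="WA-GUEST-OS-1.8_201011-01">
      <!-- role configuration elided -->
    </ServiceConfiguration>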

Scaling Down – Text Version

I caught some flak this weekend at the Charlotte Code Camp when Justin realized my recent Scale Down with Windows Azure post was principally a screencast (aside from the code sample).  So Justin, I’m documenting the screencast just for you! :)

First, a good place to start with this concept is Neil Kidd’s blog post.  Go ahead and read that now … I’ll wait.  Most of this code is based off of his original sample; I’ve modified a few things and brought it forward to work with the latest SDK.

In a nutshell, a typical worker role template contains a Run() method in which we’d implement the logic to run our worker role.  In many cases, there are multiple tasks and multiple workers.  Unless the majority of the work you are doing is CPU bound (which is entirely possible, as is the case with our Azure-based distributed folding project), the resources of the VM can be better utilized by multithreading the tasks and workers.

The trick is to do this correctly, as writing multithreaded code is challenging.  In general, parallel extensions are likely not the right approach in this situation.  There are some exceptions – for example, if you are using a 4-core (large) VM and require lots of parallel processing, PFx might be the best approach.  But that’s not often the case in the cloud.  Instead, we need a lightweight framework that allows us to create a number of “processors” (using quotes here to avoid confusion with a CPU) that are responsible for doing their work independent of any other “processors” in the current instance.  Each “processor” can run on its own thread, but the worker role itself, instead of doing the work, simply monitors the health of all of the threads and restarts them as necessary.

The implementation is not terribly complex – but if you aren’t comfortable with threading or just don’t want to reinvent the wheel, check out the base project.  Feel free to add to or modify the project as necessary.  Let’s step through some of the concepts.  Download the sample project (Visual Studio 2010) here.

First, it doesn’t matter if you implement this in a webrole or a workerrole.  A webrole exposes the same Run() method that a workerrole does, and it doesn’t interfere with the operation of hosting a website – aside from the fact that there are limited resources per VM, of course.

First up is the IProcessMessages interface.  This interface is simple: it says our processors need to define how long they need per work unit, and expose a Process() method to call.  Our health monitor keeps tabs on the processor, so it needs to know how long to wait before assuming the processor is hung.  A simple processor is then very easy to create.  We just implement the IProcessMessages interface, and code whatever logic we need our worker to do inside the Process() method.  We’re specifying that this processor needs only 20 seconds per work unit, so the health monitor will restart the worker in the event it doesn’t see progress after 20 seconds elapse.  SyncRoot isn’t needed unless you need to do some locking.

So far, pretty simple.  Our processor doesn’t need to be aware of threading, or of handling/restarting itself.  The ProcessorRecord class does this for us.  It won’t do the actual monitoring, but rather implements the nuts and bolts of starting the thread for the current processor.  When the ProcessorRecord class is told to start the thread, it calls a single Run() method, passing in the processor.  This method will essentially run forever, calling Process() each iteration.
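Since the original post conveyed these pieces as screenshots, here’s a minimal sketch of what they might look like; beyond the names mentioned in the text (IProcessMessages, Process(), SyncRoot, ProcessorRecord), the member names are assumptions:

    using System;
    using System.Threading;

    // each processor declares how long one unit of work may take,
    // and exposes Process() for the host to call
    public interface IProcessMessages
    {
        TimeSpan MaxProcessTime { get; }  // monitor waits this long before assuming a hang
        void Process();                   // poll for work and handle one batch
    }

    // a trivial processor: 20 seconds per work unit, as described above
    public class SimpleProcessor : IProcessMessages
    {
        private static readonly object SyncRoot = new object(); // only if locking is needed

        public TimeSpan MaxProcessTime
        {
            get { return TimeSpan.FromSeconds(20); }
        }

        public void Process()
        {
            // read from a queue/table and do the actual work here
        }
    }

    // wraps a processor with its own thread; the role monitors and restarts it
    public class ProcessorRecord
    {
        private Thread _thread;

        public IProcessMessages Processor { get; private set; }
        public string Name { get; private set; }
        public ThreadPriority Priority { get; private set; }
        public TimeSpan SleepTime { get; private set; }      // idle wait between polls
        public DateTime LastThreadTest { get; private set; } // heartbeat for the monitor

        public ProcessorRecord(IProcessMessages processor, string name,
                               ThreadPriority priority, TimeSpan sleepTime)
        {
            Processor = processor;
            Name = name;
            Priority = priority;
            SleepTime = sleepTime;
        }

        public void Start()
        {
            LastThreadTest = DateTime.UtcNow;
            _thread = new Thread(Run)
            {
                Name = Name,
                Priority = Priority,
                IsBackground = true
            };
            _thread.Start();
        }

        public void Restart()
        {
            if (_thread != null && _thread.IsAlive) _thread.Abort();
            Start();
        }

        // runs forever: heartbeat, process, then sleep if idle
        private void Run()
        {
            while (true)
            {
                LastThreadTest = DateTime.UtcNow;
                Processor.Process();
                Thread.Sleep(SleepTime);  // simple fixed sleep; no back-off
            }
        }
    }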
Since we’re not getting notified of work, each processor is essentially polling for work.  Because of this, a traditional implementation is to say: if there is work to do, keep calling Process() as frequently as possible, but if there’s no work to do, sleep for some amount of time.  The current implementation is simple – it doesn’t do exponential back-off if there’s no work to do; it just sleeps for the amount of time specified in the ProcessorRecord.

That leaves us with one more task, and that’s defining our processors in the web/worker role Run() method.  The nice thing about this approach is that it’s quite easy to add multiple instances to scale up or down as needed.  In the case sketched below, we’re creating 2 processors of the same type, giving them different names (helpful for the log files), the same thread priority, and a sleep time of 5 seconds per iteration if there’s no work to do.  In the Run() method, instead of doing any work, we’ll just monitor the health of all the processors.  Remember, the Run() method shouldn’t exit under normal conditions, because exiting will cause the role to recycle.

It may look complicated, but it’s pretty simple.  Each iteration, we look at each processor.  The timeout is calculated based on the last known “thread test” (when the thread was last known to be alive and well), plus any process time or sleep time adjustments.  If that time is exceeded, a warning is written to the log file and the processor is reset.  Worldmaps has been using this approach for about 6 months now, and it’s been flawless.

Is this the most robust and complete framework for multithreading worker roles?  No.  It’s a prototype – a good starting place for a more robust solution.  But the pattern you see here is the right starting point: the role instance itself knows what processors it wants, but doesn’t concern itself with their implementation or threading details.  Each ProcessorRecord executes its processor and implements the threading logic, without regard to the other processors or the host.  The processors don’t care about threading, other processors, or the host; each just does its work.  This separation of concerns makes it easy to expand or modify this concept as the application changes.

If you’re trying to get more performance out of your workers, try this approach and let me know if you have any comments.
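Putting it together, here’s a rough sketch of the role’s Run() method acting as health monitor, in the spirit of the post (again using the hypothetical ProcessorRecord members from the previous sketch):

    using System;
    using System.Diagnostics;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // two processors of the same type, different names, 5-second idle sleep
            var processors = new[]
            {
                new ProcessorRecord(new SimpleProcessor(), "worker-1",
                                    ThreadPriority.Normal, TimeSpan.FromSeconds(5)),
                new ProcessorRecord(new SimpleProcessor(), "worker-2",
                                    ThreadPriority.Normal, TimeSpan.FromSeconds(5))
            };

            foreach (var p in processors) p.Start();

            // never exit under normal conditions, or the role recycles
            while (true)
            {
                foreach (var p in processors)
                {
                    // last heartbeat + allowed work time + idle sleep = deadline
                    DateTime timeout = p.LastThreadTest
                                       + p.Processor.MaxProcessTime
                                       + p.SleepTime;

                    if (DateTime.UtcNow > timeout)
                    {
                        Trace.TraceWarning(string.Format(
                            "Processor '{0}' appears hung; restarting.", p.Name));
                        p.Restart();
                    }
                }

                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        }
    }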
