The Evils of Active Directory

I need to be honest. I think Active Directory is a bit of a liability when it comes to the overall Microsoft product offering. There are some great products like TFS, Exchange, and SharePoint, and they all integrate with Active Directory.

The problem is that Active Directory basically requires organisations to own all their own infrastructure if they want to achieve single sign-on across all of these products.

Internally at Readify we are seriously looking at the costs of our IT organisation, and we would love to be able to host Exchange with one hoster, SharePoint with another (not talking about SharePoint as a TFS dependency here), and probably self-host our TFS server, possibly on something like Server Intellect or GoGrid.

The problem is that all the subtle AD dependencies in these products make it difficult to really commit to that course of action. If we decide to install products in workgroup mode (or give them their own AD as required by the hosters), what are we exposing ourselves to in the future if one of the product teams decides to take a hard dependency on AD?

Where are the investments that Microsoft is making in technologies like CardSpace and simple username/password authentication over SSL across all their products, which would allow their customers to distribute their IT assets in the cloud?

Until Microsoft takes multi-tenancy and hosting scenarios seriously, it’s not going to be a reality for a lot of organisations.

A worthwhile CodePlex project: NUnit for Team Build

Richard Banks, a fellow Principal Consultant at Readify, has released NUnit for Team Build on CodePlex. Richard has written up a post about it on his blog which is well worth the read, not only because what he has built is instantly useful, but also because it shows (with source) how to integrate other testing platforms into Team System.

Awesome stuff Richard! I had a customer ask me about this challenge the other day. I knew it was possible, but this basically shows the way to go about it!

Revealing your passwords at the video shop

On Thursday night a friend and I were at the video shop. When we got to the counter we were asked for the password on the account. My friend tried one password and it was incorrect. They then tried three or four other passwords and those were also incorrect. In the end it was a trick question; the guy at the counter was just testing.

It occurs to me that this is a brilliant social engineering attack for hackers. Get a job as a video store clerk, then get customers to reveal a range of their passwords. You already have all their other details on file, such as their address and date of birth, and even their video preferences might be quite revealing.

What do you think the chances are of one of those passwords being for something valuable like a work user account, bank login, or eBay account? I’d say high for a lot of people struggling to keep track of multiple passwords in the digital age.

Have you created a cool gadget for Team System?

Then enter the Team System Gadget Contest! Here are the details as they currently stand:

Have you created a useful gadget for Team System?  Do you have one in mind?  I am looking for the coolest community built tool for VSTS.  It can be something for TFS, for Visual Studio, or something that is stand alone. The winner will receive a one year subscription to MSDN with Team Suite!  

To enter, submit a screencast (up to 3 minutes long) that tells everyone why your gadget is the coolest, along with the source code. All submissions will be released to the public as free source to use and enjoy (with you getting all the credit of course). Videos will also be made available to the public to help make you famous! This should be something new and not something repackaged.

Submissions accepted up until August 31st 2008.  Winner will be announced September 15th 2008.

Looks like it could be good fun!

The Server is the Application

Lately I have been looking into Amazon’s Elastic Compute Cloud offering (EC2). To say this is just a hosting service would be glossing over a fairly radical shift in thinking about how applications are going to be constructed in the future.

How does EC2 work?

When you sign up for an EC2 account you provide your credit card and agree to pay (by the hour) for the computing resources you use. Computing resources are divided up into “instances” which run in “availability regions”. Each instance is basically a pre-configured Linux image (referred to generically as an AMI, or Amazon Machine Image) which can be started and stopped on demand. Availability regions map (from what I understand) to the geographical regions where servers are located within the network of Amazon data centres.

The images are created through a templating process where you select an existing AMI, start it up, modify the configuration, and then bundle it up to create a new AMI which you can then run up in the future.
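
As a rough illustration, the final step of that process (registering the bundle so it becomes a launchable AMI) might look like this using the open-source boto library. Note that the bundling itself happens on the instance with Amazon’s AMI tools, and the bucket and manifest names below are made up:

```python
# Registering a previously bundled and uploaded image as a new AMI,
# sketched with the boto library. The S3 manifest location is a placeholder.
import boto

conn = boto.connect_ec2()  # picks up AWS credentials from the environment

# The bundle was created on the instance and uploaded to S3; registering
# the manifest turns it into an AMI you can run up in the future.
ami_id = conn.register_image(image_location='my-bucket/my-image.manifest.xml')
print('New AMI registered: %s' % ami_id)
```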

Why is EC2 different?

Whilst this sounds similar to standard virtualization techniques where you just have a virtual server which you deploy your application onto, the difference is in how Amazon has exposed the offering to the public.

First of all, there is no groovy server order form; if you want to start up an instance you need to make a series of web service calls (although they do have some command line tools that wrap these calls). This effectively means that the tool is targeting software developers first.
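
To make that concrete, here is a minimal sketch of those calls using the open-source boto library (my choice for illustration, not something Amazon ships); the AMI ID and key pair name are placeholders:

```python
# A minimal sketch of starting and stopping an EC2 instance through the
# web service API, using the boto library. The AMI ID and key pair name
# are hypothetical.
import time
import boto

conn = boto.connect_ec2()  # reads AWS credentials from the environment

# Ask EC2 to launch one small instance from a (hypothetical) image.
reservation = conn.run_instances('ami-12345678',
                                 instance_type='m1.small',
                                 key_name='my-keypair')
instance = reservation.instances[0]

# Poll until the instance has booted.
while instance.state != 'running':
    time.sleep(10)
    instance.update()

print('Instance %s is running at %s' % (instance.id, instance.public_dns_name))

# Terminating the instance discards any state on its local disks.
conn.terminate_instances([instance.id])
```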

The second difference is that the instances themselves are stateless. What does that mean in the context of an entire virtualized computer? Well, when you start the instance the computer boots, and it might change some data on the image, but when you stop that instance the data is lost. You might be thinking that this is insane, but it makes perfect sense in a utility computing model – and it encourages you to use Amazon’s Simple Storage Service (S3) to store your buckets of data.
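
For example, an instance might pull its working data down from S3 as part of its boot sequence. A minimal sketch using boto, with made-up bucket and key names:

```python
# Sketch: an instance restoring its working data set from S3 as part of
# its boot sequence, using boto. Bucket and key names are hypothetical.
import boto

s3 = boto.connect_s3()
bucket = s3.get_bucket('my-app-data')  # hypothetical bucket

# Copy the working data down onto the instance's (ephemeral) local disk.
key = bucket.get_key('working-set.tar.gz')
key.get_contents_to_filename('/var/data/working-set.tar.gz')
```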

Some Opening Statements

Organisations that have been using EC2 for a while have no doubt started to hit some of the challenges and opportunities of this style of computing, but for many of us in the enterprise space it is the beginning of a discussion that will force us to question the way we design and host applications and data.

To start the discussion, I’m going to throw the following statements out there, leave some comments or do track-backs to get the discussion happening:

  1. Applications will use either a continuous snapshot model to maintain state between instance runtimes, or not bother with stateful image storage at all and copy down a working copy of data from the cloud each time the instance starts up. If it isn’t backed up, the data is lost; disaster recovery (or rather, disaster tolerance) is an application-level feature. (A sketch of the snapshot idea follows this list.)
  2. Current database server technologies such as MySQL, SQL Server, DB2 and Oracle will need to be redesigned from the ground up to support partitioning of data. Relational databases will need to handle most of the distributed integrity problems automatically, and will need to be hosting-environment aware.
  3. Every application will run on multiple instances. The minimum number of instances will be those required to hold a redundant copy of the data. There is no real maximum; the thing that drives the number up is increased load.
  4. Instances will be aware of their host environment and will be able to request multiple instances as well as make demands about the separation of logical instances across physical hardware nodes (if you have two instances on the same physical hardware node then you have no redundancy).
  5. Development teams will configure entire servers to be rolled out, operations teams will manage the virtualization technology and not care what developers roll out.
  6. Applications will be designed to do rolling upgrades. If your application demands 100 concurrent instances, bringing the n + 1 version online will involve running in emulated “n mode” until there are enough upgraded nodes to perform the cut-over to “n + 1 mode”.
  7. A standard image format capable of running different platforms is going to be critical for widespread adoption of utility hosting platforms like EC2.
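
As promised in statement 1, here is a minimal sketch of the continuous snapshot model, again using boto; the bucket name, paths, and interval are all made-up assumptions:

```python
# Sketch of the "continuous snapshot" model from statement 1: push the
# application's state up to S3 periodically so that losing the instance
# loses at most one snapshot interval. All names and paths are made up.
import time
import boto

SNAPSHOT_INTERVAL = 300  # seconds between snapshots (an assumption)

s3 = boto.connect_s3()
bucket = s3.get_bucket('my-app-data')  # hypothetical bucket

while True:
    # Each snapshot gets a timestamped key so older copies are retained.
    key = bucket.new_key('snapshots/state-%d.tar.gz' % int(time.time()))
    key.set_contents_from_filename('/var/data/state.tar.gz')
    time.sleep(SNAPSHOT_INTERVAL)
```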

What is the Microsoft/VMware play in this space?

Who knows? The Amazon EC2 technology is based on Xen, which I don’t really know a lot about. Xen is a hypervisor-based solution, and both Microsoft and VMware have technologies that fit the bill. However, the magic seems to be in the packaging.

Rather than locking down its resource allocation console, Amazon has opened it up to the world and instead focused on scaling out its data centres to handle the load. It then built an API around the outside that allows customers to provision the boxes themselves. Assuming you could build the same sort of API on top of the Windows Server 2008 hypervisor (I know you can, it has the underlying APIs to support it), then all you would need to do is focus on the cost of the offering.

Amazon can give you a small instance for $0.10 per hour, which includes 160GB of storage. I could go out to Server Intellect today and provision the following box for around $1,100 per month. This box would be capable of running eight instances with roughly the same specifications (although network traffic wasn’t taken into consideration).

[Image: Server Intellect hosting configuration]

Now if I do the calculations, that means the cost price per instance for a month would be about $137.50. An equivalent Amazon instance would be approximately $73.00 – still quite a bit cheaper, but we aren’t really comparing apples with apples here in terms of functionality.
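
The back-of-the-envelope arithmetic, assuming roughly 730 hours in a month:

```python
# Back-of-the-envelope cost comparison using the figures above.
hours_per_month = 730                # roughly 24 * 365 / 12
ec2_small_per_hour = 0.10            # USD per hour for a small instance
dedicated_per_month = 1100.0         # USD per month for the dedicated box
instances_per_box = 8

per_instance_dedicated = dedicated_per_month / instances_per_box
per_instance_ec2 = ec2_small_per_hour * hours_per_month

print('Dedicated: $%.2f per instance/month' % per_instance_dedicated)  # $137.50
print('EC2:       $%.2f per instance/month' % per_instance_ec2)        # $73.00
```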

Ahem, availability? Redundancy?

What about redundancy, you might say? Good point, although remember that Amazon instances are stateless, so a hoster would only need to keep a copy of the base image – you could have an “offline” storage facility for that and copy it forward to the physical hosts when required.

Anyway – I think it would be interesting to see whether the Windows hosting community could get something like this off the ground and come up with a licensing scheme to support it.

Let the discussion begin!