Disk Space!

Disk space is cheap. But if disk space is so cheap, why do people always under-provision virtual machines? The number of times I’ve been asked to help a customer get set up with TFS, only to discover that they have allocated a tiny 10GB of disk space, is amazing.

So right now, I am going to go on record by saying that the best way to provision TFS is using Virtual Server 2005, and the C: drive of that machine should have 100GB (ten times the amount most people allocate) of storage available to it.

Don’t do what seems to be current practice and install the OS on a small drive and allocate a second, larger disk. There is no point: you aren’t going to be able to just re-install the OS and be up and running again, and if your DR strategy lists that as an approach then it’s time to get real. With a virtualised TFS server your primary backup mechanism is a differencing disk snapshot; if you have to use anything other than that, it’s a bad day.

That said, if you want up-to-the-hour backups you have no choice but to use SQL backups inside the virtual machine, but then you need a lot more storage.
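If you do go down the SQL backup path, the log backups are easy enough to script. Here is a minimal sketch in Python, assuming sqlcmd is on the path and the databases are in the full recovery model; the database names and backup directory are illustrative stand-ins for your own:

```python
import subprocess
from datetime import datetime

# Illustrative names -- substitute your actual TFS databases and backup target.
DATABASES = ["TfsVersionControl", "TfsIntegration", "TfsWorkItemTracking"]
BACKUP_DIR = r"E:\Backups"

def backup_log(database: str) -> None:
    """Take a transaction log backup of one database via sqlcmd."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M")
    sql = (f"BACKUP LOG [{database}] "
           f"TO DISK = N'{BACKUP_DIR}\\{database}_{stamp}.trn' WITH INIT")
    # -S names the SQL instance, -E uses Windows authentication.
    subprocess.run(["sqlcmd", "-S", "localhost", "-E", "-Q", sql], check=True)

if __name__ == "__main__":
    for db in DATABASES:
        backup_log(db)
```

Schedule that hourly and the “up to the hour” part takes care of itself; the extra storage is the part that doesn’t.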

Oh – and make your disks dynamically expanding (or differencing disks).
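Virtual Server 2005 has a COM scripting API, so you can create both kinds of disk from a script. The sketch below uses Python with pywin32; the ProgID, method names and MB sizing are my recollection of that API rather than something I’ve just verified, so treat them as assumptions and check the scripting documentation. The paths are made up:

```python
import win32com.client  # pywin32

# Assumption: ProgID, method names and MB sizing are recalled from the
# Virtual Server 2005 COM API -- verify against the scripting documentation.
vs = win32com.client.Dispatch("VirtualServer.Application")

# Dynamically expanding disk with a generous 100GB maximum: small on the
# host today, and it never needs the new-disk-and-copy dance later.
task = vs.CreateDynamicVirtualHardDisk(r"D:\VMs\TFS\TFS-C.vhd", 100000)
task.WaitForCompletion(-1)  # -1 waits until the task finishes

# Differencing disk layered over the base image -- the snapshot that becomes
# your primary backup/DR mechanism.
task = vs.CreateDifferencingVirtualHardDisk(r"D:\VMs\TFS\TFS-C-diff.vhd",
                                            r"D:\VMs\TFS\TFS-C.vhd")
task.WaitForCompletion(-1)
```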

11 thoughts on “Disk Space!”

  1. lb

    well said! i particularly agree with this point:

    >you aren’t going to be able to just re-install the OS and
    >be up and running again

    it seems to be part of the ‘head in the sand’ approach to disaster recovery planning.

  2. Titel

    Disk space may be cheap, but it’s also in limited supply and you’ve got to balance it. I have virtual machines with Windows XP squeezed into as little as 3 GB. But that was a rational choice based on a long-term assessment of the disk space needed by the OS, updates, applications and temporary files.

    Many people don’t know how to do this assessment; they don’t have the experience or a guide to make an educated estimate. This is where detailed system requirements can make a huge difference.

    In my experience, dynamically expanding disks have a significant impact on performance due to fragmentation and inefficient disk space usage. I had a dynamic virtual hard drive grow to 34 GB although it was only using 5 GB, and it wouldn’t shrink no matter what.

    I strongly recommend using virtual hard drives with a fixed size. If you need to increase the size of a drive, you can create a new hard drive with the desired size, connect the old and new drives to a virtual machine running GParted or Acronis True Image, and clone the original partition onto the new disk while expanding it to the new size. Easy.

  3. Mitch Denny Post author

    Hi Titel,

    I’m afraid I have to strongly disagree; I don’t recommend fixed-size disks, especially since from an operational perspective you’ll end up using differencing disks anyway as part of your DR strategy.

  4. Grant

    Don’t forget that dynamically expanding disks also have a maximum size that they can expand to. I usually set this pretty large (300GB+), because there’s no reason not to, and it’s a real pain to make it bigger later… think 1) new disk, 2) copy contents across.

  5. myrunninglife

    I can’t comment on TFS requirements, but I do agree that disk space is cheap. However, SAN space is not cheap. Backup is not cheap. Enterprise environments that demand reliability and recoverability are starting to suffer under the weight of “we need more space” syndrome. I had a meeting the other day with the backup team, who reported that soon there won’t be enough hours in the day to back everything up. A bit off topic, but an interesting thought. I told them to go buy more hours.

  6. Pingback: Team System News : VSTS Links - 08/15/2007

  7. Mitch Denny Post author

    Hi Murls,

    To be honest I can’t see the attraction of the SAN. In most cases it introduces more unreliability into enterprise systems. I’ve seen more outages caused by SAN storage failure than by local disk failure. That either means it isn’t being used correctly, or it is too complex to use correctly, or it’s just a load of garbage.

    I prefer to use local drives and then build a backup solution into the system (treat it like an appliance). Most systems have custom backup requirements anyway; it’s not just files anymore, and with virtualisation we can get even better DR capabilities.

    Obviously large databases need to be handled with care, but most “systems” don’t really have that much data.

    I agree about buying more hours though; I suggested something like that recently on this blog:
    https://notgartner.wordpress.com/2007/03/13/time-is-money-but-is-money-time/

  8. Christopher Painter

    I’ve virtualized my QA boxes and my build boxes, but I have not done so with my TFS yet. As I understand it, you wouldn’t want to virtualize a SQL database server because of the I/O hit involved. It would be fine to virtualize the web services / SharePoint TFS layer, though. So assuming that I designed it this way, I don’t see why 10GB would be unreasonable.

  9. Mitch Denny Post author

    Hi Christopher,

    I actually virtualize the database as well to make disaster recovery easier. TFS databases don’t get hit too hard (unless you are Microsoft and have 1000s of developers using a single instance).

  10. Pingback: Don’t manage virtual servers like physical servers. « notgartner
