Monthly Archives: April 2007

WM5: The case of the missing alarm notifications (fix).

I’m sorry to say I didn’t find this fix myself; I just got so fed up with the problem I’ve been having for the last couple of days that I did a search and turned up this page.

Basically, the problem I was having was that my alarms were going off (they were sounding) but I wasn’t getting the little pop-up tab to dismiss them. The result was that my two morning wake-up alarms were sounding on top of each other and I had no way to shut the device up except to mute it.

The thread linked to above points to this page at ScaryBear, who has a utility called CheckNotifications. I loaded up CheckNotifications (I used the CAB version) and it told me that I had 415 notifications in the queue. I got it to delete the duplicates and now everything seems to be working again.

Knowledge Pressure

I was talking to Darren Neimke on Live Messenger this weekend and he pointed me to his recent post about how his interests are changing, or perhaps more accurately, his learning habits. Like Darren, I used to be a bit of a forum junkie as well; in fact, if you search some of the oldest .NET mailing lists you will see quite a few posts by me, answering other people’s questions.

This can be an incredibly effective way to learn if the questions that you are answering challenge you to some degree. Where things begin to fall apart is when you are answering questions that you know off the top of your head at the expense of learning new things that stimulate you.

I like to visualise this as a pressure problem; let’s call it knowledge pressure. Imagine, if you will, that everything you learn is being fed into a pressurised container on the left-hand side via one valve, and all the knowledge you share with your peers flows out the right-hand side through another valve. In a perfectly balanced system the amount of knowledge that flows in would match the amount that flows out. That never happens; our learning process isn’t reliable enough to ensure it, so what tends to happen is that we fluctuate between high- and low-pressure (or sometimes even vacuum) states.

Often when you pick up a new technology you can very quickly end up in a high knowledge-pressure state. This might sound counter-intuitive, but stick with me. When you don’t know anything, you can’t share anything, and so you are practiced at being tight-lipped and focusing on the new things that you are trying to learn. Because over time you are sharing less than you are learning, you build to a high knowledge-pressure state, at which point you can’t do anything but start to share your knowledge – often there is so much pressure that the apparent speed with which you share knowledge is high (note that speed and volume are different things).

Over time you start to spend more time sharing your knowledge, which distracts you from the learning process, and you can end up in a low knowledge-pressure state (where you are sharing more than you are learning, but the knowledge isn’t escaping with the same amount of oomph).

Being in one state or the other is natural; it is a cycle that will probably repeat your whole life, although I suspect some people have a preference for one state or the other. For someone who makes a living by learning and then sharing knowledge, it is important to manage this state and make the highs (where you lose money because you aren’t “out there”) and lows (where you just spout crap) a little less extreme – you want to be in the green band where you are constantly learning, but getting out there often enough to let folks know what you are doing.

Perhaps one of the biggest traps is that you don’t necessarily know when you are in a low-pressure situation (it is very hard for us to objectively gauge ourselves), so we rely on our friends and the people we work with to tap us on the shoulder and tell us to “go pressurize”. Personally I’d prefer to be on the higher-pressure side, because then the information I share is good, but I’m still taking care of myself.

Remember to clean out your temporary files.

Well, it’s been an interesting couple of days. Earlier this week my machine just started “going nuts” and stopped being able to perform the simplest tasks:

  • Open legacy Word documents (.doc files).
  • Open legacy Excel documents (.xls files).
  • Install updates to Windows Live Messenger.
  • Pull back and sort the list of Windows Live Messenger contacts.
  • Post content to this blog using Windows Live Writer.
  • Display all the items on my start menu.
  • Other odd stuff….

Today I finally got the sh*ts with it and decided to figure out what was going wrong. It turns out that all these things were related to my TEMP folder having too many files in it. My %TEMP% folder was sitting at about 9.6GB, and my suspicion is that Windows was actually having trouble allocating temporary file names; as a result, all of the operations above, which tend to require temporary files, started to fall over.

It took me about ten minutes to delete the files (boy does my disk need a defrag) but everything seems to be working correctly again. The trouble with this particular problem is that it manifests itself as lots of seemingly unrelated symptoms, and things kinda keep working. I suspect only very heavy computer users see this problem, because you need to allocate a LOT of files to run out of temporary filenames – I had about 65,000+ temporary files, which is suspiciously close to the 65,535 unique names the Win32 GetTempFileName function can generate for a given prefix.
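A periodic clean-out would have avoided the whole mess. Here’s a rough sketch of one (Python for illustration rather than anything I actually run; the seven-day cut-off is an arbitrary choice, and on Windows `tempfile.gettempdir()` resolves to `%TEMP%`):

```python
import os
import time

def clean_temp(folder, max_age_days=7):
    """Delete plain files in `folder` older than `max_age_days`.
    Returns the number of files removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        try:
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed += 1
        except OSError:
            # A file may still be locked by a running program; skip it.
            pass
    return removed

# Usage (commented out so nothing gets deleted by accident):
# import tempfile
# print(clean_temp(tempfile.gettempdir()))
```

Files that are still open get skipped rather than aborting the whole sweep, which is the behaviour you want from anything scheduled to run unattended.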

Oh well – I might have held off a rebuild for another couple of months…

Backup Systems in a Virtualised Environment

Plans for a hosted TFS offering are coming along on both the business and technical fronts. I had hoped to be online by now, but I’ve been told that getting a business up and running always takes twice as long and costs twice as much as you first thought – now that’s wisdom for you!

One of the things that I had on my TODO list was to look at backup and recovery of a hosted TFS server in a virtualised environment. I’ve pretty much come to the conclusion that the only way to go with TFS (hosted or otherwise) is virtualised. While I recognise the performance hit is real, especially in relation to larger changeset operations, the operational benefits are just too compelling to ignore.

As part of my experimentation I have created a virtualised TFS environment and exposed it directly to the Internet. The virtualised environment includes an AD server, a TFS server, and a Team Build server, all running on one physical machine. Once I had the environment established I then had the challenge of figuring out how I would take backups.

I decided what I would do was write a PowerShell script which would “save state” on each of the virtual machines and copy their differencing disks and saved state off to another drive. The whole process takes about five minutes, and disaster recovery is a (practiced) ten to fifteen minute job.

When I wrote the script I decided that, should the copy fail, I wouldn’t restart the virtual machine. The philosophy is that if something is broken I want it to scream loudly – and nothing screams more loudly than a user (in this case me) reporting that a system is unavailable.
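The shape of that save-state/copy/restart loop, with the deliberate fail-loudly behaviour, looks something like this (sketched in Python rather than the actual PowerShell for brevity; `save_state` and `start` are stand-ins for whatever Virtual Server commands the real script invokes, not a real API):

```python
import os
import shutil

def backup_vms(vms, dest_dir, save_state, start):
    """Save state on each VM, copy its differencing disk to dest_dir,
    and restart it only if the copy succeeded. A VM whose copy fails
    is deliberately left stopped, so the outage itself raises the alarm.
    `vms` maps VM names to differencing-disk paths; returns a dict of
    failures keyed by VM name."""
    failures = {}
    for name, disk in vms.items():
        save_state(name)               # pause the VM so the disk is consistent
        try:
            shutil.copy2(disk, os.path.join(dest_dir, os.path.basename(disk)))
        except OSError as exc:
            failures[name] = str(exc)  # fail loudly: do NOT restart this VM
        else:
            start(name)                # copy succeeded, bring it back up
    return failures
```

The key design choice is in the `except` branch: a failed copy leaves the machine stopped and records the error for the e-mail report, rather than quietly restarting and hoping for the best.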

To prove the point, I didn’t write any disk-space culling mechanism into the backup script, so the drive that I was copying the differencing disks to would slowly fill up. When it filled up the first time, none of the virtual machines started up, and the e-mail report from the backup script let me know what the situation was.

I then decided to write a second script that ran a little earlier in the day and would go through and trim off some backups. Actually, the first version of this script had a bug where it wasn’t actually doing any trimming, so I got another e-mail about the backup failing (and the system staying stopped) today.
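That trimming script is essentially a keep-the-N-newest job. A minimal sketch (again Python for illustration, and the retention count of five is a number I’ve made up, not what I actually keep):

```python
import os

def trim_backups(backup_dir, keep=5):
    """Delete all but the `keep` most recent files in backup_dir.
    Returns the names of the files removed."""
    files = sorted(
        (f for f in os.listdir(backup_dir)
         if os.path.isfile(os.path.join(backup_dir, f))),
        key=lambda f: os.path.getmtime(os.path.join(backup_dir, f)),
        reverse=True,                  # newest first
    )
    removed = []
    for name in files[keep:]:          # everything past the newest `keep`
        os.remove(os.path.join(backup_dir, name))
        removed.append(name)
    return removed
```

Because it returns what it deleted, the daily report e-mail can include the list – and if that list is empty day after day while the disk keeps filling, you know the trimming isn’t happening, which is exactly the bug that bit me.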

This must all sound pretty reactive, and you would probably be horrified if someone decided to let systems go down as their warning that something in the environment is wrong. You are right – but as a safety net it is quite effective, and it is better that a system goes down because it detected that maintenance wasn’t being performed correctly than that it crashes and you lose data.

My next step is to get the Readify IT manager (also a TFS expert) to consider ways that we could identify issues before they happen. I’m pretty sure I could do it with some PowerShell scripts 🙂

MVP Awardees: Congratulations Paul, Grant and Phil!

Windows Live Writer just hung on me and ate a longer blog post, but I just wanted to say congratulations to all the new MVP awardees, with a special note on Paul Stovell, Grant Holliday and Phil Beadle. Paul and Grant put a lot of effort into their respective technology areas, and it was good to see the mistake of Phil not being re-awarded last year has finally been corrected.

CodeCampOz 2007: The Aftermath

Well, it’s Tuesday morning and CodeCampOz is starting to become a distant memory, but we can see the digital memories of the event surfacing on the blogs of the Australian .NET community members. William has posted up a very large set of photos from the event, Frank is being a human aggregator, Chuck is complimenting Dan, Joseph and Joel on their presentations, and some of the mailing lists are starting to get discussion about what people liked and what people didn’t.

On a personal note I’d like to thank the 200+ people that attended the event this year, because without them it wouldn’t be a success. I’d also like to thank Greg Low, who is a co-organiser of the event; a lot of people don’t realise this, but Greg does most of the work behind the scenes, from replying to e-mails to handling various other logistical issues – so thanks Greg!

Finally a big thank-you to those organisations that sponsored the event including Microsoft, Charles Sturt University, Avanade and Readify.

Update: Aymeric has posted some photos to his Flickr account.