The RSS format has provided interoperability between user agents, so that pretty much any user agent can subscribe to any content feed to get you some of that syndicated goodness. The problem, of course, is that the experience for content providers is nowhere near as good as it is for content consumers.
When you first start blogging you typically pick the system out of expediency but you soon learn the limitations of the solution you chose. If you have a significant investment in content it can be very difficult to migrate without a lot of fiddling around.
This is where BlogML comes in. BlogML is something that Darren Neimke is working on that allows blog engines to hive off all their content and place it in a single file which can then be archived or even transported to another blogging system.
Darren has integrated support for it into SUB, and if Community Server and dasBlog support the standard, the bigger lumbering giants will be forced to adopt it (this is the way standards are born now – you create critical mass instead of spending years talking about it).
One thing that I would like to see in the spec is the original content URL. There are a couple of reasons I want this:
- The first is that if my blog engine supported it, it could put in redirects for the old URLs, so that if the URL structure changed, all the old links out there wouldn’t get broken.
- When I export my content into BlogML, virtually everything I link to should have the option of becoming an attachment. Now let’s say that I export more content than I intended and I happened to suck in a few megabytes of video. Well, when I import it into another system I should be able to say – don’t pull this data in as an attachment, just link to the original URL. To do that you will need to store the original URL at export time.
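To make that second scenario concrete, here is a sketch of what a post with its original URL recorded might look like. The element and attribute names here are my own invention for illustration – they are not taken from the actual BlogML spec:

```xml
<!-- Hypothetical BlogML-style fragment; names are illustrative only -->
<post id="1234" original-url="http://example.com/archive/2005/03/some-post.aspx">
  <title>Some post</title>
  <content>Post body here.</content>
  <!-- exported attachment, with its source recorded so an importing
       engine can choose to link back instead of pulling the bytes in -->
  <attachment original-url="http://example.com/media/demo.wmv"
              embedded="false" mime-type="video/x-ms-wmv" />
</post>
```

With the original URL stored per attachment, an importing engine could offer a choice: embed the data, or just emit a link back to the source.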
I believe that Darren is adding that to the change request list for the spec over on the workspace on GDN.
Bill is confused!
I meant to say that I wouldn’t miss his and Nick’s session for the world, because I intend to heckle!
Sometimes you post something to your blog and it seems to incite the ire of a minority group. The number of dentists that have contacted me with reactions to that post is truly amazing – I’m sure some of them would have loved to stick a drill in my eyeball😛
One chap who took it all in good humour was Dr. Nate who now has a blog. I suspect that the dentist that did this to their patient didn’t . . .
The sequel to Code Camp Oz I has been scheduled and has been dubbed “Return to Wagga”! Once again we owe our thanks to Charles Sturt University for agreeing to host the event. We are still in the early days of planning, but you can go and block out the 23rd and 24th of April on your calendar.
For the latest news and RSS feeds you can visit the www.codecampoz.com site! Thanks to Bill Chesnut and Greg Low for getting this set up so quickly.
Just tuned into this post by Rory Primrose about the interface vs. inheritance issue with class library design. This is something that causes me a little bit of inner turmoil every now and then when I am trying to design something for other people to use, although to stop analysis paralysis you can just roll the dice and see how it goes.
The thing about using the interface approach is that you get the flexibility of having your own domain-specific base classes, and you can always encapsulate a helper class and just hand off to those helpers after you implement the interface.
I also find that programming with interfaces helps to reduce coupling between code. I think it’s just a mental hurdle that you have to get over when you decide to “just add one more property” to handle a special-case scenario.
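As a rough sketch of that hand-off idea – implement the interface, then delegate to an encapsulated helper rather than inheriting from a framework base class. The example is in Python and the class names are invented for illustration, not taken from Rory’s post:

```python
from abc import ABC, abstractmethod

class IContentStore(ABC):
    """The interface that callers program against."""
    @abstractmethod
    def save(self, entry: str) -> int: ...

class ListStoreHelper:
    """Reusable helper that does the actual work."""
    def __init__(self):
        self.entries = []

    def add(self, entry: str) -> int:
        self.entries.append(entry)
        return len(self.entries) - 1

class MemoryContentStore(IContentStore):
    """Implements the interface, but hands off to an encapsulated
    helper instead of inheriting behaviour from a base class."""
    def __init__(self):
        self._helper = ListStoreHelper()

    def save(self, entry: str) -> int:
        return self._helper.add(entry)

store: IContentStore = MemoryContentStore()
print(store.save("first post"))  # → 0
```

Callers only ever see `IContentStore`, so the coupling is to the contract, not to any particular base class hierarchy.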
Maybe there is no right answer, but one thing is for sure – having the experience to ask yourself the question “is this the right way to do it?” may be more important than the answer.
One of my biggest pet peeves is the way our industry continually misinterprets a specific design pattern (whichever one it is) and ends up producing something much more complex than it needs to be.
Darren Gosbell provides a classic example of this in his discussion about the pros and cons of using web services, remoting or enterprise services (COM+/DCOM) in an n-tier application.
Microsoft started promoting their DNA application model a number of years ago now and it really stuck in our collective consciousness. But the industry must have been dropping acid that day because instead of understanding and appreciating the concept of layering in application design they came away with this idea that every application should be split across three physical machines.
What the DNA application model was really all about was breaking your applications down into a set of layers: presentation, business logic and data access. The model has been tweaked over the years, but that’s pretty much what it is. The key difference here is that we are talking about an n-layer application, not an n-tier application.
In n-layer DNA applications you still have your presentation, business logic and data access layers – you just don’t physically split them across processes and machines. In fact, there is seldom good cause to build an n-tier application, though there are exceptions.
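A minimal sketch of what an n-layer, single-process application looks like – three layers separated by object boundaries rather than by machines. The classes here are invented stand-ins, in Python for illustration:

```python
class DataAccessLayer:
    """Talks to storage; here just an in-memory dict."""
    def __init__(self):
        self._orders = {}

    def insert_order(self, order_id, amount):
        self._orders[order_id] = amount

    def get_order(self, order_id):
        return self._orders[order_id]

class BusinessLayer:
    """Enforces the rules; knows nothing about the UI."""
    def __init__(self, dal):
        self._dal = dal

    def place_order(self, order_id, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._dal.insert_order(order_id, amount)

class PresentationLayer:
    """Formats output; knows nothing about storage."""
    def __init__(self, logic):
        self._logic = logic

    def submit(self, order_id, amount):
        self._logic.place_order(order_id, amount)
        return f"Order {order_id} accepted"

# All three layers wired up in one process, on one machine.
app = PresentationLayer(BusinessLayer(DataAccessLayer()))
print(app.submit(42, 100.0))  # → Order 42 accepted
```

You get the separation-of-concerns benefit the DNA model was actually selling, without paying the latency and deployment cost of splitting the layers across tiers.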
One is if you have a bunch of paranoids running your IT department who have so little faith in their network infrastructure that they force unnatural splits on your application just so bits of it can sit on either side of a misconfigured firewall. Interestingly, they will force their internal developers to go through hell, but then let a vendor walk in and plonk down a solution that doesn’t conform to this model – do I sound bitter?
My comeback for that argument these days is to ask whoever is making the architectural decision to show me the purchase order for the security expert who is coming in to do penetration testing on the software. Basically, a hacker is far more likely to achieve their goals by exploiting bad code than by exploiting the fact that an application is running all of its layers in the same process on the same physical network segment.
So my question to you Darren is – do you even need to split the application layers across tiers?
Here is an interesting link via Kieran Jacobsen’s blog. It seems that Microsoft has signed a deal with Fox and Universal to make a movie based on the video game Halo.
I thought that this had already been done. Seriously though, I can’t see why Microsoft can’t capitalise on popular games – after all, they made two Tomb Raider movies!
It will be interesting to see what they do with the script so that it differentiates itself from Starship Troopers which has a similar type of vibe. Maybe they should get guys like Dean Loaney to act in the movie!
I plan on finding time to get my butt kicked by Dean in Halo 2 at TechEd 2005!