Monthly Archives: October 2006

Architecture vs. Design

I received an e-mail from a client that I hadn’t heard from in a while. I did some consulting work with them on their first .NET project. Apparently it was a “big success”, but that didn’t surprise me because the folks there were pretty sharp and they worked well as a team.

Was web-services worth it?

On reviewing the finished product, my client observed that a layer of web services separated the user interface from the underlying database.

That isn’t too unusual – there are a few different reasons to employ web services in a solution. One is to tuck away complex or proprietary logic, while another is to remove the need for client applications to connect directly to databases, with all the licensing, scalability and security implications that entails. If the only reason you implemented web services is the latter, then you will likely end up with a pass-through, CRUD-like implementation. This is especially true if the application is really focused on record keeping/data entry/retrieval as opposed to automating workflows.

My client pondered whether the web-services layer represents an unnecessary overhead. Well – the answer is *drum roll* it depends.

From a workflow/logic abstraction point of view, most of the interaction logic lives in the client application and the underlying stored procedures in the database. From a security perspective, however, it really is a good idea to separate the client from the database, because the web services present only a subset of the underlying database functionality – which (if you do it right) actually reduces the attack surface.

My suggestion would be to stick with the web-services layer – its place in the design seems to be justified.

Where should I get my data from?

My client also asked about building a data access layer and whether they should use a third-party toolkit like Castle or stick to the tools that come out of the box. I don’t really have enough experience with Castle to say definitively whether it is the right tool for them, especially since (from memory) they are heavy users of Oracle, and Oracle support usually comes second, if at all, in the world of community extensions for .NET.

O/R mapping solutions generally come at a price, mostly in getting your head around how their persistence and round-tripping mechanisms work, but once you are over that hump they can give you a pretty decent productivity boost, and some of the newer ones are even developed with an eye on performance. Even Microsoft is taking another stab at the O/R space with LINQ, with a special focus on the query mechanism – an area where other tools have struggled because they can’t break into the compiler tool chain as easily.
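To make the trade-off concrete, here is a toy Python sketch of the plumbing an O/R mapper automates: running a query and hydrating one object per row. The schema and class names are purely illustrative (not from my client’s system), and a real tool like Castle ActiveRecord or LINQ would generate or infer this mapping rather than have you hand-write it.

```python
import sqlite3

# Hypothetical table and class, purely for illustration.
class Customer:
    def __init__(self, id, name, email):
        self.id = id
        self.name = name
        self.email = email

def fetch_customers(conn, min_id=0):
    # This hand-written row-to-object mapping is exactly the
    # boilerplate an O/R mapping tool takes off your hands.
    rows = conn.execute(
        "SELECT id, name, email FROM customers WHERE id >= ?", (min_id,)
    )
    return [Customer(*row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'ada@example.com')")
customers = fetch_customers(conn)
print(customers[0].name)  # Ada
```

Multiply that mapping function by every table and every query shape in an application and the productivity argument for a mapper becomes clearer – as does the learning-curve cost of letting a tool control persistence.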

So what is the right answer for you (my client)? In all honesty I don’t know; a lot of it depends on how you intend to balance the conflicting business requirements – it’s a design question, not an architecture question. Please excuse me while I go off on a rant that has been a long time coming.

Why I hate the “architecture” word!

This is not intended to be an architect-bashing session; rather, I have a problem with the word “architecture” and the amount of baggage it brings over from our friends in the construction industry. Architecture is a big word – it implies building something relatively large, with layer upon layer of support systems, ultimately topped off with a pretty facade.

As practitioners in the software industry we seem to have taken this and tried to replicate that necessary complexity in software, when in reality a lot of the solutions we build don’t require anywhere near as much code. I think we need to take off the architecture hat and put on the design hat – design is a much lighter word.

Here I invoke the spirit of Joel Spolsky, who has written some great stuff on what design really means. What I want to say is that a good design is a simple design (or at least as simple as possible), and that a good software designer wouldn’t necessarily believe they need to add code – they might decide that the usability (in the holistic sense) of the system would be improved by turning something off, or by completely removing a feature.

In the consumer electronics business, some people are actually hired to go through a system and remove components until it stops working – they do this to cut costs before going into mass production. We need more of this in the software business.

</soapbox>

Now that I’ve had my rant, how does that relate to the question at hand? Well, my point is that you should look at each new application as a different challenge – go into it with your eyes open and don’t assume that the framework you used in the previous application is necessarily the best one in this case. If you do implement a framework like Castle or LINQ, be up front about how much code you expect to save yourself – and if you end up writing more code, bail out. More code is seldom more maintainable, in my experience (and that includes generated code).

The dial-up experience . . .

. . . is about as close as a disassociated geek can get to real emotional trauma. The screenshot below is what my life looks like at the moment – three browser tabs all silently churning away, pulling down data at a raw 33.6Kbps.

I’ve been waiting for my Internet service provider (featured in the lower right of the picture above) to get my ADSL connection transferred across from my previous address. On the bright side – at least I don’t have to dial-in with GPRS anymore – that was really bad.

My main concern is still that iiNet might turn around and tell me that broadband is not available to us – at which point I don’t know what I will do. Someone suggested DSL Direct from Optus – but how does that get over the problems that ADSL would have?

Three Laws of Software Development?

Leon Bambrick posted a good overview of how the MVC pattern works, using a simple login dialog. I’ve recently inherited some responsibility for an ASP.NET application that uses a derivative of MVC extensively – the idea was that the controllers could be re-used across various delivery platforms, including things like Windows Forms and the Compact Framework.
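For flavour, here is a minimal sketch of that MVC split on a login dialog – the class names are my own illustration, not taken from Bambrick’s post. The point is that the controller talks to the view only through a small interface, which is what (in theory) lets the same controller sit behind a Windows Forms view, a web view, or anything else.

```python
class LoginModel:
    """Holds the login rules; knows nothing about any UI."""
    def __init__(self, valid_users):
        self.valid_users = valid_users  # e.g. {"alice": "secret"}

    def check(self, user, password):
        return self.valid_users.get(user) == password

class ConsoleLoginView:
    """One possible view; a WinForms view would expose the same methods."""
    def show_success(self):
        return "Welcome!"

    def show_failure(self):
        return "Access denied."

class LoginController:
    """Coordinates model and view; reusable across view implementations."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def attempt_login(self, user, password):
        if self.model.check(user, password):
            return self.view.show_success()
        return self.view.show_failure()

controller = LoginController(LoginModel({"alice": "secret"}), ConsoleLoginView())
print(controller.attempt_login("alice", "secret"))  # Welcome!
print(controller.attempt_login("alice", "wrong"))   # Access denied.
```

It’s a tidy separation – the question is whether you will ever actually swap that view out, which brings me to my next point.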

Of course, it never happened – but that architectural decision, made way back in the beginning of time, is probably costing the company significantly now. I’m not against MVC, but I am starting to wonder if there is a set of laws about software development – kind of like Asimov’s Three Laws of Robotics – that could help keep people from making these mistakes.

  1. A developer may not write more code than is absolutely necessary, or, through inaction, allow more code to be written than is absolutely necessary.
  2. A developer must reuse software and ideas, except where doing so conflicts with the first law.
  3. What would the third law be?

Education in IT

Before I start, I need to point out that I am now a member of the ACS – after all that carry-on in the past I decided to join and see what it was like. When I joined, a whole heap of knowledge didn’t suddenly get sideloaded into my brain, and a wealth of new job opportunities didn’t open up for me – but then again, I don’t think that I’ve brought anything to the ACS either, other than my credit card.

With that disclaimer out of the road, I want to point you to three posts by fellow ACS member and fellow MVP, Rob Farley:

Rob is asking the right questions I think, and this observation really strikes a chord with me:

The fact is that digital natives won’t do school. But they still want to learn. If we want to be a part of that, we need to reinvent school. The burden is on us, because traditional learning cultures have hurt education significantly.

I’ve had this discussion and debate before, and it does get pretty tiresome. On the bright side, however, if what I suspect is true, it is part of a greater social change coming around education and life in general – so in some cases the best way to have the argument is to sit back and watch it just happen.

All your "man" pages belong to us.

I don’t want Carla Schroder to feel like I am throwing hand grenades over to her side of the fence (Linux), but I wanted to point out that PowerShell, the new command shell for Windows, does in fact have some pretty good command-line accessible documentation that works in a very similar way to man pages.

With the command shell up, all you need to do is type in “help”, and what you actually get back is an information page about how to use the help command, along with a few pages of examples:

The help system is pretty much tied to the various cmdlets the system has installed. The beauty of that is that you can ask for a list of the commands it supports by typing “help *-*”:

The reason for this consistency is that PowerShell has all commands follow a common verb-noun syntax. So if I want to bring up the help on a command – say Get-ChildItem – I just type in “help get-childitem” and I get several pages of help back.

Another cool feature is that it understands how to navigate aliases, so if I asked for help on the gci command it would return the same result (because gci is an alias for Get-ChildItem, as is dir).
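Collecting the commands above into one illustrative session (output omitted – each one pages back its documentation):

```powershell
help                 # usage page for the help command itself, with examples
help *-*             # list the installed verb-noun cmdlets
help Get-ChildItem   # full help for one cmdlet
help gci             # same result via the alias (as would "help dir")
```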

Let’s hope that PowerShell has command-line documentation for a long time to come!

Put documents into TFS where your intended audience will find them.

Here is a great post on the VertigoSoftware blog on the best place to store documents in TFS. As they point out, there are three locations where you can store documents: on the associated WSS portal, as a file attachment on a work item, and in the version control store.

The key piece of wisdom? It depends on who the intended audience is.

I must admit that I use a similar technique – although I find that using work items is the easiest (for me – but I am often my own audience). After that, if I need to share documents outside the development team, I will use the portal site.

Great post on WTF about software quality.

I noticed a great post on The Daily WTF about software quality and some of the things that companies do to ensure that software is of a high quality. I was talking to Andrew Matthews today while we were out on a client site.

He has been working on a piece of software for the last couple of months that we handed over to the customer today, after fixing some bugs that were picked up in testing. He mentioned that he felt software can be hard to let go of because you can’t imagine it surviving in the world without you. It’s kind of like a child.

While I was reading the WTF post I had his comment in the back of my head, and I started thinking about the analogy a little further. It fits really well – especially in terms of software quality and environmental configuration.

I’ve been a bit hot and cold on fully isolated environments for software developers. I think that if you can swing it, having everyone on the same network and part of the same Active Directory environment (if we are talking Microsoft-based environments) really makes things easier. Most infrastructure people will tend to want to put an air-gap between developers and “their network”. You have to appreciate their position – they have to try to maintain a certain level of service.

But I am wondering if the air-gap approach actually defeats everyone in the end. It is kind of like those sickly children who were kept in a completely sterile environment and as a result never got the chance to develop natural immunities to things that occur in nature. Does the same not happen to software?

By developing software in the target environment we get to understand what is good and what is bad very early on in the development cycle, and while there may be some pain – it is probably less than the software arriving DOA.

Thoughts?