Category Archives: Readify

Architecture vs. Design

I received an e-mail from a client that I hadn’t heard from in a while. I did some consulting work with them on their first .NET project. Apparently it was a “big success”, but that didn’t surprise me because the folks there were pretty sharp and they worked well as a team.

Was web-services worth it?

On reviewing the finished product, my client made an observation about the layer of web-services that separated the user interface from the underlying database.

That isn’t too unusual – there are a few different reasons to employ web-services in a solution. One is to enable you to tuck away complex or proprietary logic, while another is to remove the need for client applications to connect directly to databases and accept all of those licensing, scalability and security implications. If the only reason you implemented web-services is the latter, then you will likely end up with a pass-through, CRUD-like implementation. This is especially true if the application is really focused on record keeping/data entry/retrieval as opposed to automating workflows.

My client pondered whether the web-services layer represents an unnecessary overhead. Well – the answer is *drum roll* it depends.

From a workflow/logic abstraction point of view, most of the interaction logic exists in the client application and the underlying stored procedures in the database. However, from a security perspective it really is a good idea to separate the client from the database, because the web-services present only a subset of the underlying database functionality – which (if you do it right) actually reduces the attack surface.

My suggestion would be to stick with the web-services layer – its place in the design seems to be justified.

Where should I get my data from?

My client also asked about building a data access layer and whether they should use a third-party toolkit like Castle or stick to the tools that come out of the box. I don’t really have enough experience with Castle to say definitively whether it is the right tool for them, especially since (from memory) they are heavy users of Oracle, and Oracle support usually comes second, if at all, in the world of community extensions for .NET.

O/R mapping solutions generally come at a price, mostly in getting your head around how their persistence and round-tripping mechanisms work, but once you are over that hump they can give you a pretty decent productivity boost, and some of the newer ones are even developed with an eye on performance. Even Microsoft is taking another stab at the O/R space with LINQ, with a special focus on the query mechanism – an area where other tools have struggled because they can’t break into the compiler tool chain as easily.
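To illustrate what that compiler integration buys you, here is a minimal LINQ to Objects sketch in the C# 3.0 syntax being previewed at the time – the types and data are hypothetical:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Product { public string Name; public decimal Price; }

    class LinqSketch
    {
        static void Main()
        {
            var products = new List<Product>
            {
                new Product { Name = "Widget", Price = 5.00m },
                new Product { Name = "Gadget", Price = 15.00m }
            };

            // The query is checked by the compiler and translated into
            // method calls - the step earlier O/R mappers could not take
            // because they had to parse query strings at runtime.
            var cheap = from p in products
                        where p.Price < 10.00m
                        select p.Name;

            foreach (string name in cheap)
                Console.WriteLine(name); // prints "Widget"
        }
    }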

So what is the right answer for you (my client)? In all honesty I don’t know; a lot of it depends on how you intend to balance the conflicting business requirements – it’s a design question, not an architecture question. Please excuse me while I go off on a rant that has been a long time coming.

Why I hate the “architecture” word!

This is not intended to be an architect-bashing session; rather, I have a problem with the word “architecture” and the amount of baggage that it brings over from our friends in the construction industry. Architecture is a big word. It implies building something relatively large, with layer upon layer of support systems, ultimately topped off with a pretty facade.

As practitioners in the software industry we seem to have taken this and tried to replicate that necessary complexity in software, when in reality a lot of the solutions we build don’t require anywhere near as much code. I think we need to take off the architecture hat and put on the design hat – design is a much lighter word.

Here I invoke the spirit of Joel Spolsky, who has written some great stuff on what design really means. What I want to say is that a good design is a simple design (or at least as simple as possible), and that a software designer wouldn’t necessarily assume that they need to add code – they might decide that the usability (in the holistic sense) of the system would be improved by turning something off, or completely removing a feature.

In the consumer electronics business, some people are actually hired to go through a system and remove components until it stops working – they do this to remove cost before the product goes into mass production. We need more of this in the software business.

</soapbox>

Now that I’ve had my rant, how does it relate to the question at hand? Well, my point is that you should look at each new application as a different challenge – go into it with your eyes open and don’t assume that the framework you used in the previous application is necessarily the best one in this case. If you do implement a framework like Castle or LINQ, be up front about how much code you expect it to save you – and if you end up writing more code, bail out. More code is seldom more maintainable, in my experience – and that includes generated code.

AdWords is the sincerest form of flattery.

Update: This blog post was originally pulled. This post explains why I pulled it back.

One of our consultants did a search for “readify” on Google last night and noticed that one of the other Microsoft partners in Sydney had purchased the “readify” AdWord from Google.

You’ve got to hand it to Adam – he certainly runs an aggressive marketing campaign, and I’d be worried if he somehow managed to push us off the top rankings in the search results.

Still – can someone purchasing your AdWord be considered a form of flattery?

The new Readify home page is live!

We got news this evening at 8:26PM that the new Readify home page has gone live. It is a refresh of the “underpants” theme that we had been carrying for over twelve months. The new site is once again built on top of DotNetNuke. One of the features that I particularly like is the Cool Tools section, which showcases some of the “stuff” we have built as individuals and as a group.

Getting Started with TFS Integrator

ATTENTION: TFS Integrator is now obsolete. Please use TFS Continuous Integrator and TFS Dependency Replicator. For more information, read the post on why TFS Integrator is now obsolete.


This is a post that I’ve been meaning to write for a while. Earlier this year, when Chris Burrows joined our team, we had the opportunity to spend a little bit of time building a TFS extension that Darren and I dreamed up whilst we were working on a client project together.

As you are no doubt aware, Team Foundation Server does not ship out of the box with a Continuous Integration capability. A lot of people, myself included, consider this a glaring omission – but as a developer I have to appreciate the demands of tight delivery schedules.

What is Continuous Integration?

Continuous Integration (CI) is the process of continually scanning the source code repository for changes and, once changes are detected, automatically kicking off a build process to verify that the code compiles successfully. In many instances teams will tack on a round of build verification tests to ensure that not only does the code compile, but that it doesn’t catch on fire when you try to run it (smoke tests).

While not unique to agile methods, continuous integration is certainly one of the tools that agile development teams use to keep a handle on their code base as it evolves throughout an iteration. It is no surprise, then, that when agile teams came to use TFS there was a collective “what the” when it didn’t include a CI engine.

How did we build our TFS CI engine?

Fortunately for us, the team at Microsoft did build TFS with an eye on extensibility, and they included an “event service” which we can plug into to listen for events in the source code repository. The way it works is that whenever you do something significant in Team Foundation Server, such as checking in a file, that TFS sub-system notifies the TFS eventing system, which then broadcasts the message to any subscribers. This is how you get an e-mail when another developer checks in some code.

This same eventing mechanism can also be instructed to send a notification as a SOAP message to a web-service endpoint, which might be hosted in something like ASP.NET, or even Windows Communication Foundation.
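To make that concrete, here is a rough sketch of what such a subscriber endpoint can look like as an ASP.NET (ASMX) web service. The Notify signature follows the TFS 2005 notification contract as I remember it – treat the exact namespace strings as assumptions and verify them against the SDK documentation.

    using System.Web.Services;
    using System.Web.Services.Protocols;

    // Sketch of a TFS event subscriber endpoint. The eventing service posts
    // a SOAP message here whenever an event you have subscribed to fires.
    [WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Notification/03")]
    public class CheckinEventListener : WebService
    {
        [WebMethod]
        [SoapDocumentMethod(
            "http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Notification/03/Notify",
            RequestNamespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Notification/03")]
        public void Notify(string eventXml, string tfsIdentityXml)
        {
            // eventXml carries the serialised event (e.g. a CheckinEvent);
            // parse it to work out which team project and path changed,
            // then decide whether a build should be queued.
        }
    }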

TFS Integrator uses this facility to kick off a build in the Team Build component of Team Foundation Server. When TFS Integrator initialises, it reads a configuration file to identify which parts of the source tree it is interested in listening to, then subscribes to the check-in event notifications that the eventing system sends out.

When it receives a notification it sleeps for a specified period of time, then kicks off a build with the Team Build facility. Once the build is completed, the eventing system sends out a notification to subscribers with the outcome. The completed interaction between the system components looks like this:

[Diagram: a check-in event flows from TFS to TFS Integrator, which queues a build in Team Build; the build completion event flows back out to subscribers.]

All of this automation makes use of the tools available from the Visual Studio 2005 SDK which you can download from Microsoft.

How to configure TFS Integrator to support Continuous Integration?

Before we start, you need to understand that the current build of TFS Integrator relies on the .NET 3.0 runtime (RC1) being installed. For this reason we don’t currently recommend installing it on the same computer as either your application or data tiers. Once you have downloaded and installed the .NET 3.0 runtime (RC1), the next thing you need to do is download the latest build of TFS Integrator.

The setup package itself does nothing more than drop a few files onto the file system; the rest of the configuration requires some manual intervention. The installation files are dropped into the “TFS Integrator” directory under Program Files.

The “TFS Integrator” directory contains three separate configuration files:

  • TfsIntegrator.exe.config
  • ContinuousIntegration.xml
  • DependencyReplication.xml

The first file, TfsIntegrator.exe.config, contains the bootstrap information required to get TFS Integrator talking to your Team Foundation Server installation. In this configuration file there are three main settings that you should be interested in: BaseAddress, RegistrationUserName and TeamFoundationServerUrl.

BaseAddress is the address of the server itself. We would have liked the program to determine this on its own, but that can get tricky on multi-homed systems. The value you need to provide here is the address that Team Foundation Server will use to communicate with TFS Integrator via the event notification service. Provided this address maps to the local host, the TFS Integrator service will register the port itself as it starts up.

The next setting, RegistrationUserName, is the account that you want TFS Integrator to register subscriptions under. Typically this is the same account that is used to run the service (especially in a domain configuration). Finally, TeamFoundationServerUrl is the address of the Team Foundation Server that TFS Integrator will need to talk to.
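A minimal sketch of those settings follows – the server names and account are hypothetical, and the exact shape of the file may differ from this appSettings-style layout, so treat it as illustrative:

    <configuration>
      <appSettings>
        <!-- Address that TFS will call back on; must resolve to this machine. -->
        <add key="BaseAddress" value="http://buildserver:8090/" />
        <!-- Account that event subscriptions are registered under. -->
        <add key="RegistrationUserName" value="DOMAIN\tfsservice" />
        <!-- The Team Foundation Server application tier to talk to. -->
        <add key="TeamFoundationServerUrl" value="http://tfsserver:8080/" />
      </appSettings>
    </configuration>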

Once the initial configuration is done, the service can be installed so that it can run headless. You do this by running the “TfsIntegrator.exe” program with the “-i” switch.
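For example, from a command prompt in the installation directory (the service name used in the start command is an assumption – check what the installer actually registers):

    cd "C:\Program Files\TFS Integrator"
    TfsIntegrator.exe -i
    net start "TFS Integrator"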


After the installer registers the service, you just need to go into the Services MMC snap-in (services.msc) and specify the logon credentials that the TFS Integrator service is going to use. This account should have access to TFS, be the same account as specified in RegistrationUserName, and have sufficient rights to write files to the “TFS Integrator” directory in Program Files.

Now that the basic configuration for TFS connectivity is out of the way, it’s time to actually modify the ContinuousIntegration.xml file. The ContinuousIntegration.xml configuration file contains all the information that TFS Integrator needs to trigger the build of a “team build type” in Team Foundation Server.

The root element in the document is ContinuousIntegration, and under that you can have one or more TeamProject elements. The TeamProject element names the team project for which TFS Integrator will listen to events. This element in turn contains multiple Build elements. The Build element defines which path in the version control store to listen for changes in, which build type to kick off, and on which build machine. The sleep attribute defines the settle period that TFS Integrator will wait for further check-in notifications before spinning off the build.
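A minimal sketch of the file follows. The element names and the sleep attribute come from the description above, but the remaining attribute names are illustrative – check them against the sample file that ships with the download:

    <ContinuousIntegration>
      <TeamProject name="SystemA">
        <!-- Watch $/SystemA/Main; after a 60 second settle period, kick off
             the "SystemA" build type on the nominated build machine. -->
        <Build path="$/SystemA/Main"
               buildType="SystemA"
               buildMachine="buildserver"
               sleep="60" />
      </TeamProject>
    </ContinuousIntegration>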

Once you have made the modifications to the above configuration file you should be able to start up TFS Integrator. Congratulations! But what about DependencyReplication.xml?

What the heck is Dependency Replication?

Dependency Replication is a process that most development teams have to take on as the scope of what they are trying to achieve grows. The idea is that within your organisation you might have a set of common frameworks that you use and maintain. Rather than linking the source code into each new project that you undertake, you treat the framework like an internal product which you build separately.

The problem you then have is integrating the updated binaries into the other projects that depend on them. This can be quite a time-consuming process – so much so that teams will often give up and end up compiling hundreds of different versions of the same code into their code base, which creates a maintenance nightmare.

TFS Integrator includes a dependency replication engine which extends the continuous integration feature. It does this by listening for the build completion event from TFS and using that to trigger the execution of some code which picks up the compiled build output and checks it back into Team Foundation Server in a specified location.

The effect is that with dependency replication the feedback loop is complete and dependent components can be linked together.

In the case of the framework example, we could build up a scenario where TFS Integrator gradually rebuilds your entire code base:

  1. Developer checks in some code in the framework code base.
  2. Check-in notification sent to TFS Integrator.
  3. TFS Integrator kicks off a build in Team Build.
  4. Build completes.
  5. Notification of successful build completion sent to TFS Integrator.
  6. TFS Integrator checks in some of the drop files into version control.
  7. Check-in notification sent to TFS Integrator.
  8. TFS Integrator kicks off a build in Team Build.
  9. Build completes.
  10. Notification of successful build completion sent to TFS Integrator.
  11. Rinse, repeat.

In the past I’ve spent significant amounts of time getting similar systems going on top of version control systems like Visual SourceSafe where sharing was used as the replication mechanism. I actually think that this approach is much cleaner and more obvious.

How to configure TFS Integrator to support Dependency Replication?

Like continuous integration, the configuration of dependency replication is controlled via a configuration file – in this case, DependencyReplication.xml.

The root element of the configuration file is DependencyReplication. Within it there are multiple TeamProject elements which define the team project and its root folder (I’m not sure exactly what the root folder item is off the top of my head, but I suspect it’s the relative path to which check-ins will be made). Within each TeamProject element, one or more Dependency elements are defined which specify which build type the team project depends on. Inside each Dependency element is a File element which specifies, relative to the build’s drop location, the source file that needs to get checked back into source control. The destination is the fully specified path to put that file into version control.
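Here is a sketch of such a file for the SystemA/SystemB scenario described below – the element names follow the description above, while the attribute names are illustrative:

    <DependencyReplication>
      <TeamProject name="SystemB" rootFolder="$/SystemB">
        <!-- When the SystemA build completes, pick these files out of the
             drop location and check them into SystemB's dependency folder. -->
        <Dependency buildType="SystemA">
          <File source="Output1.dll"
                destination="$/SystemB/SystemB/Dependencies/Output1.dll" />
          <File source="Output2.dll"
                destination="$/SystemB/SystemB/Dependencies/Output2.dll" />
        </Dependency>
      </TeamProject>
    </DependencyReplication>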

In the configuration file above we are saying that SystemB depends on some files from SystemA (Output1.dll and Output2.dll) and that those files should be checked back into the $/SystemB/SystemB/Dependencies location in version control. The interesting thing to note is that the file name is also specified in the Destination path. This allows configuration managers to rename files as part of the replication process.

Once both continuous integration and dependency replication configuration is complete, you need to restart TFS Integrator to get it to pick up its latest settings.

Next Steps

If you are interested in trying out TFS Integrator in your environment, just go ahead and download it. We are very keen to see people use this tool because it is genuinely useful to us, and we think it will be useful for you too. We probably won’t be giving away the source code at this stage, but unless a future version of Team Foundation Server makes it redundant we will continue to make revisions to the code base and support it (as best we can).

If you want support for TFS Integrator, I recommend that you join the OzTFS mailing list and community which was recently set up by Grant Holliday; you can subscribe via e-mail. Once you are subscribed you can send an e-mail with your question. If you can’t post to that mailing list for any reason, just send an e-mail to me.

Thanks

Finally – before I sign off on this post, I want to thank a few people. TFS Integrator was not developed by me alone – in fact, I can’t even claim sole responsibility for its design. Darren Neimke and Chris Burrows both helped get this software out the door; Chris in particular put in hours above and beyond the call of duty.