Darren’s Single User Blog project is humming along nicely, and now he has a new logo!
I’ve been using MSN Search quite a bit lately. A few months ago I was almost always flicking back to Google to do technical searches because its results were always better. But I was thinking about it tonight and MSN Search is now coming back with the results that I expect – this is a good thing.
Watch out Google – Microsoft might be slow starters in this space but I think that they are catching up fast!
I’ve been spending some time this week planning out my TechEd Australia talk, and I am mindful of some people’s opinions of PowerPoint decks. At the same time I know that people will want to be able to download the material from my presentation so that they can go through it at their leisure.
My current plan is to use slides only to introduce myself and the agenda, and then anywhere I need a visual to help describe a concept. However, everything that I present will be supported by a range of slides which capture the key points, plus a series of demonstrations.
At the moment I’ve got three high-level areas that I want to cover:
- New generics features in C#
- New and improved delegates in C#
- New miscellaneous features in C# comprising:
  - Accessor accessibility
  - Partial classes
  - Nullable types
  - The :: namespace operator
  - Static classes
  - #pragma warning [n]
These are really the areas that I thought people would be most interested in. I haven’t sat down and timed myself yet, so I may adjust the mix a little bit. If there is something that you think I have missed, or would like me to expand on, please feel free to leave me a comment here and I will try my hardest to accommodate your request.
Generics is obviously one of those 2.0 features that is getting a lot of airplay, and I’ll definitely be drilling down into it in detail. Expect coverage of the following:
- Basic how-to’s
- Notable .NET Framework uses
- Constraints (struct, class, new(), base-class, interface)
- Inheritance (open/closed principles)
- Generic methods
- Differences between value and reference types
- The use of the default keyword
- Usage scenarios, performance implications and recommendations
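To make the constraint and default-keyword points above concrete, here is a small sketch; the method name and scenario are mine, not from the talk:

```csharp
using System;
using System.Collections.Generic;

static class GenericsDemo
{
    // Constraint: T must implement IComparable<T>.
    public static T MaxOrDefault<T>(IEnumerable<T> items) where T : IComparable<T>
    {
        // default(T) is null for reference types and "all zeroes" for value types.
        T max = default(T);
        bool first = true;
        foreach (T item in items)
        {
            if (first || item.CompareTo(max) > 0) { max = item; first = false; }
        }
        return max;
    }

    static void Main()
    {
        Console.WriteLine(GenericsDemo.MaxOrDefault(new int[] { 3, 1, 4 }));  // 4
        Console.WriteLine(GenericsDemo.MaxOrDefault(new string[0]) == null);  // True
    }
}
```

Because the method is generic, the int version works directly on the values with no boxing, which is where the performance story gets interesting.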
I’ll back all of it up with little demonstration programs to help drive the key points home. Next up – delegates!
Of all the new features in C# 2.0, the set of evolutionary changes to the delegate infrastructure is one that excites me the most. Some of the improvements are C#-specific syntactic sugar (mmmm, sweets), but a few of them are runtime features that smart guys like Joel Pobar worked on. Expect coverage of the following:
- Anonymous methods comprising:
  - Basic how-to’s
  - What IL gets produced
  - Scoping tricks
  - Using with the .NET Framework
- Simplified declaration syntax for delegates
- Support for contravariance and covariance
In the demonstrations I’ll try to clarify what contravariance and covariance really mean in the context of delegate usage, as well as showing some interesting uses for anonymous methods above and beyond the trusty generic list type.
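As a rough taste of the syntax involved (the delegate and method names here are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

class DelegateDemo
{
    delegate void StringHandler(string s);

    // Contravariance: a method whose parameter type is *less* derived
    // than the delegate's (object vs string) can now be bound to it.
    static void HandleAnything(object o) { Console.WriteLine(o); }

    public static int CountEvens(List<int> numbers)
    {
        int count = 0;
        // Anonymous method capturing the local variable "count" from
        // the enclosing scope - one of those scoping tricks.
        numbers.ForEach(delegate(int n) { if (n % 2 == 0) count++; });
        return count;
    }

    static void Main()
    {
        // Simplified declaration: no explicit "new StringHandler(...)" needed.
        StringHandler h = HandleAnything;   // legal via contravariance
        h("hello");

        List<int> numbers = new List<int>(new int[] { 1, 2, 3, 4 });
        Console.WriteLine(DelegateDemo.CountEvens(numbers)); // 2
    }
}
```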
If you have ever implemented your own collections from scratch then you have no doubt known the joy of implementing your own enumerator. I mean, they are fun to use and everything – but writing the same code over and over again gets a bit tedious. In this session I will look at how we used to build enumerators and then look at the new iterator syntax – including a peek behind the scenes at what IL the C# compiler produces to support it.
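A hand-rolled enumerator collapses into a few lines with yield return; the compiler generates the enumerator state machine for you:

```csharp
using System;
using System.Collections.Generic;

class IteratorDemo
{
    // The compiler turns this method into a nested class implementing
    // IEnumerator<int>; each yield return becomes a state transition.
    public static IEnumerable<int> Countdown(int from)
    {
        for (int i = from; i >= 0; i--)
            yield return i;
    }

    static void Main()
    {
        foreach (int i in IteratorDemo.Countdown(3))
            Console.Write(i + " ");   // 3 2 1 0
    }
}
```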
Accessibility on Accessors
Property accessors, of course! The .NET runtime has always had support for different accessibility levels on accessors – but C# resolutely refused to work with libraries that used that feature.
In C# 2.0 we now have the ability to specify accessibility on a per-accessor basis and I’ll talk through why this will have a positive impact on your code and whether there are any interoperability issues with other programming languages.
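A minimal sketch of the feature (the class is invented for illustration):

```csharp
public class Order
{
    private int _id;

    public int Id
    {
        get { return _id; }
        // New in C# 2.0: an accessor can be more restrictive than the
        // property itself - here, only this assembly can set the Id.
        internal set { _id = value; }
    }
}
```

From outside the assembly callers can read `Id` but not assign it, which is exactly what you want for identifiers handed out by a persistence layer.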
This is really a feature for the code generation junkies – including the teams behind designers in the Visual Studio environment. It allows you to split source files into multiple parts and have some parts that are generated by wizards, designers and code generators and other parts that allow you to type in your code.
I’ll demonstrate where they are used today and what rules govern their use, and if we have time I’ll see if I can make your head spin with a brief chat about nested partial types.
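The split looks something like this (the file names and class are hypothetical; normally each part would live in its own file):

```csharp
// MainForm.Designer.cs - the generated half
public partial class MainForm
{
    private string _title;

    private void InitializeComponent()
    {
        _title = "Generated by the designer";
    }
}

// MainForm.cs - the hand-written half of the same class
public partial class MainForm
{
    public MainForm()
    {
        // Members declared in the other part are visible here,
        // because the compiler stitches the parts into one class.
        InitializeComponent();
    }

    public string Title { get { return _title; } }
}
```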
The impedance mismatch between the .NET type system and relational databases really sucks. The BCL team leveraged generics to produce a value type wrapper that could be nulled out. It is a really neat framework feature with some really succinct supporting syntax in C# (the ? suffix and the ?? operator are among my favourite additions in C# 2.0).
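The syntax in question, briefly:

```csharp
using System;

class NullableDemo
{
    static void Main()
    {
        int? rowCount = null;                  // shorthand for Nullable<int>

        Console.WriteLine(rowCount.HasValue);  // False

        // The ?? operator supplies a fallback when the left side is null.
        int display = rowCount ?? 0;
        Console.WriteLine(display);            // 0

        rowCount = 42;
        Console.WriteLine(rowCount ?? 0);      // 42
    }
}
```

That maps neatly onto a nullable database column: the value is either genuinely absent or genuinely present, with no magic sentinel values.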
:: Namespace Operator
To be honest I’m really covering this one for the sake of completeness. I haven’t come across too many situations where it will help out, since you need to try really hard to get into trouble with namespaces.
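Still, a quick sketch of what it looks like (the namespaces here are invented for the example):

```csharp
using System;
using Widgets = AcmeCorp.Widgets;

namespace AcmeCorp.Widgets
{
    public class Button
    {
        public override string ToString() { return "widget button"; }
    }
}

class NamespaceDemo
{
    static void Main()
    {
        // global:: always resolves from the root namespace, sidestepping
        // any local type or alias that happens to shadow System.
        global::System.Console.WriteLine("hello");

        // alias::Type forces Widgets to be treated as a namespace alias,
        // never as a type name.
        Widgets::Button b = new Widgets::Button();
        Console.WriteLine(b);   // widget button
    }
}
```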
VB.NET has had “modules” since .NET 1.0, and with C# 2.0 we get the ability to apply the static keyword to class declarations. This is useful when you want to build a flat API which doesn’t contain any per-instance state but you want to ensure your library users don’t get confused and try and call an instance method.
It’s essentially the same as putting a private constructor on a class, except that with static classes the compiler steps in and does some sanity checks on the data member and member function declarations.
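A minimal example of that flat-API shape:

```csharp
using System;

// The compiler rejects instance members, instance constructors
// and base classes on a static class.
public static class TemperatureConverter
{
    public static double ToFahrenheit(double celsius)
    {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

class StaticClassDemo
{
    static void Main()
    {
        Console.WriteLine(TemperatureConverter.ToFahrenheit(100.0)); // 212
        // TemperatureConverter t = new TemperatureConverter(); // compile-time error
    }
}
```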
#pragma warning [n]
Build masters will love this one. In most projects of any scale you are likely to get compile warnings on code that you really can’t (or shouldn’t need to) change. This compiler directive allows you to disable warnings for a segment of code without having to disable for a whole project.
It also means you can turn on warnings-as-errors to catch out the lazy developers on your team.
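For example, silencing an “obsolete member” warning (CS0618) around just the one call site that needs it; the legacy API here is invented for the sketch:

```csharp
using System;

static class LegacyApi
{
    [Obsolete("Use OpenHandle2 instead")]
    public static object OpenHandle() { return new object(); }
}

class PragmaDemo
{
    static void Main()
    {
#pragma warning disable 618    // we can't change the legacy API yet
        object handle = LegacyApi.OpenHandle();
#pragma warning restore 618
        Console.WriteLine(handle != null);   // True
    }
}
```

The rest of the project still compiles with warning 618 fully enabled, so new uses of the obsolete member get flagged.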
This time last month I read this post by Martin Fowler. In the post Martin puts the case for code as a form of documentation – although he goes to great lengths to stress that it is not the only form of documentation, and that it can be good or bad documentation depending on the nature of the code. I didn’t know if I entirely agreed or partially agreed, so I decided to put it aside and re-read it at some point in the future (I do this a lot).
At the same time I’ve been spending some time analysing the amount of time spent around the various design, coding, management, release and documentation components in software projects.
While I don’t have any hard data to back it up, my gut feeling is that we spend an extremely small amount of time on the design and documentation components and significantly more on the coding and release parts of a project – management is a parallel stream of work which is constantly steering the project towards successful completion (or at least customer satisfaction).
I feel the reality is that we spend about 1% of our time writing word-processed documentation to support software projects – this may be a bit low for some projects, but I doubt it’s more than 10% for most.
It’s a fairly amazing figure (if I am correct) and it actually adds weight to the argument for using code as an important documentation artefact. But how do you create that elusive good code documentation?
A few years ago I worked with an old C programmer (are there many young C programmers?) and where most of us on the development team were producing some fairly long routines (about a page and a half), he was factoring his code out into about five or ten separate methods.
At the time I asked him why he did it like that, especially since we as a team had produced a template for implementing these kinds of routines. He said to me that he does it to improve documentation and it wasn’t until I really sat down to analyse his code that I figured out the genius of his implementation.
All of his private methods used very descriptive names. While they were long, they were also very precise and explained perfectly what the containing code did – it made the code very easy to read and maintain, and when it came to debugging he always found his bugs first, because the methods told the story of what was happening in the stack trace. Anyone who had used our templates was essentially looking into an abyss from whence exceptions were raised.
I think this was a good example of good code documentation when it is both static in the source files but also when it is in action.
This approach works beautifully at the procedural level, but as you start layering on the abstractions it can sometimes be hard to distill the design from a series of source files. Personally, this is one area where I think patterns help, because to an experienced programmer certain class names have obvious linkages, and over time you get a feel for how systems are implemented.
I think that development tools (not UML diagramming tools) can take it one step further and allow us to group classes together into patterns. Tools like Rational’s XDE took steps in that direction but since I wasn’t really a user of it I don’t know what its warts were.
Rather than having a heavy plug-in to design software using patterns I thought it would be useful to have a “Pattern Explorer” which analyses the code-base over time to identify common (and maybe even uncommon) patterns in the code base. The screenshot below is an example of what I am thinking.
Basically – just as we have the Solution Explorer, Team Explorer and Class View, we would also have the Pattern Explorer. I don’t think this would be too hard to implement – especially if it leveraged the existing CodeModel capabilities in Visual Studio.
If this became a common fixture in all IDEs, the ability to grasp the overall design would become much simpler (I think), and it might even give poor-quality code documentation a boost by virtue of some brute-force analysis and visualisation.
That’s what recursive programming is all about. All recursive functions follow the same basic pattern:
- Am I at the base case? If so, return the easy solution.
- Otherwise, figure out some way to move the problem closer to the base case and recurse.
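The two steps above, in the classic example:

```csharp
using System;

class RecursionDemo
{
    static long Factorial(int n)
    {
        if (n <= 1)
            return 1;                    // base case: the easy solution
        return n * Factorial(n - 1);     // move closer to the base case and recurse
    }

    static void Main()
    {
        Console.WriteLine(RecursionDemo.Factorial(5));   // 120
    }
}
```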
I think I would use recursive algorithms more if I could get past the potential stack problems, and the reality is that most of the time it wouldn’t be a problem anyway. Post on Eric!
I’ve gotten used to calling Microsoft’s v.next products by their code names. Indigo and Avalon are part of my everyday vocabulary, but when I talk to other people about them I have to stop and explain what they are. The official product names are most welcome:
- Windows Communication Foundation (the API formerly known as Indigo)
- Windows Presentation Foundation (the API formerly known as Avalon)
[via Don Box]
Apple’s rising fortunes mean that they are becoming a larger target for those who would write malicious code. It’ll be interesting to see how they react to the threat, but somehow I don’t think this is the right approach. I wish them luck though – malicious software is a waste of everybody’s time.
Joseph’s post on versioning interfaces in BizTalk 2004 SP1 made me squirt water through my nose.
Scoble talks about David Allen not liking the idea of casual Fridays. I’ve worked in places that have a suit-and-tie culture, and I’ve worked in places which are much more casual – in fact some customers I have worked with (and even previous employers) took the shorts-and-leather-sandals approach. Personally I prefer the casual approach, mostly because I find I think better when I am wearing comfortable clothes, but when on customer sites I tend to wear the typical business pants and a Readify shirt – no tie, as that looks a little silly.
If you look at the history of clothing I’m sure you’ll find that we’ve seen a convergence of peasant and upper-class styles as those who order the work and those who do the work start to socialise more. Clothing is also a status symbol; very few uniforms actually help you do your job.
I think we are heading into a language renaissance; it’s been on its way for a few years now with languages like Ruby and Python becoming more popular. Both of these languages have interesting dynamic aspects that make them appealing both as learning languages and as a platform for some of the more interesting commercial projects (one of my favourite online games is written in Python).
As a developer it’s important to keep an open mind about programming languages and actively experiment with as many as possible. I know that I haven’t played with Ruby enough to truly appreciate it, but I hope to rectify that in the next few months, although I have spent more time with Python, as I’ve needed to debug a few Python programs when dragging them over to Windows kicking and screaming.
From what I gather both Jim and Joel are tasked with making the CLR a more hospitable home for dynamic languages. It makes perfect sense for language developers to target the CLR because they get so many runtime services for free – and it’s a lot more fun focusing on the core language features than figuring out how to manage the heap efficiently.
When the CLR first shipped there were a number of established languages that were ported to the runtime – especially by the academic developer community, but more recently I’ve seen a number of projects start up with new or derived languages.
One such example is “Boo” – cool name. Today I downloaded Boo and took a quick look at it. As the blurb page says, it borrows a lot from Python for its language syntax, but I was surprised how quickly I was able to get up and coding in it.
I loaded up the Boo interactive shell and pretty much started typing code. The above screenshot is actually my second session with booish.exe (in the first one I made silly mistakes, like using semicolons). I think a version of C# with an interactive shell would be useful for teaching C# – we could call it T#, and the prompt could be a squiggly bracket.