Wednesday, June 30, 2004

Microsoft Developer Target Profiles

Microsoft just made Beta 1 of the Visual Studio Express range of software developer tools available. The C++ guy in particular got me thinking about the marketing/web people who created that page. The choice of people/models used to represent each of the Express product options may give an insight into what Microsoft Marketing thinks of each of the target developer markets for the Express range of products:

  • Visual Basic dude. Young, clueless and not really paying attention.
  • C# Dudette. Young, capable and way too serious.
  • C++ Dude. Late twenties, early thirties. Bohemian. Gonna stick with his esoteric ways. Likes being in a niche area because it makes him feel special.
  • Java Dude. Middle aged guy. Uses a blackboard which implies being more academic (and hence not so practical and realistic).
  • Web Girl. Young and it looks like she's focused on what she's doing - when in reality she's actually snoozing at the keyboard from a hard night out on the town with the girls.
  • SQL Dude. Middle aged guy - wears a tie, so he must be dependable and serious about business.

I'd love to see a causticTech interpretation of the page. He'd do it way more justice than my pathetic attempt.

PS: Programming-language-wise, I'd class myself as a C#, C++ and Java dude - in that order.

Monday, June 28, 2004

I followed this link via Ingo Rammer's blog. Lots of fun with analogies and an enjoyable view. One of the comments asked for sub-titled versions of the earlier videos that were done in German. I'd definitely watch them if sub-titled versions were available. English versions would be preferable, of course - if only for the ability to bring up a video and keep up with the story-line while doing something more mundane at the computer. As the visuals are part of the primary content being delivered, this is only practical when the other task is mundane and doesn't take much of your brain power.

Carl Franklin indicated that he will be keeping .NET Rocks as audio-only content - because "radio" is something you can listen to while doing something else. I tend to find that this works pretty well with some of the video interviews you get which are just talking head shots. It's good to see the people involved in the interview, but then you can make the video window small and still have it running somewhere on the screen. The audio contains most of the content, but you can flick your eyes back to the video occasionally to see facial expressions etc.

It's going to be really interesting to see how the media industry evolves over the next 5 years as more and more specialized media content becomes available. There's so much room for niche interests.

Saturday, June 26, 2004


Martin Fowler is one of my favourite technical authors. Recently he wrote a bliki entry titled DiffDebugging. I got the same sense of deja vu with this piece as I did when first reading the Design Patterns classic. There's a certain satisfaction in finding out that the process or concept you learned on the job purely by experience or need is defined as a pattern or identified as a useful approach by experts in your industry. DiffDebugging is certainly a developer workflow pattern I've been using for a number of years. It allows the problem space to be broken down into manageable chunks, so you only have to conceptually deal with the changes that have occurred as opposed to the entirety of a set of source code.

Wednesday, June 23, 2004

Whidbey Class Designer

The May preview of Visual Studio 2005 (Whidbey) included some of the Whitehorse initiative functionality. As I'm particularly keen on UML diagrams as a means of communicating technical concepts between developers, getting to know the VS2005 class designer has been on the cards since its preview availability. Recently I spent some time working with the class designer to help communicate the structure of some existing C# source code.

The class designer interaction was primarily oriented around creating a class diagram from a set of existing code. The VS2005 install went smoothly, but once it was up and running I experienced a number of crashes. The crash handling optionally transmits details to Microsoft for analysis. I have no problem with this, but found that the volume of data transmitted was quite high - around 20 seconds to transmit on an ADSL link. This is probably related to the preview nature of the code, i.e. extra detailed information being sent for preview builds.

A lot of the crashes were found to be related to the use of a network drive (Samba on Linux, not that it should make any difference). Once the source code was copied to a local drive, many of the crash problems disappeared. Interestingly, the .NET 2.0 runtime didn't seem to have the option of changing the security level of the "Intranet Zone" to Full Trust. The maximum allowed was Medium Trust. This might be a problem when developing with source code from a local network drive. The existing source code was up and running on .NET 2.0 quickly - even with the use of a 3rd party component that was designed for .NET 1.0.

The VS2005 user interface seemed sluggish when compared to VS2003. Again, this is likely to be due to the preview nature of the code. Class diagrams are added as another item in a project and have a .cd suffix. Once a class diagram was created, it was simply a matter of dragging classes from the solution or class explorer onto the diagram. A couple of class diagrams were created from the existing C# code. The problems I experienced included:

  • No class events were displayed on the diagram. The class internals display in the preview was limited to fields, properties and methods. There may have been a way of choosing to display the events, but I couldn't find it.
  • There didn't seem to be a toolbox option for showing a dependency between one class and another where the dependency exists purely in the class methods (the related class isn't referred to by any fields or properties). This ability to document dependencies is useful, and automatically identifying the dependencies would be useful as well.
  • The relationship between a class and the interface(s) it implements is shown by the "lollipop" symbol. If you drag an interface onto the diagram and show its properties and methods, there is no "implements" association drawn.
  • For one of the class diagrams created, VS2005 refused to open the diagram after VS2005 was closed and started again. This only happened with one diagram but was frustrating as the diagram had to be created from scratch again.
  • Printing and copy/pasting of the class diagram doesn't appear to be supported as yet. The use of a screen capture utility helped get the diagrams out of VS2005 for passing onto a fellow developer.
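To make the method-only dependency point above concrete, here's a small hypothetical sketch (in C++ rather than the C# of the actual project; the class names are made up) of the kind of relationship the designer preview couldn't draw - one class depending on another purely inside a method body:

```cpp
#include <string>

// Logger is used by ReportGenerator, but only inside a method body,
// so no field or property reveals the dependency to a diagrammer
// that only inspects fields and properties.
class Logger {
public:
    std::string Format(const std::string& msg) { return "[log] " + msg; }
};

class ReportGenerator {
public:
    // The dependency on Logger exists purely here, in the method.
    std::string Generate(const std::string& title) {
        Logger log;               // local use only - invisible to a
        return log.Format(title); // field/property-based reverse engineer
    }
};
```

A tool that only walks field and property types would show ReportGenerator as having no relationship to Logger at all, which is exactly the gap described above.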

Overall my impression was that the class designer functionality is still quite "raw", but very promising. The "look" of the resulting class diagram is impressive and I found the interface easy to use. For example, just right clicking on a Thread field and choosing a popup menu option allowed the relationship to the Thread class to be displayed as an association. If the class designer functionality could be automatically merged into the output of something like NDoc, then I'd be a very happy .NET developer.

Monday, June 21, 2004


If you're doing .NET development and have a background in Unix, it's well worth checking out the second half of the new .NET Show episode on Longhorn Fundamentals. The first half of the show "Technobabble" was a bit disappointing as it didn't seem to cover what was going to be in the Longhorn Fundamentals layer. Most of the discussion was on security.

The second half, "Enter the Programmer", was fascinating. This section of the show covered Monad (also known as MSH), which is a Microsoft command line shell environment. As Robert Hess mentioned, Jeffrey Snover seems pretty chuffed with his software "progeny". Who can blame him? A scripting/shell environment that is object aware and tightly integrated with the .NET platform has huge potential. There is an element of technical beauty in the ability to pass lists of objects between loosely coupled processing modules using pipes with run time type flexibility. The proof is in the pudding though, and the reality comes with actual usage. I'll be happy to give up the need to go through parsing hassles for every small admin task that needs to be written. Unfortunately, I can't justify the time to play around with it at the current point in time. Bummer.
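The appeal of an object pipeline can be sketched outside MSH itself. Here's a loose, hypothetical C++ analogue (not MSH code; the stage and type names are invented) of loosely coupled stages passing typed objects rather than text that each stage has to re-parse:

```cpp
#include <string>
#include <vector>

// A typed record flowing through the pipeline - no text parsing needed.
struct Process {
    std::string name;
    int memoryMb;
};

// Stage 1: a filter - keeps only processes over a memory threshold.
std::vector<Process> WhereMemoryOver(const std::vector<Process>& in, int mb) {
    std::vector<Process> out;
    for (const auto& p : in)
        if (p.memoryMb > mb) out.push_back(p);
    return out;
}

// Stage 2: a projection - pulls one property from each object.
std::vector<std::string> SelectName(const std::vector<Process>& in) {
    std::vector<std::string> out;
    for (const auto& p : in) out.push_back(p.name);
    return out;
}
```

Composing the stages - `SelectName(WhereMemoryOver(procs, 100))` - is the moral equivalent of piping objects between shell commands: each stage knows the type it receives, so there's no fragile column-splitting of text output in between.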

As usual with .NET related technology, crunch time usually comes with the deployment scenario. Where can I use this stuff on my customers' systems? It seems Monad/MSH is slated as the Longhorn command shell. Due to its nature, my guess is that it's one of the technologies that won't necessarily be bound to Longhorn and will be formally made available for other current Windows versions. Hope so.

Even if it were made available on Windows XP, 2003 and 2000 there's stuff all chance it will be available on anything other than Microsoft platforms. There's also the perennial question of whether something similar is developed for Linux (e.g. Mono based) and what Microsoft would do about it.

The strength of the Monad functionality is that it's aware of objects and plays in that space. This is where I think more software environments/tools need to head towards. It's a long held belief of mine that the Software Industry hasn't really put enough effort into Object technology. That may seem like a crazy statement, but take a look at relational databases, SQL and XML technology. The primary programming paradigm for the last 15 years is arguably object oriented but we're still banging our heads against the relational database plumbing needs (check out the Anders Hejlsberg video on Channel 9).

An OO system can represent relationships between abstractions far more comprehensively than a "relational" database (maybe they should be called table databases or grid databases). With regards to XML, I'd much prefer a text format that is similar conceptually to XML but maps more closely to OO data-related concepts (types, fields, properties, relationships, lists, maps). It seems a lot of the XML syntax is fluff when representing the information we need to store/transport. This is of course an over-simplification of the subject matter, but I can't help thinking that putting more effort into making object oriented functionality available throughout the software environments we work in (not just the pure coding areas) would be a win in the long run.

Sunday, June 20, 2004

The API War and Microsoft

Thought I may as well weigh in on Joel Spolsky's post on How Microsoft Lost the API War. It's a great read but I think that Joel is way off on his thoughts on .NET. I wasn't aware of the lengths Microsoft has gone to in the past to retain backward compatibility. Some of the backward compatibility workarounds seem absolutely crazy (at least to someone who has never dealt in thousands of deployment desktops let alone tens of millions+). This post doesn't get into arguing whether .NET is a worthwhile development direction as I'm sure there'll be plenty of posts on the subject (Scoble references a bunch in his blog). For me using .NET is a no-brainer once you spend some time researching the environment and languages available. It's simply a top notch development environment.

Regardless of this, I have been disappointed a few times working with the .NET environment. Most of these times relate to strategic planning as to what APIs to use and when to use them. Hence the relevance to the Joel Spolsky post. My .NET API disappointments include:

  • ADO.NET in .NET 1.0. When first looking into the System.Data namespace back in 2003, the lack of emphasis on the ADO.NET interfaces struck me. All the code examples referred to the concrete classes such as OleDb* or Sql*. But the bigger disappointment was that there were no OODB concepts in the framework. Over the next couple of years I avidly read any scraps I could find on ObjectSpaces. Then this year, just as it was getting close to a release, bam! it slipped another 2+ years away from my fingertips. Like all the major .NET API changes so far, I read and understood the Microsoft spiel and it mostly makes perfect sense. I remember viewing the code part of the MSDN show on Longhorn and WinFS and thinking, hmmm... do the WinFS and ObjectSpaces teams talk?
  • Windows Forms vs Avalon. It's frustrating to commit to a particular API and build up an investment in it, only to have rumours of its imminent replacement send shivers of doubt down your spine. The investment is in terms of time spent learning the API, purchases of third-party components that rely on the API, and developed source code that needs to be maintained. The first rumour that Microsoft was looking at a Windows Forms replacement was in the discussion groups of the Mono project. The discussion questioned whether it was worth attempting a Mono implementation of Windows.Forms since a replacement was already on the cards (less than a year after the .NET 1.0 release, from memory). At the time I thought, how does Miguel de Icaza know there's a Windows Forms replacement? He must have a mole in Microsoft or something (visions of the seedy underworld of software development espionage followed).
  • .NET Remoting. I almost feel sorry for .NET Remoting (as an API entity). It had only just started its API lifetime (what, a couple of years or less?) before being shoved onto the API scrap heap. .NET Remoting is very much a second-class citizen these days (see the Channel 9 video with Richard Turner). Whatever happened to Ingo Rammer and his devotion to that API? He used to be the ".NET Remoting guy" until Don Box got to his brain (tried it on de Icaza as well). Listening to the Indigo spiel, it all makes sense except for the fact that Indigo isn't available to use now. This makes decision making more uneasy if you need high-performance remoting-style functionality now and don't need to go outside a particular organization's boundary.
  • SOA Emphasis. There seems to be way too much emphasis on SOA at the moment. Don't get me wrong, SOA sounds like the way to go for a broad set of applications. I thoroughly enjoyed Pat Helland's talk on Metropolis, but feel that there is still a large set of applications that fit within a deployment boundary and would actually benefit from sharing of code between parties - sort of like the smart client deployment concepts except applied to remoting.
  • Generics Support. The availability of generics support in VS.NET 2005 will be great. One downer is that so much of the standard framework that exists now wasn't designed with generics in mind. This isn't that much of a problem, but the "neatness" nut in me would like to see generics used consistently all over the framework (where it applies, of course; the Standard C++ approach of templatizing every freakin' thing is not what I'm talking about here).
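The ADO.NET gripe above - samples pushing the OleDb*/Sql* concrete classes rather than the interfaces - boils down to programming against abstractions. A hypothetical C++ sketch of the difference (the real .NET interfaces live in System.Data, e.g. IDbConnection; the names and behaviour here are invented for illustration):

```cpp
#include <string>

// Abstract "connection" interface - loosely analogous to System.Data's
// IDbConnection in spirit, not in actual signature.
class IDbConnection {
public:
    virtual ~IDbConnection() = default;
    virtual std::string Open() = 0;
};

// Two concrete providers; calling code never names them directly.
class SqlDbConnection : public IDbConnection {
public:
    std::string Open() override { return "sql: opened"; }
};

class OleDbConnection : public IDbConnection {
public:
    std::string Open() override { return "oledb: opened"; }
};

// Written against the interface, this works with any provider -
// the point the early ADO.NET samples tended to bury.
std::string RunQuery(IDbConnection& conn) {
    return conn.Open();
}
```

Code written like `RunQuery` can be pointed at a different database provider without modification, which is exactly what gets lost when every example hard-codes a concrete class.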

Okay... enough of the API whinges, you say! I'm actually coming around to the point of this post. Unlike Joel Spolsky, I thoroughly applaud Microsoft's .NET efforts, but... a developer can only take so many major API turnarounds until cynicism sets in. Microsoft's reasoning for all of the API disappointments listed above does make sense to me. I've accepted the reasoning and moved on with strategic planning based on my understanding of the various API directions. But if another major .NET API turnaround/replacement hits us then... well, then I'll finally lose it (my patience, that is).

Back to the war analogy for APIs and some crazy-ass crystal ball gazing:

It seems to me that Microsoft will win most of the various API skirmishes coming in the next 2-3 years, because of the excellent work being done on the .NET framework (or its next incarnation, WinFX). The real "battleground" in my mind (continuing Joel's analogy) is once .NET/WinFX is established as the primary API and installed on most of the world's desktops and a fair chunk of the servers. The WinFX API will "settle down" and alternative implementations of the API will start catching up. The important parts of the WinFX API have to eventually settle down; if they don't, developers will probably just throw up their hands and ignore any new APIs or direction changes.

My reasoning for there being non-Microsoft versions of WinFX is that if a bunch of talented people can reproduce the Win32 API in Wine, then it is likely that something similar will happen with WinFX - especially considering the advantages of managed code (versus x86-specific binaries). The beauty of managed code is that it is just that: "manageable". This is why we have programs like Reflector, .NET Test and JTest. They live in the world of IL/bytecode and leverage it.

Once alternative implementations of WinFX start appearing in 2-3+ years, Microsoft will have a head start because of all the technical trail-blazing, and it will probably be somewhat of a "golden age" for the company. But eventually Microsoft will have choices to make in handling the new situation. Some options include:

  • Becoming more attack-oriented with their defensive patent portfolio (and other such litigation measures).
  • Just keeping on pumping out new APIs.
  • Obfuscating more IL code to retain some form of proprietary advantage - even the standard libraries (where practical).
  • Relying on having a better implementation of the APIs.
  • Relying on the installed base to carry them through.

Some of these options are particularly unsavoury (at least to me). Well... it's going to be fun looking at these sorts of posts a decade down the track and seeing how wrong I was!

PS: I know it's unrealistic to theorize about these sorts of futures, because the probability of getting it right is ridiculously low. But the thought process is useful in helping to make decisions in the here and now.

Monday, June 14, 2004

Carpal Tunnel

Though carpal tunnel syndrome isn't directly software development related, it's worth a mention in this blog, as I'm sure many software developers will experience it or some form of wrist/keyboard-related problem during their working life. The huge amount of time that can be spent at a keyboard is a recipe for trouble. About a year and a half ago, numbness in my wrist was driving me nuts. Reducing the amount of time at the keyboard wasn't an option, since no one pays the bills when you're not working! I tried all sorts of products to alleviate the numbness.

Wrist supports helped a bit, but got in the way of mousing. Some keyboard variations were tried, but they were either more uncomfortable or too difficult to type on, e.g. the Microsoft Natural keyboard. A particularly disastrous purchase was a touch-sensitive keyboard that was expensive and ended up being more uncomfortable than a normal keyboard. What ended up working for me was a standard keyboard, a standard mouse and two thick gel wrist rests placed symmetrically at the bottom of the keyboard. That simple change, plus some basic awareness of potential wrist problems at a keyboard, did the trick for me. Hope this helps anyone reading this entry with a similar problem.

Monday, June 07, 2004

Agile UML

Following on from the Disrespecting UML post, I've been thinking about what "UML" means to different people and what I want from UML tools. Martin Fowler references these three classifications for thinking about the UML:

  • UmlAsSketch

  • UmlAsBlueprint

  • UmlAsProgrammingLanguage

There are certainly more possible variations on these classifications, such as:

  • UmlAsTheOneTrueWay

  • UmlWeDontDoThatHere

  • UmlIsJustSomePrettyPicturesGiveMeRealCode

  • UmlIsForQuicheEaters

  • UmlAsaStraightJacketInTheRupAsylum

My preference is more towards UmlAsSketch i.e. as a means of communicating the complexities of a design or large block of code without having to go into the nitty gritty details. There's a great post on Models by Hal Pierson (got there via Scoble).
Models allow us to focus on some aspects of a structure while ignoring others.

For me, this is the key to approaches such as the UML or Microsoft's DSL/Whitehorse initiative. It allows software developers to deal with higher level abstractions and communicate those abstractions to other developers. My ideal is to be able to do the following with a UML (or similar) tool:

  • At the start of a project, use high level diagrams to describe the architecture and/or design of a proposed software system. Those diagrams could be done on a white board, hand sketched on paper, hand sketched on tablet PC or entered into some development environment.

  • After a planning phase, any high level diagrams can be converted to code with a minimum of effort.

  • Once code exists for a project, it should be easy to generate modeling diagrams from the existing code base. There shouldn't be any major manual effort or processing time to get these diagrams. The manual effort should be limited to defining which portions of the code/classes go on which diagrams, and possibly some manual tweaking of positioning.

  • The API or class documentation for the code should be able to be generated from the code and include the model diagrams as part of the documentation automatically, i.e. sort of like NDoc but with UML and other modeling diagrams mixed in.

  • If a developer needs to get familiar with a block of new source code, it should be possible to just point a modeling tool to the code and get back models of the code e.g. class diagrams, sequence diagrams in the case of UML.

The Together modeling tool did a number of these "ideals" very well. I regularly used the community edition to understand a new block of Java or C++ source, before Borland bought them out and the community edition disappeared. The reverse-engineering ability for C++ and the automatic diagram positioning were amazing, though the cost was prohibitive and the start-up time was very frustrating (it's a Java Swing app). I've given up on the Together tool because of the cost and start-up time issues.

These days I use Enterprise Architect from Sparx Systems. It's reasonably priced, but the problem I have with it is that it keeps a separate relational database as a copy of the structure that's in the code, and its auto-diagramming/reverse engineering isn't as good as Together's. Flywheel from Velocitis looked promising, but it may be effectively trounced by the functionality in Visual Studio.NET 2005. At some point in the near future I'll look into the VS.NET 2005 functionality in detail and post how I go.

In summary, UML and other modeling approaches such as Microsoft's Whitehorse initiative don't have to be straitjackets. They can be used in an agile manner, especially if tools truly automate as much as possible so that the software developer doesn't end up doing "busy work" that doesn't directly contribute to putting running, reliable software "on the table".

Friday, June 04, 2004

Disrespecting UML

I was listening to .NET Rocks Live! today and enjoying it as usual. Rocky Lhotka was talking about the Visual Studio 2005 Whitehorse designer functionality. The subject of the class designer and UML came up.

There was a general dissing of UML, which seemed somewhat unfair. My experiences with UML haven't all been peaches and cream (the term "use case" still sends a shiver up my spine), but a lot of the UML can be incredibly useful. UML and modeling languages in general will probably be the subject of a future post, but this post is about the difficulty of keeping up with software development knowledge.

The aforementioned UML dissing got me thinking about the various times I've had a poor opinion of a technology simply from being uninformed. Two technology memories from the dim dark past came to mind. One was the proposed introduction of delegates into the Java language by Microsoft, and the other was my first exposure to C++ templates.

Back in the last millennium, Microsoft proposed the addition of delegates to the Java language owned by Sun (this was before they "kissed" and made up). Sun's experts roundly derided the proposal. Microsoft's version of the truth can be found here, while Sun's can be found here. At the time I was heavily into the Java-flavored koolaid, read all the opinions on introducing delegates on the Java sites, and thought, yeah... can't let those scumbags at Microsoft sully the language with their proprietary extensions. Years later, when learning C# and .NET, I was exposed to delegates in more detail and found them to be a great language addition. My original opinion of the Microsoft Java delegates proposal was simply uninformed.
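For anyone who missed the delegates debate: a delegate is essentially a type-safe, bindable reference to a method. C# has them as a language feature; a rough C++ analogue using std::function (purely for illustration - not how C# implements them) looks like this:

```cpp
#include <functional>
#include <string>

// A "delegate" type: any callable taking a string and returning a string.
using GreetDelegate = std::function<std::string(const std::string&)>;

// A free function target for the delegate.
std::string Shout(const std::string& s) { return s + "!"; }

class Greeter {
public:
    std::string Polite(const std::string& s) const { return "hello, " + s; }
};

// Code that invokes through the delegate without knowing the target -
// the decoupling that makes delegates such a useful language addition.
std::string Greet(const GreetDelegate& d, const std::string& who) {
    return d(who);
}
```

The caller can pass a free function (`Greet(Shout, "world")`) or bind a member function through a lambda, and `Greet` never needs to know which - the same flexibility Microsoft wanted to add to Java.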

Around the same time, C++ compilers with support for templates started to become available. Consistent template support across the various compilers was patchy, and they had a reputation for producing bloated executables (well, bloated given typical executable sizes at the time). In addition, I simply didn't understand what templates were all about. Rather than spending the time to get a basic understanding of the technology, I just continued programming as before. Years later (again!), I worked on a project that required the use of C++ templates. It wasn't long before the ah-hah moment occurred with respect to what templates were all about.
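For the record, the template ah-hah moment usually boils down to this: one definition, many types, all checked at compile time. A minimal example:

```cpp
#include <string>

// One generic definition replaces a family of near-identical overloads
// (int version, double version, string version, ...). The compiler
// instantiates a type-checked copy for each T actually used.
template <typename T>
T Largest(const T& a, const T& b) {
    return (a < b) ? b : a;
}
```

The same `Largest` works for ints, doubles and strings without any duplication - and passing incompatible types is a compile-time error rather than a runtime surprise.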

So what's the point of this post? It's that you can't form accurate opinions on the many and varied software technologies that exist today, without spending some time getting to know the technology details. It's tough to find the time to research all the interesting nooks and crannies of software technology. Then there's the continual industry changes and the fact that competing software technologies actually feed off each other in the change process. Where would Java be without the work done on previous virtual machine technologies (such as Smalltalk)? What about the influence of Java on the .NET platform? How much was the Visual Studio 2005 class designer influenced by UML class diagrams?

Finally a shout out to Carl/Rory: Love your work, but hey UML ain't that bad!

(Though I would prefer first-class support for property and event concepts in class models, so am looking forward to VS2005)

Wednesday, June 02, 2004

The Microsoft Visual C++ /clr Switch

Today I had a go at compiling a large body of existing legacy C/C++ code with the /clr switch. The bulk of it compiled without changing the existing source code, and most of the compile problems looked fairly trivial to resolve, as they were related to the use of older C/C++ syntax in the code. The purpose of the exercise was to see if the existing (non-COM) C/C++ code could be used directly within a .NET application without using P/Invoke. The ideal would have been to be able to refer to the existing C++ classes directly in newly developed C# code.

Unfortunately, the Reflector utility shows that the compilation output "mangles" the C++ classes somewhat. A .NET struct is created to represent the class, and then a series of static methods is created for the methods of the class (which weren't necessarily static in the header file). The Microsoft developers must have done this to be able to practically implement all the C++ features (such as multiple inheritance) within the .NET environment.

If the __gc keyword is used on a C++ class, then the compilation output is a class derived from System.Object, and all the methods defined in the header become part of the class, i.e. not just a struct and a bunch of static methods. Using the __gc keyword would allow better interoperability with VB.NET/C# code, but introduces more C++ compilation restrictions. Oh well, can't have everything! Just the ability to compile existing C++ code to the CLR is pretty amazing.
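The "struct plus static methods" flattening described above is essentially the classic reduction of member functions to free functions that take the instance as an explicit parameter. A hypothetical sketch in plain C++ (this is not the actual /clr compiler output, just the shape of the transformation; the names are invented):

```cpp
#include <string>

// Original C++ class, as it might appear in a legacy header.
class Greeter {
    std::string name;
public:
    explicit Greeter(std::string n) : name(std::move(n)) {}
    std::string Hello() const { return "hello, " + name; }
};

// Roughly what the flattening amounts to: a plain data record plus
// free ("static") functions taking the instance explicitly - callable
// from anywhere, but no longer an object-oriented class to C# eyes.
struct GreeterData {
    std::string name;
};

std::string Greeter_Hello(const GreeterData& self) {
    return "hello, " + self.name;
}
```

Both forms compute the same result; what's lost in the flattened form is the class shape itself, which is why C# code sees a struct and a pile of static methods rather than a usable class.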