.net and other musings

Ben Lovell, an agile developer living in the UK.

Category: Open Source

Twitter bootstrap rails updated to bootstrap v2.2.2

Just a quick one: I’ve updated twitter bootstrap rails to bootstrap v2.2.2 which includes a fix (among others) to address the issues with iOS devices and button group drop-downs.

Go forth!

Incremental development with Monorail: Part Six

We finished up in part five with a full suite of passing tests and our BlogPostService is slowly taking shape now. The next few posts will introduce persistence and validation but before we get started on these features, we have a little housekeeping to perform on our existing code.

First step is to build the Castle trunk and update our references to the latest versions. Secondly, we’ll incorporate a few changes suggested by Hammett.

After building and replacing our references with the latest Castle build I’ll run the tests as a sanity check before we proceed:


With that out of the way, our first code change will ensure we play nicely with HTTP. It is considered good practice to ensure that requests which update or create resources are carried out via POST, so we can modify our PostController.Save action accordingly. Before we do, we'll write a test:


Running our test produces the following:


We’ll modify our Save action now to ensure we only accept the POST verb:


We have added the AccessibleThrough attribute and explicitly specified that our action must only allow the POST verb. We’ll run our test again and make sure everything is working as expected:
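For readers following along without the original screenshots, the modified action looks roughly like this. MonoRail's real attribute lives in Castle.MonoRail.Framework; the stand-in attribute and enum below are defined locally purely so the sketch is self-contained, and the Save signature is illustrative:

```csharp
using System;
using System.Reflection;

// Local stand-ins for MonoRail's AccessibleThrough attribute and Verb enum,
// so this sketch compiles without a Castle reference.
public enum Verb { Get, Post }

[AttributeUsage(AttributeTargets.Method)]
public class AccessibleThroughAttribute : Attribute
{
    private readonly Verb verb;
    public AccessibleThroughAttribute(Verb verb) { this.verb = verb; }
    public Verb Verb { get { return verb; } }
}

public class PostController
{
    // Restrict Save to the POST verb, as described above.
    [AccessibleThrough(Verb.Post)]
    public void Save(string title, string body)
    {
        // persistence comes in a later installment...
    }
}
```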


The next step is to modify our add.vm view to use the helper methods to generate our form tags. We’ve done that and once again we’ll run our tests:


I’m noticing a little duplication creeping into our integration tests that should be removed. Also, until now we’ve had to assume that a web server is running on a specific port on our development machine in order to run the integration tests. With a little tweaking we can spin up a temporary server in our test fixture setup code and host our test code there instead.

First we add WebDev.WebHost.dll to our lib project directory and ensure we’re referencing it from the testing project. The next step is to create an app.config file in the testing project so we can configure the web root and the port our testing server should host from:

Now we add some setup code for the fixture:


We’ll also make sure we bring the server down when we’re finished:


Now ensure the VS web server is not running on the same port, then we can run our tests and make sure our latest changes work out:


Now, since we specify the web root and port through configuration, we should also use these configurable values to determine the URLs in our tests:


You can see our tests now call BuildTestUrl to determine the full URL where the development server is hosted. Neat. I’m sure when we add further integration fixtures we’ll extract a class for this, but for now we have a nice working set of tests and should remember the principle of YAGNI!
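The URL-building helper can be sketched as below. I’ve parameterised the host and port directly rather than reading them from System.Configuration, and the TestUrlBuilder/BuildTestUrl names are illustrative rather than the actual code from the series:

```csharp
using System;

// Illustrative helper: composes the full URL for an integration test
// from the configured host, port and a relative path.
public static class TestUrlBuilder
{
    public static string BuildTestUrl(string host, int port, string relativePath)
    {
        // Normalise the relative path so callers can pass "posts/new.rails"
        // or "/posts/new.rails" interchangeably.
        string path = relativePath.TrimStart('/');
        return string.Format("http://{0}:{1}/{2}", host, port, path);
    }
}
```

In the fixture, the host and port would come from the same app.config values used to start the temporary server, so changing the configuration changes the test URLs too.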

I’m going to cut this installment short here. In the next post we’ll get back into the swing of it and continue implementing features.

As ever, the latest code has been checked in and is available here:


Agile project management with StoryVerse

If you’re practicing agile project management take a look at StoryVerse – an open source agile project management application built on Monorail and ActiveRecord.


In particular I like the no-fuss UI. Very reminiscent of Google UI. Anyway take a look.

Dependency injection with Monorail and Windsor

Dependency injection (a flavour of Inversion of Control) is a practice I apply when designing software. It helps enforce the Single Responsibility Principle and has the added side effect of producing more testable code.

Scott Bellware has posted an excellent write-up on dependency patterns and it got me thinking about my favoured approach with dependency injection and how the Castle Windsor container provides that for us. I favour constructor based DI as it enforces the fact that your class must be wired up during initialisation. It makes your dependencies highly visible and as Scott mentions – goes some way to producing a self-documenting API.

Some of the common alternatives, or perhaps complementary approaches, include resolving dependencies from a container or service locator hidden in the constructor (dirty), or providing property setters for your dependent interfaces. Of course, providing getters (or at least public getters) is a no-no, as you would be exposing your dependencies to consumers and thus violating encapsulation. But I guess this goes without saying?
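As a minimal sketch of the constructor-injection style being advocated (the types here are invented for illustration, not taken from the project):

```csharp
using System;

public interface IGreetingService { string Greet(string name); }

// The dependency is highly visible: you cannot construct the controller
// without supplying an IGreetingService, and it is never exposed publicly.
public class GreetingController
{
    private readonly IGreetingService greetingService;

    public GreetingController(IGreetingService greetingService)
    {
        if (greetingService == null) throw new ArgumentNullException("greetingService");
        this.greetingService = greetingService;
    }

    public string Index(string name)
    {
        return greetingService.Greet(name);
    }
}

public class PoliteGreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name + "!"; }
}
```

A test can hand the constructor any stub implementation, which is exactly why this style produces more testable code.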

Thankfully, MonoRail controllers and their dependencies can be wired up using constructor-based DI via the RailsFacility and Windsor integration. Take the following example:


ContactController in the diagram above is a simple MonoRail controller that depends on IContactService to carry out most of its work. Stemming from there, the IContactService implementation has dependencies on IContactRepository and an external interface from NSpectre, IValidatorFactory, which is created using the Windsor factory facility and some custom factory code, but more on that later… As you can see from the model, the dependencies are injected via constructor arguments.

Using the RailsFacility is fairly simple; rather than explaining it myself I’ll point you to the docs here. Once you’ve got the facility and container set up, you simply declare your controllers and their dependent components via configuration:

<component id="contact.controller" type="Campaigns.Controllers.ContactController, Campaigns.Controllers">
    <parameters>
        <subject>Contact message from Website</subject>
    </parameters>
</component>

I’ve explicitly declared some of the required values for the ContactController constructor above. Just as a reminder, here’s the signature of the ContactController constructor:

public ContactController(IContactService contactService, string mailTo, string subject)

When the ContactController is wired up by Windsor, the IContactService contactService argument is resolved to the following service configured in my components.config file:

<component id="contact-service"
           service="Campaigns.Core.IContactService, Campaigns.Core"
           type="Campaigns.Services.ContactService, Campaigns.Services" />

Of course, ContactService also joins in the fun and has its dependencies injected via the same means:

<component id="contact-repository"
               service="Campaigns.Core.IContactRepository, Campaigns.Core"
               type="Campaigns.Repository.ContactRepository, Campaigns.Repository" />

Windsor detects that the ContactService constructor takes an IContactRepository argument and resolves this automatically.

Now, for the keen-eyed among you, and going back to my earlier point regarding the IValidatorFactory dependency… this is resolved using the natty Windsor factory facility, the best explanation of which is here. In my configuration I state that IValidatorFactory dependencies should be resolved via my custom factory code:

using NSpectre.Core;
using NSpectre.Core.Configuration;
using NSpectre.Core.Implementation;

namespace Campaigns.Core
{
    /// <summary>
    /// Factory for NSpectre validators
    /// </summary>
    public class NSpectreFactory : INSpectreFactory
    {
        #region Fields

        private readonly string xmlEmbeddedResourcePath;
        private readonly bool saveGeneratedCode = false;
        private readonly string path;

        #endregion

        #region Constructors

        /// <summary>
        /// Initialises the factory with the embedded resource path.
        /// </summary>
        /// <param name="xmlEmbeddedResourcePath">The path to the embedded resource.</param>
        /// <param name="saveGeneratedCode">Save the generated code</param>
        /// <param name="path">The path to save the code to</param>
        public NSpectreFactory(string xmlEmbeddedResourcePath, bool saveGeneratedCode, string path)
            : this(xmlEmbeddedResourcePath)
        {
            this.saveGeneratedCode = saveGeneratedCode;
            this.path = path;
        }

        /// <summary>
        /// Initialises the factory with the path to the NSpectre configuration embedded resource.
        /// </summary>
        /// <param name="xmlEmbeddedResourcePath"></param>
        public NSpectreFactory(string xmlEmbeddedResourcePath)
        {
            this.xmlEmbeddedResourcePath = xmlEmbeddedResourcePath;
        }

        #endregion

        #region Properties

        /// <summary>
        /// Gets the path to the configuration embedded resource.
        /// </summary>
        public string XmlEmbeddedResourcePath
        {
            get { return xmlEmbeddedResourcePath; }
        }

        /// <summary>
        /// Gets a flag indicating whether NSpectre should save the generated code.
        /// </summary>
        public bool SaveGeneratedCode
        {
            get { return saveGeneratedCode; }
        }

        /// <summary>
        /// Gets the path to save the generated code to.
        /// </summary>
        public string Path
        {
            get { return path; }
        }

        #endregion

        #region Methods

        /// <summary>
        /// Creates the validator factory
        /// </summary>
        /// <returns>The validator factory, initialised</returns>
        public IValidatorFactory CreateFactory()
        {
            IConfigurationReader reader = new EmbbeddedXmlResourceConfigurationReader(xmlEmbeddedResourcePath, new NullLogger());
            Initialiser initialiser = new Initialiser();

            if (SaveGeneratedCode)
            {
                return initialiser.CreateValidatorFactory(reader, saveGeneratedCode, path);
            }

            return initialiser.CreateValidatorFactory(reader);
        }

        #endregion
    }
}

The custom factory is hooked up using the following configuration:

<component id="nspectre.factory"
           service="Campaigns.Core.INSpectreFactory, Campaigns.Core"
           type="Campaigns.Core.NSpectreFactory, Campaigns.Core">
    <parameters>
        <xmlEmbeddedResourcePath>Campaigns.Core.Model.NSpectreValidations.xml, Campaigns.Core</xmlEmbeddedResourcePath>
    </parameters>
</component>

<component id="nspectre.default"
           type="NSpectre.Core.IValidatorFactory, NSpectre.Core"
           factoryId="nspectre.factory"
           factoryCreate="CreateFactory" />

You really notice the effectiveness of this approach when adding new controllers to your project. You simply add the controller code, define its dependencies in the constructor, and add the configuration for the controller to your controllers.config file; everything is resolved and injected for you at runtime. Very nice indeed, I’m sure you will agree!

Testing is made easy by providing dynamic mocks. To make this an even nicer experience take a look at the AutoMockingContainer from the nice folks at Eleutian!

To wrap up I have to say: Castle really does kick the llama’s ass.

Monorail services

The MonoRail pipeline is modelled around a key set of services. This allows a certain amount of flexibility when acquiring resources or using dependent services such as email sending, scaffolding support, controller factories and the like.

Upon initialisation of the MonoRail framework, these services default to MonoRail-provided implementations based on the existence (or non-existence) of service configuration in your web.config file.

This is particularly handy for mocking out infrastructure during automated acceptance testing. Emailing is a good example of when you might want to do such a thing. To provide a mock service for email sending you simply add the following into your web.config:

<service id="EmailSender"
         type="Castle.Components.Common.EmailSender.Mock.MockEmailSender, Castle.Components.Common.EmailSender" />

You could of course provide your own mock implementations if necessary.

To provide dynamic mocking for unit testing, you should mock out the IRailsEngineContext and return an IEmailSender implementation (or dynamic mock) by setting up an expected return on the IRailsEngineContext.GetService call:
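Without committing to a particular mocking library, the shape of such a test looks roughly like this. The interfaces below are simplified stand-ins for MonoRail's IRailsEngineContext and IEmailSender, and the stub plays the role a dynamic mock would:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for the MonoRail abstractions discussed above.
public interface IEmailSender { void Send(string to, string subject, string body); }
public interface IRailsEngineContext { object GetService(Type serviceType); }

// A hand-rolled stub context: returns canned services by type, which is
// exactly what a dynamic mock's expected return on GetService achieves.
public class StubEngineContext : IRailsEngineContext
{
    private readonly Dictionary<Type, object> services = new Dictionary<Type, object>();
    public void Register(Type serviceType, object implementation) { services[serviceType] = implementation; }
    public object GetService(Type serviceType) { return services[serviceType]; }
}

// Records sends so a test can assert the controller actually emailed someone.
public class RecordingEmailSender : IEmailSender
{
    public int SentCount;
    public void Send(string to, string subject, string body) { SentCount++; }
}
```

A test registers the recording sender with the stub context, hands the context to the code under test, and asserts on SentCount afterwards.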


Velocity language reference

Since I always seem to lose this and take forever to find it (hidden deep down within the Apache project website) here is a link to the VTL reference!

MonoRail & ActiveRecord tips

Adam Esterline has posted a bunch of MonoRail & ActiveRecord tips over on his blog. Some of the detail is fairly standard stuff but I learned a few things: MARS on SQL Server 2005 being one of them.

One thing I would add to the ActiveRecord tips is be mindful of lazy-loading on collection mappings. Especially now we have DetachedCriteria and the ability to be explicit about fetching strategies. Lazy-loading big lists can be pretty disastrous to application performance when used in the wrong place. For example: when traversing every node of a persistent object graph and touching unloaded collections.

Take a look at the NHibernate performance docs for more information.

NHibernate Query Generator

We should all be more than familiar with useful tools and commits coming from Ayende by now 🙂 However, the one I really dig is his NHibernate Query Generator.

NHQG is a console application which, when pointed at your NHibernate mapping files, produces a bunch of partial classes for the types in your domain model, ultimately generating NHibernate DetachedCriteria. The real goodness is in the expressiveness of the querying API they create.

Consider the following traditional NHibernate code:

IQuery query = session.CreateQuery("from Publisher as pub where pub.Name = :Name and pub.City = :City");
query.SetParameter("Name", "Test");
query.SetParameter("City", "London");
return query.List<Publisher>();

Using the NHQG generated helpers we can express this as:

return Repository.FindAll(
    (Where.Publisher.Name == name) && 
    (Where.Publisher.City == city));

Via nifty operator overloading and generics magic we end up with the Where syntax. Obviously this example just scratches the surface of what we can do, but don’t you agree that the second approach is much nicer? The generated code supports navigating relationships, ordering and so on, as you would expect.
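To give a feel for the trick, here is a toy version of that operator overloading. This is not NHQG's actual generated code, just an illustration of how `==` and `&&` can be made to build criteria objects instead of booleans:

```csharp
using System;

// A toy criterion: just renders itself as a string for demonstration.
public class Criterion
{
    private readonly string text;
    public Criterion(string text) { this.text = text; }

    // Overloading & together with the true/false operators lets callers
    // write &&, which C# rewrites into a call to operator &.
    public static Criterion operator &(Criterion left, Criterion right)
    {
        return new Criterion("(" + left.text + " and " + right.text + ")");
    }
    public static bool operator true(Criterion c) { return false; }
    public static bool operator false(Criterion c) { return false; }

    public override string ToString() { return text; }
}

// A queryable property: == yields a Criterion instead of a bool.
public class PropertyQuery
{
    private readonly string name;
    public PropertyQuery(string name) { this.name = name; }

    public static Criterion operator ==(PropertyQuery p, string value)
    {
        return new Criterion(p.name + " = '" + value + "'");
    }
    public static Criterion operator !=(PropertyQuery p, string value)
    {
        return new Criterion(p.name + " <> '" + value + "'");
    }

    public override bool Equals(object obj) { return ReferenceEquals(this, obj); }
    public override int GetHashCode() { return name.GetHashCode(); }
}

// Toy equivalent of the generated Where.Publisher entry point.
public static class Where
{
    public static class Publisher
    {
        public static readonly PropertyQuery Name = new PropertyQuery("Name");
        public static readonly PropertyQuery City = new PropertyQuery("City");
    }
}
```

With this in place, `(Where.Publisher.Name == "Test") && (Where.Publisher.City == "London")` evaluates to a Criterion describing the whole predicate, which is essentially what NHQG feeds into DetachedCriteria.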

Ayende prefers the no-strings approach. I tend to agree!

Quack quack

Mark mentions duck typing as a neat way of handling the object-as-XML coupling scenario. Of course, we Ruby hackers are already familiar with duck typing (if I can lump myself in with that bunch yet :)), as are users of other dynamic languages.

The shortest and most simplistic explanation of duck typing I’ve seen is:

an object having all the methods described in an interface can be made to implement that interface dynamically at runtime, even if the object’s type does not include the interface in its definition

So why the term “duck typing”?

If it walks like a duck and quacks like a duck, it must be a duck!

There are duck type frameworks available for .NET, NDuck being one of the more popular.
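In .NET, the simplest way to get a feel for the idea is reflection-based dispatch. The frameworks mentioned above generate proper interface proxies, but a toy version (with invented names) looks like this:

```csharp
using System;
using System.Reflection;

public class Duck
{
    public string Quack() { return "Quack!"; }
}

// Note: Person declares no interface at all, but it still "quacks".
public class Person
{
    public string Quack() { return "I'm quacking!"; }
}

// Toy duck-typing dispatcher: invokes a method by name at runtime,
// caring only that the object has it, not what static type it is.
public static class DuckCaller
{
    public static object Call(object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        if (method == null)
            throw new MissingMethodException(target.GetType().Name, methodName);
        return method.Invoke(target, args);
    }
}
```

Both Duck and Person can be handed to DuckCaller.Call(x, "Quack"): if it quacks like a duck, it gets treated as one.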

RoR vs. Web Forms vs. MonoRail Pt2

Ok so in the last post I talked about some of the more obvious issues I have with ASP.NET and web forms development. Thankfully, the Castle Project’s MonoRail framework goes some way to alleviating that pain.

The Positives

MVC, MVP, ABC easy as 123…

Under the covers, MonoRail is a fully MVC-compliant framework, essentially comprising a front controller sitting over the ASP.NET infrastructure and intercepting specially formed URIs. The HTML ends up in the browser via your choice of view engine: NVelocity (a fork of the Jakarta Velocity port) is the default and most accessible, Brail (based on the Boo dynamic language) is the most performant option, and of course there’s a cruddy web forms engine too which nobody should use. 🙂 I’ve been comfortably using NVelocity for a while now, and can attest that it’s not difficult to pick up.

The controller code orchestrates the flow of logic through the application, and MonoRail URIs map nicely to actions on the controller, which are ultimately public methods on the controller class itself. All nice and easy. Actions exposed on controllers derived from the magical SmartDispatcherController can translate posted form values into domain model objects for you, simply by adding the DataBind attribute to your action’s argument list:

public void Save([DataBind("publisher")] Publisher publisher)

In the above example the publisher argument is passed into the Save action fully constructed from the posted form or query string values! This saves on the usual binding and scraping stuff you normally do in web forms, which almost always ends up sitting in the code-behind untested. Note: the databinding support is of course very flexible, and my example above just touches on the possibilities.
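Conceptually, the binder walks the posted name/value pairs carrying the given prefix and sets matching properties via reflection. A heavily simplified sketch follows; this is not MonoRail's actual DataBinder, and Publisher here is an invented sample type:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Invented sample domain type for the sketch.
public class Publisher
{
    private string name;
    private string city;
    public string Name { get { return name; } set { name = value; } }
    public string City { get { return city; } set { city = value; } }
}

// Toy form binder: for each "prefix.Property" key in the posted values,
// set the matching property on a freshly constructed instance of T.
public static class FormBinder
{
    public static T Bind<T>(string prefix, IDictionary<string, string> form) where T : new()
    {
        T instance = new T();
        foreach (KeyValuePair<string, string> pair in form)
        {
            if (!pair.Key.StartsWith(prefix + ".")) continue;
            string propertyName = pair.Key.Substring(prefix.Length + 1);
            PropertyInfo property = typeof(T).GetProperty(propertyName);
            if (property != null && property.CanWrite)
                property.SetValue(instance, pair.Value, null);
        }
        return instance;
    }
}
```

The real binder goes much further (nested objects, collections, type conversion), but this is the essence of how `[DataBind("publisher")]` turns form fields into a Publisher.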


The model can be supplied with an ActiveRecord enabled domain model. This of course abstracts away NHibernate behind the scenes. Unfortunately generic collections support doesn’t work right now but I believe that work is firmly underway.

Edit: Generic collections are supported, as long as you’re running from the trunk.

Domain entities are adorned with the necessary attributes defining how their properties are mapped to your data source. This is a departure from the usual hbm XML mapping files but just as expressive.

ActiveRecord also supplies simple validation in the form of special attributes including email validation, regular expression validation and the like.

UI Components

Reusability of UI components is achieved via ViewComponent. These can support nested sections, parameters etc.

AJAX and client scripting support

Built into MonoRail are a number of helpers which allow you to perform the usual AJAX stuff such as calling remote server side methods and callback support, Scriptaculous and Effects2 client side goodness. In fact the helper extensibility is a nice way of adding extensions and wrapping commonly performed functionality in the view.

Convention over configuration

Like RoR, MonoRail prefers convention over configuration. The project structure skeleton is the same throughout all MonoRail enabled solutions. The MonoRail installer also includes a handy templated solution provider for both VS.NET 2003 and 2005 to create the project skeleton and testing is included. Accustomed MonoRail developers can open any solution and just know where things are going to be.


Testability

One of the sore points of ASP.NET development was the unit testing difficulty. MonoRail overcomes this by enforcing the MVC pattern. Controllers are testable classes sitting outside the usual ASP.NET cruft. There are a number of framework features within MonoRail which specifically aid testing.


Scaffolding

Like RoR, controllers adorned with the necessary scaffolding attributes can automatically produce the basic markup and logic for CRUD operations on your domain model. Handy for prototyping or creating rough-and-ready data entry capabilities in your applications.

Container support

MonoRail supports IoC via the Windsor container. Controllers and their dependencies/parameters can be injected by the container if necessary.

The negatives

MonoRail does have some negatives, which I will mull over:

No 3rd party or ASP.NET web forms controls support

I don’t really see this as an issue but can see that some ASP.NET developers will. Fans of the big UI library packages will complain, but there is this old concept of HTML, and it is surprising how flexible it can be 😉 Most use of control libraries I have seen has been misguided; by this I mean they’re usually added to a screen just because they can be.

The web forms control library isn’t such an issue, as things like calendars, grids and the like are not too difficult to duplicate the MonoRail way.

Learning a whole bunch of new concepts/patterns/practices

Most ASP.NET developers are what can only be described as morts. They’re not familiar with design patterns and good practices such as clean separation, since they came from either ASP or VB (procedural backgrounds), so this stuff doesn’t come naturally. If you’ve used NHibernate you’ll pick up the ActiveRecord stuff in no time. Learning the template syntax for your chosen view engine is of course another consideration.


In summary, the benefits of using MonoRail over web forms are clear: clean separation of concerns, testability, reduced complexity and none of the web form cruft and artifacts. MonoRail won’t suit everybody, but even if you’re happy with web forms it’s definitely worth a look. Personally, I think it’s the only way worth programming with ASP.NET in the enterprise today.

Catching the DLR

DLR announcements abound… Wow, the DLR or Dynamic Language Runtime, is causing quite a buzz across the tech-blogs and with good reason! Expect lots of Ruby goodness to come.

Nice to see that this is an open-source initiative, being licensed under the Microsoft Permissive License. What is it with the big vendors and their individual open-source licenses? If you don’t have your own open-source license you’re officially a nobody nowadays it seems 🙂

Silverlight… Cross-platform?.

Wow! Looks like I missed this somehow… Silverlight, although being touted as “cross-platform” won’t ship with Linux compatible plug-ins. Wise move Microsoft.

I can’t see the YouTubes of this world (especially given that Google isn’t historically one of the great adopters of MS technology) adopting your “cross-platform” technology. I thought Microsoft were starting to move away from these lock-in-encouraging business tactics…

At least drop the “cross-platform” misnomer!

Ubuntu experiences

I’ve been using Ubuntu as my main OS since Dapper Drake, which was my first venture into GNU/Linux. Since then Ubuntu has been through two further releases: Edgy Eft and, bringing us up to date, Feisty Fawn or 7.04 (in Ubuntu versioning, 7 signifies the release year and 04 the release month).

Needless to say, I’m thoroughly impressed with Ubuntu… Admittedly I’m a little green when it comes to the other distros, having only dabbled with SUSE, DSL and PuppyLinux, but Ubuntu has fulfilled my needs so I haven’t really had to look elsewhere.

I upgraded to the early release of Feisty a while back now, and it has been rock-solid throughout. Not bad for pre-release software never mind an OS! Anyway the main improvements over the earlier version are a more stable and compatible network manager (wireless networking support primarily), a refocused control panelesque administration section, flashy 3d desktop effects via Compiz, and better binary/restricted or non-free driver support to name a few. The best feature in my eyes however, and the one which has made Ubuntu that bit easier to adopt, has to be the automatic codec installation support.

Due to the Ubuntu philosophy of only including free and non-proprietary packages, many codecs were not installed with the vanilla OS installation and required a bit of tweaking and cajoling to get configured… Now they’re installed automatically as and when you need them, making widescale adoption of the OS that bit easier.

The final version has been just as stable as the early builds for me, the only notable issue being that the 3d effects are a little buggy. I’ve turned off those supplied by the OS and installed Beryl (a fork of Compiz, more stable in my experience), and all my previous 3d desktop niceness is stable again.

Windows development (.NET, IIS, SQL), when necessary, is handled via VMware Player and a Windows VM with my stock VS environments. Mono, Rails, MySQL and the little PHP I’ve been getting into recently have all been native on Linux and Apache, using several IDEs including Eclipse and RadRails.

The usual day-to-day stuff such as browsing, email, media playing etc has been snag-free as usual with Firefox and Opera for browsing, Amarok for audio and iPod management and finally VLC, MPlayer and Xine for video.

I find myself only using Windows when necessity dictates now. I have briefly used Vista, but not yet in anger, so can’t really comment on it. The hardware requirement is a bit of a joke though, and I don’t think I’ll be upgrading in the near future… But that’s another post altogether 🙂

Lightbox goodness

I’ve been creating a gallery page for my current project and wanted a nice way of displaying the full-size images without resorting to pop-ups (yuk!)… Enter Lightbox 2.

Lightbox 2 is a JS library which is ideal for displaying gallery pages of images. When the user clicks on a thumbnail, the full version of the image transitions into the centre of the screen and the background is overlaid with a transparent black layer, which is what gives it the light-box effect.

Anyway my explanation isn’t the most revealing so go take a look at it yourself!

Thinking about Open Source

It amazes me to think that it has only been a handful of years since I started using open source software in my hacking. My projects now have open source deeply embedded: ranging from ORM frameworks to even operating systems and application servers.

Since most of my previous development was in .NET (Microsoft’s implementation, not Mono), my first forays into open source were the frameworks and libraries built for that platform. A quick roll call of some I am familiar with and have used extensively:

NHibernate: Excellent ORM framework which is really maturing into a nice product. Was the basis of prescriptive architecture during my time at BBC Worldwide.

NUnit: For unit-testing and dynamic mock objects. Again, another essential open source framework which has implementations for common OO language environments.

Log4Net: The canonical instrumentation library, has implementations throughout almost all OO languages. Used in literally all decent open source libraries and frameworks.

Spring.NET: The excellent and mature object container and IoC framework. Although this has grown into a complete application framework I can only comment on dependency-injection aspects of it.

NSpectre: Created and maintained by a colleague of mine during my time at BBC. Superb validation framework which uses XML to describe validation rules. Does an excellent job of taking out complicated validation logic from the code and provides a means to declare all the validation gunk declaratively.

OpenSmtp: An open source SMTP library. No longer maintained I believe but still very nice.

Just a few mentioned there but the main ones I’ve used during my Windows & .NET development. I’ll get into what I consider true open source development in another post soon 😉