DDD8

Today I went to the DeveloperDeveloperDeveloper day at the Microsoft campus in Thames Valley Park, Reading. It was a fantastic day with many great sessions covering a wide range of topics.

The first session I attended was “Mixing functional and object oriented approaches to programming in C#” by Mark Needham from ThoughtWorks. Mark discussed using functional programming techniques in C#. The talk focussed on using features of LINQ to replace imperative operations, such as for-each loops and if-else statements. Mark applied functional techniques at the operation level and also at the structural level, demonstrating how some common “Gang of Four” patterns can be implemented using a functional approach, by passing functions instead of interface implementations.
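As a sketch of the idea (the discount example and type names are mine, not Mark’s): a “Gang of Four” strategy interface can collapse into a delegate, and a for-each/if accumulation into a LINQ pipeline.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The classic Strategy pattern: each discount rule is a separate class...
public interface IDiscountStrategy
{
    decimal Apply(decimal total);
}

public static class PricingExample
{
    // ...whereas the functional version accepts the strategy as a Func,
    // so callers supply behaviour inline as a lambda.
    public static decimal ApplyDiscount(decimal total, Func<decimal, decimal> discount)
    {
        return discount(total);
    }

    // LINQ replacing an imperative for-each loop with an if inside it.
    public static decimal TotalOver(IEnumerable<decimal> prices, decimal threshold)
    {
        return prices.Where(p => p > threshold).Sum();
    }
}
```

Here a new discount rule is just `PricingExample.ApplyDiscount(100m, t => t * 0.9m)` rather than a new `IDiscountStrategy` implementation.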

Mark recommended the book “Real-World Functional Programming” as a good resource on functional programming techniques.

The next session I attended was “Commercial Software Development – Writing Software Is Easy, Not Going Bust Is The Hard Bit” by Liam Westley. Liam gave an informative and entertaining talk on tactics he found useful when running a software development company. His tips included:

  • Offering customers an alternative to phone-based support to avoid interruptions.
  • Using automated unit tests and functional tests to prevent bugs from reaching production only to be discovered by your users. Liam said that, for him, automated testing didn’t immediately increase productivity, but over time it has enabled him to develop high-quality software faster, resulting in satisfied customers.
  • Having good logging and error notifications. These enabled Liam to quickly identify and fix problems, sometimes before the customer had even raised the issue.
  • Getting organised by tracking time, thoroughly reading and understanding contracts, always having an agenda for meetings and keeping a detailed history of support calls.
  • Improving your sales pitches by researching the business you are selling to and focusing on delivering value to the business, not just a set of features. He also recommended using lightweight specification documents that allow for change and spreading costs over time.

Liam suggested a good book on software product pricing called “Don’t Just Roll The Dice”.

The last session of the morning was “C# 4” by StackOverflow superstar Jon Skeet. Jon discussed the new features of C# 4, including named parameters, improved COM interop, generic variance (covariance, contravariance and invariance, oh my!) and dynamic typing.
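For illustration (these examples are mine, not Jon’s), here are two of those features in a few lines: named/optional parameters and generic covariance.

```csharp
using System;
using System.Collections.Generic;

public static class CSharp4Features
{
    // Optional parameters with default values; callers can omit them
    // or pass them by name in any order.
    public static string Greet(string name, string greeting = "Hello", bool shout = false)
    {
        var message = greeting + ", " + name;
        return shout ? message.ToUpper() : message;
    }

    // Generic covariance: IEnumerable<out T> means a sequence of a derived
    // type can be used where a sequence of a base type is expected.
    public static int Count(IEnumerable<object> items)
    {
        int count = 0;
        foreach (var item in items) count++;
        return count;
    }
}
```

`CSharp4Features.Greet("DDD8", shout: true)` skips the `greeting` argument entirely, and `CSharp4Features.Count(new List<string> { "a", "b" })` compiles in C# 4 because `IEnumerable<string>` is covariantly an `IEnumerable<object>`.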

During the lunch break there was a series of Grok talks. Several presenters gave short talks on using T4 templates, the features of CodeRush Xpress, tracking tasks using a personal Kanban board and an introduction to Albacore: a non-XML based build system.

The first session I attended after lunch was “C# on the iPhone with Monotouch” presented by Chris Hardy. MonoTouch allows you to develop iPhone apps using C# on Mono, an open-source implementation of the .NET Framework. I was amazed at the tooling support and how well it integrated with the iPhone development tools. Chris said the MonoTouch team released support for the iPad within 24 hours of the SDK being released by Apple. You still need a Mac to develop, but the fact you can use C# and standard .NET libraries makes it easy for .NET developers to reuse their existing skills. Chris stressed that it’s still important to learn how to read Objective-C to follow the Apple documentation. The talk was very inspirational and I now might have to invest in a MacBook Pro 🙂

The last talk I went to was “Testing C# and ASP.Net applications using Ruby” presented by Ben Hall (who also happened to be celebrating his birthday, resulting in birthday cake and a chorus of Happy Birthday To You). Ben showed that Ruby can be used to create more readable tests than C#. He gave an example of testing a .NET web application using Cucumber, a Ruby-based Behaviour-Driven Development (BDD) framework. He used IronRuby for calling standard .NET libraries and demonstrated using WebRat for running the tests through a browser. Ben explained that the tests can be run in the background by using a “headless” browser. The RubyMine IDE includes extensive refactoring support and a Visual Studio-like experience. The browser-based tests ran much slower than standard unit tests, so Ben suggests only automating high-value, happy-path scenarios.

Further information can be found in the book, Testing ASP.NET Web Applications, which Ben co-authored.

Thanks to the organisers, Microsoft and other sponsors for putting on a fantastic day.

Creating a Simple IoC Container

Inversion of Control (IoC) is a software design principle that describes inverting the flow of control in a system, so execution flow is not controlled by a central piece of code. This means that components should depend only on abstractions of other components and should not be responsible for creating their dependencies. Instead, object instances are supplied at runtime by an IoC container through Dependency Injection (DI).

IoC enables better software design that facilitates reuse, loose coupling, and easy testing of software components.

At first IoC might seem complicated, but it’s actually a very simple concept. An IoC container is essentially a registry of abstract types and their concrete implementations. You request an abstract type from the container and it gives you back an instance of the concrete type. It’s a bit like an object factory, but the real benefits come when you use an IoC container in conjunction with dependency injection.

An IoC container is useful on all types of projects, both large and small. It’s true that large, complex applications benefit most from reduced coupling, but I think it’s still a good practice to adopt, even on a small project. Most small applications don’t stay small for long. As Jimmy Bogard recently stated on Twitter: “the threshold to where an IoC tool starts to show its value is usually around the 2nd hour in the life of a project”.

There are many existing containers to choose from. These have subtle differences, but all aim to achieve the same goal, so it’s really a matter of personal taste which one you choose. Some common containers in .NET are:

While I recommend using one of the containers already available, I am going to demonstrate how easy it is to implement your own basic container. This is primarily to show how simple the IoC container concept is. However, there might be times when you can’t use one of the existing containers, or don’t want all the features of a fully-fledged container. You can then create your own fit-for-purpose container.

Using Dependency Injection with IoC

Dependency Injection is a technique for passing dependencies into an object’s constructor. If the object has been loaded from the container, then its dependencies will be automatically supplied by the container. This allows you to consume a dependency without having to manually create an instance. This reduces coupling and gives you greater control over the lifetime of object instances.

Dependency injection makes it easy to test your objects by allowing you to pass in mocked instances of dependencies. This allows you to focus on testing the behaviour of the object itself, without depending on the implementation of external components or services.
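As a sketch (the types here are hypothetical, not from the sample application that follows): a hand-rolled stub passed through the constructor lets the test run without the real service.

```csharp
using System;

public interface IExchangeRateService
{
    decimal GetRate(string currency);
}

// The object under test receives its dependency through the constructor,
// so it never creates or locates the real service itself.
public class PriceCalculator
{
    private readonly IExchangeRateService rates;

    public PriceCalculator(IExchangeRateService rates)
    {
        this.rates = rates;
    }

    public decimal ConvertPrice(decimal amount, string currency)
    {
        return amount * rates.GetRate(currency);
    }
}

// A stub returning a canned rate: the test is fast, deterministic and
// independent of any external service implementation.
public class StubExchangeRateService : IExchangeRateService
{
    public decimal GetRate(string currency) { return 2m; }
}
```

A test can then assert `new PriceCalculator(new StubExchangeRateService()).ConvertPrice(10m, "GBP")` is `20m`, exercising only the calculator’s behaviour.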

It is good practice to reduce the number of direct calls to the container by only resolving top-level objects. The rest of the object graph will be resolved through dependency injection. This also prevents IoC-specific code from becoming scattered throughout the code base, making it easy to switch to a different container if required.

Implementing a Simple IoC Container

To demonstrate the basic concepts behind IoC containers, I have created a simple implementation of an IoC container.

Download the source and sample code here

This implementation is loosely based on RapidIoc, created by Sean McAlindin. It does not have all the features of a full IoC container, however, it should be enough to demonstrate the main benefits of using a container.

public class SimpleIocContainer : IContainer
{
    private readonly IList<RegisteredObject> registeredObjects = new List<RegisteredObject>();

    public void Register<TTypeToResolve, TConcrete>()
    {
        Register<TTypeToResolve, TConcrete>(LifeCycle.Singleton);
    }

    public void Register<TTypeToResolve, TConcrete>(LifeCycle lifeCycle)
    {
        registeredObjects.Add(new RegisteredObject(typeof(TTypeToResolve), typeof(TConcrete), lifeCycle));
    }

    public TTypeToResolve Resolve<TTypeToResolve>()
    {
        return (TTypeToResolve) ResolveObject(typeof(TTypeToResolve));
    }

    public object Resolve(Type typeToResolve)
    {
        return ResolveObject(typeToResolve);
    }

    private object ResolveObject(Type typeToResolve)
    {
        var registeredObject = registeredObjects.FirstOrDefault(o => o.TypeToResolve == typeToResolve);
        if (registeredObject == null)
        {
            throw new TypeNotRegisteredException(string.Format(
                "The type {0} has not been registered", typeToResolve.Name));
        }
        return GetInstance(registeredObject);
    }

    private object GetInstance(RegisteredObject registeredObject)
    {
        if (registeredObject.Instance == null ||
            registeredObject.LifeCycle == LifeCycle.Transient)
        {
            var parameters = ResolveConstructorParameters(registeredObject);
            registeredObject.CreateInstance(parameters.ToArray());
        }
        return registeredObject.Instance;
    }

    private IEnumerable<object> ResolveConstructorParameters(RegisteredObject registeredObject)
    {
        var constructorInfo = registeredObject.ConcreteType.GetConstructors().First();
        foreach (var parameter in constructorInfo.GetParameters())
        {
            yield return ResolveObject(parameter.ParameterType);
        }
    }
}

The SimpleIocContainer class has two public operations: Register and Resolve.

Register is used to register a type with a corresponding concrete implementation. When a type is registered, it is added to the list of registered objects.

Resolve is used to get an instance of a type from the container. Depending on the Lifecycle mode, a new instance is created each time the type is resolved (Transient), or only on the first request with the same instance passed back on subsequent requests (Singleton).

Before a type is instantiated, the container resolves the constructor parameters to ensure the object receives its dependencies. This is a recursive operation that ensures the entire object graph is instantiated.

If a type being resolved has not been registered, the container will throw a TypeNotRegisteredException.
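Putting it together, here is a hypothetical usage of the container (this assumes the SimpleIocContainer shown above; IEngine, Engine and Car are illustrative types of mine):

```csharp
// Illustrative types: Car depends on IEngine through its constructor,
// so the container resolves the whole graph recursively.
public interface IEngine { }
public class Engine : IEngine { }

public class Car
{
    public IEngine Engine { get; private set; }
    public Car(IEngine engine) { Engine = engine; }
}

public class ContainerDemo
{
    public static bool Run()
    {
        var container = new SimpleIocContainer();
        container.Register<IEngine, Engine>();              // default: Singleton
        container.Register<Car, Car>(LifeCycle.Transient);  // new Car per resolve

        var car1 = container.Resolve<Car>();
        var car2 = container.Resolve<Car>();

        // Transient: two distinct cars. Singleton: one shared engine.
        return !ReferenceEquals(car1, car2)
            && ReferenceEquals(car1.Engine, car2.Engine);
    }
}
```

Resolving an unregistered type (say, `container.Resolve<IGearbox>()`) would throw the TypeNotRegisteredException described above.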

Using the Simple IoC Container with ASP.NET MVC

Now I am going to demonstrate using the container by creating a basic order processing application using ASP.NET MVC.

First we create a custom controller factory called SimpleIocControllerFactory that derives from DefaultControllerFactory. Whenever a page is requested, ASP.NET calls GetControllerInstance to get an instance of the page controller. We can then pass back an instance of the controller resolved from our container.

public class SimpleIocControllerFactory : DefaultControllerFactory
{
    private readonly IContainer container;

    public SimpleIocControllerFactory(IContainer container)
    {
        this.container = container;
    }

    protected override IController GetControllerInstance(Type controllerType)
    {
        return container.Resolve(controllerType) as Controller;
    }
}

We now need to set SimpleIocControllerFactory as the current controller factory in the global Application_Start handler.

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        var container = new SimpleIocContainer();

        BootStrapper.Configure(container);

        ControllerBuilder.Current.SetControllerFactory(new SimpleIocControllerFactory(container));
    }
}

In order for the SimpleIocControllerFactory to resolve an instance of the OrderController, we need to register the OrderController with the container.

Here I have created a static Bootstrapper class for registering types with the container.

public static class BootStrapper
{
    public static void Configure(IContainer container)
    {
        container.Register<OrderController, OrderController>();
    }
}

The controllers do not contain state, therefore we can use the default singleton lifecycle, which creates the controller instance once and passes the same instance back on subsequent requests.

When we run the application, the OrderController should be resolved and the page will load.

It is worth noting that the controller factory should be the only place we need to explicitly resolve a type from the container. The controllers are top-level objects and all our other objects stem from these. Dependency Injection is used to resolve dependencies down the chain.

To place an order we need to make a call to OrderService from the controller. We inject a dependency to the order service by passing the IOrderService interface into the OrderController constructor.

public class OrderController : Controller
{
    private readonly IOrderService orderService;

    public OrderController(IOrderService orderService)
    {
        this.orderService = orderService;
    }

    public ActionResult Create(int productId)
    {
        int orderId = orderService.Create(new Order(productId));

        ViewData["OrderId"] = orderId;

        return View();
    }
}

When we build and run the application we should get an error: “The type IOrderService has not been registered”. This means the container has tried to resolve the dependency, but the type has not been registered with the container. So we need to register IOrderService and its concrete implementation, OrderService, with the container.

public static class BootStrapper
{
    public static void Configure(IContainer container)
    {
        container.Register<OrderController, OrderController>();
        container.Register<IOrderService, OrderService>();
    }
}

The OrderService in turn has a dependency on IOrderRepository, which is responsible for inserting the order into a database.

public class OrderService : IOrderService
{
    private readonly IOrderRepository orderRepository;

    public OrderService(IOrderRepository orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    public int Create(Order order)
    {
        int orderId = orderRepository.Insert(order);
        return orderId;
    }
}

As the OrderService was resolved from the container, we simply need to register an implementation for IOrderRepository for OrderService to receive its dependency.

public static class BootStrapper
{
    public static void Configure(IContainer container)
    {
        container.Register<OrderController, OrderController>();
        container.Register<IOrderService, OrderService>();
        container.Register<IOrderRepository, OrderRepository>();
    }
}

Any further types that are required simply need to be registered with the container and then passed as constructor arguments.

Most full-featured IoC containers support some form of auto-registration. This saves you from having to do a lot of one-to-one manual component mappings.
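As a sketch of how auto-registration works (the naming convention and the reflection plumbing here are my own assumptions, since the simple container above only exposes a generic Register method): scan an assembly and map each interface to the class named after it.

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class AutoRegistration
{
    // Registers every concrete class against an interface named
    // "I" + class name (e.g. OrderService -> IOrderService), by closing
    // the container's parameterless generic Register<TTypeToResolve, TConcrete>()
    // over each pair via reflection.
    public static void RegisterByConvention(object container, Assembly assembly)
    {
        var register = container.GetType().GetMethods()
            .First(m => m.Name == "Register" && m.IsGenericMethod
                        && m.GetParameters().Length == 0);

        foreach (var type in assembly.GetTypes().Where(t => t.IsClass && !t.IsAbstract))
        {
            var contract = type.GetInterfaces()
                .FirstOrDefault(i => i.Name == "I" + type.Name);
            if (contract != null)
            {
                register.MakeGenericMethod(contract, type).Invoke(container, null);
            }
        }
    }
}
```

With something like this, the three explicit lines in BootStrapper collapse into a single call that scans the application assembly.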

I hope I have demonstrated that IoC containers are not magic. They are in fact a simple concept that, when used correctly, can help to create flexible, loosely-coupled applications.

Software Development and Chaos Theory

After watching an excellent BBC documentary called The Secret Life of Chaos, I was interested to see if anyone had written about the association between chaos theory and the unpredictable nature of software development.

I came across these papers written in 1995 by L.B.S. Raccoon:

The Chaos Model and the Chaos Life Cycle

Raccoon (1995) The Chaos Model and the Chaos Life Cycle, in ACM Software Engineering Notes, Volume 20, Number 1, Pages 55 to 66, January 1995, ACM Press. (PDF available here).

The chaos model combines a linear problem-solving loop with fractals to suggest that a project consists of many interrelated levels of problem solving. The behaviour of a complex system emerges from the combined behaviour of the smaller building blocks.

The chaos life cycle defines the phases of the software development life cycle in terms of fractals, showing that all phases of the life cycle occur within other phases.

This suggests an iterative, outside-in development process, which is one of the fundamental principles of Behaviour-Driven Development (BDD).

The Chaos Strategy

Raccoon (1995) The Chaos Strategy, in ACM Software Engineering Notes, Volume 20, Issue 5, Pages 40 to 47, December 1995, ACM Press. (PDF available here).

The main rule of the chaos strategy is to always resolve the most important issue first. This reflects the Just-in-Time production ideology promoted by Lean Software Development.

It’s interesting to note that today’s popular agile software methodologies use some of the principles of chaos theory as presented in these papers. It seems that software development is recognised as an example of complexity at work.

Agile software development follows a natural, evolving, chaotic cycle. The smallest variation in conditions can result in a massive difference in outcome. There will never be a way of accurately predicting the outcome of a complex software project. We simply have to accept chaos as a fact of life and embrace an evolutionary development process.

Success, But At What Price?

A friend asked me recently, “do all these good organisational and development practices really make that much of a difference to the successful outcome of a project?”. We all know following good practices improves the lives of those involved, but do they really help to achieve a business success? After all, some extremely late, over-budget, bloated, misguided and poor-quality software development projects do go on to make the business a lot of money, and can therefore be deemed successful. But at what price? Projects like this often leave a path of destruction behind them: the stress, political battles, low morale, fear, late nights, strained relationships, blood, sweat and tears of the people who worked hard under pressure, following poor organisational and development practices. That’s the difference. Good practices not only help to achieve a successful business outcome, they also help to build a good working environment. There is a human factor to the successful outcome of a project. Let’s not forget that.

Internal And External Collaborators

The Single Responsibility Principle (SRP) states that every object should have a single responsibility, and that all its services should be aligned with that responsibility. By separating object responsibilities we are able to achieve a clear Separation of Concerns. Objects need to collaborate with each other in order to perform a behaviour. I find I use two distinct styles of collaboration with other objects, which I have called internal and external collaborators.

The principles of object collaboration are nothing new, but I have found that defining these roles has helped me to better understand how to design and test the behaviour of objects.

External Collaborators

External collaborators are objects that provide a service or resource to an object but are not directly controlled by the object. They are passed to an object by dependency injection or through a service locator. An object makes calls to its external collaborators to perform actions or retrieve data. When testing an object, any external collaborators are stubbed-out. We can then write tests that perform an action, then determine if the right calls were made to the external object.

Examples of external collaborator objects include: services, repositories, presenters, framework classes, email senders, loggers and file system wrappers.

We are interested in testing how we interact with the external collaborator and not how it affects the behaviour of our object. For example, if we are testing a controller that retrieves a list of customers from a repository, we want to know that we have asked the repository for a list of customers, but we are not concerned that the repository returns the correct customers (this is a test for the repository itself). Of course, we might need some particular customer objects returned by the stub for the purpose of testing the behaviour of the controller. These customer objects then become internal collaborators, which we’ll come to next.

External collaborators can be registered with an IoC container to manage the creation and lifecycle of the object, and to provide an instance to dependent objects.
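As an illustration (a hand-rolled test double; the types are mine): the test verifies the interaction with the repository, not the repository’s own behaviour.

```csharp
using System;
using System.Collections.Generic;

public class Customer { public string Name; }

public interface ICustomerRepository
{
    IList<Customer> GetAll();
}

// The object under test: a controller whose external collaborator
// is the repository, supplied through the constructor.
public class CustomerController
{
    private readonly ICustomerRepository repository;

    public CustomerController(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public IList<Customer> List()
    {
        return repository.GetAll();
    }
}

// A stub that records the interaction and returns canned data, so the
// test can assert "we asked the repository" without a real database.
public class StubCustomerRepository : ICustomerRepository
{
    public bool GetAllWasCalled;

    public IList<Customer> GetAll()
    {
        GetAllWasCalled = true;
        return new List<Customer> { new Customer { Name = "Test" } };
    }
}
```

The assertion is simply that `GetAllWasCalled` is true after calling `List()`; whether the real repository returns the right customers is a separate test for the repository itself.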

Internal Collaborators

Internal collaborators have a smaller scope than external collaborators. They are used in the context of the local object to provide functions or hold state. An object and its internal collaborators work together closely and should be treated as a single unit of behaviour.

Examples of internal collaborators include: DTOs, domain entities, view-models, utilities, system types and extension methods.

When testing an object with internal collaborators, we are interested in the effect on behaviour, not the interaction with the object. Therefore we shouldn’t stub-out internal collaborators. We don’t care how we interact with them, just that the correct behaviour occurs.

These objects are not affected by external influences, such as a database, email server, or file system. They are also not volatile or susceptible to environmental changes, such as a web request context. Therefore, they should not require any special context setup before testing.

We don’t get passed an instance of an internal collaborator through dependency injection. Instead, they may be passed to us by an external collaborator (e.g. a repository returning an entity), or we create an instance within our own object when we need it (such as a DTO).
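For example (illustrative types of mine), a domain entity is an internal collaborator: we exercise it directly and assert on the resulting state, rather than stubbing it out.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Internal collaborators: plain domain objects with no external
// dependencies and no special context setup required for testing.
public class InvoiceLine
{
    public decimal Price;
    public int Quantity;
}

public class Invoice
{
    private readonly List<InvoiceLine> lines = new List<InvoiceLine>();

    public void AddLine(decimal price, int quantity)
    {
        lines.Add(new InvoiceLine { Price = price, Quantity = quantity });
    }

    // Behaviour we test directly: the test asserts on the Total,
    // not on how Invoice interacts with its lines.
    public decimal Total
    {
        get { return lines.Sum(l => l.Price * l.Quantity); }
    }
}
```

A test adds a couple of lines and asserts the total; it never verifies that `Sum` was called on the line collection, because the interaction is an implementation detail.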

By understanding the roles and responsibilities of collaboration between objects, our design becomes clearer and tests are more focused and easier to maintain.

Continuous Testing In .NET

Since the article was published, there have been several new continuous testing tools enter the market. I recommend checking out NCrunch, which is in my opinion the best continuous testing tool available for .NET – Tim 26/10/2012

The Test-Driven Development cycle of red-green-refactor gets you into a rhythm of writing a failing test, making it pass and then refactoring. It is important that we get feedback early if something is broken so we can keep this rhythm going. Stopping to run tests and wait for the result can impact this rhythm and distract our focus on the next step.

The concept of continuous testing came from research carried out by the Program Analysis Group at MIT. They found that continuously running tests increased developer productivity and reduced waste. You can find more information about continuous testing in this blog post by Ben Rady.

Tools For Continuous Testing

There are a number of tools available that support continuous testing. Ruby has AutoSpec, a command-line continuous testing tool. Java has Infinitest, a plug-in for Eclipse and IntelliJ. In .NET, James Avery has been working on AutoTest.NET and I was involved in a project to write a Visual Studio add-in called QuickTest. The main problem I have found with using these tools for .NET development is the complexity of real-world project structures. Whenever I’d try to use QuickTest on real projects, I found I was dealing with very different project structures, test configurations and naming conventions. It was really difficult to factor all these configuration scenarios into a tool that is designed to “discover” which unit tests to continuously run.

That gave me the idea for AutoBuild.

Introducing AutoBuild

AutoBuild is a continuous testing tool for .NET that runs a NAnt script whenever a file is saved. You simply tell AutoBuild to watch a particular folder and give it a simple NAnt script to run whenever a file changes. The advantage of this approach is that all the complexity in customising a continuous testing tool for your project is contained within the NAnt script. The NAnt script is likely to be a small subset of an actual build script. For example, you might not want tests that hit the database to be run on every save, so you could add a “DatabaseTest” attribute to your test and configure the NAnt script to ignore all tests with that attribute.
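A minimal sketch of what such a build file might look like (the solution path, assembly path and category name are placeholders, and the exact task attributes may vary between NAnt/NAntContrib versions):

```xml
<project name="autobuild" default="test">
  <target name="compile">
    <msbuild project="src/MySolution.sln" />
  </target>

  <target name="test" depends="compile">
    <nunit2>
      <formatter type="Plain" />
      <test assemblyname="src/MyProject.Tests/bin/Debug/MyProject.Tests.dll">
        <categories>
          <!-- Skip slow tests on every save; run them in the full CI build instead -->
          <exclude name="DatabaseTest" />
        </categories>
      </test>
    </nunit2>
  </target>
</project>
```

The point is that all the project-specific knowledge lives in this small script, not in the continuous testing tool itself.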

If you aren’t familiar with NAnt, don’t worry. A default build file is provided and you only need to modify the solution path and unit test assembly path properties.

Currently, the test failure output in AutoBuild only supports the NUnit task. I plan to add support for other unit test frameworks in the near future. I am also interested in adding support for Rake build scripts.

Using AutoBuild

You can find the source code for AutoBuild here:
http://code.google.com/p/autobuildtool/

To get started, get the source and run the default.build.cmd file. This will run the build and create an output folder that contains the AutoBuild assemblies, along with a sample build script and command file. You can copy these to your own project and customise the build file for your solution.

To run the “dog food” sample, simply run the autobuild.cmd file in the root AutoBuild folder (not the one in the output folder). This will open AutoBuild and start watching the source directory. Then open the AutoBuild solution and modify any .cs file. This should kick-off the build and run the tests. Mess around with the code to watch the build fail. Note that this autobuild.build script is configured to only run tests not marked with “integration”.

AutoBuild will produce an output with build errors and failing tests in red and successful builds and test runs in green. You can change the output colours by modifying the log4net settings in the AutoBuild.Console.exe.config file. You can also switch on debug output by changing the log level to debug.

AutoBuild is ideally suited to a dual-monitor set up, with Visual Studio in one monitor and AutoBuild in the other. This allows you to focus on writing code and check the status of the build at a glance. I would like to integrate the console into Visual Studio for cases when you only have one monitor (e.g. working on a laptop). This might involve writing a Visual Studio add-in that allows you to load the console into the IDE.

Please submit any issues to the project on Google Code. If you have any comments or suggestions, then please feel free to contact me.

Services Are Not Objects

Many .NET applications I see that have a so-called Service-Oriented Architecture (SOA) use a technology such as Windows Communication Foundation (WCF) to treat services as if they were local objects. You call methods on the remote object in the same way as you would call a local object, only rather than executing locally, the request is sent to a remote service for processing which may then return a response. This is known as a Remote Procedure Call, or RPC. Often these calls can cause the application to hang while waiting for a response. The usual way to handle this is to call the service method asynchronously, providing a method to call when a response comes back. This allows your application to continue processing after the request is made.

Sounds simple enough, but I have found these direct request-response calls can lead to a fragile and unreliable application that is tightly-coupled to the services it calls. If for some reason a service is not available, your application may stop functioning. Sure, you can build in error-handling, but it then becomes the responsibility of your application to manage the dependency on the service.

Recently, Jennifer Smith and I were discussing the unreliable nature of these architectures on Twitter, when Udi Dahan, SOA specialist and creator of NServiceBus, pointed us to a recent blog post of his that explains why we shouldn’t call services as if they were real objects. Udi suggests that there is a better way to handle calls to services: use a message queue. He also makes the point that we need to stop treating services as local objects and use messaging as an integral part of the application architecture:

A queue isn’t an implementation detail. It’s the primary architectural abstraction of a distributed system.

So, if you’re developing a distributed application, use a message bus to communicate between applications. The message bus is responsible for transporting messages between applications. When you use a message bus, the application that sends a message is no longer coupled to the receiver.
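To make the idea concrete, here is a toy in-memory queue of my own (not the NServiceBus API): the sender only enqueues a message, and the handler runs later, possibly in another application.

```csharp
using System;
using System.Collections.Generic;

// A message is just a serialisable DTO; the sender never calls the
// receiver directly, it only puts a message on the queue.
public class PlaceOrderMessage
{
    public int ProductId;
}

// A toy in-memory queue standing in for the message bus transport.
public class InMemoryBus
{
    private readonly Queue<object> queue = new Queue<object>();
    private readonly Dictionary<Type, Action<object>> handlers =
        new Dictionary<Type, Action<object>>();

    // Fire-and-forget: Send returns immediately, so the sender is not
    // coupled to (or blocked by) the availability of the receiver.
    public void Send(object message) { queue.Enqueue(message); }

    public void Subscribe<T>(Action<T> handler)
    {
        handlers[typeof(T)] = m => handler((T) m);
    }

    // In a real bus this runs in the receiving application, possibly
    // much later; queued messages survive until they can be handled.
    public void ProcessPending()
    {
        while (queue.Count > 0)
        {
            var message = queue.Dequeue();
            Action<object> handler;
            if (handlers.TryGetValue(message.GetType(), out handler)) handler(message);
        }
    }
}
```

Even if the receiving application is down when `Send(new PlaceOrderMessage { ProductId = 42 })` is called, the message simply waits in the queue, which is exactly the robustness the request-response style lacks.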

I am currently converting a fragile request-response service-based application to use messaging with the NServiceBus message bus. I hope to significantly improve the performance and robustness of the application by making messaging a first-class citizen in the architecture, and not just an abstraction.

For more information on messaging and NServiceBus, check out:

This book also contains excellent information on designing and developing messaging solutions: