Archive for the 'Patterns' Category

Creating a Simple IoC Container

Inversion of Control (IoC) is a software design principle that describes inverting the flow of control in a system, so execution flow is not controlled by a central piece of code. This means that components should only depend on abstractions of other components and should not be responsible for creating the objects they depend on. Instead, object instances are supplied at runtime by an IoC container through Dependency Injection (DI).

IoC enables better software design that facilitates reuse, loose coupling, and easy testing of software components.

At first IoC might seem complicated, but it’s actually a very simple concept. An IoC container is essentially a registry of abstract types and their concrete implementations. You request an abstract type from the container and it gives you back an instance of the concrete type. It’s a bit like an object factory, but the real benefits come when you use an IoC container in conjunction with dependency injection.

An IoC container is useful on all types of projects, both large and small. It’s true that large, complex applications benefit most from reduced coupling, but I think it’s still a good practice to adopt, even on a small project. Most small applications don’t stay small for long. As Jimmy Bogard recently stated on Twitter: “the threshold to where an IoC tool starts to show its value is usually around the 2nd hour in the life of a project”.

There are many existing containers to choose from in .NET. They have subtle differences, but all aim to achieve the same goal, so which one you use is really a matter of personal taste.

While I recommend using one of the containers already available, I am going to demonstrate how easy it is to implement your own basic container. This is primarily to show how simple the IoC container concept is. However, there might be times when you can’t use one of the existing containers, or don’t want all the features of a fully-fledged container. You can then create your own fit-for-purpose container.

Using Dependency Injection with IoC

Dependency Injection is a technique for passing dependencies into an object’s constructor. If the object has been loaded from the container, then its dependencies will be automatically supplied by the container. This allows you to consume a dependency without having to manually create an instance. This reduces coupling and gives you greater control over the lifetime of object instances.

Dependency injection makes it easy to test your objects by allowing you to pass in mocked instances of dependencies. This allows you to focus on testing the behaviour of the object itself, without depending on the implementation of external components or services.

It is good practice to reduce the number of direct calls to the container by only resolving top-level objects. The rest of the object graph is resolved through dependency injection. This also prevents IoC-specific code from becoming scattered throughout the code base, making it easy to switch to a different container if required.

Implementing a Simple IoC Container

To demonstrate the basic concepts behind IoC containers, I have created a simple implementation of an IoC container.

Download the source and sample code here

This implementation is loosely based on RapidIoc, created by Sean McAlindin. It does not have all the features of a full IoC container, however, it should be enough to demonstrate the main benefits of using a container.

public class SimpleIocContainer : IContainer
{
    private readonly IList<RegisteredObject> registeredObjects = new List<RegisteredObject>();
 
    public void Register<TTypeToResolve, TConcrete>()
    {
        Register<TTypeToResolve, TConcrete>(LifeCycle.Singleton);
    }
 
    public void Register<TTypeToResolve, TConcrete>(LifeCycle lifeCycle)
    {
        registeredObjects.Add(new RegisteredObject(typeof (TTypeToResolve), typeof (TConcrete), lifeCycle));
    }
 
    public TTypeToResolve Resolve<TTypeToResolve>()
    {
        return (TTypeToResolve) ResolveObject(typeof (TTypeToResolve));
    }
 
    public object Resolve(Type typeToResolve)
    {
        return ResolveObject(typeToResolve);
    }
 
    private object ResolveObject(Type typeToResolve)
    {
        var registeredObject = registeredObjects.FirstOrDefault(o => o.TypeToResolve == typeToResolve);
        if (registeredObject == null)
        {
            throw new TypeNotRegisteredException(string.Format(
                "The type {0} has not been registered", typeToResolve.Name));
        }
        return GetInstance(registeredObject);
    }
 
    private object GetInstance(RegisteredObject registeredObject)
    {
        if (registeredObject.Instance == null || 
            registeredObject.LifeCycle == LifeCycle.Transient)
        {
            var parameters = ResolveConstructorParameters(registeredObject);
            registeredObject.CreateInstance(parameters.ToArray());
        }
        return registeredObject.Instance;
    }
 
    private IEnumerable<object> ResolveConstructorParameters(RegisteredObject registeredObject)
    {
        var constructorInfo = registeredObject.ConcreteType.GetConstructors().First();
        foreach (var parameter in constructorInfo.GetParameters())
        {
            yield return ResolveObject(parameter.ParameterType);
        }
    }
}

The SimpleIocContainer class has two public operations: Register and Resolve.

Register is used to register a type with a corresponding concrete implementation. When a type is registered, it is added to the list of registered objects.

Resolve is used to get an instance of a type from the container. Depending on the lifecycle mode, either a new instance is created each time the type is resolved (Transient), or an instance is created on the first request and the same instance is returned on all subsequent requests (Singleton).
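As a quick illustration of registering and resolving (IClock and SystemClock are made-up example types, not part of the sample code):

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

// Singleton (the default): the same SystemClock instance is returned on every resolve.
// Registering with LifeCycle.Transient instead would create a new instance per resolve.
var container = new SimpleIocContainer();
container.Register<IClock, SystemClock>();

IClock clock = container.Resolve<IClock>();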

Before a type is instantiated, the container resolves the constructor parameters to ensure the object receives its dependencies. This is a recursive operation that ensures the entire object graph is instantiated.

If a type being resolved has not been registered, the container will throw a TypeNotRegisteredException.
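The container relies on a small RegisteredObject helper class and a LifeCycle enumeration that aren’t shown above. A minimal sketch, inferred from how the container uses them, might look like this:

public enum LifeCycle
{
    Singleton,
    Transient
}

public class RegisteredObject
{
    public RegisteredObject(Type typeToResolve, Type concreteType, LifeCycle lifeCycle)
    {
        TypeToResolve = typeToResolve;
        ConcreteType = concreteType;
        LifeCycle = lifeCycle;
    }

    public Type TypeToResolve { get; private set; }

    public Type ConcreteType { get; private set; }

    public object Instance { get; private set; }

    public LifeCycle LifeCycle { get; private set; }

    public void CreateInstance(object[] args)
    {
        // Create the concrete type, passing in any resolved constructor arguments.
        Instance = Activator.CreateInstance(ConcreteType, args);
    }
}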

Using the Simple IoC Container with ASP.NET MVC

Now I am going to demonstrate using the container by creating a basic order processing application using ASP.NET MVC.

First we create a custom controller factory called SimpleIocControllerFactory that derives from DefaultControllerFactory. Whenever a request comes in, ASP.NET MVC calls GetControllerInstance to get an instance of the controller that will handle the request. We can then pass back a controller instance resolved from our container.

public class SimpleIocControllerFactory : DefaultControllerFactory
{
    private readonly IContainer container;
 
    public SimpleIocControllerFactory(IContainer container)
    {
        this.container = container;
    }
 
    protected override IController GetControllerInstance(Type controllerType)
    {
        return container.Resolve(controllerType) as Controller;
    }
}

We now need to set SimpleIocControllerFactory as the current controller factory in the global Application_Start handler.

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        var container = new SimpleIocContainer();
 
        BootStrapper.Configure(container);
 
        ControllerBuilder.Current.SetControllerFactory(new SimpleIocControllerFactory(container));
    }
}

In order for the SimpleIocControllerFactory to resolve an instance of the OrderController, we need to register the OrderController with the container.

Here I have created a static BootStrapper class for registering types with the container.

public static class BootStrapper
{
    public static void Configure(IContainer container)
    {
        container.Register<OrderController, OrderController>();
    }
}

The controllers do not hold any state, so we can use the default singleton lifecycle, which means the container creates the controller instance only once and returns the same instance for subsequent requests.

When we run the application, the OrderController should be resolved and the page will load.

It is worth noting that the controller factory should be the only place we need to explicitly resolve a type from the container. The controllers are top-level objects and all our other objects stem from these. Dependency Injection is used to resolve dependencies down the chain.

To place an order we need to make a call to OrderService from the controller. We inject a dependency to the order service by passing the IOrderService interface into the OrderController constructor.

public class OrderController : Controller
{
    private readonly IOrderService orderService;
 
    public OrderController(IOrderService orderService)
    {
        this.orderService = orderService;
    }
 
    public ActionResult Create(int productId)
    {
        int orderId = orderService.Create(new Order(productId));
 
        ViewData["OrderId"] = orderId;
 
        return View();
    }
}

When we build and run the application we should get an error: “The type IOrderService has not been registered”. This means the container has tried to resolve the dependency, but the type has not been registered with the container. So we need to register IOrderService and its concrete implementation, OrderService, with the container.

public static class BootStrapper
{
    public static void Configure(IContainer container)
    {
        container.Register<OrderController, OrderController>();
        container.Register<IOrderService, OrderService>();
    }
}

The OrderService in turn has a dependency on IOrderRepository which is responsible for inserting the order into a database.

public class OrderService : IOrderService
{
    private readonly IOrderRepository orderRepository;
 
    public OrderService(IOrderRepository orderRepository)
    {
        this.orderRepository = orderRepository;
    }
 
    public int Create(Order order)
    {
        int orderId = orderRepository.Insert(order);
        return orderId;
    }
}
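For reference, the service and repository interfaces are trivial. Inferred from the calls above, they might be declared like this:

public interface IOrderService
{
    int Create(Order order);
}

public interface IOrderRepository
{
    int Insert(Order order);
}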

As the OrderService was resolved from the container, we simply need to register an implementation for IOrderRepository for OrderService to receive its dependency.

public static class BootStrapper
{
    public static void Configure(IContainer container)
    {
        container.Register<OrderController, OrderController>();
        container.Register<IOrderService, OrderService>();
        container.Register<IOrderRepository, OrderRepository>();
    }
}

Any further types that are required simply need to be registered with the container and then accepted as constructor arguments.

Most full-featured IoC containers support some form of auto-registration. This saves you from having to do a lot of one-to-one manual component mappings.
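The SimpleIocContainer doesn’t support auto-registration, but as a rough sketch of the idea, a hypothetical extension method could scan an assembly and register each concrete class against the interface matching its name. This is illustrative only and not part of the sample download; it needs System.Linq and System.Reflection:

public static class ContainerExtensions
{
    public static void RegisterByConvention(this SimpleIocContainer container, Assembly assembly)
    {
        // Find the parameterless Register<TTypeToResolve, TConcrete>() method on the container.
        MethodInfo register = typeof(SimpleIocContainer).GetMethods()
            .First(m => m.Name == "Register" && m.GetParameters().Length == 0);

        foreach (Type type in assembly.GetTypes().Where(t => t.IsClass && !t.IsAbstract))
        {
            // Map OrderService to IOrderService, OrderRepository to IOrderRepository, and so on.
            Type matchingInterface = type.GetInterface("I" + type.Name);
            if (matchingInterface != null)
            {
                register.MakeGenericMethod(matchingInterface, type).Invoke(container, null);
            }
        }
    }
}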

I hope I have demonstrated that IoC containers are not magic. They are in fact a simple concept that, when used correctly, can help to create flexible, loosely-coupled applications.

Building A Complex Web Forms UI

I recently wrote a post about composing a UI based on discrete behaviors. I thought maybe I should explain a bit more about the problem that led me to this idea.

Mike Wagg, Zubair Khan and I were tasked with developing a rather complex UI using ASP.NET Web Forms (the company had a suite of existing custom controls, so unfortunately MVC was not an option). We started off using a Model View Presenter (MVP) pattern, but found that our presenters became overloaded with state management responsibilities. So we introduced a Presentation Model to handle the state and behavior of the View.

The ASP.NET page life cycle never failed to cause us grief. We attempted to abstract away the life cycle using a combination of a View, a Presentation Model and a backing Controller. The page populated the Presentation Model on Load and bound back to it on PreRender. Any state changes that occurred during the page life cycle were applied to the Presentation Model. The Controller’s responsibility was to receive events from the View, call out to services and pass data to the Presentation Model.

We found this greatly simplified things, as we didn’t have to worry about the changing state of the View throughout the page life cycle. We simply updated the Presentation Model and the View submissively bound to it at the end of the life cycle. We could also effectively test the entire process, as we didn’t rely on the page to manage state changes.

[Figure: Presentation Model]

The only downside to the Presentation Model was that we had to change its code in order to accommodate new behavior. This violates the Open/Closed Principle (OCP) and increases the risk of breaking existing functionality. That led me to investigate the discrete behaviors approach that I blogged about.

Another problem we faced was getting the presentation components to talk to each other. We were using the View as the messaging bus, but this led to a lot of code that looked like parentView.childView.DoSomething(). This was very brittle, so we created an Event Hub that acted as a central messaging bus that any of the presentation components could publish/subscribe to.
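As a rough sketch of the idea (not our actual implementation; the type and member names here are assumptions), an event hub can be as simple as a dictionary of handlers keyed by message type:

public class EventHub
{
    private readonly Dictionary<Type, List<Action<object>>> subscriptions = new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<TMessage>(Action<TMessage> handler)
    {
        if (!subscriptions.ContainsKey(typeof(TMessage)))
        {
            subscriptions[typeof(TMessage)] = new List<Action<object>>();
        }
        // Wrap the typed handler so all handlers can be stored in the same list.
        subscriptions[typeof(TMessage)].Add(message => handler((TMessage)message));
    }

    public void Publish<TMessage>(TMessage message)
    {
        List<Action<object>> handlers;
        if (subscriptions.TryGetValue(typeof(TMessage), out handlers))
        {
            foreach (var handler in handlers)
            {
                handler(message);
            }
        }
    }
}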

We now feel we have this complex UI project under control. Mike is currently writing a series of posts that go into further details on the Presentation Model and Event Hub approaches. We learned a lot from this project and I hope this can help someone else who is creating a complex UI in a stateless web environment.

User Interface Code And The Open/Closed Principle

The Open/Closed Principle (OCP) is a fundamental object-oriented design principle that states:

“Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.”

This means that we should be able to add new behavior to a software entity without altering its code.

Most UI code I have seen, including Model-View-Presenter (MVP) and Model-View-Controller (MVC) implementations, clearly violates the open/closed principle.

When developing UI code, we tend to create presentation classes in terms of views, or sections of the UI. To add new behavior to the UI, we need to modify these presentation classes. This increases the risk of breaking existing functionality.

UI development can be notoriously complex because of the many interactions and state changes that can occur. To manage this complexity we should ensure that our classes have only one reason to change.

Instead of grouping presentation classes into visible sections of the UI, maybe we should be creating a series of discrete presentation behaviors that react to events raised from the view.

These behaviors can be added and removed without affecting other behaviors on the page.

Behavior-Driven Development (BDD) advocates creating tests based on individual behaviors, so why not create our presentation classes in the same way? The tests then correspond directly to a single discrete unit, rather than a behavior that occurs within a larger structure, e.g. a presenter class.

Each behavior class is named after the behavior it represents. Using a descriptive name provides instant documentation of the UI behavior. It should be easy for someone to understand what the UI does simply by looking at the behavior names. If a problem occurs, it should be easy to identify and isolate the affected behavior.

Implementing Presentation Behaviors

I have created a simple music finder application that demonstrates an implementation of presentation behaviors.

Download the sample code here.

The sample application contains a single page, represented by an IFindMusicView interface. The behaviors respond to events raised by this View and update the View accordingly.

A typical behavior can be defined as:

Given… a particular scenario

When… an event occurs

And… all conditions have been met

Then… do something

And… do something else

Each behavior is implemented as a class that derives from a base class with an IBehavior interface. This interface contains two methods: When() and Then().

The When() method contains code that registers the behavior with a certain event on the page. The Then() method contains the code that responds to the event if all conditions have been met. The “Given” aspect is implemented by the class constructor, which takes the view and any associated dependencies.
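The base types aren’t shown in the post, but based on the description they might look something like this (a sketch; the sample download contains the actual implementation):

public interface IBehavior
{
    void When();
    void Then();
}

public abstract class FindMusicBehavior : IBehavior
{
    protected FindMusicBehavior(IFindMusicView view)
    {
        View = view;
    }

    protected IFindMusicView View { get; private set; }

    // Wires the behavior up to the relevant view event; called once after the behavior is created.
    public abstract void When();

    // Responds to the event once all conditions have been met.
    public abstract void Then();
}

The example behavior below derives from these base types.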

public class when_song_list_is_empty_disable_select_button : FindMusicBehavior
{
    public when_song_list_is_empty_disable_select_button(IFindMusicView view)
        : base(view)
    {
    }

    public override void When()
    {
        View.AfterSongsLoaded += (sender, e) => Then();
    }

    public override void Then()
    {
        View.IsSelectButtonEnabled = View.SongsList.Count > 0;
    }
}

 

The behaviors are instantiated by the View, but an Inversion of Control container could be used to register the behaviors at run-time. The View then wouldn’t need to know anything about the behaviors implemented for it, and we could drop behaviors in and out without needing to change existing code.

Further Considerations

Although this is an interesting concept, I have yet to implement it on a large-scale application. There are several areas I need to investigate further, such as:

  1. Can one behavior extend the functionality of another behavior?
  2. Can parallel behaviors remain independent from one another?
  3. How does this work with MVC frameworks? Are the behaviors triggered by actions?
  4. How well does this scale?

Feedback

Does this concept make sense? Does it sound practical? Could it potentially solve some of the issues we face with developing complex UI code? Any feedback would be greatly appreciated.

Enjoy!

Legacy Code Can Be Fun!

No, I’m not being ironic; I’m actually enjoying working with legacy code!

I have recently inherited a large, legacy code-base. There are hardly any tests and it’s full of dependencies between components. The application is critical to the day-to-day operation of the business and requires ongoing feature additions, support and maintenance.

By “legacy code”, I mean “code without tests and in need of refactoring”, as described by Michael Feathers in his excellent book Working Effectively with Legacy Code. This post uses techniques from that book. If you are working with a legacy code base, buy this book! It will make your life much easier.

As developers, most of us have had to endure the frustration of working with code that is unclear and untested (if you haven’t, you don’t know what you’re missing ;-). It can be painful having to make changes without knowing the effect a change could have on the application. We usually apply Change-and-Pray Development, which often results in hard-to-find problems slipping through to production. Over time, the constant tacking-on of changes further degrades the quality of the code-base and can result in a Big Ball of Mud.

But fear not, there is a way to make changes to legacy code much easier, and – dare I say, fun! Continuous refactoring and testing can help to turn a bug-ridden, untested and tangled application into a slick, streamlined code-base that just gets better over time.

Introducing Changes To Legacy Code

One of the most difficult things about working with legacy code is introducing changes. Without tests to back us up, it is hard to know if a change has worked and hasn’t broken anything unexpectedly.

For this demonstration, I have created a sample class that reflects some common issues when introducing changes to legacy code. The code is intentionally simple to clarify the refactoring. Real-world legacy code is not usually so simple, but these techniques still apply.

Here we have a DocumentManager class.

public class DocumentManager
{
    public static Document GetDocument(int documentId)
    {
        DocumentDAL documentDAL = new DocumentDAL();
        Document document = documentDAL.Get(documentId);
        if (document != null && !File.Exists(document.Path))
        {
            throw new FileNotFoundException("Document file was not found");
        }
        return document;
    }
    public static List<Document> GetAllDocuments()
    {
        DocumentDAL documentDAL = new DocumentDAL();
        List<Document> documents = documentDAL.List();
        foreach (Document doc in documents)
        {
            if (doc != null && !File.Exists(doc.Path))
            {
                throw new FileNotFoundException("Document file was not found");
            }
        }
        return documents;
    }
    public static void SaveDocument(Document document)
    {
        File.WriteAllText(document.Path, document.Contents);
        DocumentDAL documentDAL = new DocumentDAL();
        documentDAL.Save(document);
    }
    public static void DeleteDocument(int documentId)
    {
        Document document = GetDocument(documentId);
        DocumentDAL documentDAL = new DocumentDAL();
        documentDAL.Delete(document);
        File.Delete(document.Path);
    }
    public static string GetFileName(Document document)
    {
        return Path.GetFileName(document.Path);
    }
}

The purpose of this class is to manage the persistence of documents to a database and file system.

I have come across this type of class many times; it is mostly composed of static methods that call an underlying data access layer. The class doesn’t have any unit tests and contains direct calls to the file system and a data access class.

My goal is to add some validation code into the Save method of the DocumentManager class and to create unit tests to verify the new functionality.

Firstly, I’m going to create a test harness so I know my changes won’t adversely affect the existing functionality.

Creating a Test Harness

Creating a test harness allows us to make changes while ensuring the code still functions as expected.

Let’s create a new test fixture called DocumentManagerTests.

[TestFixture]
public class DocumentManagerTests
{
}

Instantiating the Class

Now that we have a test fixture in place, let’s create a test that instantiates the class we want to test.

[Test]
public void Can_Create_Instance_Of_DocumentManager()
{
    DocumentManager documentManager = new DocumentManager();
    Assert.That(documentManager, Is.InstanceOfType(typeof(DocumentManager)), "Should create an instance of DocumentManager");
}

Our code doesn’t compile, as DocumentManager is a static class and can’t be instantiated. We could just test the members of the static class, but we need an instance so any dependencies can be passed into the constructor. We will come to this shortly, but for now, let’s remove the static keyword from the class declaration so we can instantiate the class.

public class DocumentManager
{ ...

After making the class non-static our test compiles and passes! The assertion isn’t very meaningful, but at least we know we can instantiate the class without an error.

Breaking Dependencies

The reason legacy code seems difficult to test is that it often contains internal calls to other components or sub-systems. This makes it difficult to isolate a piece of code for testing.

So why break dependencies and not just test everything together? Firstly, you end up testing the integration between components rather than a single unit of code. Secondly, calling external components can cause our tests to run slowly, or to fail due to a problem with a sub-system.

To test the Save method in isolation, we need to extract calls to the DocumentDAL and File classes.

These are fairly safe structural refactorings, so we should be able to make the changes without affecting functionality.

To remove the dependency on the DocumentDAL class in the Save method, we can use a technique called Dependency Injection.

Dependency Injection

Dependency Injection involves passing a dependency into the constructor of the class that uses it. Here, we can use an Extract Interface refactoring to decouple the DocumentDAL implementation from its interface. Using an interface allows us to provide a mock implementation for testing.

public class DocumentDAL : IDocumentDAL
{ ...
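The extracted interface isn’t shown here, but it simply mirrors the members that DocumentManager calls; it might look like this:

public interface IDocumentDAL
{
    Document Get(int documentId);
    List<Document> List();
    void Save(Document document);
    void Delete(Document document);
}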

We can now “inject” the dependency into the DocumentManager constructor.

public class DocumentManager
{
    private readonly IDocumentDAL documentDAL;
    public DocumentManager(IDocumentDAL documentDAL)
    {
        this.documentDAL = documentDAL;
    } ...

Our test doesn’t compile, as we need to pass an implementation of IDocumentDAL into the constructor. I am using Rhino Mocks to mock an IDocumentDAL implementation.

[Test]
public void Can_Create_Instance_Of_DocumentManager()
{
    MockRepository mocks = new MockRepository();
    IDocumentDAL documentDAL = mocks.CreateMock<IDocumentDAL>();
    DocumentManager documentManager = new DocumentManager(documentDAL);
    Assert.That(documentManager, Is.InstanceOfType(typeof(DocumentManager)), "Should create an instance of DocumentManager");
}

To remove the Save method’s dependency on the file system, we can use another dependency-breaking technique called Subclass and Override Method.

Subclass and Override Method

First, we use Extract Method to move the call to the file system into a separate virtual method.

protected virtual void WriteFile(Document document)
{
    File.WriteAllText(document.Path, document.Contents);
} 

We need to change the Save method to call the new WriteFile method. Because WriteFile is an instance method, we need to make the Save method non-static. Note that any code that calls this method will also need to be changed to use the non-static method.

 

public void SaveDocument(Document document)
{
    WriteFile(document);

    DocumentDAL documentDAL = new DocumentDAL();

    documentDAL.Save(document);
}

Now we can create a subclass of DocumentManager called TestingDocumentManager and override the WriteFile method. This allows us to stub out the Save method’s calls to the file system in our tests.

public class TestingDocumentManager : DocumentManager
{
    public TestingDocumentManager(IDocumentDAL documentDAL)
        : base(documentDAL)
    {
    }
    protected override void WriteFile(Document document)
    {
    }
}

Next, we update our test to use the TestingDocumentManager with the overridden WriteFile method.

[Test]
public void Can_Create_Instance_Of_DocumentManager()
{
    MockRepository mocks = new MockRepository();
    IDocumentDAL documentDAL = mocks.CreateMock<IDocumentDAL>();
    DocumentManager documentManager = new TestingDocumentManager(documentDAL);
    Assert.That(documentManager, Is.InstanceOfType(typeof(DocumentManager)), "Should create an instance of DocumentManager");
}

Ideally, we could remove the dependency on the file system altogether by injecting an interfaced wrapper for the File class, but that would distract us from the focus of our test. Subclass and Override Method is a very useful technique for quickly getting legacy code under test. We can refactor the code at a later time if required.
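For reference, such a wrapper is straightforward; a sketch might look like this (not part of the sample code):

public interface IFileSystem
{
    void WriteAllText(string path, string contents);
    bool Exists(string path);
    void Delete(string path);
}

public class FileSystem : IFileSystem
{
    public void WriteAllText(string path, string contents)
    {
        File.WriteAllText(path, contents);
    }

    public bool Exists(string path)
    {
        return File.Exists(path);
    }

    public void Delete(string path)
    {
        File.Delete(path);
    }
}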

Testing the Method

To test the Save method we need to remove the instantiation of DocumentDAL and use the field that was set in the constructor.

 

public void SaveDocument(Document document)
{
    WriteFile(document);

    documentDAL.Save(document);
}

Now we can test the functionality of the Save method.

[Test]
public void Can_Save_Document()
{
    MockRepository mocks = new MockRepository();
    IDocumentDAL documentDAL = mocks.CreateMock<IDocumentDAL>();
    DocumentManager documentManager = new TestingDocumentManager(documentDAL);
    Document document = TestHelper.GetDocument();
    Expect.Call(() => documentDAL.Save(document));

    mocks.ReplayAll();
    documentManager.SaveDocument(document);
    mocks.VerifyAll();
}

Running this test shows us that the behaviour of Save is working as expected. It provides a safety net for making further changes. If a change affects the core behaviour of the Save method, this test should fail.

Making the Changes

Now that we have a test harness in place, we can implement our new changes using Test Driven Development (TDD).

First, we write a test to verify the changes we want to make.

[Test]
public void Cannot_Save_Invalid_Document()
{
    MockRepository mocks = new MockRepository();
    IDocumentDAL documentDAL = mocks.CreateMock<IDocumentDAL>();
    DocumentManager documentManager = new TestingDocumentManager(documentDAL);
    Document document = TestHelper.GetInvalidDocument();
    try
    {
        documentManager.SaveDocument(document);
        Assert.Fail("Should throw an exception for invalid document");
    }
    catch(Exception ex)
    {
        Assert.That(ex, Is.InstanceOfType(typeof(ValidationException)), "Should throw a ValidationException");
    }
}

The code compiles and the test is failing as expected. We now need to implement the code to make the test pass.

public void SaveDocument(Document document)
{
    if(!document.IsValid)
    {
        throw new ValidationException();
    }
    WriteFile(document);
    documentDAL.Save(document);
}

The test passes! We now run all the tests to ensure our change hasn’t broken the existing functionality.

Refactoring

With our tests passing, we can now clean things up a bit. We are free to make changes on any code that has been tested.

We could also take the time to further improve the design of the DocumentManager class, such as separating the file access responsibilities into another class. The amount of refactoring you do depends on how much time you have. The more time you can spend tidying up around the code you have changed, the better it will be for the next person who needs to change it. You don’t have to do everything at once; code that is changed often will be continuously refactored and improved over time.

Summary

This is just one of many techniques you can use to refactor and test legacy code. Many more examples can be found in Working Effectively with Legacy Code. This has been a very simple example, but I have used these same techniques to make big changes to some very dodgy pieces of code.

Refactoring and testing legacy code can help to improve a legacy code-base over time. It feels like you’re writing new code, rather than just tacking onto existing code. It is also a great feeling walking away from implementing a change, knowing that the surrounding code is now cleaner and more robust than before.

Implementing the MVP Pattern in Silverlight

Silverlight 2 is the next major update to Microsoft’s Rich Internet Application (RIA) technology. It includes a cross-platform, cross-browser version of the .NET Framework, enabling a rich .NET development platform that runs in the browser. The recent release of the Silverlight 2 Beta has given developers the opportunity to start building rich internet applications in Visual Studio. Now that we can develop rich client applications in Silverlight, how can we architect a Silverlight application so that we can effectively unit test our code? The Model-View-Presenter design pattern can help to achieve this.

The MVP Pattern

The Model-View-Presenter (MVP) pattern separates user interface logic into testable components. This pattern is commonly used in ASP.NET applications to improve the testability of presentation logic. The same principles can be applied when developing applications in Silverlight 2.

[Figure: MVP Pattern]

There are many articles that explain the principles of the MVP pattern, so I won’t go into detail here. If you are unfamiliar with the pattern, a good introduction can be found here.

Basically, the MVP pattern consists of the following components:

Model – Represents the domain data.

View – Contains the user interface.

Presenter – Manages presentation logic and data binding.

User interface code can be difficult to test as it is usually tightly bound to controls. Separating the code from the view allows presentation logic to be more easily tested.

This separation also allows us to reuse presentation logic throughout an application. If an application contains different views of the same data, we only need one presenter to manage each view.

Unit Testing Silverlight Applications

One of the great benefits of the MVP pattern is that it allows you to unit test your presentation logic. Unfortunately, I struck some problems while attempting to unit test code written in a Silverlight project.

Visual Studio will not allow you to reference a Silverlight assembly from a standard .NET assembly. This causes a problem as unit testing frameworks, such as NUnit, are standard .NET assemblies.

The following error occurs when attempting to add a reference to NUnit in a Silverlight class library project:

“You can’t add a reference to nunit.framework.dll as it was not built against the Silverlight runtime. Silverlight projects will only work with Silverlight assemblies.”


This is understandable, as standard .NET assemblies may use framework features that are not available in the compact version of the .NET Framework that ships with Silverlight. So, what if we instead add a reference to our Silverlight project from a standard .NET class library project? Well, we are then presented with this error:

“Silverlight projects can only be referenced by other Silverlight projects.”


Again, this is understandable for deployed code, as the Silverlight application may be using features specific to the Silverlight framework. But our unit test code shouldn’t contain any Silverlight-specific features.

So how can we unit test our Silverlight code if we can’t reference it from a test fixture? Well, I discovered that it is possible to bypass the project reference and add a direct reference to the Silverlight DLLs from a standard class library project! This allows us to unit test the Silverlight presenter assemblies using NUnit, or any other .NET unit testing framework.

You may also find you have to reference a few Silverlight framework assemblies, such as System.dll and System.Windows.dll, but if you minimise references to Silverlight-specific features in the presenter classes, this shouldn’t be a problem.

Microsoft have announced a unit testing framework for Silverlight 2. There are no details on this yet, but I would be surprised if it supports third-party testing frameworks, such as NUnit and mbUnit, or mocking frameworks, such as NMock and Rhino Mocks. I could be wrong, though; until then, this method seems to work just fine.

So, now that we can unit test our Silverlight code, what is the best way to architect a Silverlight application? Well, I’m sure, like any application, it depends on your project. I develop ASP.NET applications and constantly find it difficult to unit test interface logic stuck in code-behind files. This problem seems apparent in Silverlight too, as the code-behind is tightly coupled with the XAML view. We can break this coupling and split our user interface logic into separate, testable components using the MVP Pattern.

Applying the MVP Pattern to a Silverlight Application

The following describes a conceptual model for a Silverlight application implementing the MVP Pattern. 

[Figure: Silverlight MVP architecture]

Server Application

The main database, domain model and business logic reside on the server. The data and business logic are accessed by the Silverlight application through calls to web services. The server application can be architected independently and is not specific to the MVP pattern.

Silverlight Client

The Silverlight client application contains only presentation logic and no business logic. Any operations that require business logic should be passed back to the server to be processed and results returned in the form of a Data Transfer Object (DTO).

Service Reference

The Silverlight application talks to the server via web services. Visual Studio creates a service reference that contains methods for making asynchronous calls to a web service. We can create a partial class from the generated service soap client that implements a custom interface containing methods used by the presenter. We can then call service methods through this interface rather than directly referencing the service class. Using an interface also allows us to stub-out the service when unit testing the presenter.
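As a sketch of this approach (the service name and member names here are assumptions for illustration; the completed-event args type is generated along with the service reference):

public interface ICustomerService
{
    void GetCustomersAsync();
    event EventHandler<GetCustomersCompletedEventArgs> GetCustomersCompleted;
}

// The other half of this partial class is generated by the service reference,
// so no additional implementation is required here.
public partial class CustomerServiceSoapClient : ICustomerService
{
}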

Proxy Model

When the Silverlight application references a web service, a proxy data class is generated. This class acts as a Data Transfer Object (DTO) that represents a data structure for transferring data between the client and server applications. The DTO provides the model used by the view and presenter.

View

The view is the visual representation of the application.

The view is composed of XAML files and their corresponding code-behind files. This structure is very similar to that of an ASP.NET page.

The XAML files describe the layout and behavior of the Silverlight interface. Controls in the view are wrapped by properties in the code-behind file. Any events fired from controls are handled in the code-behind and passed up to the presenter for further processing. The presenter can also pass information back to the view through properties and methods in the code-behind.
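A sketch of the view side might look like this (the control names, interface members and Customer proxy type are assumptions for illustration):

public interface ICustomersView
{
    IEnumerable<Customer> Customers { set; }
    event EventHandler SaveClicked;
}

public partial class CustomersView : UserControl, ICustomersView
{
    public CustomersView()
    {
        InitializeComponent();
    }

    // The presenter pushes data in through this property; the code-behind binds it to a control.
    public IEnumerable<Customer> Customers
    {
        set { CustomersGrid.ItemsSource = value; }
    }

    public event EventHandler SaveClicked;

    // Control events are handled in the code-behind and passed up to the presenter.
    private void SaveButton_Click(object sender, RoutedEventArgs e)
    {
        if (SaveClicked != null)
        {
            SaveClicked(this, EventArgs.Empty);
        }
    }
}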

Presenter

The presenter sits between the view and the service reference. If the view requests data, the presenter will call a service reference and present the results to the view. If an action requires logic, the presenter will handle the event and process the information to send back to the view.
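Continuing the sketch, a presenter tying the hypothetical ICustomersView and ICustomerService interfaces together might look like this:

public class CustomersPresenter
{
    private readonly ICustomersView view;
    private readonly ICustomerService service;

    public CustomersPresenter(ICustomersView view, ICustomerService service)
    {
        this.view = view;
        this.service = service;

        // Silverlight service calls are asynchronous, so present the results when the call completes.
        this.service.GetCustomersCompleted += (sender, e) => this.view.Customers = e.Result;
    }

    public void LoadCustomers()
    {
        service.GetCustomersAsync();
    }
}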

Example Code

As an example, I have created a Silverlight application that implements the MVP pattern to display data from the Customers table in the Northwind database. The data can be edited and saved back to the database via a web service.



The solution contains the following projects:

Server Projects (Standard assemblies will reside on the server)

SilverlightMvp.DataAccess - Provides data from the Customers table using Linq to SQL.

SilverlightMvp.Web - Contains the web site that will host the Silverlight application and web services that the Silverlight application will call to access the customer data.

Client Projects (Silverlight assemblies that run inside the browser)

SilverlightMvp.View - Contains the XAML and code-behind files.

SilverlightMvp.Core – User Interface logic classes, including the presenters, service references and interfaces.

SilverlightMvp.Core.UnitTests – While not deployed to the client, this standard .NET assembly tests classes in the SilverlightMvp.Core project.

You can download the example code here

I hope this information is useful to anyone developing .NET applications in Silverlight. If you have any questions, comments or suggestions please let me know. Enjoy!

Implementing The Circuit Breaker Pattern In C# – Part 2

In my previous post, I discussed an implementation of The Circuit Breaker Pattern as described in Michael T. Nygard’s book, Release It! Design and Deploy Production-Ready Software. In this post, I will talk about several additions and improvements I have made to the initial implementation.

Service Level

The Circuit Breaker pattern contains a failure count that keeps track of the number of consecutive exceptions thrown by an operation. When the failure count reaches a threshold, the circuit breaker trips. If an operation succeeds before the failure threshold is reached, the failure count is reset to zero. This works well if a service outage causes multiple consecutive failures, but consider a threshold of 100 where the service fails 99 times and then one operation succeeds: the failure count is reset to zero, even though there is obviously a problem with the service that should be handled.

To deal with intermittent service failures, I have implemented a “service level” calculation. This indicates the ratio of successful operations to failed operations, expressed as a percentage. For example, if the circuit breaker has a threshold of 100 and an operation fails 50 times, the current service level is 50%. If the service recovers and 25 operations succeed, the service level rises to 75%. The circuit breaker will not completely reset after a single successful operation: each successful operation increments the service level, while each failed operation decrements it. Once the service level reaches 0%, i.e. the net number of failures has reached the threshold, the circuit trips.
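As a rough sketch of the calculation (a standalone example; field names are assumed and the downloadable code differs in detail):

public class ServiceLevelCalculator
{
    private readonly int threshold;
    private int failureCount;

    public ServiceLevelCalculator(int threshold)
    {
        this.threshold = threshold;
    }

    public void RecordFailure()
    {
        if (failureCount < threshold)
        {
            failureCount++;
        }
    }

    public void RecordSuccess()
    {
        if (failureCount > 0)
        {
            failureCount--;
        }
    }

    // With a threshold of 100: 50 failures gives 50%, and 25 subsequent successes bring it back to 75%.
    public double ServiceLevel
    {
        get { return (1 - ((double)failureCount / threshold)) * 100; }
    }
}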

A ServiceLevelChanged event allows the client application to be notified of service level changes. This could be used for monitoring performance, or for tracking service levels against a Service Level Agreement (SLA).

The threshold value determines the circuit breaker’s resistance to failures. If a client makes a lot of calls to a service, a higher threshold will allow it more time to recover from failures before tripping. If the client makes fewer calls, but the calls are expensive to the service, a lower threshold will allow the circuit breaker to trip more easily.

Ignored Exception Types

Sometimes a service will throw an exception as part of the service logic. We might not want these exceptions to affect the circuit breaker service level. I have added an IgnoredExceptionTypes property, which holds a list of exception types for the circuit breaker to ignore. If an operation throws one of these exceptions, the exception is thrown back to the caller and is not logged as a failure.

CircuitBreaker cb = new CircuitBreaker();
cb.IgnoredExceptionTypes.Add(typeof(AuthorizationException));

Invoker Exceptions

If the operation invoker throws an exception that was not caused by the operation itself, then the exception is thrown back to the caller and is not logged as a failure.

Threading

As mentioned in a comment by Søren on my last post, it is likely a circuit breaker would be used in a multi-threaded environment, therefore it should be able to function properly when multiple threads are executing operations.

The failure count is now updated atomically using the System.Threading.Interlocked.Increment and System.Threading.Interlocked.Decrement methods. This ensures that concurrent increments and decrements of the failure count from multiple threads cannot be lost or interleaved.
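In code, the atomic updates look roughly like this (a standalone sketch of the counter, not the circuit breaker class itself):

public class FailureCounter
{
    private int count;

    public int Current
    {
        get { return count; }
    }

    public void Increment()
    {
        System.Threading.Interlocked.Increment(ref count);
    }

    public void Decrement()
    {
        System.Threading.Interlocked.Decrement(ref count);
    }
}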

While this does not guarantee the circuit breaker is completely thread-safe, it does prevent problems with multiple threads executing operations and tracking failures. I have to confess I’m not an expert at multi-threaded application designs, so if anyone has any further suggestions on how to make the circuit breaker more thread-safe, I would love to hear them!

For more information on implementing threading, see the .NET Framework Threading Design Guidelines.

Download

Download the circuit breaker code and unit tests (VS 2008).

I hope you find these enhancements helpful. Does providing a service level make sense? How can I improve multi-threading support? If you have any comments or suggestions, please let me know your thoughts.

Implementing The Circuit Breaker Pattern In C#

When developing enterprise-level applications, we often need to call external services and resources. These services could be a network location, database server, or web service. Whenever we call a service, there is a chance that a problem with the network or the end-service itself could cause a service failure. One method of attempting to overcome a service failure is to queue requests and retry periodically. This allows us to continue processing requests until the service becomes available again. However, if a network or service is experiencing problems, hammering it with retry attempts will not help the service to recover, especially if it is under increased load. Such a pounding can cause even more damage and interruption to services. If we know there could potentially be a problem with a service, we can help take some of the strain by implementing a Circuit Breaker pattern on the client application.

Circuit breakers in our homes prevent a surge of current from damaging appliances or overheating the wiring. They work by allowing a certain level of current to flow through the system. If the current exceeds a threshold, the circuit opens, stopping the flow and preventing further damage. Once the problem has been fixed, the circuit breaker can be reset, which closes the circuit and allows electricity to flow again. The Circuit Breaker pattern uses the same concept, stopping requests to a resource if the number of failures exceeds a certain threshold.

The Circuit Breaker pattern is described in Michael T. Nygard’s book, Release It! Design and Deploy Production-Ready Software. The pattern has three operational states: closed, open and half-open.

In the “closed” state, operations are executed as usual. If an operation throws an exception, the failure count is incremented and an OperationFailedException is thrown. If the failure count exceeds the threshold, the circuit breaker trips into the “open” state. If a call succeeds before the threshold is reached, the failure count is reset.

In the “open” state, all calls to the operation will fail immediately and throw an OpenCircuitException. A timeout is started when the circuit breaker trips. Once the timeout is reached, the circuit breaker enters a “half-open” state.

In the “half-open” state, the circuit breaker allows one operation to execute. If this operation fails, the circuit breaker re-enters the “open” state and the timeout is reset. If the operation succeeds, the circuit breaker enters the “closed” state and the process starts over.
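To make the three states concrete, here is a simplified, self-contained sketch of the idea. It is not the downloadable implementation; the real code also raises events and exposes the state and failure count:

public enum CircuitBreakerState
{
    Closed,
    Open,
    HalfOpen
}

public class OpenCircuitException : Exception
{
    public OpenCircuitException(string message) : base(message) { }
}

public class OperationFailedException : Exception
{
    public OperationFailedException(string message, Exception inner) : base(message, inner) { }
}

public class CircuitBreakerSketch
{
    private readonly int threshold;
    private readonly TimeSpan timeout;
    private int failureCount;
    private DateTime openedAt;
    private CircuitBreakerState state = CircuitBreakerState.Closed;

    public CircuitBreakerSketch(int threshold, TimeSpan timeout)
    {
        this.threshold = threshold;
        this.timeout = timeout;
    }

    public void Execute(Action operation)
    {
        if (state == CircuitBreakerState.Open)
        {
            if (DateTime.Now - openedAt < timeout)
            {
                // Fail fast while the circuit is open.
                throw new OpenCircuitException("The circuit is open.");
            }
            // The timeout has elapsed; allow a single trial call through.
            state = CircuitBreakerState.HalfOpen;
        }

        try
        {
            operation();
        }
        catch (Exception ex)
        {
            // A failed trial call, or reaching the threshold, trips the breaker.
            if (state == CircuitBreakerState.HalfOpen || ++failureCount >= threshold)
            {
                Trip();
            }
            throw new OperationFailedException("The operation failed.", ex);
        }

        // Success: close the circuit and reset the failure count.
        state = CircuitBreakerState.Closed;
        failureCount = 0;
    }

    private void Trip()
    {
        state = CircuitBreakerState.Open;
        openedAt = DateTime.Now;
    }
}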

You can download the circuit breaker code and tests here. If you have any comments or suggestions, I would love to hear them!

Update: I have posted a new article that contains a number of additions and improvements to the circuit breaker code.

For more information on this pattern and many other ways to improve software stability, capacity and operational ability, I highly recommend the book Release It! Design and Deploy Production-Ready Software by Michael T. Nygard.

Circuit breaker class diagram


