Archive for the 'Software Practices' Category

If You Must Rewrite

As a general rule, you should never rewrite your software. I have (barely) survived several rewrites in the past, and they all shared the same horrible experiences:

  • The rewrite took much longer than expected, during which time the business stood still while the market moved on.
  • Functionality written over several years had to be re-implemented in a fraction of the time.
  • The development team was immediately pressured to “get it done”. This created a sense of always being behind schedule.
  • The business became frustrated and could not understand why it was taking so long.
  • The rewrite provided little or no new functionality to the business.
  • There were hidden business rules that had been long forgotten and no one knew what they meant.
  • The old system could not be turned off until all the features were migrated to the new system.
  • The old system still had to be maintained while the new system was under development.

There has been plenty written about the problems with software rewrites. Here are some great articles on the subject:

The general consensus from each of these articles is to never rewrite software unless you really have to.

So when do you really have to rewrite? I mean reeeeaaally have to. Well, there are times when a rewrite is completely unavoidable. In one such case I came across recently, the company no longer had access to the source code of their core product. That’s a pretty tough position to be in and a rewrite is pretty much the only option for moving forward.

So, here are a few retrospective tips from my experience enduring rewrites. This is not a prescriptive list and is only meant to represent ideas that might have helped in the cases I was involved in.

If you must rewrite…

The rewrite will always take longer than you think. Be prepared for a lot of work. An effective strategy is crucial – time spent rewriting the system is time lost from adding new features to grow the business.

Involve the product owner. They will be eager to complete the rewrite as it is costing them money to re-implement functionality they already have. Have them on-hand to prioritise features and answer questions. Keep them aware of progress.

Identify the core set of features the business cannot function without. Focus on these features and plan to release as soon as you have a minimum feature set.

Prioritise the features. Ensure you are always working on the highest-priority feature.

Limit work in progress. Don’t try to re-implement everything at once. Complete a feature before moving onto the next.

Don’t rewrite everything. Be harsh – throw out anything you don’t absolutely need. Remember, every feature you rewrite comes at the cost of a new feature that could have grown the business.

Simplify your business processes. Use this as an opportunity to refine and simplify your existing business processes. Don’t just re-implement a feature because “that’s how it works in the old system”.

Use a well-known, established technology. Don’t be tempted to use the latest and greatest just because you get to start over. Speed is the key here, you don’t want to be thrashing with an unfamiliar technology.

Release as soon as possible, but not a moment sooner. Your current customers won’t be happy if the new system doesn’t function correctly or important features are missing.

Migrate existing data early. Don’t leave data migration to the last minute. You need to know your new system can work with the data you currently have. I was once involved in a rewrite where the migration happened at the last minute – the entire system failed to function and we lost more time rewriting parts to improve performance.

Implement whole features at a time. Don’t focus on horizontal layers of the application, e.g. database layer, services, UI, etc.

If you can, replace parts of your system at a time. If you can rewrite parts of the new system so that they integrate with the old system, you can stagger the rewrite and release sooner.

Use off-the-shelf software. If you have a generic feature, such as a forum or CMS, consider using a commercial or open-source alternative to avoid reinventing the wheel.

Don’t create a mess. Write adequate tests. The only way to go fast is to go well. Buggy software will cost more time and money, even in the short-term.

Success, But At What Price?

A friend asked me recently, “do all these good organisational and development practices really make that much of a difference to the successful outcome of a project?”. We all know following good practices improves the lives of those involved, but do they really help to achieve business success? After all, some extremely late, over-budget, bloated, misguided and poor-quality software development projects do go on to make the business a lot of money, and can therefore be deemed successful. But at what price? Projects like this often leave a path of destruction behind them: the stress, political battles, low morale, fear, late nights, strained relationships, and the blood, sweat and tears of the people who worked hard under pressure, following poor organisational and development practices. That’s the difference. Good practices not only help to achieve a successful business outcome, they also help to build a good working environment. There is a human factor to the successful outcome of a project. Let’s not forget that.

Internal And External Collaborators

The Single Responsibility Principle (SRP) states that every object should have a single responsibility, and that all its services should be aligned with that responsibility. By separating object responsibilities we are able to achieve a clear Separation of Concerns. Objects need to collaborate with each other in order to perform a behaviour. I find I use two distinct styles of collaboration with other objects, which I have called internal and external collaborators.

The principles of object collaboration are nothing new, but I have found that defining these roles has helped me to better understand how to design and test the behaviour of objects.

External Collaborators

External collaborators are objects that provide a service or resource to an object but are not directly controlled by the object. They are passed to an object by dependency injection or through a service locator. An object makes calls to its external collaborators to perform actions or retrieve data. When testing an object, any external collaborators are stubbed-out. We can then write tests that perform an action, then determine if the right calls were made to the external object.

Examples of external collaborator objects include: services, repositories, presenters, framework classes, email senders, loggers and file system wrappers.

We are interested in testing how we interact with the external collaborator and not how it affects the behaviour of our object. For example, if we are testing a controller that retrieves a list of customers from a repository, we want to know that we have asked the repository for a list of customers, but we are not concerned that the repository returns the correct customers (this is a test for the repository itself). Of course, we might need some particular customer objects returned by the stub for the purpose of testing the behaviour of the controller. These customer objects then become internal collaborators, which we’ll come to next.

External collaborators can be registered with an IoC container to manage the creation and lifecycle of the object, and to provide an instance to dependent objects.
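To illustrate, here is a rough sketch of the controller example above, using a hand-rolled stub in place of the external collaborator. The `ICustomerRepository`, `CustomerController` and NUnit-style test names are hypothetical, used only for illustration:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public class Customer { }

// The external collaborator's contract.
public interface ICustomerRepository
{
    IList<Customer> GetCustomers();
}

// A hand-rolled stub that records the interaction and returns canned data.
public class StubCustomerRepository : ICustomerRepository
{
    public bool GetCustomersWasCalled { get; private set; }

    public IList<Customer> GetCustomers()
    {
        GetCustomersWasCalled = true;
        return new List<Customer>();
    }
}

// The object under test receives its external collaborator by dependency injection.
public class CustomerController
{
    private readonly ICustomerRepository repository;

    public CustomerController(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public IList<Customer> List()
    {
        return repository.GetCustomers();
    }
}

[TestFixture]
public class CustomerControllerTests
{
    [Test]
    public void ListAction_AsksRepositoryForCustomers()
    {
        var repository = new StubCustomerRepository();
        var controller = new CustomerController(repository);

        controller.List();

        // We verify the interaction with the external collaborator,
        // not the correctness of the data it returns.
        Assert.IsTrue(repository.GetCustomersWasCalled);
    }
}
```

The same shape could be expressed with a mocking library; the point is that the external collaborator is replaced and the test asserts on the calls made to it.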

Internal Collaborators

Internal collaborators have a smaller scope than external collaborators. They are used in the context of the local object to provide functions or hold state. An object and its internal collaborators work together closely and should be treated as a single unit of behaviour.

Examples of internal collaborators include: DTOs, domain entities, view-models, utilities, system types and extension methods.

When testing an object with internal collaborators, we are interested in the effect on behaviour, not the interaction with the object. Therefore we shouldn’t stub-out internal collaborators. We don’t care how we interact with them, just that the correct behaviour occurs.

These objects are not affected by external influences, such as a database, email server, or file system. They are also not volatile or susceptible to environmental changes, such as a web request context. Therefore, they should not require any special context setup before testing.

We are not passed an instance of an internal collaborator through dependency injection; instead, it may be passed to us by an external collaborator (e.g. a repository returning an entity), or we create an instance within our own object when we need it (such as a DTO).
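A test involving internal collaborators therefore uses the real objects and asserts on the resulting behaviour. A minimal sketch, where `Order` and `OrderLine` are hypothetical names used only for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// OrderLine is an internal collaborator: Order creates and holds
// these instances itself, so the test uses the real type.
public class OrderLine
{
    public OrderLine(decimal price, int quantity)
    {
        Price = price;
        Quantity = quantity;
    }

    public decimal Price { get; private set; }
    public int Quantity { get; private set; }
}

public class Order
{
    private readonly List<OrderLine> lines = new List<OrderLine>();

    public void AddLine(OrderLine line)
    {
        lines.Add(line);
    }

    public decimal Total
    {
        get { return lines.Sum(line => line.Price * line.Quantity); }
    }
}

[TestFixture]
public class OrderTests
{
    [Test]
    public void Total_SumsAllOrderLines()
    {
        var order = new Order();
        order.AddLine(new OrderLine(10m, 2));
        order.AddLine(new OrderLine(5m, 1));

        // We assert on the behaviour of the unit, with no stubbing
        // and no interest in how Order talks to its OrderLines.
        Assert.AreEqual(25m, order.Total);
    }
}
```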

By understanding the roles and responsibilities of collaboration between objects, our design becomes clearer and tests are more focused and easier to maintain.


Services Are Not Objects

Many .NET applications I see that have a so-called Service-Oriented Architecture (SOA) use a technology such as Windows Communication Foundation (WCF) to treat services as if they were local objects. You call methods on the remote object in the same way as you would call a local object, only rather than executing locally, the request is sent to a remote service for processing which may then return a response. This is known as a Remote Procedure Call, or RPC. Often these calls can cause the application to hang while waiting for a response. The usual way to handle this is to call the service method asynchronously, providing a method to call when a response comes back. This allows your application to continue processing after the request is made.

Sounds simple enough, but I have found these direct request-response calls can lead to a fragile and unreliable application that is tightly-coupled to the services it calls. If for some reason a service is not available, your application may stop functioning. Sure, you can build in error-handling, but it then becomes the responsibility of your application to manage the dependency on the service.

Recently, Jennifer Smith and I were discussing the unreliable nature of these architectures on Twitter, when Udi Dahan, SOA specialist and creator of NServiceBus, pointed us to a recent blog post of his that explains why we shouldn’t call services as if they were real objects. Udi suggests that there is a better way to handle calls to services: use a message queue. He also makes the point that we need to stop treating services as local objects and use messaging as an integral part of the application architecture:

A queue isn’t an implementation detail. It’s the primary architectural abstraction of a distributed system.

So, if you’re developing a distributed application, use a message bus to communicate between applications. The message bus is responsible for transporting messages between applications. When you use a message bus, the application that sends a message is no longer coupled to the receiver.
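As a rough sketch of what this looks like with NServiceBus-style APIs (the exact interfaces vary between versions, and the message and handler names here are hypothetical):

```csharp
using NServiceBus;

// A message is just a plain class describing what should happen.
public class PlaceOrder : IMessage
{
    public int OrderId { get; set; }
}

// The sender puts the message on the bus and carries on. It is not
// coupled to the receiving application and does not block waiting
// for a response.
public class OrderController
{
    private readonly IBus bus;

    public OrderController(IBus bus)
    {
        this.bus = bus;
    }

    public void Submit(int orderId)
    {
        bus.Send(new PlaceOrder { OrderId = orderId });
    }
}

// The receiver processes the message whenever it arrives. If the
// receiver is offline, the message simply waits in the queue.
public class PlaceOrderHandler : IHandleMessages<PlaceOrder>
{
    public void Handle(PlaceOrder message)
    {
        // process the order
    }
}
```

Compare this with an RPC-style call: here a service outage delays processing rather than breaking the sender.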

I am currently converting a fragile request-response service-based application to use messaging with the NServiceBus message bus. I hope to significantly improve the performance and robustness of the application by making messaging a first-class citizen in the architecture, and not just an abstraction.

For more information on messaging and NServiceBus, check out:

This book also contains excellent information on designing and developing messaging solutions:

Focus On Quality Improves Delivery

Ron Jeffries explains why software quality and speed of delivery are not mutually exclusive. If you want to continually deliver working software quickly, then you need to maintain high quality:

Uncle Bob has also posted his thoughts:

I often find there is a lot of focus on delivering software quickly, but not so much on maintaining quality. I think this is because it is generally assumed that maintaining high quality means greater expense and later delivery. However, the higher the quality of the code, the better able we are to deliver working software quickly and repeatedly. My belief is that the quality lever should never be touched, or you risk taking on crushing debt.

The War On Nulls

As .NET developers, we’ve all seen this exception hundreds of times: “System.NullReferenceException – Object reference not set to an instance of an object”. In .NET, this exception occurs when trying to access a reference variable with a null value. A null value means the variable does not hold a reference to any object on the heap. It is one of the most frustrating and prolific errors that we programmers encounter. But it needn’t be this way! We can prevent this error by following a few simple rules. But first, a little history…

The null reference was invented by Tony Hoare, inventor of QuickSort, one of the world’s most widely used sorting algorithms. In this introduction to his talk at QCon 2009, Tony describes the impact the null reference has had on software:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. In recent years, a number of program analysers like PREfix and PREfast in Microsoft have been used to check references, and give warnings if there is a risk they may be non-null. More recent programming languages like Spec# have introduced declarations for non-null references. This is the solution, which I rejected in 1965.

So obviously null references have caused quite a lot of damage. But neither Tony nor null references are to blame. It’s the careless use of null references that has made them as damaging and prolific as they are.

I can’t think of a single reason why you would need to use null references as part of your system design. Here are some tips for preventing null references in your system.

Never use null references as part of the design

Business logic should not be based around testing for null references. If an object requires an empty state, be explicit about it by creating an empty representation of the object. You can then check if the object is in an empty state by comparing the current instance to an empty instance.

Here is an example of some code that uses a generic interface called ICanBeEmpty to support an empty representation of a Customer object. An extension method called HasValue() allows us to check whether an object holds a value, rather than being just the empty representation.

public class Customer : ICanBeEmpty<Customer>
{
    private int id;
    private string name = string.Empty;

    public bool Equals(Customer other)
    {
        return id == other.id && name == other.name;
    }

    public static Customer Empty
    {
        get { return new Customer(); }
    }

    Customer ICanBeEmpty<Customer>.Empty
    {
        get { return Empty; }
    }
}

public interface ICanBeEmpty<T> : IEquatable<T>
{
    T Empty { get; }
}

public static class Extensions
{
    public static bool HasValue<T>(this ICanBeEmpty<T> obj)
    {
        // The object has a value if it is not equal to its empty representation.
        return !obj.Equals(obj.Empty);
    }
}

Don’t accept null references as parameters

Guard statements are often used to check for null references in methods. If you design your system not to pass nulls, you won’t need guards to check for null in your methods. But when you can’t guarantee the input to your public methods, you need to be defensive about null references.
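At those public boundaries, a guard might look like this (a minimal sketch; the OrderService and PlaceOrder names are hypothetical):

```csharp
using System;

public class Customer { }

public class OrderService
{
    public void PlaceOrder(Customer customer)
    {
        // Fail fast at the public boundary so a null reference
        // cannot leak deeper into the system.
        if (customer == null)
        {
            throw new ArgumentNullException("customer");
        }

        // Safe to use customer from here on.
    }
}
```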

Don’t return null references

A call to a method or property should never return a null reference. Instead, return an empty representation of an object, or throw an exception if a non-empty value is expected.

Fail fast if a null reference is detected

Design-by-contract technologies, such as Spec#, have declarations that can check for null references at compile time. You can also use an aspect-oriented programming (AOP) solution, such as PostSharp, to create custom attributes that ensure an exception is thrown if a null reference is passed into, or returned by, a method at runtime. By throwing an exception as soon as a null reference is detected, we can avoid hunting through code to find the source of a null reference.

public class CustomerRepository
{
    public Customer GetCustomer(int id)
    {
        // Look up the customer; if not found, return the empty
        // representation rather than null.
        return Customer.Empty;
    }

    public void SaveCustomer(Customer customer)
    {
        // Only save customers that hold a value.
        if (customer.HasValue())
        {
            // save the customer
        }
    }
}

Wrap potential sources of null references

If you are using a third-party service or component where you might receive a null reference, then wrap the call in a method that handles any null references to ensure they don’t leak into the rest of the system.

Always ensure object members are properly instantiated

All object members should be instantiated when an object is created. Be careful with strings in C#, as these are actually reference types. Always set string variables to a default value, such as string.Empty.

Nullable value types are ok

The nullable value types introduced in C# 2.0, such as int? and DateTime?, are safer to work with, as you must explicitly convert them to a non-null value before using them. Be careful not to access the Value property on a nullable type without first checking the HasValue property; you can use GetValueOrDefault to return a default value if the variable is null.
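For example:

```csharp
int? age = null;

// Accessing Value while HasValue is false throws an InvalidOperationException.
if (age.HasValue)
{
    Console.WriteLine(age.Value);
}

// GetValueOrDefault falls back to the type's default value (0 for int),
// or to a value you supply.
int years = age.GetValueOrDefault();      // 0
int assumed = age.GetValueOrDefault(18);  // 18
```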

By limiting the use of null references and not letting them leak into other parts of the system, we can prevent the troublesome NullReferenceException from ruining our day.

Are Burndowns Evil?

Agile teams often use a burndown chart to track the progress of a software project. The burndown chart provides an indication of work done over time. I have used burndowns on many projects and I have come to believe their use can negatively impact the quality of the software we deliver, without providing much benefit to the outcome of the project.

Firstly, a burndown can create schedule pressure early in the iteration. This is especially apparent on a new project with a new team. It takes time for a new project to get under way and for team members to get up to speed, and this time can be very difficult to factor into an iteration. Even though progress will improve, seeing the burndown flat-line immediately can put a lot of negative pressure on a team.

A burndown chart is linear and has no room for variations in team size, unscheduled meetings, unforeseen technical problems and other issues outside the control of the team. The burndown doesn’t take into account the unplanned, but necessary work that needs to be done to ensure the success of the project.

A burndown can be very unforgiving. One bad day and the progress line goes off course. This can cause pressure within the team to cut corners to get back on track. This is detrimental to the quality of the software and encourages developers to get a story done quick-and-dirty just to satisfy the burndown progress.

The progress of a burndown can be taken the wrong way by project managers and customers. No stories complete == no work done, which is usually not the case. If we need to track progress, we can simply look at the task board! How many stories are done? How many are still to do? How much time is left in the iteration? What needs reprioritising? Talk about it. Discuss the issues. The task board is a great place for the team to get together and talk about the progress of the project. If required, create a throw-away progress chart. But we shouldn’t drive the development process from it.

Use the number of points/ideal-days completed to estimate the team’s velocity. The velocity should be calculated on work already completed. For a new team and a new project, it is almost impossible to predict a velocity for the sake of creating a burndown, so why bother? It can cause more harm than good.

Another problem with the linear nature of a burndown is that it doesn’t factor in breakthroughs. A breakthrough is a fundamental shift in the understanding of a software design. This is a very important step in improving the design, quality and maintainability of the software. If a breakthrough is discovered by the team, then taking the time to refactor should be encouraged. Breakthrough refactorings can be hugely beneficial for the future development of the software. A burndown can discourage refactoring and improvement by promoting a linear track.

The focus of a burndown is on reaching a predetermined end-point in time. Instead we should be focusing on delivering value to the business without compromising quality.

Experienced teams working on new or familiar software might not have any of these problems. I have been on projects where the burndown was very accurate and the project went very smoothly. But this was not because of a burndown. The burndown didn’t really provide any benefit to the outcome of the project. The project work was easy to estimate and so the burndown was always on track.

I’m not saying never use a burndown. They are often required by project managers to report on progress. Just don’t let the burndown become the focus of the project, as it can potentially do more harm than good.