Archive for the 'Testing' Category

Internal And External Collaborators

The Single Responsibility Principle (SRP) states that every object should have a single responsibility, and that all its services should be aligned with that responsibility. By separating object responsibilities we are able to achieve a clear Separation of Concerns. Objects need to collaborate with each other in order to perform a behaviour. I find I use two distinct styles of collaboration with other objects, which I have called internal and external collaborators.

The principles of object collaboration are nothing new, but I have found that defining these roles has helped me to better understand how to design and test the behaviour of objects.

External Collaborators

External collaborators are objects that provide a service or resource to an object but are not directly controlled by the object. They are passed to an object by dependency injection or through a service locator. An object makes calls to its external collaborators to perform actions or retrieve data. When testing an object, any external collaborators are stubbed out. We can then write tests that perform an action and verify that the right calls were made to the external collaborator.

Examples of external collaborator objects include: services, repositories, presenters, framework classes, email senders, loggers and file system wrappers.

We are interested in testing how we interact with the external collaborator and not how it affects the behaviour of our object. For example, if we are testing a controller that retrieves a list of customers from a repository, we want to know that we have asked the repository for a list of customers, but we are not concerned that the repository returns the correct customers (this is a test for the repository itself). Of course, we might need some particular customer objects returned by the stub for the purpose of testing the behaviour of the controller. These customer objects then become internal collaborators, which we’ll come to next.

External collaborators can be registered with an IoC container to manage the creation and lifecycle of the object, and to provide an instance to dependent objects.
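To make this concrete, here is a rough sketch of the controller example above. The CustomerController, Customer and ICustomerRepository types are made up for illustration, and the stubbing syntax assumes a mocking library such as Moq:

    using System.Collections.Generic;
    using Moq;
    using NUnit.Framework;

    public class Customer { }

    // External collaborator: provided from outside via constructor injection.
    public interface ICustomerRepository
    {
        IList<Customer> GetAll();
    }

    public class CustomerController
    {
        private readonly ICustomerRepository _repository;

        public CustomerController(ICustomerRepository repository)
        {
            _repository = repository;
        }

        public IList<Customer> Index()
        {
            return _repository.GetAll();
        }
    }

    [TestFixture]
    public class CustomerControllerTester
    {
        [Test]
        public void IndexShouldAskRepositoryForCustomers()
        {
            // Stub out the external collaborator.
            var repository = new Mock<ICustomerRepository>();
            repository.Setup(r => r.GetAll()).Returns(new List<Customer>());

            new CustomerController(repository.Object).Index();

            // We only care that the right call was made, not what it returned.
            repository.Verify(r => r.GetAll(), Times.Once());
        }
    }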

Internal Collaborators

Internal collaborators have a smaller scope than external collaborators. They are used in the context of the local object to provide functions or hold state. An object and its internal collaborators work together closely and should be treated as a single unit of behaviour.

Examples of internal collaborators include: DTOs, domain entities, view-models, utilities, system types and extension methods.

When testing an object with internal collaborators, we are interested in the effect on behaviour, not the interaction with the collaborator. Therefore we shouldn't stub out internal collaborators. We don't care how we interact with them, only that the correct behaviour occurs.

These objects are not affected by external influences, such as a database, email server, or file system. They are also not volatile or susceptible to environmental changes, such as a web request context. Therefore, they should not require any special context setup before testing.

We are not passed an instance of an internal collaborator through dependency injection; instead, one may be passed to us by an external collaborator (e.g. a repository returning an entity), or we create an instance within our own object when we need it (such as a DTO).
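By contrast, here is a sketch of an internal collaborator. The Order and OrderLine types below are made up for illustration, and the assertion uses NUnit; no stubs are involved:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    // Internal collaborator: OrderLine is created and owned by Order,
    // so Order and its lines are tested as a single unit of behaviour.
    public class OrderLine
    {
        public OrderLine(string product, int quantity, decimal unitPrice)
        {
            Product = product;
            Quantity = quantity;
            UnitPrice = unitPrice;
        }

        public string Product { get; private set; }
        public int Quantity { get; private set; }
        public decimal UnitPrice { get; private set; }
    }

    public class Order
    {
        private readonly List<OrderLine> _lines = new List<OrderLine>();

        public void AddLine(string product, int quantity, decimal unitPrice)
        {
            _lines.Add(new OrderLine(product, quantity, unitPrice));
        }

        public decimal Total
        {
            get { return _lines.Sum(line => line.Quantity * line.UnitPrice); }
        }
    }

    [TestFixture]
    public class OrderTester
    {
        [Test]
        public void TotalShouldSumAllOrderLines()
        {
            var order = new Order();
            order.AddLine("Widget", 2, 10m);
            order.AddLine("Gadget", 1, 5m);

            // No stubs: we assert on the resulting behaviour,
            // not on how Order interacts with OrderLine.
            Assert.That(order.Total, Is.EqualTo(25m));
        }
    }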

By understanding the roles and responsibilities of collaboration between objects, our design becomes clearer and tests are more focused and easier to maintain.


Continuous Testing In .NET

Since this article was published, several new continuous testing tools have entered the market. I recommend checking out NCrunch, which is in my opinion the best continuous testing tool available for .NET – Tim 26/10/2012

The Test-Driven Development cycle of red-green-refactor gets you into a rhythm of writing a failing test, making it pass and then refactoring. It is important that we get early feedback when something is broken so we can keep this rhythm going. Stopping to run tests and waiting for the result can break this rhythm and distract our focus from the next step.

The concept of continuous testing came from research carried out by the Program Analysis Group at MIT. They found that continuously running tests increased developer productivity and reduced waste. You can find more information about continuous testing in this blog post by Ben Rady.

Tools For Continuous Testing

There are a number of tools available that support continuous testing. Ruby has AutoSpec, a command-line continuous testing tool. Java has Infinitest, a plug-in for Eclipse and IntelliJ. In .NET, James Avery has been working on AutoTest.NET and I was involved in a project to write a Visual Studio add-in called QuickTest. The main problem I have found with using these tools for .NET development is the complexity of real-world project structures. Whenever I tried to use QuickTest on real projects, I found myself dealing with very different project structures, test configurations and naming conventions. It was really difficult to factor all these configuration scenarios into a tool that is designed to “discover” which unit tests to continuously run.

That gave me the idea for AutoBuild.

Introducing AutoBuild

AutoBuild is a continuous testing tool for .NET that runs a NAnt script whenever a file is saved. You simply tell AutoBuild to watch a particular folder and give it a simple NAnt script to run whenever a file changes. The advantage of this approach is that all the complexity in customising a continuous testing tool for your project is contained within the NAnt script. The NAnt script is likely to be a small subset of an actual build script. For example, you might not want tests that hit the database to be run on every save, so you could add a “DatabaseTest” attribute to your test and configure the NAnt script to ignore all tests with that attribute.
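For example, such a marker could be a simple NUnit category attribute, which the test task in the NAnt script can then be configured to exclude. This is just one possible approach, not something AutoBuild requires; the attribute and test names below are made up for illustration:

    using NUnit.Framework;

    // Marks slow, database-hitting tests so the continuous test run
    // can filter them out by the "DatabaseTest" category.
    public class DatabaseTestAttribute : CategoryAttribute
    {
        public DatabaseTestAttribute() : base("DatabaseTest")
        {
        }
    }

    [TestFixture]
    public class CustomerPersistenceTester
    {
        [Test, DatabaseTest]
        public void ShouldSaveCustomerToDatabase()
        {
            // slow test that hits the database...
        }
    }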

If you aren’t familiar with NAnt, don’t worry. A default build file is provided and you only need to modify the solution path and unit test assembly path properties.

Currently, the test failure output in AutoBuild only supports the NUnit task. I plan to add support for other unit test frameworks in the near future. I am also interested in adding support for Rake build scripts.

Using AutoBuild

You can find the source code for AutoBuild here:
http://code.google.com/p/autobuildtool/

To get started, get the source and run the default.build.cmd file. This will run the build and create an output folder that contains the AutoBuild assemblies, along with a sample build script and command file. You can copy these to your own project and customise the build file for your solution.

To run the “dog food” sample, simply run the autobuild.cmd file in the root AutoBuild folder (not the one in the output folder). This will open AutoBuild and start watching the source directory. Then open the AutoBuild solution and modify any .cs file. This should kick off the build and run the tests. Mess around with the code to watch the build fail. Note that the autobuild.build script is configured to run only tests not marked with “integration”.

AutoBuild will produce an output with build errors and failing tests in red and successful builds and test runs in green. You can change the output colours by modifying the log4net settings in the AutoBuild.Console.exe.config file. You can also switch on debug output by changing the log level to debug.


AutoBuild is ideally suited to a dual-monitor setup, with Visual Studio on one monitor and AutoBuild on the other. This allows you to focus on writing code and check the status of the build at a glance. I would like to integrate the console into Visual Studio for cases where you only have one monitor (e.g. working on a laptop). This might involve writing a Visual Studio add-in that allows you to load the console into the IDE.

Please submit any issues to the project on Google Code. If you have any comments or suggestions, then please feel free to contact me.

Design and Testability, Which Comes First?

Jeremy Miller recently posted a good entry on Design and Testability. He talks about how important a good understanding of fundamental design principles is to writing testable code. I totally agree with this. I also believe that testing code is a great way to truly understand the benefits of applying design principles. Can you spot the circular dependency? 🙂

I first learned about the S.O.L.I.D. design principles several years ago and thought I was applying them in my code. However, it wasn’t until I started writing unit tests that I truly grasped the problems dependencies in particular can create in a system.

I eventually learned the hard way. In my early unit-tested applications, the tests ran slowly, often hit the database and required a fair amount of setup. The tests were still very beneficial, but the tight coupling meant that it was difficult to refactor without breaking hundreds of tests.

So here’s the catch-22:

Without unit testing, you are unlikely to fully appreciate the effect dependencies have on a system.

But, without an understanding of good design principles, your unit tests may be ineffective, difficult to write, require a lot of setup and quickly become hard to maintain. At that point you are likely to blame unit testing itself, rather than your lack of understanding of the fundamentals of good design.

So which should come first? Learning about unit testing, and thus feeling the pain of a poor design? Or learning about good design, without fully appreciating the benefits it brings to a well-structured, well-tested application?

Testing NHibernate Repositories

The Repository pattern is a technique used to manage the persistence of objects to a relational database, while decoupling the domain objects from the persistence technology. An Object-Relational Mapper (ORM), such as NHibernate, can be used to map the objects to database tables.

There is not usually much logic to test in a repository class, except for the interaction with the ORM, so there is often little need for testing the repository class independently from the database. What we really care about is the integration with the underlying data store. We can write integration tests to ensure this interaction is working correctly.

When executing tests against a database, tables can become cluttered with test data, often causing conflicts with subsequent tests. One solution is to run a SQL script before the tests are run which resets the database. However, this means we have to maintain a separate database script. If the database schema changes, we have to make sure this script is updated, or the tests might fail.

The open-source CodeCampServer project offers a great solution to testing NHibernate repository classes against a database. This involves recreating the test database schema from the NHibernate mapping files before each test is run.

Creating the database schema

The NHibernate SchemaExport class can be used to generate the database schema from the .hbm mapping files. This allows us to create the test database schema without the need to maintain a separate database script.

    var exporter = new SchemaExport(new HybridSessionBuilder().GetConfiguration());
    exporter.Execute(false, true, false, true);

By placing this code in the test fixture SetUp method, the database schema is recreated before each test is run, ensuring there are no conflicts with data from previous tests.

A base class can be created to provide this functionality for any test fixture that derives from it.

    public class DatabaseTesterBase : RepositoryBase
    {
        public DatabaseTesterBase() : base(new HybridSessionBuilder())
        {
        }

        [SetUp]
        public virtual void Setup()
        {
            recreateDatabase();
        }

        public static void recreateDatabase()
        {
            var exporter = new SchemaExport(new HybridSessionBuilder().GetConfiguration());
            exporter.Execute(false, true, false, true);
        }
    }

To create the database schema, the data type must be specified for each primary key column mapping. NHibernate derives the SQL types for the other columns from the .NET type associated with each mapping. The generated schema may not exactly match the real database schema, but it is sufficient for testing object persistence.

    <class name="Person" table="People">
        <id name="Id" column="Id" type="Guid">
            <generator class="guid.comb"/>
        </id>

        <component name="Contact" class="Contact">
            <property name="FirstName" length="20" not-null="true"/>
            <property name="LastName" length="20" not-null="true"/>
            <property name="Email" length="60" not-null="true"/>
        </component>
        <property name="Website" not-null="false"/>
        <property name="Comment" not-null="false"/>
        <property name="Password" length="256"/>
        <property name="PasswordSalt" length="256"/>
        <property name="IsAdministrator" not-null="true" />
    </class>

It is preferable to run the tests against a local database, as this will help increase test performance and prevent conflicts with other developers running tests against a shared database.

Writing the tests

Here I am using an example of a repository integration test from the CodeCampServer project.

The PersonRepositoryTester class derives from DatabaseTesterBase, which provides the schema export functionality.

    [TestFixture]
    public class PersonRepositoryTester : DatabaseTesterBase
    {
        ...
    }

Any required test data is created using the NHibernate session directly. The test data objects are saved to the session and flushed to the database.

    [Test]
    public void ShouldSavePersonToDatabase()
    {
        Conference theConference = new Conference("foo", "");
        using (ISession session = getSession())
        {
            session.SaveOrUpdate(theConference);
            session.Flush();
        }
        ...
    }

Next, the SUT (System Under Test) is exercised by calling the Save method on the repository.

    [Test]
    public void ShouldSavePersonToDatabase()
    {
        Conference theConference = new Conference("foo", "");
        using (ISession session = getSession())
        {
            session.SaveOrUpdate(theConference);
            session.Flush();
        }
        Person person = new Person("Andrew", "Browne", "");
        person.Conference = theConference;
        person.Website = "";
        person.Comment = "";

        IPersonRepository repository = new PersonRepository(_sessionBuilder);
        repository.Save(person);
        ...
    }

Finally, the results are loaded back from the database and verified against a set of expectations.

    [Test]
    public void ShouldSavePersonToDatabase()
    {
        Conference theConference = new Conference("foo", "");
        using (ISession session = getSession())
        {
            session.SaveOrUpdate(theConference);
            session.Flush();
        }
        Person person = new Person("Andrew", "Browne", "");
        person.Conference = theConference;
        person.Website = "";
        person.Comment = "";

        IPersonRepository repository = new PersonRepository(_sessionBuilder);
        repository.Save(person);

        Person rehydratedPerson = null;
        // get Person back from the database to ensure it was saved correctly
        using (ISession session = getSession())
        {
            rehydratedPerson = session.Load<Person>(person.Id);

            Assert.That(rehydratedPerson != null);
            Assert.That(rehydratedPerson.Contact.FirstName, Is.EqualTo("Andrew"));
            Assert.That(rehydratedPerson.Contact.LastName, Is.EqualTo("Browne"));
        }
    }

These tests will run much slower than standard unit tests, but that’s fine, as integration tests should be run less often than unit tests.

By generating the database schema before each test is run, we can ensure each test executes against a known set of data and is not affected by data generated from other tests. This helps us to create reliable and consistent data access integration tests.