The path to TDD and software excellence…

The more I study TDD, and the more I read Robert C. Martin’s writing on clean code principles, the more I see a pyramid of development maturity.

TDD feels like the icing on the cake, and the cake must come first. In this case, the cake is the principles and practices of clean code. Without clean code and a knowledge of Agile software principles, TDD is a very difficult practice to sustain. In my mind, the maturity model goes like this:

TDD Maturity Model

The model shows that the level of difficulty, and the experience needed, increase the further up it you go. I have no doubt that some companies have taken TDD and enforced the process on their teams without first going through the preliminary stages of learning. This is dangerous. I have seen projects eventually bury themselves in complexity and grind to a halt through lack of clean code principles. When TDD is applied that way, not only will your software have fields of bad code inside it, but you now also have a test library to maintain with an equal amount of bad code, if not more. The level of complexity long-running software projects can build up is scary. Maintenance and improvements become ten times more expensive.

The investment in the above principles and practices is absolutely golden for maintaining cost-effective software development.

“Agile Principles, Patterns, and Practices in C#” is a fantastic book by Robert C. Martin. Also, check out his Bad Code presentation over at InfoQ. It definitely got me laughing.

Posted in Uncategorized | 1 Comment

TDD – Just enough code…

I’ve spoken about, and demonstrated in my YouTube videos, adding only enough code to a project to get a failing test passing, even if the code you’re adding doesn’t yet make sense for how it might end up later on. It sounds confusing, so I’m going to try to back it up with an example. The more I can emphasize this point, the more it will help in keeping test coverage high and code lean and fully exercised by tests.

Consider the following. I’m going to create an HttpServer, and as part of my first test, we will see the following code:

HttpServer server = new HttpServer(80);

In that code, we have one undefined class, ‘HttpServer’, and one undefined constructor (the rest of the test would also call methods that don’t exist yet). To add the most minimal code to get the test passing, I just need to define the class and stub out the constructor and methods. The stubs will not contain any code, nor will they have return values. My test is now passing.

To further develop this class, I need to keep adding more tests that initially fail, each detailing its behaviour. So another test would be written, and code then filled into the HttpServer class to pass that test – and only enough code to pass that test.

If I did not write just enough code to make the test pass, and instead went on and fully implemented the ‘obvious’ code that should go in those methods, I would be implementing behaviour and fine details of the class that have no tests covering them.

So once there is enough code in that class to pass my test, I stop. Then I would continue to think of what other behaviours I want from the HttpServer. I’d then write a failing test to verify that behaviour, and then write the code to pass that test.

This principle of only writing enough code to make the test pass is key to maintaining the TDD flow.
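To make this concrete, here is a minimal sketch of the cycle. The blog’s examples are C# with NUnit; this sketch uses Python purely for brevity, and the HttpServer here is hypothetical – only what the first test demands actually exists.

```python
# A hypothetical HttpServer, sketched in Python rather than the blog's
# C#/NUnit, purely to illustrate "just enough code".

class HttpServer:
    """Minimal stub: defined only because a failing test demanded it."""

    def __init__(self, port):
        # Just enough to construct; no sockets, no listening loop --
        # no test has asked for that behaviour yet.
        self.port = port


def test_can_create_server_on_port_80():
    # Step 1: this test failed before HttpServer existed at all.
    # Step 2: the empty-ish class above is the minimal code to pass it.
    server = HttpServer(80)
    assert server.port == 80


test_can_create_server_on_port_80()
```

Any behaviour beyond storing the port would be untested code, so it waits for the next failing test.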

Just to summarize, stick to these two principles:

    • Only add code when a test is failing.
    • Write just enough code to make the failing test pass – nothing more.

Posted in Agile, TDD | Leave a comment

YouTube Channel and TDD Continued

Just a quick update to link you to my new YouTube channel (Click Here), containing my introductions to TDD and practical examples of TDD using NUnit and Visual Studio 2008. Please don’t forget to subscribe to get my latest tutorial videos as I explore TDD. Next up is the use of mocking frameworks, and some theory around “Test Doubles”.

This weekend, I’ll be reading through “Clean Code” by Robert C. Martin. A few chapters in, and already I feel this is a great book. Very well written, well illustrated, and right to the point.

This is a sidestep from where TDD is taking me. To do TDD well, you really need to know how to write “clean code” so that your code remains testable. Over the past couple of months, my view of code, design, and the development life cycle has shifted hugely for the better.

Here are some books that I highly recommend if you, too, are starting out with TDD:

I’m very grateful that I can discuss everything I’m reading and thinking about with a lifelong friend, Lasse Jarvensivu. Lars, for short, is the lead technical developer at Ludosity Interactive, who recently published a game on Steam, “Bob Came In Pieces“.

Posted in Agile, TDD | Leave a comment

TDD – A step to Agile…

My interests have recently moved towards looking into the Agile methodology, and I can tell you, it’s got me really excited. I’m excited because I can see huge benefits to the quality of code at the developer level and to the delivery of software at the project management level. As a developer, my path begins with TDD…

TDD, which stands for Test Driven Development, is a technique for producing Agile code. TDD forces you to think about a system from the outside in by writing tests first, before the code. This makes you think about the external design and behaviour of components, without dwelling on how you’re going to implement them in detail. The discipline of driving out code design by first having a test also disciplines you into keeping code “testable”. It is very difficult to add tests to code that was created before testing was considered, because methods are too complex, the responsibilities of collaborating objects are messy, and so on.

One of my biggest realisations about TDD, and even Agile as a whole, is the paradigm shift you have to make in your own mind about how you think about code design and the process of producing code. It greatly helps balance what should be important to you as a developer. TDD requires a good knowledge of patterns, good refactoring skills, and a lot of patience to understand that, to become a Test Driven Developer, you’re going to think about code very differently.

With so little space in a blog post to explain what TDD is all about, I highly suggest reading Kent Beck’s book Test Driven Development: By Example, a great 250-page kick-start to TDD.

Please check out my 8-minute introduction video to TDD over on YouTube.

Posted in Agile, TDD | Leave a comment

REST Assure – A REST Service Framework for .NET

Throughout my research into REST, I couldn’t help but feel there was something missing in the .NET community for developing RESTful services. The REST Starter Kit from Microsoft really didn’t cut it for me; it left a lot to be desired, such as security, caching, and other bits. So naturally, I dared myself to write a REST service framework.

I gathered all my thoughts and concerns around REST, and realised that if I was ever going to ensure REST services became common practice in a workplace that had never done or seen REST before, then either strong governance would have to be put in place, which is a cost, or I would have to create a framework constrained in such a way that it didn’t allow people to produce services outside the rules that govern RESTful architecture. I also set out to make REST as easy as producing and consuming a web service in .NET.

The most natural thing to do was take all the abstractions of REST – whatever would be completely common from one REST service to the next – and keep them in meta-data. To create this meta-data, I developed a thick client that allows me to visually design the resources, set the content types of representations, control caching, and so on. This service studio tool also validates the meta-data, making sure, for example, that a resource that allows GET has appropriate representation content types defined.

Here’s what it looks like so far:

REST Studio

The meta-data for the service produced by this studio is then loaded into the service framework, which runs on WCF 3.5, and the only thing left to the developer is to map the data layer to the resources for the appropriate HTTP methods.

My feeling so far is that REST puts you immediately at the starting line of web services, and I feel a little uncomfortable not having the full support of the WS-* profiles that “Big Web Services” via SOAP give you – but did I ever really use them? What really excites me is how easy it is to throw data out in so many different formats. One of my examples exposed a repository of configuration settings via an Atom feed, and very quickly I could subscribe to the data repository and be notified when configuration items changed that were important to me in a business environment.

If you’re interested in any of this described above, drop me a line at

Posted in REST | Tagged , , | Leave a comment

Describing REST Services

As promised in my last blog post, I’m going to cover how we describe a REST service and guide a client through it.

One of the first things I’d like to point out is the absence of the WSDL in REST services. With no WSDL, and knowing that RESTful services do not completely describe themselves, we are left with the following questions:

    • How do we know the operations on the service?
    • How do we know the format of the representations?
    • How do we know the workflow?

If you carefully design your service URIs and the “connectedness” between resources, consumers of a RESTful web service can start from the base URI of the service and navigate around its landscape using the concept of HATEOAS and the OPTIONS verb. That solves the first and third problems; however, the one thing consumers are still missing is the format of the representations.

RESTful web services are still going to require human-readable specifications of the representations passed between your service calls. The XML structures, the attributes, the value constraints – all are still left to you, as the service provider, to communicate to your clients. Amazon presents a good example of REST API documentation for Amazon S3; you can check it out here.

The fewer constraints you put on the format of the representations, and the more clarity you give your clients on interpreting them, the more agility you both have for absorbing change.

Don’t encourage clients to build domain objects based on your representations and deserialize them off the wire, as this pretty much locks you into never changing the representations – the moment a representation changes, the deserializers break, and clients won’t be happy.
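As a rough sketch of the alternative – reading only what you need instead of deserializing the whole document into a domain object – consider the following. It uses Python rather than the blog’s .NET, and the XML, rel names, and URIs are all invented for illustration:

```python
import xml.etree.ElementTree as ET

# A hypothetical order representation, following the <link rel="..."> style
# described in this post. Element names and URIs are illustrative only.
ORDER_XML = """
<order>
  <header>
    <link rel="summary" href="http://example.com/orders/42/summary"/>
    <link rel="confirm" href="http://example.com/orders/42/confirm"/>
  </header>
  <total currency="GBP">19.99</total>
</order>
"""


def extract_links(xml_text):
    """Pull out only the rel -> href map; ignore everything else.

    Because the client never binds the whole document to a domain
    object, new elements or attributes in future representations
    won't break it."""
    root = ET.fromstring(xml_text)
    return {link.get("rel"): link.get("href")
            for link in root.iter("link")}


links = extract_links(ORDER_XML)
print(links["confirm"])  # the URI to POST to when confirming
```

Adding a `<discount>` element to the representation later would leave this client completely untouched, which is exactly the agility the post is arguing for.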

Here’s an example of an SLA on service representations:

1. All representations start with an opening header element that describes the URIs to which the current resource’s state can move.
2. All URIs are provided in <link> elements, where the attribute “rel” gives a short description of the link’s relation to the current representation.

An example representation that conforms to this SLA would look like:

    <header>
        <link rel="summary"></link>
        <link rel="lines"></link>
        <link rel="confirm"></link>
    </header>

Do carefully consider the connectedness of resources to each other, as this leaves less for you to do in documentation. Also consider how consumable the service is from a client’s perspective. A service is much easier to read when things are described in business terms – for example, “PlaceOrder” as opposed to “CreateOrderHeader”. Lastly, do the names of your URIs represent the concepts and resources of the service well?

Overall, keep it easy on the client, but be clear about what you want to keep strict control over changing, so that you stay agile for future changes and enhancements.

Posted in REST | Tagged , | Leave a comment

HATEOAS – Hypermedia as the Engine of Application State

One of the most overlooked aspects of REST is its ability to guide a consumer of a service through its various state transitions using hypermedia.

To simplify what this means, imagine you start at the base URI of a service that allows you to place an order. To place an order, you PUT or POST a representation of an order to a URI provided in the initial representation. You do not know what this URI is until you GET the starting representation from the base URI of the service, which contains links to the resources (URIs) you can interact with from this starting state.

So you have your order representation, and you’ve got the URI to send it to, so off you go and POST. The order is placed, and a representation is returned to you that contains the order you placed and, more interestingly, more URIs that may enable you to cancel, modify, or confirm the order. This is the second state of the service’s order placement process, to which you have been guided through hypermedia links.

The key here is that all you know is that you want to place an order, perhaps modify it, and eventually confirm it. Where you POST/PUT/GET representations – i.e. the URIs – shouldn’t be something the client has hard-coded in its client API. The service guides you through the state of the application through links.

So, given the initial start URI of the service, a client should be able to walk over the entire landscape of URIs in the service, guided entirely through hypermedia links.
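As a rough illustration of that walk, here is a sketch with a hypothetical in-memory “service” standing in for real HTTP calls. All URIs, rel names, and states are invented; the point is that the client only ever hard-codes the base URI and the rel names:

```python
# Each "representation" is a dict holding rel -> next-URI links plus state,
# simulating the responses a hypermedia service would return.
SERVICE = {
    "/orders": {  # base URI: the only address the client knows up front
        "links": {"placeOrder": "/orders/new"},
    },
    "/orders/new": {  # "POSTing" here creates hypothetical order 42
        "links": {"modify": "/orders/42", "confirm": "/orders/42/confirm"},
    },
    "/orders/42/confirm": {
        "links": {},
        "status": "confirmed",
    },
}


def follow(representation, rel):
    """Move to the next state via a rel name, never a hard-coded URI."""
    return SERVICE[representation["links"][rel]]


start = SERVICE["/orders"]             # GET the base URI
placed = follow(start, "placeOrder")   # POST the order representation
confirmed = follow(placed, "confirm")  # POST to the confirm link
print(confirmed["status"])             # -> confirmed
```

If the service later renames `/orders/42/confirm` or adds a `promotions` link to the second state, this client keeps working unchanged, because it navigates by rel name and ignores links it does not understand.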

The biggest advantage of this is that the service is in complete control of its addressing. If for some reason the workflow changes slightly – for example, more states are introduced into the order placement process, such as promotions, and new URIs are created or changed – the client will not break, as it is not tightly coupled to the service’s addressing.

However, one important thing to note is that the client is still dependent on two things.

The first is how the links are represented in the representations. The links will be described semantically, perhaps through a <link> element or an XHTML tag such as <a rel="placeOrder"> for the place-order process. The SLA between the client and the service must be that these semantics never change.

The second is the process flow of the service. The service above allows me to successfully place an order in two state transitions: I first create my order, then I confirm it. If the service introduces a third state between the first and second, that will break the way my client interacts. However, if more URIs are introduced in the first or second state – for example, the service now supports adding promotion codes to the order before confirming – then the client will not break, as it will simply ignore the URIs it does not understand. This is powerful. The server gains a lot of agility by introducing new state transitions into current representations without breaking clients. Furthermore, clients will not break if the service decides to change its URI format or structure! Lovely =)

Part of what I touched on in this post is the SLA between RESTful services and clients. I am seeing a lot of confusion about how RESTful services are described to clients in the absence of the WSDL. Since this post is already quite a bit to take in, I will leave that until next time.

Posted in REST | Leave a comment