Start with what you do now

Well, I managed to get out to Lean Agile Manchester last night for a session on workplace visualisation, and to catch up with Carl Phillips of ao.  We had an excellent burger in the Northern Quarter before heading over to madlab, which delayed our arrival – oops, sorry guys!  We spent quite a bit of time chatting about old times & then got down to the problem at hand.

Weirdly, the first example was very reminiscent of where Carl & I started our journey at ao nearly four years ago, although some of the key points were slightly different – e.g. the number of developers and the location of the constraints in the department.

A rough guide of the scenario:

  • Specialists with hand-offs, bottleneck in test, multi-tasking galore
  • Small changes and incidents getting in the way of more strategic projects
  • 4 devs, 1 QA, 1 BA

We came up with a board similar to this:

We introduced a WIP limit around the requirements section and set it at 1, as testing tended to be the bottleneck.  Reflecting this morning – introducing the WIP limit might have been a little premature…

As we were happy with the previous board, Ian challenged us to look at the digital agency scenario.

We started to map out the current value stream for the digital agency.  A rough guide for the scenario:

  • Multitasking on client projects, with fixed scope and deadlines
  • Lots of rework, particularly because of UX
  • Lots of late changes to requirements
  • 3rd party dependencies and a long support handover for client projects
  • 4 devs, 1 QA, 1 BA

This time we used swim lanes.  It looked similar but not exactly like this:

These actually added inherent WIP limits, because a swim lane could only have one card within each column.  We introduced two additional classes of service (we already had a “feature” class of service for project work):

  1. Defects, which blocked the current ticket and got tracked across the board.
  2. CRs (change requests), which would also track across the board.

We had a differing opinion on whether the “client support hand over” should be an additional column or a ticket – we ended up going with a column.

What I realised about this second scenario was that we mapped what was actually happening.  Although we introduced some inherent WIP limits through the board layout, these were not forced upon the board – i.e. with numbers above the card columns.

Visualisation is a good first step in understanding how work flows through a system, but remember not to force concepts onto the current environment.  “Start with what you do now”.

The Repository Pattern Sucks

A couple of years ago I would have told you that if you are accessing a database you need the repository pattern. This lets you hide your data access code behind a nice interface – then at least you can mock the data access layer and swap out your data access code if you ever wanted to.

So say we are in a web application and want to get a “customer” by a specific reference. This is how we would do it with the repository pattern…
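Something like this – a minimal sketch, assuming an ICustomerRepository injected into an ASP.NET MVC controller (the controller and action here are purely illustrative):

public class CustomerController : Controller
{
    private readonly ICustomerRepository _customerRepository;

    public CustomerController(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public ActionResult Details(int customerId)
    {
        // All data access hides behind the repository interface.
        var customer = _customerRepository.GetBy(customerId);
        return View(customer);
    }
}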

How different my opinion is today. I can’t help but feel that the repository pattern peddled around the internet (in the C# .NET world) is fundamentally broken. The key thing for me is that everyone seems to want to create the following interface:

public interface IRepository<TEntity, in TKey> where TEntity : class
{
    IEnumerable<TEntity> GetAll();
    TEntity GetBy(TKey id);
    void Save(TEntity entity);
    void Delete(TEntity entity);
}

Not bad so far, hey? This seems pretty “standard”, apart from the fact you’ve broken the single responsibility principle (SRP). How many responsibilities does IRepository already have? Four public methods which do very different things…

The other thing I’ve found is that things can, and generally will, get worse from here – then follows:

public class Repository<TEntity, TKey> : IRepository<TEntity, TKey> where TEntity : class
{
    private readonly IUnitOfWork _unitOfWork;

    public Repository(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public IEnumerable<TEntity> GetAll()
    {
        return _unitOfWork.Session.QueryOver<TEntity>().List();
    }

    public TEntity GetBy(TKey id)
    {
        return _unitOfWork.Session.Get<TEntity>(id);
    }

    public void Save(TEntity entity)
    {
        _unitOfWork.Session.SaveOrUpdate(entity);
    }

    public void Delete(TEntity entity)
    {
        _unitOfWork.Session.Delete(entity);
    }
}

So now all you need to do to get access to something is as follows:

public class CustomerRepository : Repository<Customer, int>, ICustomerRepository
{
    public CustomerRepository(IUnitOfWork unitOfWork) : base(unitOfWork)
    {
    }
}

Does this look familiar? It does to me – I’ve done this and I’m pretty sure it’s wrong!

Again, there are a couple of things that appear wrong:

  1. SRP is broken.
  2. What happens if you’ve got a complicated queryable object and you are using an ORM? You’ll end up with lots of SELECT N+1 queries (see the sketch after this list).
  3. We’ve over-complicated a simple task with 3 interfaces, a base class (coupling), and 2 classes.
  4. If you want to read from one data store and write to another data store, how would you do this?
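To make point 2 concrete, here’s a sketch of how SELECT N+1 creeps in, assuming Customer has a lazily-loaded Orders collection (a hypothetical mapping):

// One query to load all customers...
var customers = customerRepository.GetAll();

foreach (var customer in customers)
{
    // ...and one extra query per customer when the lazy Orders
    // collection is touched - N customers, N+1 queries in total.
    Console.WriteLine(customer.Orders.Count);
}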

We’ve introduced so many layers and so much complexity that even a simple task has been massively over-complicated. I’ve forgotten to show you the IUnitOfWork, which is equally badly suggested and implemented on the internet…
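For completeness, the IUnitOfWork usually suggested alongside it looks something like this – a sketch of the common internet version wrapping an NHibernate ISession, shown only to illustrate the extra layer:

public interface IUnitOfWork : IDisposable
{
    // Leaks the NHibernate session straight through the abstraction.
    ISession Session { get; }

    void Commit();
    void Rollback();
}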

Now for the alternative:

Customer customer;
using (var transaction = Session.BeginTransaction())
{
    customer = Session.QueryOver<Customer>()
        .Where(x => x.Id == customerId)
        .SingleOrDefault() ?? new NullCustomer();
    transaction.Commit();
}

Wow, is this simple or what? But what about testing? Well, I tend to use NHibernate with an in-memory database so I don’t have to mock the data access layer… I don’t tend to use mocks, but that’s another story… The code is simple and clear.
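For reference, here’s a minimal sketch of that in-memory set-up, assuming SQLite as the in-memory engine and that the mappings live in the same assembly as Customer (namespaces: NHibernate.Cfg, NHibernate.Dialect, NHibernate.Driver, NHibernate.Tool.hbm2ddl):

var configuration = new Configuration()
    .DataBaseIntegration(db =>
    {
        db.Dialect<SQLiteDialect>();
        db.Driver<SQLite20Driver>();
        db.ConnectionString = "Data Source=:memory:;Version=3;New=True";
        // Keep the connection open so the in-memory db isn't thrown away.
        db.ConnectionReleaseMode = ConnectionReleaseMode.OnClose;
    })
    .AddAssembly(typeof(Customer).Assembly);

var sessionFactory = configuration.BuildSessionFactory();
var session = sessionFactory.OpenSession();

// Build the schema on the session's own connection.
new SchemaExport(configuration).Execute(false, true, false, session.Connection, null);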

Now if things advanced and I wanted to put it in a class, I’d put it behind something that only reads customer information:

public class ReadCustomers
{
    private readonly ISession _session;

    public ReadCustomers(ISession session)
    {
        _session = session;
    }

    public Customer GetBy(int customerId)
    {
        Customer customer;
        using (var transaction = _session.BeginTransaction())
        {
            customer = _session.QueryOver<Customer>()
                .Where(x => x.Id == customerId)
                .SingleOrDefault() ?? new NullCustomer();
            transaction.Commit();
        }
        return customer;
    }
}

Again, for storing information, if you really wanted to push it into a class then something like this:

public class StoreCustomers
{
    private readonly ISession _session;

    public StoreCustomers(ISession session)
    {
        _session = session;
    }

    public void Add(Customer customer)
    {
        using (var transaction = _session.BeginTransaction())
        {
            _session.SaveOrUpdate(customer);
            transaction.Commit();
        }
    }
}

The classes have a clear, single responsibility – you can even easily change where you read data from and where you write it to!  Plus the code is much simpler and easier to optimise.
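As a quick illustration of point 4 from earlier, reading from one data store and writing to another is now just a matter of handing each class a different session – a sketch, assuming readSessionFactory and writeSessionFactory are configured elsewhere and that Customer has a (hypothetical) Email property:

var reader = new ReadCustomers(readSessionFactory.OpenSession());
var writer = new StoreCustomers(writeSessionFactory.OpenSession());

// Read from one store, change something, write to the other.
var customer = reader.GetBy(customerId);
customer.Email = "new@example.com"; // hypothetical property
writer.Add(customer);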

Rhythm

No, this is not a post about music.  It’s about software development and, more importantly, delivery.

Before I progress: we are all agreed that we are mammals and therefore have an inbuilt rhythm – http://en.wikipedia.org/wiki/Circadian_rhythm – this is usually a 24–25 hour cycle in which we all sleep, wake, eat and go to work or do something else!

So let’s elaborate further and take this into the realm of software delivery.  It appears that we (mammals) need rhythm – “Although circadian rhythms are endogenous (“built-in”, self-sustained), they are adjusted (entrained) to the local environment by external cues called zeitgebers, commonly the most important of which is daylight.”

This is where software delivery comes in; I’d like to take Scrum as the example of setting up this rhythm.  Every two weeks we’ll deliver software, do some planning and hold a retrospective.  This sets the pace for delivery – team members know when things are happening and can therefore get into a rhythm.  The team understands when the start and the end are – they have a focus for a given time period, too.

Now I’m an advocate of delivering more often (every couple of days), but I believe you have to do these things well first – until you can walk, you can’t sprint (no pun intended).

The other rhythm is that of the daily stand up – this occurs at the same place & time each day.

Other Agile practices seem to promote the same ideas – such as the TDD cycle and pair programming (if you pair well – i.e. swapping the keyboard regularly!).

It appears that with no rhythm you’ll still deliver software, but maybe not as quickly as you could.

From “Golden Master” to Unit Tests

So I’m currently working on some code which has already been written but needs refactoring to add some additional business requirements – there are very few tests.  I started by wrapping the code that I want to work on with a “golden master”/characterisation test; below is roughly the code for this (the service call and serialisation helpers stand in for the real code):

[Test, Ignore("Run this once at the beginning")]
public void GenerateGoldenMaster()
{
    // Call the service and record its output as the master copy.
    // CallService(), SerializeToXml() and the paths are placeholders
    // for your own service code.
    var result = CallService();
    File.WriteAllText(MasterPath, SerializeToXml(result));
}

[Test]
public void CompareGoldenMaster()
{
    // Call the service again and save the output so the two files
    // can be compared side by side outside the test runner.
    var result = CallService();
    var currentXml = SerializeToXml(result);
    File.WriteAllText(CurrentPath, currentXml);

    Assert.AreEqual(File.ReadAllText(MasterPath), currentXml);
}
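If it helps, the serialisation side can be as simple as the BCL XmlSerializer – a sketch, assuming the service result type is XML-serialisable (System.IO and System.Xml.Serialization):

private static string SerializeToXml<T>(T result)
{
    var serializer = new XmlSerializer(typeof(T));
    using (var writer = new StringWriter())
    {
        serializer.Serialize(writer, result);
        return writer.ToString();
    }
}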

I save the file so that I can see the results side by side – sometimes this is more helpful than an assertion failure in the test runner.

If you’ve never heard of the “golden master”/characterisation test, it’s a testing technique to ensure that as you refactor some legacy code the output stays the same.  Anyhow, as I started to refactor the code to add the new functionality, I noticed a couple of bugs.  While you’ve got the “golden master” in place I would advise against fixing these bugs; leave them until a little later in the refactoring stage, as I prefer to highlight and fix bugs using tests.  That is where you should be heading anyway – I don’t tend to like running comparison tests in a CI environment.

A couple of things happened while running through the refactoring – the “golden master” didn’t have enough test combinations in it.  I therefore had to reconfigure the data in the database to provide these additional combinations.  I only did this once I had everything passing against the original “golden master” that I had created.  This gave me the confidence that the changes to the data would generate the necessary failing assertion.

Once I’d got to a point where I was happy that “most” of the combinations had been covered, I started to add the tests which will run in the CI environment.  Essentially, as this is a service with a database, I start by creating an in-memory database, adding some data and asserting that the service response contains those bits of data.  I also use random data (GUIDs), generated for each test, to add some variation to the tests…
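A sketch of the kind of CI test I mean – GetCustomerDetails and the Name properties are illustrative, and the in-memory session set-up is assumed to happen in the test fixture:

[Test]
public void ServiceResponseContainsTheCustomerWeInserted()
{
    // Random data per run, so the test never leans on magic values.
    var reference = Guid.NewGuid().ToString();
    var customer = new Customer { Name = reference };

    using (var transaction = _session.BeginTransaction())
    {
        _session.Save(customer);
        transaction.Commit();
    }

    var response = _service.GetCustomerDetails(customer.Id);

    Assert.That(response.Name, Is.EqualTo(reference));
}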

Now is when I want to address the bugs I found earlier, as I feel confident that the refactored code behaves the same as it did before, and I can add tests to expose and fix them.  Insert the data as required, then assert the correct output – this will obviously create a red test, which can now be fixed with confidence.

As the tests grow, the “golden master” can be deleted from the code base.

I’ve used the “golden master” a number of times on production code with no tests.  If you find yourself in this position, see if you can wrap the original code with a “golden master” and then start refactoring.  I think you’ll be surprised how effective they are…