Tuesday, 18 December 2012

Screencast: Testing and Refactoring Legacy Code

In this screencast I take a small piece of legacy Java code that contains the most common problems found in much larger legacy code bases. The objective is to first write tests to understand what the code does and then refactor it to make it better. The code contains Singletons, static calls and behaviour that does not belong there. It also has some design issues.

As a rule, I always recommend that we never "touch" the production code as we retrofit tests; that is, we should not change it by typing directly into the production file. However, as legacy code is normally not written with testing in mind, sometimes we do need to change the production code in order to write tests for it. I address this scenario, explaining how we can do that in a very safe way.
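
For instance, one common way to make such a change safely is "subclass and override": wrap the problematic static or Singleton call in a protected method (a seam) and override it from the test. Here is a minimal sketch with hypothetical names, not necessarily the exact code from the screencast:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SeamExampleTest {

    // Hypothetical legacy Singleton that we cannot easily replace
    static class ExchangeRates {
        private static final ExchangeRates INSTANCE = new ExchangeRates();
        static ExchangeRates getInstance() { return INSTANCE; }
        double currentRate() { throw new IllegalStateException("no network in tests"); }
    }

    // Legacy class under test: the static call is isolated behind a protected seam
    static class PriceService {
        double priceInLocalCurrency(double priceInDollars) {
            return priceInDollars * currentExchangeRate();
        }
        // The only change made to the production code: wrap the Singleton call
        protected double currentExchangeRate() {
            return ExchangeRates.getInstance().currentRate();
        }
    }

    @Test
    public void converts_price_using_the_current_exchange_rate() {
        // The test subclasses the legacy class and overrides the seam
        PriceService service = new PriceService() {
            @Override
            protected double currentExchangeRate() { return 2.0; }
        };

        assertEquals(20.0, service.priceInLocalCurrency(10.0), 0.001);
    }
}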

A common question when developers want to make legacy code better is "Where do we start?" I also address that, explaining how the approaches for testing and for refactoring legacy code are the opposite of each other.

Besides a few other things, I also cover the use of code coverage tools to help us test the code, how often we should be committing, how to fix a possible design problem in very small steps and how to stay in the green while refactoring our code.

Last but not least, I show how our tests and production code can easily be written in a way that captures the business requirements.

Although it is a bit long, I hope you all enjoy it.



There are two minor things I forgot to do while doing this exercise. Let me know if you spot them. :)

Monday, 10 December 2012

The Wrong Notion of Time

No one wakes up in the morning and says "Today I'm gonna screw up. Today I'm gonna piss my boss and all my team mates off by writing the worst code I could possibly write". Well, there are always exceptions, but normally no one does that. If that is the case, how come Agile projects are now failing? How come we still have the same old problems?

A Technical Debt Story

Some time ago I was on a project and one of the developers chose to work on a brand new feature. For the implementation of this new feature we did not need to touch any of our existing code, besides very few things just to wire the new feature into the application. After a day or so, I offered to pair with him. Naturally, since I had just joined him, I asked him to give me an overview of what the feature was about. He promptly explained it to me and I asked him to show me where he was so we could continue. After he finished showing the code to me, I made a few observations, since it was not clear to me that his code was reflecting what needed to be done according to his previous explanation. Basically, the language he used to explain the business feature to me was not in sync with the language he had used in the code, and I could also see some code that was not really necessary for the implementation of that feature. I also noticed that there were no tests. When I asked him about that, he said: "It is working now and I may need that extra code in the future. Let's add this refactoring you are proposing and the unit test to the technical debt backlog. I need to finish this feature."

How crazy is that? That was a brand new feature. We should be reducing technical debt as we went along instead of adding more of it. However, this developer somehow felt that it was OK to do that. At the end of the day we had a technical debt backlog, didn't we? That was supposedly an Agile team with experienced developers, but somehow, in their minds, it was OK to behave like this. Perhaps one day someone would look at the technical debt backlog and do something about it. Possibly. Maybe. Quite unlikely. Nah, it's never gonna happen.
 
But we want to do the right thing

But we all want to do the right thing. We do not do these things on purpose. However, over time, I have realised that we developers have a wrong notion of time. We think we need to rush all the time to deliver the tasks we committed to. Pressure will always be part of a software developer's life, and when there is pressure, we end up cutting corners. We do not do that because we are sloppy. We normally do that because we feel that we need to go faster. We feel that we are doing a good job, providing the business the features they want as fast as we can. The problem is that we do not always understand the implications of our own decisions.
 
A busy team with no spare time

I joined this team in a very large organisation. There was loads of pressure and the developers were working really hard. First, it took me days to get my machine set up; the project was a nightmare to configure in our IDEs. We were using Java and I was trying to get my Eclipse to import the project. The project had more than 70 Maven projects and modules, with loads of circular dependencies. After a few days, I had my local environment set up. The project was using a heavyweight JEE container and loads of queues, and had to integrate with many other internal systems.

When pairing with one of the guys (pairing was not common there, but I asked if I could pair with them), I noticed that he was playing messages into a queue and looking at logs. I asked him what he was doing and he said that it was not possible to run the system locally, so he had to add loads of logs to the code, package it, deploy the application in the UAT environment, play XML messages into one of the inbound queues, look at the logs in the console and try to figure out what the application was doing. Apparently he had made a change and the expected message was not arriving in the expected outbound queue. So, after almost twenty minutes of changing the XML message and replaying it into the inbound queue, he had an idea of what the problem could be. He went back to his local machine, changed a few lines of code, added more logs, changed a few existing ones to print out more information and started building the application again. At this point I asked if he was not going to write tests for the change, and whether he had tests for the rest of the application. He told me that the task he was working on was important, so he had to finish it quickly and did not have time to write tests. Then he deployed the new version of the application in UAT again (note that no one else could use the UAT environment while he was doing his tests), played an XML message into the inbound queue and started looking at the logs again. That went on for another two days until the problem was fixed. It turned out that there were some minor logical bugs in the code, things that a unit test would have caught immediately.
 
We don't have time but apparently someone else does

Imagine the situation above. Imagine an application with a few hundred thousand lines. Now imagine a team of seven developers. Now imagine ten of those teams in five different countries working on the same application. Yes, that was the case. There were some system tests (black box tests), but they used to take four hours to run and quite often they were broken, so no one really paid attention to them. Can you imagine the amount of time wasted per developer, per task or story? Let's not forget the QA team, because apparently testers have all the time in the world. They had to manually test the entire system for every single change. Every new feature added to the system was, of course, making the system bigger, causing the system tests to be even slower and QA cycles even longer. Debugging time was also getting longer, since each developer was adding more code that all the others would need to debug to understand how things worked. Now think about all the time wasted here, every day, every week, every month. All because we developers do not have time.

Dedicated Quality Assurance teams are an anti-pattern. Testers should find nothing: zero, nada. Every time a tester finds a bug, we developers should feel bad about it. Every bug found in production is an indication of something that we have not done. Some bugs are related to bad requirements, but even then we should have done something about that; maybe we should have helped our BAs or product owners to clarify them. By no means am I saying that we should not have testers. They can be extremely valuable, exploring our applications in unexpected ways that only a human could. But they should not waste their time executing test plans that could be automated by the development team.

Business people want the features as soon as possible and we feel that it is our obligation to satisfy them - and it is. However, business people look at the system as a whole, and so should we. They look at everything, not just the story we are working on. It is our job to remove (automate) all the repetitive work. I still remember, back in the '90s, when debugging skills were a big criterion in technical interviews. Those days are gone. Although it is important to have debugging skills, we should be unpleasantly surprised whenever we need to resort to them, and when it happens, we need to address it immediately, writing tests and refactoring our code so that we never need to do it again.

Using time wisely

Our clients and/or employers are interested in software that satisfies their needs, works as specified and is easy to change whenever they change their minds. It is our job to provide that to them. The way we go about satisfying their expectations is normally up to us. Although they may mention things like automated testing and Agile methodologies, what they really want is good value for the money they are investing in a software project. We need to use our (their) time wisely, automating whatever we can - be it tests or deployment procedures - instead of thinking that we may not have time to do it. We can always quantify how much time we are spending on repetitive tasks, and even go to the extent of showing them how much time is spent on those activities over a period of time. Before implementing any new feature or task, we should spend some time preparing our system to accept the changes in a nice way, so we can just "slide that in" with no friction, and making sure that whatever we write can be easily tested and deployed. When estimating our work, we should always take this into account as part of the time it will take us, instead of having the false impression that we will go faster by treating it as a separate task since, chances are, it may never get done and the whole team will be slowed down because of it. The less time we waste manually testing (or waiting for a long automated test suite to run), debugging, dealing with a huge amount of technical debt, trying to get our IDE to work nicely with our fucked up project structure, or fighting to deploy the application, the more time we have to look after the quality of our application and make our clients happy.

Note: After a lot of hard work, commitment, support from management and a significant amount of investment (time and money), the teams I mentioned above managed to turn things around and are now among the best teams in the organisation. Some of the teams managed to replace (re-write) an unreliable in-house test suite that used to take over three hours to run with a far more reliable one that takes around 20 minutes. One of the teams is very close to achieving a "one-button" deployment, and has an extensive test suite with tests ranging from unit to system (black box) that runs in minutes, with code coverage close to 100%.

Sunday, 11 November 2012

Testing legacy code with Golden Master

As a warm-up for SCNA, the Chicago Software Craftsmanship community ran a hands-on coding session where developers, working in pairs, would test and refactor some legacy code. For that they used the Gilded Rose kata. You can find links to versions in Java, C# and Ruby here, and for Clojure here.

We ran the same session for the London Software Craftsmanship Community (LSCC) earlier this year, and back then I decided to write my tests BDD-style (I used JBehave for that). You can check my solution here.

This time, instead of writing unit tests or BDD / Spec by Example tests to cover every branch of that horrible code, I decided to solve it using a test style called Golden Master.

The Golden Master approach

Before making any change to the production code, do the following:
  1. Create X random inputs, always using the same random seed, so that you can generate exactly the same set over and over again. You will probably want a few thousand random inputs.
  2. Bombard the class or system under test with these random inputs.
  3. Capture the output for each individual random input.
When you run this for the first time, record the outputs in a file (or database, etc.). From then on, you can start changing your code, run the test and compare the execution output with the original output data you recorded. If they match, keep refactoring; otherwise, revert your change and you will be back in the green.

Approval Tests

An easy way to do Golden Master testing in Java (it is also available for C# and Ruby) is to use Approval Tests. It does all the file handling for you, storing and comparing the outputs. Here is an example:
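
My test looked something like the sketch below. The item names, value ranges and seed are illustrative; the Item class comes from the kata, and I'm assuming a GildedRose that receives the list of items and exposes an updateQuality() method:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import org.approvaltests.Approvals;
import org.junit.Test;

public class GildedRoseTest {

    private static final String[] ITEM_NAMES = {
            "Aged Brie", "Elixir of the Mongoose", "Conjured Mana Cake",
            "+5 Dexterity Vest", "Sulfuras, Hand of Ragnaros",
            "Backstage passes to a TAFKAL80ETC concert" };

    @Test
    public void should_generate_update_quality_output() throws Exception {
        Random random = new Random(12345); // fixed seed: same "random" inputs on every run
        List<Item> items = new ArrayList<Item>();
        for (int i = 0; i < 2000; i++) {
            items.add(new Item(ITEM_NAMES[random.nextInt(ITEM_NAMES.length)],
                               random.nextInt(100) - 50,    // sellIn
                               random.nextInt(100) - 50));  // quality
        }

        new GildedRose(items).updateQuality();

        Approvals.verify(getStringRepresentationFor(items));
    }

    private String getStringRepresentationFor(List<Item> items) {
        StringBuilder output = new StringBuilder();
        for (Item item : items) {
            output.append(item).append("\n");
        }
        return output.toString();
    }
}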


For those not familiar with the kata: after receiving a list of items, the GildedRose class iterates through them and, according to many different rules, changes their "sellIn" and "quality" attributes.

I've made a small change to the Item class, adding an automatically generated toString() method to it:
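
Something along these lines (the exact shape of the generated method may vary, but the format matches the output shown further down):

@Override
public String toString() {
    return "Item [name=" + name + ", sellIn=" + sellIn + ", quality=" + quality + "]";
}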
The first time the test method is executed, the line:

Approvals.verify(getStringRepresentationFor(items));

will generate a text file, in the same folder where the test class is, called GildedRoseTest.should_generate_update_quality_output.received.txt. That means, <name of the test class>.<name of the test method>.received.txt.

ApprovalTests will then display the following message in the console:

To approve run : mv /Users/sandromancuso/development/projects/java/gildedrose_goldemaster/./src/test/java/org/craftedsw/gildedrose/GildedRoseTest.should_generate_update_quality_output.received.txt /Users/sandromancuso/development/projects/java/gildedrose_goldemaster/./src/test/java/org/craftedsw/gildedrose/GildedRoseTest.should_generate_update_quality_output.approved.txt

Basically, after inspecting the file, if we are happy with it, we just need to rename it, changing .received to .approved, to approve the output. Once this is done, every time we run the test, ApprovalTests will compare the new output with the approved file.

Here is an example of what the file looks like:

Item [name=Aged Brie, sellIn=-23, quality=-44]
Item [name=Elixir of the Mongoose, sellIn=-9, quality=45]
Item [name=Conjured Mana Cake, sellIn=-28, quality=1]
Item [name=Aged Brie, sellIn=10, quality=-2]
Item [name=+5 Dexterity Vest, sellIn=31, quality=5]

Now you are ready to rip the horrible GildedRose code apart. Just make sure you run the tests every time you make a change. :)
Infinitest

If you are using Eclipse or IntelliJ, you can also use Infinitest. It automatically runs your tests every time you save a production or test class, and it is smart enough to run just the relevant tests, not the entire test suite. In Eclipse, it displays a bar at the bottom-left corner that can be red, green or yellow (in case there are compilation errors and the tests can't be run).

With this approach, refactoring legacy code becomes a piece of cake. You make a change, save it, and look at the bar at the bottom of the screen. If it is green, keep refactoring; if it is red, just hit CTRL-Z and you are back in the green. Wonderful. :)

Thanks

Thanks to Robert Taylor and Balint Pato for showing me this approach for the first time in one of the LSCC meetings early this year. It was fun to finally do it myself.

Wednesday, 15 August 2012

The best approach to software development



Today, talking about doing a big design up-front (BDUF) sounds a bit ridiculous, right? Who would do that? That's not craftsmanship, is it?

However, in the past, that was considered the norm. Writing requirements documents and drawing architectural and very low-level detailed diagrams was the right thing to do. Well, that's what very smart guys proposed at the 1968 NATO Software Engineering Conference, and it worked for NASA and the US Department of Defense. I'm sure they know what they are doing, and if it works for them, it will definitely work for our small CRUD application or one-page website. And then it happened. It became a religion, and the majority of projects in the following decades were developed like that.

But no, not nowadays. We've learned the lesson, right? We wouldn't make that mistake again.

After watching a few talks at conferences and on InfoQ, we understood that this is not a good thing. We've also read in some books that we should do TDD. The design should emerge from tests.

And of course, we should adopt an Agile methodology as well. Let's adopt Scrum to start with. There are many books about it, certifications and even entire conferences dedicated to it. Of course we should adopt TDD and Scrum, because that's the best approach to manage and develop software.

Oh, but what about all this lean stuff? Eliminate waste, limit work in progress, systems thinking, theory of constraints, Kanban. I heard it worked really well for Toyota, so we should definitely do that as well. Why? Jesus, you just don't get it. Of course that's the best approach to manage and develop software.

God, how could I forget? I was also told that I really should speak to my customer. We should really discuss the requirements so we can have a better understanding of what we should build. Are you using BDD? No!!! Wow! How can you develop software without that? Why should you use it? The answer is simple: that's the best approach to manage and develop software. Don't tell me you thought that BDD was a testing tool. You are so yesterday. That was version one. BDD version three is all about communication. It's about software development done right. Yes, I know it sounds alien, but apparently we are supposed to speak to people. God, how on Earth did we not think about that before? How did we develop software all these years? If you don't use BDD, you are just doing it wrong. Why? Because that's the best approach to manage and develop software. Duh!

Outside-In TDD, Inside-Out TDD, ATDD, Classic TDD, London School TDD? Really? Are you still discussing that? Don't tell me that you are still writing tests. What? Why are you wasting time writing unit tests? It doesn't make sense any more. You should spike and stabilize. What if you don't know what you are doing or where you are going? What if you just want to explore your options? Why are you writing tests? Oh, I get it. You were told that this was the best approach to manage and develop software. Nah, forget that. Unit tests are for losers. We write small services and just monitor them. If they are wrong, we just throw them away and re-write them. And THAT is the best way to manage and develop software.

Architecture and design patterns? What??? Who are you? My grandfather? Scrap that. That's for programmers from the '80s and '90s. In the real world we have our design emerging from tests. No, stupid. Not using normal TDD. God, which planet do you live on? We use TDD As If You Meant It. We use this technique and PRESTO, the design emerges and evolves nicely throughout the life span of the project, regardless of the number of developers, teams and their design skills. Everyone can see code smells, right?

And what about DDD? Domain-Driven what? Nah, never heard of it. Hmm... hold on. I think I heard something about it many years ago, but it probably was not important enough, otherwise we would have more people today saying that DDD is the best approach to manage and develop software.

Noooo. No no no no. No, I didn't hear that. Get out of here. Did you just say that you are still using an object-oriented language? STATICALLY-TYPED???? No, sorry. This conversation is a waste of my time. It's because of people like you that our industry is shit. The only way for this conversation to get even worse is if you tell me that you still use a relational database. Haven't you heard that functional programming is a MUST DO, and that just retards and programmers from the '80s and '90s use relational databases? Functional languages, NoSQL databases... Repeat with me: functional languages, NoSQL databases. What a douche bag.

Ah, trying to be a smart ass? Yeah, functional programming appeared long ago, in the '50s, and developers in the '60s and '70s preferred to use OO instead. But you don't know why, do you? DO YOU? They used OO because they were a bunch of hippies that didn't take anything seriously. They were the sort of people that went to Woodstock, got high on LSD and had brain damage after that. No, don't take this whole OO stuff seriously. We are finally getting back to reality now. Functional programming and NoSQL databases are definitely the best approach for software development.

Dogmatism, religion, context, curiosity, inquiring mind and pragmatism

Before I conclude, I just want to clarify that by no means am I criticizing any person or group of people behind the methodologies, technologies or techniques mentioned above. These people have done an amazing job thinking about, practicing and sharing their own ideas of how software development could be done better, and for that we should all be grateful. Our industry is definitely better with their contribution.

My main criticism here is about how the vast majority of developers react to all these things. Just because someone, somewhere wrote a book, recorded a video or gave a talk at a conference about something, that does not make the thing right in all contexts. Quite often, we fail to question things just because the people promoting them are relatively well known. We fail to understand the context that a methodology, technology or technique is best suited for. We fail, quite often, to use our own judgement for fear of being ridiculed by our colleagues. We should stop being dogmatic and religious about these things. That just leads to stupid decisions. Doing things for the sake of doing them, or because someone else said so, is just plain wrong and stupid.

Being a good developer means being inquisitive, curious and pragmatic. Never religious. Never dogmatic. Curious means that we should be eager to learn about all the things mentioned above and many, many more. Inquisitive means that we should investigate and question all the things we learn. Pragmatic means that we should choose the right tools - be they technologies, methodologies or techniques - for the job.

Context matters. 

Whenever you see people saying that we should or shouldn't do something, ask them why. Ask them about the context in which they tried to do (or not to do) what they are proposing.

Software development is not the same thing as producing cars. Once a car is ready, you don't go back to the manufacturer and ask them to add another wheel or put the engine somewhere else. Developing software for expensive hardware is not the same thing as developing a simple web application with two pages. Hardware has a specification that you need to code against, and quite often you don't even have access to the hardware because it is just not built yet. The cost of a bug in production is not the same for all applications: the cost of a few bugs in a social networking or cooking website can be very different from the cost of a few bugs in a trading or financial system processing millions of transactions per day. Working with a small team, everyone co-located and with easy access to customers, is very different from working on a project with 10+ teams spread across 5 countries and different timezones.

Read and learn as much as you can. However, don't assume that everything you read or watch applies in every context. Make informed decisions and trust your instincts.

The bad news is that there is no best approach to software development. The most we can say is that certain technologies, methodologies and techniques are more suitable for a specific context.

In case you are really trying to find the best approach to software development in general, I hope you don't get too frustrated, and good luck with that. If you ever find it, please let us know. It's always interesting to hear about different approaches. Maybe unicorns really exist. Who knows?



This post was inspired by conversations during many London Software Craftsmanship Community (LSCC) Round-table meetings, conversations during SoCraTes 2012 and also conversations with colleagues at work. Thanks everyone for that.

Saturday, 9 June 2012

Test-driving Builders with Mockito and Hamcrest


A lot of people have asked me in the past if I test getters and setters (properties, attributes, etc.). They also asked me if I test my builders. The answer, in my case, is: it depends.

When working with legacy code, I wouldn't bother to test pure data structures, that is, objects with just getters and setters, maps, lists, etc. One of the reasons is that I never mock them; I use them as they are when testing the classes that use them.
As for builders, when they are used just by test classes, I also don't unit test them, since they are used as "helpers" in many other tests. If they have a bug, the tests will fail.
In summary, if these data structures and builders already exist, I wouldn't bother retrofitting tests for them.

But now let’s talk about doing TDD and assume you need a new object with getters and setters. In this case, yes, I would write tests for the getters and setters since I need to justify their existence writing my tests first.

In order to have a rich domain model, I normally tend to keep business logic associated with the data, making the domain model richer. Let's look at the following example.

In real life, I would write one test at a time, make it pass and refactor. For this post, I'll just give you the full classes for clarity's sake. First, let's write the tests:
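
A sketch of those tests; the name of the collaborator's isReportable method is my assumption:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class TradeTest {

    private static final String INBOUND_MESSAGE = "<trade>...</trade>";

    private final ReportabilityDecision reportabilityDecision = mock(ReportabilityDecision.class);

    @Test
    public void should_contain_the_inbound_message_it_was_given() {
        Trade trade = new Trade();
        trade.setInboundMessage(INBOUND_MESSAGE);

        assertThat(trade.getInboundMessage(), is(INBOUND_MESSAGE));
    }

    @Test
    public void should_use_the_reportability_decision_to_check_if_it_is_reportable() {
        when(reportabilityDecision.isReportable(INBOUND_MESSAGE)).thenReturn(true);
        Trade trade = new Trade();
        trade.setInboundMessage(INBOUND_MESSAGE);
        trade.setReportabilityDecision(reportabilityDecision);

        boolean reportable = trade.isReportable();

        verify(reportabilityDecision).isReportable(INBOUND_MESSAGE);
        assertThat(reportable, is(true));
    }
}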


Now the implementation:
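
And a matching sketch of the Trade class:

public class Trade {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public String getInboundMessage() {
        return this.inboundMessage;
    }

    public void setInboundMessage(String inboundMessage) {
        this.inboundMessage = inboundMessage;
    }

    public boolean isReportable() {
        // Business logic lives with the data: delegate to the collaborator
        return reportabilityDecision.isReportable(inboundMessage);
    }

    public void setReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
    }
}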


This case is interesting, since the Trade object has one property called inboundMessage, with the respective getter and setter, and also uses a collaborator (reportabilityDecision, injected via setter) in its isReportable business method.

A common approach that I’ve seen many times to “test” the setReportabilityDecision method is to introduce a getReportabilityDecision method returning the reportabilityDecision (collaborator) object.

This is definitely the wrong approach. Our objective should be to test how the collaborator is used, that is, whether it is invoked with the right parameters and whether whatever it returns (if it returns anything) is used. Introducing a getter in this case does not make sense, since it does not guarantee that the object, after having the collaborator injected via setter, interacts with the collaborator as we intended.

As an aside: when we write tests that are about how collaborators are going to be used, defining their interfaces, we are using TDD as a design tool and not simply as a testing tool. I'll cover that in a future blog post.

OK, now imagine that this Trade object can be created in different ways, that is, with different reportability decisions. We also would like to make our code more readable, so we decide to write a builder for the Trade object. Let's also assume, in this case, that we want the builder to be used in both the production and the test code.
In this case, we want to test-drive our builder.

Here is an example of what I normally find when developers are test-driving a builder implementation.
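
It tends to look something like this sketch, where aTrade() is a static factory method on the builder:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class TradeBuilderTest {

    @Test
    public void should_create_a_trade_with_an_inbound_message() {
        Trade trade = TradeBuilder.aTrade()
                .withInboundMessage("some message")
                .build();

        assertThat(trade.getInboundMessage(), is("some message"));
    }

    @Test
    public void should_create_a_reportable_trade() {
        ReportabilityDecision decision = mock(ReportabilityDecision.class);
        when(decision.isReportable(anyString())).thenReturn(true);

        Trade trade = TradeBuilder.aTrade()
                .withInboundMessage("some message")
                .withReportabilityDecision(decision)
                .build();

        assertThat(trade.isReportable(), is(true));
    }
}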


Now let’s have a look at these tests.  
The good news is that the tests were written the way developers want to read them. That also means that they were "designing" the TradeBuilder's public interface (its public methods).
The bad news is how they are testing it.


If you look closely, the tests for the builder are almost identical to the tests in the TradeTest class.

You may say that this is OK, since the builder is creating the object, so the tests should be similar. The only difference is that in TradeTest we instantiate the object by hand, whereas in TradeBuilderTest we use the builder to instantiate it, but the assertions should be the same, right?

For me, firstly, we have duplication. Secondly, the TradeBuilderTest doesn't show its real intent.

After many refactorings and exploring different ideas, while pair-programming with one of the guys in my team, we came up with this approach:
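
The gist of it was along these lines: the builder exposes a creation seam (here a protected createTrade() method, my naming) that the test overrides, so it can verify the side effects of build() on a mocked Trade:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class TradeBuilderTest {

    private static final String TRADE_MESSAGE = "some message";

    private final Trade trade = mock(Trade.class);
    private final ReportabilityDecision reportabilityDecision = mock(ReportabilityDecision.class);

    @Test
    public void should_create_a_trade_and_set_its_attributes() {
        // Override the creation seam so we can observe what build() does to the Trade
        TradeBuilder builder = new TradeBuilder() {
            @Override
            protected Trade createTrade() {
                return trade;
            }
        };

        builder.withInboundMessage(TRADE_MESSAGE)
               .withReportabilityDecision(reportabilityDecision)
               .build();

        verify(trade).setInboundMessage(TRADE_MESSAGE);
        verify(trade).setReportabilityDecision(reportabilityDecision);
    }
}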

So now the TradeBuilderTest expresses what is expected from the TradeBuilder, that is, the side effect when the build method is called: we want it to create a Trade and set its attributes. There is no duplication with TradeTest. It is left to TradeTest to guarantee the correct behaviour of the Trade object.

For completion's sake, here is the final TradeBuilder class:
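
A sketch of its shape, consistent with the tests above (the createTrade() seam is my assumption):

public class TradeBuilder {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public static TradeBuilder aTrade() {
        return new TradeBuilder();
    }

    public TradeBuilder withInboundMessage(String inboundMessage) {
        this.inboundMessage = inboundMessage;
        return this;
    }

    public TradeBuilder withReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
        return this;
    }

    public Trade build() {
        Trade trade = createTrade();
        trade.setInboundMessage(inboundMessage);
        trade.setReportabilityDecision(reportabilityDecision);
        return trade;
    }

    // Creation seam: tests override this to observe build()'s side effects
    protected Trade createTrade() {
        return new Trade();
    }
}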



The combination of Mockito and Hamcrest is extremely powerful, allowing us to write better and more readable tests.

Friday, 18 May 2012

Testing multiple properties with single assertion

Every time I tried to test an object's properties, I was neither satisfied with writing very verbose tests nor with using the out-of-the-box Hamcrest matchers. Although the matchers were a big help, I never managed to make them read the way I wanted.

Another thing that was very important to me: I wanted a single assertion per test method and a very descriptive failure message when the test did not pass.

I decided to write my own matcher, and hopefully it will be useful to other people. So, here is what I've done:

BeanMatcher

Hamcrest matcher to match multiple attributes of an object within a single assertion.

How to use it
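
A sketch of the usage, where Person is a hypothetical bean and has() / property() come from BeanMatcher and BeanPropertyMatcher respectively:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;
// plus static imports of has() from BeanMatcher and property() from BeanPropertyMatcher

import org.junit.Test;

public class PersonTest {

    @Test
    public void should_describe_a_person() {
        Person person = new Person("Sandro", 35);

        assertThat(person, has(
                property("name", equalTo("Sandro")),
                property("age", greaterThan(30))));
    }
}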


NOTE: Make sure you are using org.hamcrest.MatcherAssert.assertThat instead of the JUnit one.

If you run this test, you will get a message like

Now, change the age check to
   property("age", greaterThan(60)) 
And you should get:


Testing object graphs

You can also do this:
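
That is, nest one has() inside another to match properties of nested objects. A sketch with a hypothetical Person/Address graph:

@Test
public void should_match_properties_of_nested_objects() {
    Person person = new Person("Sandro", 35, new Address("London", "N1 1AA"));

    assertThat(person, has(
            property("name", equalTo("Sandro")),
            property("address", has(
                    property("city", equalTo("London")),
                    property("postcode", equalTo("N1 1AA"))))));
}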

I use a combination of two matchers to do that:
- BeanMatcher: Provides the "has" method, responsible for grouping all the property matchers.
- BeanPropertyMatcher: Provides the "property" method.

I expect to make more changes to them, so for the most up-to-date version please check BeanMatcher on my GitHub account.

Enjoy!!!

Monday, 26 March 2012

Extract, Inject, Kill: Breaking hierarchies (part 3)

In part one I explained the main idea behind this approach, and in part two I started this example. Please read parts one and two before reading this post.

Although the main ideas behind Extract, Inject, Kill have already been expressed, it's good to finish the exercise, just for completion's sake. Here is where we stopped:

Let's have a look at the VoucherPricingService, which is now the only concrete class at the bottom of our hierarchy.


Note that it uses the VoucherService class to calculate the voucher value.

Before anything else, let's write some tests for VoucherPricingService.java.




One thing to notice is that the User parameter is not used for anything, so let's remove it.

Now it is time to use Extract, Inject, Kill on the VoucherPricingService. Let's extract the content of the VoucherPricingService.applyAdditionalDiscounts(double, String) method into a class called VoucherDiscountCalculation, and let's call the method calculateVoucherDiscount(). Of course, we do that writing our tests first; they need to test exactly the same things that are tested on VoucherPricingService.applyAdditionalDiscounts(double, String). We also take the opportunity to pass the VoucherService object into the constructor of VoucherDiscountCalculation.
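
A sketch of the extracted class; the VoucherService method name and the exact discount rule are my assumptions:

public class VoucherDiscountCalculation {

    private final VoucherService voucherService;

    public VoucherDiscountCalculation(VoucherService voucherService) {
        this.voucherService = voucherService;
    }

    public double calculateVoucherDiscount(double total, String voucher) {
        double voucherValue = voucherService.getVoucherValue(voucher);
        double totalAfterDiscount = total - voucherValue;
        // The total never goes below zero, no matter how big the voucher is
        return totalAfterDiscount > 0 ? totalAfterDiscount : 0;
    }
}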

If you noticed, when doing the extraction we took the opportunity to give proper names to our new class and method, and also to pass its essential dependency into the constructor instead of using method injection.
 
Let's now change the code in the VoucherPricingService to use the new VoucherDiscountCalculation and see if all the tests still pass.


Cool. All the tests still pass, meaning that we have the same behaviour, now in the VoucherDiscountCalculation class, and we are ready to move to the Inject stage.

Let's now inject VoucherDiscountCalculation into PricingService, the class at the top of the hierarchy. As always, let's first add a test for this new collaboration.

And here is the changed PricingService.

Now it is time to kill the VoucherPricingService class and the PricingService.applyAdditionalDiscounts(double total, String voucher) template method, since they are not being used anymore. We can also kill the VoucherPricingServiceTest class and fix the PricingServiceTest, removing the applyAdditionalDiscounts() method from the testable class.

So now, of course, we don't have a concrete class at the bottom of our hierarchy anymore, since the VoucherPricingService was the only one. We can now safely promote UserDiscountPricingService to concrete.

This is how our object graph looks now:


Our hierarchy is another level shorter. The only thing we need to do now is apply Extract, Inject, Kill once again: extract the logic inside UserDiscountPricingService into another class (e.g. UserDiscountCalculation), inject UserDiscountCalculation into PricingService, and finally kill UserDiscountPricingService and the calculateDiscount(User user) template method.

Since the approach was described before, there is no need to go step by step anymore. Let's have a look at the final result.

Here is the diagram representing where we started:

After the last Extract, Inject, Kill refactoring, this is what we've got:


The cool thing about the final model pictured above is that we don't have any abstract classes anymore. All classes and methods are concrete, and every single class is independently testable.


This is how the final PricingService class looks:
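
Reconstructed as a sketch from the pieces above (the body of calculatePrice and the basket/product accessors are my assumptions; see the repository below for the real code):

public class PricingService {

    private final PriceCalculation priceCalculation;
    private final UserDiscountCalculation userDiscountCalculation;
    private final VoucherDiscountCalculation voucherDiscountCalculation;

    public PricingService(PriceCalculation priceCalculation,
                          UserDiscountCalculation userDiscountCalculation,
                          VoucherDiscountCalculation voucherDiscountCalculation) {
        this.priceCalculation = priceCalculation;
        this.userDiscountCalculation = userDiscountCalculation;
        this.voucherDiscountCalculation = voucherDiscountCalculation;
    }

    public double calculatePrice(ShoppingBasket shoppingBasket, User user, String voucher) {
        double total = 0;
        for (Product product : shoppingBasket.getProducts()) {
            total += priceCalculation.calculateProductPrice(product);
        }
        // Each step is now a plain collaborator, independently testable
        double totalAfterUserDiscount = total * (100 - userDiscountCalculation.calculateDiscount(user)) / 100;
        return voucherDiscountCalculation.calculateVoucherDiscount(totalAfterUserDiscount, voucher);
    }
}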


For a full implementation of the final code, please look at https://github.com/sandromancuso/breaking-hierarchies

Note: For this three-part blog post I used three different approaches to drawing UML diagrams: by hand, using ArgoUML, and Astah Community Edition. I'm very happy with the latter.

Tuesday, 6 March 2012

Extract, Inject, Kill: Breaking hierarchies (part 2)

In part 1 of this post I explained the problems of using the template method pattern in deep class hierarchies and how I decided to solve them. Please read it before reading this post.

Here is a more concrete example of how to break deep hierarchies using the Extract, Inject, Kill approach. Imagine the following hierarchy.

Let's start with the StandardPricingService. First, let's write some tests:
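
A sketch of what they looked like; the product builder method names are assumptions (see the note at the end of this post for where aProduct() comes from):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

import org.junit.Test;

public class StandardPricingServiceTest {

    // Trick: extend the class under test just to gain access to the protected method
    private static class TestableStandardPricingService extends StandardPricingService {
        @Override
        public double calculateProductPrice(Product product) {
            return super.calculateProductPrice(product);
        }
    }

    private final TestableStandardPricingService standardPricingService =
            new TestableStandardPricingService();

    @Test
    public void should_charge_the_full_product_price() {
        Product product = ProductBuilder.aProduct().costing(100.0).build();

        assertThat(standardPricingService.calculateProductPrice(product), is(100.0));
    }
}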


Note that I used a small trick here, extending the StandardPricingService class inside the test class so I could have access to the protected method. We should not use this trick in normal circumstances. Remember that if you feel the need to test protected or private methods, it is because your design is not quite right; that is, there is a domain concept missing in your design. In other words, there is a class crying to come out of the class you are trying to test.

Now, let's do step one of our Extract, Inject, Kill strategy: extract the content of the calculateProductPrice() method into another class called StandardPriceCalculation. This can be done automatically using IntelliJ or Eclipse. After a few minor adjustments, this is what we've got.
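
Something like this sketch, assuming the standard calculation simply charges the product's full price:

public class StandardPriceCalculation {

    public double calculateProductPrice(Product product) {
        // Assumed rule: standard pricing applies no discount
        return product.getPrice();
    }
}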


And the StandardPricingService now looks like this:
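
A sketch: the service keeps its place in the hierarchy but just delegates:

public class StandardPricingService extends VoucherPricingService {

    private final StandardPriceCalculation standardPriceCalculation = new StandardPriceCalculation();

    @Override
    protected double calculateProductPrice(Product product) {
        // No behaviour of its own anymore: delegate to the extracted class
        return standardPriceCalculation.calculateProductPrice(product);
    }
}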


All your tests should still pass.

Since we created a new class, let's add some tests to it. They should be the same tests we had for the StandardPricingService.



Great, one sibling done. Now let's do the same thing for the BoxingDayPricingService.



Now let's extract the behaviour into another class. Let's call it BoxingDayPriceCalculation.
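
A sketch, assuming Boxing Day applies a flat discount to the product price:

public class BoxingDayPriceCalculation {

    private static final double BOXING_DAY_PRICE_FACTOR = 0.60; // assumed: 40% off on Boxing Day

    public double calculateProductPrice(Product product) {
        return product.getPrice() * BOXING_DAY_PRICE_FACTOR;
    }
}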



The new BoxingDayPricingService is now:



We now need to add the tests for the new class.


Now both StandardPricingService and BoxingDayPricingService have no implementation of their own. The only thing they do is delegate the price calculation to StandardPriceCalculation and BoxingDayPriceCalculation respectively. Both price calculation classes have the same public method, so let's extract a PriceCalculation interface and make them both implement it.
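
The extraction, sketched (one file per class in practice):

// PriceCalculation.java
public interface PriceCalculation {

    double calculateProductPrice(Product product);
}

// StandardPriceCalculation.java and BoxingDayPriceCalculation.java now declare:
// public class StandardPriceCalculation implements PriceCalculation { ... }
// public class BoxingDayPriceCalculation implements PriceCalculation { ... }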




Awesome. We are now ready for the Inject part of the Extract, Inject, Kill approach. We just need to inject the desired behaviour into the parent class (the class that defines the template method). calculateProductPrice() is defined in PricingService, the class at the very top of the hierarchy, so that's where we want to inject the PriceCalculation implementation. Here is the new version:
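
A sketch of the new version; the body of calculatePrice is my reconstruction, but the injected PriceCalculation and the remaining template methods are the point:

public abstract class PricingService {

    private final PriceCalculation priceCalculation;

    public PricingService(PriceCalculation priceCalculation) {
        this.priceCalculation = priceCalculation;
    }

    public double calculatePrice(ShoppingBasket shoppingBasket, User user, String voucher) {
        double total = 0;
        for (Product product : shoppingBasket.getProducts()) {
            total += priceCalculation.calculateProductPrice(product); // injected, no longer a template method
        }
        double totalAfterUserDiscount = total * (100 - calculateDiscount(user)) / 100;
        return applyAdditionalDiscounts(totalAfterUserDiscount, voucher);
    }

    protected abstract double calculateDiscount(User user);           // still a template method

    protected abstract double applyAdditionalDiscounts(double total, String voucher); // still a template method
}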



Note that the template method calculateProductPrice() was removed from the PricingService, since its behaviour is now injected instead of implemented by sub-classes.

While we are here, let's write some tests for this last change, checking that the PricingService invokes the PriceCalculation correctly.

Great. Now we are ready for the last bit of the Extract, Inject, Kill refactoring: let's kill both the StandardPricingService and BoxingDayPricingService child classes.

The VoucherPricingService, now the deepest class in the hierarchy, can be promoted to a concrete class. Let's have another look at the hierarchy:
And that's it. Now we just repeat the same steps for VoucherPricingService and UserDiscountPricingService: extract the implementation of their template methods into classes, inject those into PricingService, and kill the classes.

In doing so, every time you extract a class, try to give it a proper name instead of calling it Service. Suggestions could be VoucherDiscountCalculation and PrimeUserDiscountCalculation.

There were a few unsafe steps in the refactoring described above, and I also struggled a little bit to describe exactly how I did it, since I was playing quite a lot with the code. Suggestions and ideas are very welcome.

For the final solution, please check the last part of this blog post.

NOTE
If you are not used to using builders in your tests and are asking yourself where the hell aProduct() and aShoppingBasket() come from, check the code here:

ProductBuilder.java
ShoppingBasketBuilder.java

For more information about the original problem that triggered all this, please read part 1 of this blog post.
In part 3 I finish the exercise, breaking the entire hierarchy. Please have a look at it for the final solution.

Extract, Inject, Kill: Breaking hierarchies (part 1)


Years ago, before I caught the TDD bug, I used to love the template method pattern. I really thought that it was a great way to have an algorithm with polymorphic parts. The use of inheritance was something that I had no issues with. But yes, that was many years ago.

Over the years, I've been hurt by this "design style". It's the sort of design created by developers who do not do TDD.

The situation

Very recently I was working on one part of our legacy code and found a six-level-deep class hierarchy. There were quite a few template methods defined in more than one of the classes. All classes were abstract, with the exception of the bottom ones, which implemented just one or two of the template methods. There was a single public method in the entire hierarchy, right at the very top.

We had to make a change in one of the classes at the bottom: one of the (protected) template method implementations had to be changed.

The problem

How do you test it? It goes without saying that there were zero tests for the hierarchy.

We know that we should never test private or protected methods; a class should "always" be tested from its public interface. We should always write tests that express and test "what" the method does and not "how". That's all well and good. However, in this case the change needed to be made in a protected method (a template method implementation) that is part of the implementation of a public method defined in a class six levels up the hierarchy. To test this method by invoking the public method of its great-great-great-great-grandparent, we would need to understand the entire hierarchy, mock all the dependencies, create the appropriate data and configure the mocks to have well-defined behaviour, just so that this piece of code could be invoked and then tested.

Worse than that, imagine that this class at the bottom has siblings overriding the same template method. When the siblings need to be changed, the effort to write tests for them will be the same as it was for our original class. We will have loads of duplication and will also need to understand all the code inside all the classes in the hierarchy. The icing on the cake: there are hundreds of lines to be understood in all the parent classes.

Breaking the rules

Testing via the public method defined at the very top of the hierarchy proved not to be worth it. The main reason is that, besides being painful, we already knew that the whole design was wrong. When we looked at the classes in the hierarchy, they didn't even follow the IS-A rule of inheritance; they inherited from each other just so some code could be re-used.

After some time I thought: screw the rules and this design. I'm gonna test the protected method directly and then start breaking the hierarchy.

The approach: Extract, Inject, Kill

The overall idea is:

1. Extract all the behaviour from the template method into a class.
2. Inject the new class into the parent class (where the template is defined), replacing the template method invocation with the invocation of the method in the new class.
3. Kill the child class (the one that had the template method implementation).

Repeat these steps until you get rid of the entire hierarchy.

This was done writing the tests first, making the protected template method implementation public.

NOTES
1. This may not be so simple if we have methods calling up the stack in the hierarchy.
2. If the class has siblings, we have to extract all the behaviour from the siblings before we can inject into the parent and kill the siblings.

This is probably too complicated to visualise, so in part 2 of this post I'll give a more concrete example.






Monday, 6 February 2012

Code coverage is a side effect and not an end goal


In my previous life as a consultant, one of our clients hired us to "increase the code coverage" of their code base. We would not develop any new features or make any changes; they just wanted us to write tests. Basically, they wanted us to bring the coverage up to 70%. Why 70%, you may ask. Probably because someone pulled a finger out of his or her back orifice, sniffed it and thought: Hmm... this definitely smells like 70%.

Regardless of the stupid and random number, as consultants we did our best to write proper and meaningful tests. They did not want us to refactor; just tests. The developers themselves (the client's employees) would not be writing tests. Probably too busy doing proper work, breaking the tests after each commit. Lucky for them, they had the monkeys there fixing the tests. The reason for that was that the same person with the smelly finger had decided that everyone would get a higher bonus if they reached 70% test coverage.

More recently, I've seen a similar behaviour. Keeping a close eye on the metrics provided by Sonar (which does a great job, by the way), managers and senior developers constantly push their teams to improve the quality of the code. Time is allocated in every sprint and developers are asked to increase the code coverage. The intention behind it is good, right? Bit by bit the application gets better. Awesome.

The idea was to improve the quality and reliability of the application. Unfortunately, developers with the urge to finish the task just focused on the "increase code coverage" part of it. They started writing "tests" that were not asserting anything: code inside a test method that would invoke public methods on classes but would not check the outcomes or side effects of those method invocations.
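
A hypothetical example of such a "test": the first method bumps the coverage numbers but verifies nothing, while the second one actually tests the behaviour:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    @Test
    public void test_calculate() {
        // Exercises the code, so coverage goes up, but asserts nothing:
        // calculate() could return garbage and this "test" would still pass.
        new PriceCalculator().calculate(100.0, "SUMMER10");
    }

    @Test
    public void should_apply_ten_percent_discount_for_a_valid_voucher() {
        double price = new PriceCalculator().calculate(100.0, "SUMMER10");

        assertEquals(90.0, price, 0.001);
    }
}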

What does code coverage measure?

It measures the percentage of the code exercised by code written inside test methods. It does not measure the percentage of code tested by the testing code. Having a test method that just invokes a public method on a class and does not test (assert/verify) anything will make the code coverage go up, but will not test the system. This, of course, is totally pointless, since it does not check whether the public method invoked actually does what it is supposed to do, or whether it does anything at all.

Why do we invest time in writing tests? (In no particular order)

  • Make sure we understand what the application does, according to the executable requirements (tests);
  • Make sure that the application still does the right thing after we make changes to it, or that it points us to possible areas of conflict (failing tests);
  • Make sure we have regression tests;
  • Make sure that at the click of a button we can know whether the application is working correctly;
  • Make sure that we have a quick and small feedback loop in terms of the quality and correctness of our application;
  • Make sure the application is simplified and easily maintainable, via testing and refactoring;
  • Make sure the complexity of the application is spread thin, instead of having localised big balls of mud that we are scared to touch;
  • Make sure that we can easily and quickly add new features;
  • Make sure that we extend the ROI of our software, extending its lifespan;
  • Make sure that our software does not rot, impeding and/or slowing down business progress;
  • Make sure that our clients are happy with the software.


By the way, when I mention tests, I mean all types/levels of tests, not just unit tests.

What should we focus on?

Focus on what the metrics are telling you, not on improving the numbers. Focusing on the numbers may give you a false impression of quality.

My point here is that the focus should be on TESTING the application, not on increasing its code coverage metric. We need to make sure we capture the behaviour of our application via tests, expressing its intent and refactoring to make it better. Tests (at all levels) should tell you a story about the behaviour of your application. And guess what? In writing tests for your application, besides all the benefits mentioned above, your code coverage metric will still go up - as a side effect.