Friday, 23 August 2013

Test Deafness

A few months ago I was talking to a musician friend of mine. He is also Brazilian, the same age as me, and moved to the UK in the same year I did. As we were talking about music, I mentioned I like punk rock. He said he likes punk rock as well, but, as a musician, he listens to a bit of everything. The conversation was going well until he asked me what my favourite bands were. "Legiao Urbana is my favourite Brazilian band," I said. "Seriously?" he said with a puzzled face. "They are rubbish, man."

As he was a friend (he became an enemy after that stupid comment), I thought to myself: how can I make him suffer for the rest of his life? Luckily for him, I remembered that I was not a Brazilian savage anymore. I was British now and I had to act like one: "Oh, are they? Thanks for sharing that with me. Would you mind telling me what your favourite bands are? More tea?" He then named a few bands, including some Forró bands. Shock! Horror!! Blasphemy!!! I really wanted to kill him right there. Fuck the British citizenship. How could a guy who also liked punk rock say that the band I liked was rubbish and then name some stupid Forró bands as his favourites?

After quite a long list of swear words pronounced in a very loud tone, I asked him to elaborate. All of Legiao Urbana's songs, he said, are played with three or four chords at most. Their lyrics are great, but they are very poor musicians. The Forró bands are the total opposite. The lyrics suck, but no one cares. They are great musicians who focus on creating music for people to dance to.

That conversation made me realise something really important. For a person like me, good music is about good, strong lyrics. For a musician like my friend, good music is about the techniques other musicians use when playing their instruments, regardless of the lyrics. A person who likes to sing may appreciate opera, even without a clue what the words mean.

But what does this have to do with tests?

You cannot expect to produce quality code just by listening to your tests. If you don't know what good code looks like, you are pretty much test deaf. Musicians have a trained ear: they can listen to each instrument individually while the music is playing. They can also imagine, quite precisely, how different instruments could be combined to create new music.

Walking around asking other developers to listen to their tests, as if this advice alone would make them produce quality code immediately, doesn't work. It may make us look smart (or at least feel smart) but it does not really help the other developers receiving the advice. It's too vague.

If we want to produce quality code, we should study the concepts and techniques that lead to it: Domain-Driven Design, Clean Code, SOLID principles, design patterns, coupling, cohesion, different programming languages and paradigms, architecture, just to name a few. Once we get a good understanding of all these things, we will have an implicit knowledge of what constitutes good code. This implicit knowledge is what may cure our test deafness, allowing us to listen to our tests in order to produce quality code.

Tuesday, 30 July 2013

My birthday wish list

Today is my birthday. Yay! And since today is all about me, I will choose what I want as a present. As a developer, here's what I want:

  1. I want developers to be empowered to do whatever they need to do to satisfy the real business needs and delight their customers.
  2. I want developers to be accountable for the decisions they make, not for decisions that are made for them.
  3. I don't want to see developers going through endless meetings trying to prove why they shouldn't use the technologies or architecture defined by an ivory-tower architect. In fact, ivory-tower architects should be an extinct species (visit your nearest Natural History Museum if you want to see one).
  4. I want developers to know the truth. If a decision was made because of political reasons, please tell us that that is the case. We will still be unhappy but at least we will find it easier to digest. 
  5. Although we are happy to justify and explain every technical decision we, the development team, make, we don't want to have any person that is not part of the development team making technical decisions. 
  6. I don't want to see developers working with technical people who think that their role is to define the architecture of a project. I want developers to work with people who focus on delivering the simplest solution to a problem, satisfying both functional and non-functional requirements. If we can achieve that without writing software, even better. And no, that doesn't mean quick and dirty.
  7. In case we need to write software to solve a business problem, I want developers to craft solutions in a way that changes are embraced and the business will never be slowed down by technical incompetence.
  8. I want developers that can build applications that will provide a good return on investment to the business. I don't want to see applications being decommissioned after a few years because they became a pile of crap; the maintenance cost is so high that it is cheaper to rewrite. 
  9. I want to work with developers that are passionate and care about what they do. Every single day I want to speak to my colleagues and learn something new from them, the same way they should learn something new from me. 
  10. I want to work with people (not just developers) that embrace changes and provide business agility. I don't want to keep embarrassing myself every time I need to tell stakeholders that the small change they want will take a few months to be done (due to the crap application/architecture/process we have). 
  11. I want to work in a place where we can choose the best technology or tool for the job; without being told that we cannot do that because our people don't have the skills. "They need to be trained." (sigh)
  12. Instead of being told that we need to build a new feature or even an entire new application using specific technologies, we would appreciate it if we were just given the business requirements. I would love to see developers trusted to provide the best solution for the problem.
  13. I would like to see the people behind all the bureaucracy and stupid processes being blamed for the damage they are causing.
  14. I would like to see all the 9-to-5 employees being replaced by just a few passionate, talented, well-paid and empowered professionals. 
  15. I wish all companies recognised software development as a career in its own right, not just as the first steps towards management. Yes, that idea is stupid.
  16. I wish that every organisation paying for a software project understood the principles of Software Craftsmanship and the importance of having real software craftsmen working on their projects.


You may be thinking that I'm not being reasonable. After all, my birthday is almost over and of course no one will be able to give me all the things I want today, as a birthday present. You are right. I agree. And because of that, I won't add all the many other wishes I have to the list above. I could go on forever. 


The good news for me is that I'm really smart and I always have a plan B. Although I'm a little disappointed that I won't get all these things for my birthday this year, my plan is to be a good boy for the rest of the year and maybe Santa will bring me some of them for Xmas.

Tuesday, 18 December 2012

Screencast: Testing and Refactoring Legacy Code

In this screencast I take a small piece of legacy Java code that contains the most common problems found in much larger legacy code bases. The objective is to first write tests to understand what the code does and then refactor it to make it better. The code contains Singletons, static calls and behaviour that does not belong there. It also has some design issues.

As a rule, I always recommend that we never "touch" the production code as we retrofit tests; that is, we should not change it by typing into the production file. However, as legacy code is normally not written with testing in mind, sometimes we do need to change the production code in order to write tests for it. I address this scenario, explaining how we can do that in a very safe way.
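To make this concrete, here is a hedged, illustrative sketch of one such safe change: subclass-and-override. All class and method names below are invented for illustration, not taken from the screencast. The only change to the legacy class is extracting the hard-wired Singleton call into a protected "seam" method, which a test subclass can then override.

```java
// Hypothetical names throughout -- an illustration, not the screencast's code.
class PriceGateway {
    private static final PriceGateway INSTANCE = new PriceGateway();
    static PriceGateway getInstance() { return INSTANCE; }
    int lookup(String sku) { return 10; }   // imagine a slow remote call here
}

class PriceService {
    public int priceFor(String sku) {
        return gateway().lookup(sku) * 2;
    }
    // The ONLY change to the legacy class: the hard-wired Singleton call
    // is extracted into a protected seam method. Behaviour is untouched.
    protected PriceGateway gateway() {
        return PriceGateway.getInstance();
    }
}

// In the test code, subclass-and-override replaces the Singleton:
class TestablePriceService extends PriceService {
    @Override
    protected PriceGateway gateway() {
        return new PriceGateway() {
            @Override int lookup(String sku) { return 5; }   // canned value
        };
    }
}
```

Because the override lives entirely in test code, the production behaviour stays exactly as it was while the tests gain full control over the collaborator.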

A common question when developers want to make legacy code better is "Where do we start?" I also address that, explaining how the approaches to testing and refactoring legacy code are the opposite of each other.

Besides a few other things, I also cover the use of code coverage tools to help us test the code, how often we should commit, how to fix a possible design problem in very small steps, and how to stay green while refactoring our code.

Last but not least, I show how our tests and production code can easily be written in a way that captures the business requirements.

Although it is a bit long, I hope you all enjoy it.



There are two minor things I forgot to do while recording this exercise. Let me know if you spot them. :)

Monday, 10 December 2012

The Wrong Notion of Time

No one wakes up in the morning and says, "Today I'm gonna screw up. Today I'm gonna piss my boss and all my team mates off by writing the worst code I could possibly write." Well, there are always exceptions, but normally no one does that. If that is the case, how come Agile projects are still failing? How come we still have the same old problems?

A Technical Debt Story

Some time ago I was on a project and one of the developers chose to work on a brand new feature. For the implementation of this new feature, we did not need to touch any of our existing code, besides a very few things just to wire the new feature into the application. After a day or so, I offered to pair with him. Naturally, since I had just joined him, I asked him to give me an overview of what the feature was about. He promptly explained it to me and I asked him to show me where he was so we could continue. After he finished showing me the code, I made a few observations, since it was not clear to me that his code reflected what needed to be done according to his previous explanation. Basically, the language he used to explain the business feature to me was not in sync with the language he had used in the code, and I could also see some code that was not really necessary for the implementation of that feature. I also noticed that there were no tests. When I asked him about that, he said: "It is working now and I may need that extra code in the future. Let's add this refactoring you are proposing and the unit test to the technical debt backlog. I need to finish this feature."

How crazy is that? That was a brand new feature. We should have been reducing technical debt as we went along instead of adding more of it. However, this developer somehow felt that it was OK to do that. At the end of the day we had a technical debt backlog, didn't we? That was supposedly an Agile team with experienced developers, but somehow, in their minds, this behaviour was OK. Perhaps one day someone would look at the technical debt backlog and do something about it. Possibly. Maybe. Quite unlikely. Nah, it's never gonna happen.
 
But we want to do the right thing

But we all want to do the right thing. We do not do these things on purpose. However, over time, I realised that we developers have a wrong notion of time. We think we need to rush all the time to deliver the tasks we committed to. Pressure will always be part of a software developer's life, and when there is pressure, we end up cutting corners. We do not do that because we are sloppy. We normally do that because we feel that we need to go faster. We feel that we are doing a good job, providing the business with the features they want as fast as we can. The problem is that we do not always understand the implications of our own decisions.
 
A busy team with no spare time

I joined this team in a very large organisation. There was loads of pressure and the developers were working really hard. First, it took me days to get my machine set up. The project was a nightmare to configure in our IDEs. We were using Java and I was trying to get my Eclipse to import the project. The project had more than 70 Maven projects and modules, with loads of circular dependencies. After a few days, I had my local environment set up.

The project was using a heavyweight JEE container and loads of queues, and had to integrate with many other internal systems. When pairing with one of the guys (pairing was not common there, but I asked if I could pair with them), I noticed that he was playing messages onto a queue and looking at logs. I asked him what he was doing and he said that it was not possible to run the system locally, so he had to add loads of logs to the code, package it, deploy the application to the UAT environment, play XML messages into one of the inbound queues, look at the logs in the console and try to figure out what the application was doing. Apparently he had made a change and the expected message was not arriving in the expected outbound queue. So, after almost twenty minutes of changing the XML message and replaying it into the inbound queue, he had an idea of what the problem could be. He went back to his local machine, changed a few lines of code, added more logs, changed a few existing ones to print out more information and started building the application again. At this point I asked if he was going to write tests for the change and whether he had tests for the rest of the application. He told me that the task he was working on was important, so he had to finish it quickly and did not have time to write tests. Then he deployed the new version of the application to UAT again (note that no one else could use the UAT environment while he was doing his tests), played an XML message into the inbound queue and started looking at the logs again.
That went on for another two days until the problem was fixed. It turned out that there were some minor logical bugs in the code, things that a unit test would have caught immediately.
 
We don't have time but apparently someone else does

Imagine the situation above. Imagine an application with a few hundred thousand lines. Now imagine a team of seven developers. Now imagine ten of those teams in five different countries working on the same application. Yes, that was the case. There were some system tests (black-box tests), but they took four hours to run and were quite often broken, so no one really paid attention to them. Can you imagine the amount of time wasted per developer, per task or story? Let's not forget the QA team, because apparently testers have all the time in the world. They had to manually test the entire system for every single change. Every new feature added to the system made it bigger, causing the system tests to be even slower and the QA cycles even longer. Debugging time was also growing, since each developer was adding more code that all the others would need to debug to understand how things worked. Now think about all the time wasted here, every day, every week, every month. All because we developers do not have time.

Dedicated Quality Assurance teams are an anti-pattern. Testers should find nothing: zero, nada. Every time a tester finds a bug, we developers should feel bad about it. Every bug found in production is an indication of something we have not done. Some bugs are related to bad requirements, but even then we should have done something about that. Maybe we should have helped our BAs or product owners to clarify them. By no means am I saying that we should not have testers. They can be extremely valuable, exploring our applications in unexpected ways that only a human could. But they should not waste their time executing test plans that could be automated by the development team.

The business wants features as soon as possible, and we feel that it is our obligation to satisfy them - and it is. However, business people look at the system as a whole, and so should we. They look at everything, not just the story we are working on. It is our job to remove (automate) all the repetitive work. I still remember, back in the 90s, when debugging skills were a big criterion in technical interviews. Those days are gone. Although it is important to have debugging skills, we should be unpleasantly surprised whenever we need to resort to them, and when that happens, we need to address it immediately, writing tests and refactoring our code so we never need to do it again.

Using time wisely

Our clients and/or employers are interested in software that satisfies their needs, works as specified and is easy to change whenever they change their minds. It is our job to provide that to them. The way we go about satisfying their expectations is normally up to us. Although they may mention things like automated testing and Agile methodologies, what they really want is good value for the money they are investing in a software project. We need to use our (their) time wisely, automating whatever we can - be it tests or deployment procedures - instead of thinking that we may not have time to do it. We can always quantify how much time we are spending on repetitive tasks, even to the extent of showing them how much time goes into those activities over a given period. Before implementing any new feature or task, we should spend some time preparing our system to accept the changes in a nice way, so we can just _slide them in_ with no friction, and making sure that whatever we write can be easily tested and deployed. When estimating our work, we should always count this as **part of the time** it will take us to do it, instead of having the false impression that we will go faster by treating it as a separate task, since, chances are, it will never get done and the whole team will be slowed down because of it. The less time we waste manually testing (or waiting for a long automated test suite to run), debugging, dealing with a huge amount of technical debt, trying to get our IDEs to work nicely with our fucked-up project structure, or fighting to deploy the application, the more time we have to look after the quality of our application and make our clients happy.

Note: The teams I mentioned above, after a lot of hard work, commitment, support from management and a significant amount of investment (time and money), managed to turn things around and are now among the best teams in the organisation. Some of the teams managed to replace (re-write) an unreliable in-house test suite that used to take over three hours to run with a far more reliable one that takes around 20 minutes. One of the teams is very close to achieving a "one-button" deployment, and has an extensive test suite, with tests ranging from unit to system (black box), that runs in minutes and with code coverage close to 100%.

Sunday, 11 November 2012

Testing legacy code with Golden Master

As a warm-up for SCNA, the Chicago Software Craftsmanship community ran a hands-on coding session where developers, working in pairs, had to test and refactor some legacy code. For that they used the Gilded Rose kata. You can find links to versions in Java, C# and Ruby here, and for Clojure here.

We ran the same session for the London Software Craftsmanship Community (LSCC) early this year and back then I decided to write my tests BDD-style (I used JBehave for that). You can check my solution here.

This time, instead of writing unit tests or BDD / Spec By Example to test every branch of that horrible code, I decided to solve it using a test style called Golden Master.

The Golden Master approach

Before making any change to the production code, do the following:
  1. Create X random inputs, always using the same random seed, so you can generate exactly the same set over and over again. You will probably want a few thousand random inputs.
  2. Bombard the class or system under test with these random inputs.
  3. Capture the outputs for each individual random input.

When you run it for the first time, record the outputs in a file (or database, etc). From then on, you can start changing your code, run the test and compare the execution output with the original output data you recorded. If they match, keep refactoring; otherwise, revert your change and you should be back to green.
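The steps above can be sketched in a few lines of Java. The names here are illustrative: `legacyBehaviour` stands in for the real class under test, and in practice the golden output would be recorded to a file rather than held in memory.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative Golden Master loop; legacyBehaviour() stands in for
// the real system under test.
class GoldenMaster {
    static String legacyBehaviour(int input) {
        return "out:" + (input % 97);   // placeholder for the legacy code
    }

    // Steps 1-3: a fixed seed guarantees the same inputs on every run.
    static List<String> run() {
        Random random = new Random(42);            // same seed, same inputs
        List<String> outputs = new ArrayList<>();
        for (int i = 0; i < 5000; i++) {           // a few thousand inputs
            outputs.add(legacyBehaviour(random.nextInt()));
        }
        return outputs;
    }

    public static void main(String[] args) {
        List<String> golden = run();   // record once (to a file, in practice)
        List<String> current = run();  // re-run after every refactoring step
        // Safe to keep refactoring only while the outputs still match.
        System.out.println(golden.equals(current) ? "GREEN" : "RED - revert!");
    }
}
```

Because the seed is fixed, every run produces the same input set, so any difference between `golden` and `current` can only come from a behavioural change in the code under test.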

Approval Tests

An easy way to do Golden Master testing in Java (also available for C# and Ruby) is to use Approval Tests. It does all the file handling for you, storing the output and comparing it. Here is an example:
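What ApprovalTests automates can be sketched in plain, dependency-free Java. This is illustrative only: the class and field names below are invented, and the real library handles all of this file bookkeeping for you behind a single `Approvals.verify(...)` call.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Dependency-free sketch of ApprovalTests' received/approved file flow
// (illustrative only -- the real library does all of this for you).
class ApprovalSketch {
    static Path received = Paths.get("GildedRoseTest.should_generate_update_quality_output.received.txt");
    static Path approved = Paths.get("GildedRoseTest.should_generate_update_quality_output.approved.txt");

    static boolean verify(String output) throws IOException {
        if (!Files.exists(approved)) {
            // first run: record the output and ask a human to approve it
            Files.write(received, output.getBytes());
            return false;
        }
        // subsequent runs: compare against the approved golden master
        return new String(Files.readAllBytes(approved)).equals(output);
    }

    public static void main(String[] args) throws IOException {
        String output = "Item [name=Aged Brie, sellIn=10, quality=2]\n";
        System.out.println(verify(output)
                ? "approved"
                : "inspect the output, then rename .received to .approved");
    }
}
```

The first run always "fails" by design: a human has to inspect the received file and promote it to approved before the comparison can ever pass.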


For those not familiar with the kata, after passing a list of items to the GildedRose class, it will iterate through them and according to many different rules, it will change their "sellIn" and "quality" attributes.

I've made a small change to the Item class, adding an automatically generated toString() method to it:
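A reconstruction of that change, consistent with the output lines recorded later in the post (the kata's Item class exposes name, sellIn and quality as public fields; the toString() is the IDE-generated style that produces those lines):

```java
// Reconstructed Item class from the Gilded Rose kata, with the
// generated toString() that produces the golden master lines.
class Item {
    public String name;
    public int sellIn;
    public int quality;

    public Item(String name, int sellIn, int quality) {
        this.name = name;
        this.sellIn = sellIn;
        this.quality = quality;
    }

    @Override
    public String toString() {
        return "Item [name=" + name + ", sellIn=" + sellIn + ", quality=" + quality + "]";
    }
}
```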
The first time the test method is executed, the line:

Approvals.verify(getStringRepresentationFor(items));

will generate a text file in the same folder as the test class, called GildedRoseTest.should_generate_update_quality_output.received.txt. That means the file name follows the pattern [test class name].[test method name].received.txt.

ApprovalTests will then display the following message in the console:

To approve run : mv /Users/sandromancuso/development/projects/java/gildedrose_goldemaster/./src/test/java/org/craftedsw/gildedrose/GildedRoseTest.should_generate_update_quality_output.received.txt /Users/sandromancuso/development/projects/java/gildedrose_goldemaster/./src/test/java/org/craftedsw/gildedrose/GildedRoseTest.should_generate_update_quality_output.approved.txt

Basically, after inspecting the file, if we are happy, we just need to rename .received to .approved to approve the output. Once this is done, every time we run the test, ApprovalTests will compare the output with the approved file.

Here is an example of what the file looks like:

Item [name=Aged Brie, sellIn=-23, quality=-44]
Item [name=Elixir of the Mongoose, sellIn=-9, quality=45]
Item [name=Conjured Mana Cake, sellIn=-28, quality=1]
Item [name=Aged Brie, sellIn=10, quality=-2]
Item [name=+5 Dexterity Vest, sellIn=31, quality=5]

Now you are ready to rip the horrible GildedRose code apart. Just make sure you run the tests every time you make a change. :)
Infinitest

If you are using Eclipse or IntelliJ, you can also use Infinitest. It automatically runs your tests every time you save a production or test class. It is smart enough to run just the relevant tests and not the entire test suite.  In Eclipse, it displays a bar at the bottom-left corner that can be red, green or yellow (in case there are compilation errors and the tests can't be run).

With this approach, refactoring legacy code becomes a piece of cake. You make a change, save it, and look at the bar at the bottom of the screen. If it is green, keep refactoring; if it is red, just hit CTRL-Z and you are back in the green. Wonderful. :)

Thanks

Thanks to Robert Taylor and Balint Pato for showing me this approach for the first time in one of the LSCC meetings early this year. It was fun to finally do it myself.

Wednesday, 15 August 2012

The best approach to software development



Today, talking about doing a big design up-front (BDUF) sounds a bit ridiculous, right? Who would do that? That's not craftsmanship, is it?

However, in the past, that was considered the norm. Writing requirements documents and drawing architectural and very low-level detail diagrams was the right thing to do. Well, that's what very smart guys proposed at the 1968 NATO Software Engineering Conference, and it worked for NASA and the US Department of Defense. I'm sure they knew what they were doing, and if it worked for them, it will definitely work for our small CRUD application or one-page website. And then it happened. It became a religion, and the majority of projects in the following decades were developed like that.

But no, not nowadays. We've learned the lesson, right? We wouldn't make this mistake again.

After watching a few talks at conferences and on InfoQ, we understood that this is not a good thing. We've also read in some books that we should do TDD. The design should emerge from tests.

And of course, we should adopt an Agile methodology as well. Let's adopt Scrum to start with. There are many books about it, certifications and even entire conferences dedicated to it. Of course we should adopt TDD and Scrum, because that's the best approach to manage and develop software.

Oh, but what about all this Lean stuff? Eliminate waste, limit work in progress, systems thinking, theory of constraints, Kanban. I heard it worked really well for Toyota, so we should definitely do that as well. Why? Jesus, you just don't get it. Of course that's the best approach to manage and develop software.

God, how could I forget? I was also told that I really should speak to my customer. We should really discuss the requirements so we can have a better understanding of what we should build. Are you using BDD? No!!! Wow! How can you develop software without that? Why should you use it? The answer is simple. That's the best approach to manage and develop software. Don't tell me that you thought that BDD was a testing tool. You are so yesterday. That was version one. BDD version three is all about communication. It's about software development done right. Yes, I know it sounds alien, but apparently we are supposed to speak to people. God, how on Earth did we not think about that before? How did we develop software all these years? If you don't use BDD, you are just doing it wrong. Why? Because that's the best approach to manage and develop software. Duh!

Outside-In TDD, Inside-Out TDD, ATDD, Classic TDD, London School TDD? Really? Are you still discussing that? Don't tell me that you are still writing tests. What? Why are you wasting time writing unit tests? It doesn't make sense any more. You should spike and stabilize. What if you don't know what you are doing or where you are going? What if you just want to explore your options? Why are you writing tests? Oh, I get it. You were told that this was the best approach to manage and develop software. Nah, forget that. Unit tests are for losers. We write small services and just monitor them. If they are wrong, we just throw them away and re-write them. And THAT is the best way to manage and develop software.

Architecture and design patterns? What??? Who are you? My grandfather? Scrap that. That's for programmers from the 80s and 90s. In the real world we have our design emerging from tests. No, stupid. Not using normal TDD. God, what planet do you live on? We use TDD As If You Meant It. We use this technique and PRESTO, the design emerges and evolves nicely throughout the life span of the project, regardless of how many developers, teams and design skills are involved. Everyone can see code smells, right?

And what about DDD? Domain-Driven what? Nah, never heard of it. Hmm... hold on. I think I heard something about it many years ago, but it probably was not important enough, otherwise we would have more people today saying that DDD is the best approach to manage and develop software.

Noooo. No no no no. No, I didn't hear that. Get out of here. Did you just say that you are still using an Object-Oriented language? STATICALLY TYPED???? No, sorry. This conversation is a waste of my time. It's because of people like you that our industry is shit. The only way for this conversation to get even worse is if you tell me that you still use a relational database. Haven't you heard that functional programming is a MUST and that only retards and programmers from the 80s and 90s use relational databases? Functional languages, NoSQL databases... Repeat with me: functional languages, NoSQL databases. What a douche bag.

Ah, trying to be a smart-ass? Yeah, functional programming appeared long ago, in the 50s, and developers in the 60s and 70s preferred to use OO instead. But you don't know why, do you? DO YOU? They used OO because they were a bunch of hippies who didn't take anything seriously. They were the sort of people who went to Woodstock, got high on LSD and had brain damage after that. No, don't take this whole OO stuff seriously. We are finally getting back to reality now. Functional programming and NoSQL databases are definitely the best approach to software development.

Dogmatism, religion, context, curiosity, inquiring mind and pragmatism

Before I conclude, I just want to clarify that by no means am I criticising any person or group of people behind the methodologies, technologies or techniques mentioned above. These people have done an amazing job thinking about, practising and sharing their own ideas of how software development could be done in a better way, and for that we should all be grateful. Our industry is definitely better for their contribution.

My main criticism here is of how the vast majority of developers react to all these things. Just because someone, somewhere, wrote a book, recorded a video or gave a talk at a conference about something, that doesn't make it right in every context. Quite often we fail to question things just because the people promoting them are relatively well known. We fail to understand the context that a methodology, technology or technique is best suited to. We fail, quite often, to use our own judgement for fear of being ridiculed by our colleagues. We should stop being dogmatic and religious about things. That just leads to stupid decisions. Doing things for the sake of doing them, or because someone else said so, is just plain wrong and stupid.

Being a good developer means being inquisitive, curious and pragmatic. Never religious. Never dogmatic. Curious means that we should be eager to learn about all the things mentioned above and many, many more. Inquisitive means that we should investigate and question all the things we learn. Pragmatic means that we should choose the right tools, be they technologies, methodologies or techniques, for the job.

Context matters. 

Whenever you see people saying that we should or shouldn't do something, ask them why. Ask them about the context in which they tried to do (or not to do) what they are proposing.

Software development is not the same thing as producing cars. Once a car is ready, you don't go back to the manufacturer and ask them to add another wheel or put the engine somewhere else. Developing software for expensive hardware is not the same thing as developing a simple web application with two pages. Hardware has a specification that you need to code against; quite often you don't even have access to the hardware because it hasn't been built yet. The cost of a bug in production is not the same for all applications. The cost of a few bugs in a social networking or cooking website can be very different from the cost of a few bugs in a trading or financial system processing millions of transactions per day. Working with a small team, everyone co-located and with easy access to customers, is very different from working on a project with 10+ teams spread across 5 countries and different timezones.

Read and learn as much as you can. However, don't assume that everything you read or watch applies in every context. Make informed decisions and trust your instincts.

The bad news is that there is no best approach to software development. The most we can say is that certain technologies, methodologies and techniques are more suitable for a specific context.

In case you are really trying to find the best approach to software development in general, I hope you don't get too frustrated and good luck with that. If you ever find it, please let us know. It's always interesting to know about different approaches. Maybe unicorns really exist. Who knows?



This post was inspired by conversations during many London Software Craftsmanship Community (LSCC) Round-table meetings, conversations during SoCraTes 2012 and also conversations with colleagues at work. Thanks everyone for that.

Saturday, 9 June 2012

Test-driving Builders with Mockito and Hamcrest


A lot of people have asked me in the past whether I test getters and setters (properties, attributes, etc). They have also asked whether I test my builders. The answer, in my case, is: it depends.

When working with legacy code, I wouldn’t bother testing data structures, that is, objects with just getters and setters, maps, lists, etc. One of the reasons is that I never mock them. I use them as they are when testing the classes that use them.
As for builders, when they are used just by test classes, I also don’t unit test them, since they are used as “helpers” in many other tests. If they have a bug, those tests will fail.
In summary, if these data structures and builders already exist, I wouldn’t bother retrofitting tests for them.

But now let’s talk about doing TDD and assume you need a new object with getters and setters. In this case, yes, I would write tests for the getters and setters, since I need to justify their existence by writing my tests first.

In order to have a rich domain model, I normally tend to associate business logic with the data it operates on. Let’s look at the following example.

In real life, I would be writing one test at a time, making it pass and refactoring. For this post, I’ll just give you the full classes for clarity’s sake. First, let’s write the tests:

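The code snippet originally embedded here is no longer available. Below is a sketch of what these tests might look like, assuming JUnit 4, Mockito and Hamcrest. Only the class and property names (Trade, inboundMessage, reportabilityDecision, isReportable) come from the text; the constant values, the test method names and the exact shape of the ReportabilityDecision interface are my assumptions.

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class TradeTest {

    private static final String INBOUND_XML_MESSAGE = "<trade>...</trade>";

    private final Trade trade = new Trade();

    @Test
    public void should_contain_the_inbound_xml_message() {
        trade.setInboundMessage(INBOUND_XML_MESSAGE);

        assertThat(trade.getInboundMessage(), equalTo(INBOUND_XML_MESSAGE));
    }

    @Test
    public void should_tell_whether_it_is_reportable() {
        // The collaborator is mocked: we care about how it is used,
        // not about its real implementation.
        ReportabilityDecision reportabilityDecision = mock(ReportabilityDecision.class);
        when(reportabilityDecision.isReportable(anyString())).thenReturn(true);
        trade.setInboundMessage(INBOUND_XML_MESSAGE);
        trade.setReportabilityDecision(reportabilityDecision);

        boolean reportable = trade.isReportable();

        // Verify the interaction: right method, right parameter.
        verify(reportabilityDecision).isReportable(INBOUND_XML_MESSAGE);
        assertThat(reportable, equalTo(true));
    }
}
```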

Now the implementation:

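The implementation snippet is also missing; here is a minimal sketch consistent with the description further down (an inboundMessage property with getter and setter, plus a reportabilityDecision collaborator injected via setter and used in the isReportable business method). The shape of the ReportabilityDecision interface is an assumption.

```java
public class Trade {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public String getInboundMessage() {
        return inboundMessage;
    }

    public void setInboundMessage(String inboundMessage) {
        this.inboundMessage = inboundMessage;
    }

    // Collaborator injected via setter; note that there is no getter for it.
    public void setReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
    }

    // Business method: delegates the decision to the collaborator.
    public boolean isReportable() {
        return reportabilityDecision.isReportable(inboundMessage);
    }
}

// Assumed shape of the collaborator's interface.
interface ReportabilityDecision {
    boolean isReportable(String inboundMessage);
}
```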

This case is interesting since the Trade object has one property called inboundMessage, with its respective getter and setter, and also uses a collaborator (reportabilityDecision, injected via setter) in its isReportable business method.

A common approach that I’ve seen many times to “test” the setReportabilityDecision method is to introduce a getReportabilityDecision method returning the reportabilityDecision (collaborator) object.

This is definitely the wrong approach. Our objective should be to test how the collaborator is used, that is, whether it is invoked with the right parameters and whether whatever it returns (if it returns anything) is used. Introducing a getter in this case does not make sense, since it does not guarantee that the object, after having the collaborator injected via the setter, is interacting with the collaborator as we intended.

As an aside, when we write tests about how collaborators are going to be used, defining their interfaces, we are using TDD as a design tool and not simply as a testing tool. I’ll cover that in a future blog post.

OK, now imagine that this trade object can be created in different ways, that means, with different reportability decisions. We also would like to make our code more readable and we decide to write a builder for the Trade object. Let’s also assume, in this case, that we want the builder to be used in the production and test code as well. 
In this case, we want to test drive our builder. 

Here is an example that I normally find when developers are test-driving a builder implementation.

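The original example is missing here as well. The sketch below reconstructs the kind of builder test being criticised: it mirrors the TradeTest assertions almost line by line. The with* method names and constant values are my assumptions.

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class TradeBuilderTest {

    private static final String INBOUND_XML_MESSAGE = "<trade>...</trade>";

    @Test
    public void should_create_a_trade_with_an_inbound_message() {
        Trade trade = new TradeBuilder()
                .withInboundMessage(INBOUND_XML_MESSAGE)
                .build();

        // Same assertion as in TradeTest.
        assertThat(trade.getInboundMessage(), equalTo(INBOUND_XML_MESSAGE));
    }

    @Test
    public void should_create_a_reportable_trade() {
        ReportabilityDecision reportabilityDecision = mock(ReportabilityDecision.class);
        when(reportabilityDecision.isReportable(anyString())).thenReturn(true);

        Trade trade = new TradeBuilder()
                .withInboundMessage(INBOUND_XML_MESSAGE)
                .withReportabilityDecision(reportabilityDecision)
                .build();

        // Again duplicating what TradeTest already verifies.
        assertThat(trade.isReportable(), equalTo(true));
    }
}
```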

Now let’s have a look at these tests.  
The good news is that the tests were written the way developers want to read them. That also means that they were “designing” the TradeBuilder public interface (its public methods). 
The bad news is how they are testing it.


If you look closer, the tests for the builder are almost identical to the tests in the TradeTest class. 

You may say that this is OK, since the builder is creating the object, so the tests should be similar. The only difference is that in TradeTest we instantiate the object by hand, whereas in TradeBuilderTest we use the builder to instantiate it, but the assertions should be the same, right? 

For me, firstly, we have duplication. Secondly, the TradeBuilderTest doesn’t express its real intent. 

After many refactorings and much exploration of different ideas while pair-programming with one of the guys on my team, we came up with this approach:

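The refactored test is also missing from the post as archived. The sketch below captures the idea described next: use Mockito to verify the builder's side effect on the Trade it populates, instead of re-asserting Trade's behaviour. The package-private populate(Trade) method, through which the builder exposes that side effect, is my assumption about how it was done.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class TradeBuilderTest {

    private static final String INBOUND_XML_MESSAGE = "<trade>...</trade>";

    @Test
    public void should_create_a_trade_and_set_its_attributes() {
        ReportabilityDecision reportabilityDecision = mock(ReportabilityDecision.class);
        Trade trade = mock(Trade.class);

        new TradeBuilder()
                .withInboundMessage(INBOUND_XML_MESSAGE)
                .withReportabilityDecision(reportabilityDecision)
                .populate(trade);

        // We only verify the builder's side effect: the attributes are set.
        // The behaviour of Trade itself remains TradeTest's responsibility.
        verify(trade).setInboundMessage(INBOUND_XML_MESSAGE);
        verify(trade).setReportabilityDecision(reportabilityDecision);
    }
}
```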
So now the TradeBuilderTest expresses what is expected from the TradeBuilder, that is, the side effect when the build method is called. We want it to create a Trade and set its attributes. There is no duplication with the TradeTest. It is left to the TradeTest to guarantee the correct behavior of the Trade object.

For completeness’ sake, here is the final TradeBuilder class:


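The embedded snippet for this final class is missing too; here is a sketch under the same assumptions, relying on the Trade class and ReportabilityDecision interface described earlier in the post. The with* and populate method names are my inventions for illustration.

```java
public class TradeBuilder {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public TradeBuilder withInboundMessage(String inboundMessage) {
        this.inboundMessage = inboundMessage;
        return this;
    }

    public TradeBuilder withReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
        return this;
    }

    public Trade build() {
        Trade trade = new Trade();
        populate(trade);
        return trade;
    }

    // Package-private so the test can verify the side effect on a mocked
    // Trade without duplicating the assertions in TradeTest.
    void populate(Trade trade) {
        trade.setInboundMessage(inboundMessage);
        trade.setReportabilityDecision(reportabilityDecision);
    }
}
```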

The combination of Mockito and Hamcrest is extremely powerful, allowing us to write better and more readable tests.