TDD is a programming technique that helps us write thoroughly tested code and evolve our code toward the best possible design at each stage.
The big picture, from the book "Test Driven" by Lasse Koskela (this and all following quotations are taken from that book):
“Only ever write code to fix a failing test.” That’s test-driven development, or TDD, in one sentence. First we write a test, then we write code to make the test pass. Then we find the best possible design for what we have, relying on the existing tests to keep us from breaking things while we’re at it. This approach to building software encourages good design, produces testable code, and keeps us away from over-engineering our systems because of flawed assumptions. And all of this is accomplished by the simple act of driving our design each step of the way with executable tests that move us toward the final implementation.
Using TDD, we encourage good software design. Here "good" means "good enough for the current moment": a design that passes the tests and meets the acceptance criteria without being over-engineered.
Well-written code exhibits good design and a balanced division of responsibilities without duplication—all the good stuff. Poorly written code doesn’t, and working with it is a nightmare in many aspects. One of them is that the code is difficult to understand and, thus, difficult to change. As if that wasn’t enough of a speed bump, changing problematic code tends to break functionality elsewhere in the system, and duplication wreaks havoc in the form of bugs that were supposed to be fixed already. The list goes on.
The quality of produced software remains a huge problem. Poorly written code is
Difficult to understand
Difficult to change
Prone to breaking functionality elsewhere in the system when changed
This is a real problem, because software needs to change.
Nobody likes buying a pig in a poke. Yet the customers of software development groups have been constantly forced to do just that. In exchange for a specification, the software developers have set off to build what the specification describes—only to find out 12 months later that the specification didn’t quite match what the customer intended back then. Not to mention that, especially in the modern day’s hectic world of business, the customer’s current needs are significantly different from what they were last year.
Software is delivered late and does not match customer needs
Customer needs change significantly over time
To create maintainable, working software that meets the customer's actual, present needs, we need to learn
How to build things right
How to build the right things
To ensure both internal and external software quality, we should use TDD at the lower (code) level and acceptance TDD at the higher (feature) level.
TDD is a way of programming that encourages good design and is a disciplined process that helps us avoid programming errors. TDD does so by making us write small, automated tests, which eventually build up a very effective alarm system for protecting our code from regression. You cannot add quality into software after the fact, and the short development cycle that TDD promotes is well geared toward writing high-quality code from the start.
To avoid programming errors, we should write high-quality code from the start: build small, autonomous components and test them thoroughly with automated tests.
The short cycle is different from the way we’re used to programming. We’ve always designed first, then implemented the design, and then tested the implementation somehow—usually not too thoroughly. (After all, we’re good programmers and don’t make mistakes, right?) TDD turns this thinking around and says we should write the test first and only then write code to reach that clear goal. Design is what we do last. We look at the code we have and find the simplest design possible.
We should break the old software development cycle: design → implementation → testing (done neither thoroughly nor in an automated way).
Instead, write the test first, then make it pass, and only then settle on the simplest design possible. When requirements change, change the test first, then the code and design.
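A minimal sketch of this test-first cycle in Python (the leap-year example and names are illustrative, not from the book): the test is written first and states the goal; only then is just enough code written to make it pass.

```python
import unittest

# Step 1 (red): write a failing test that states the goal.
class TestLeapYear(unittest.TestCase):
    def test_year_divisible_by_4_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_every_400_years_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Step 2 (green): write just enough code to make the tests pass.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Run with: python -m unittest <this_file>
```

If a requirement changes (say, a different calendar rule), we would first change or add a test, watch it fail, and only then touch `is_leap_year`.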
The last step in the cycle is called refactoring. Refactoring is a disciplined way of transforming code from one state or structure to another, removing duplication, and gradually moving the code toward the best design we can imagine. By constantly refactoring, we can grow our code base and evolve our design incrementally.
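As a hypothetical illustration of refactoring under a test safety net (the price-formatting example is mine, not the book's): duplication is extracted into one helper, and the existing assertions confirm that behavior is unchanged.

```python
# Before refactoring: the same formatting logic is duplicated.
def format_net_price(amount):
    return "$" + "{:.2f}".format(amount)

def format_gross_price(amount):
    return "$" + "{:.2f}".format(amount * 1.2)

# After refactoring: duplication extracted into one helper,
# and the magic number given a name. Behavior is identical.
TAX_RATE = 1.2

def format_money(amount):
    return "$" + "{:.2f}".format(amount)

def format_net_price_v2(amount):
    return format_money(amount)

def format_gross_price_v2(amount):
    return format_money(amount * TAX_RATE)

# The tests that drove the original code now guard the refactoring:
assert format_net_price(10) == format_net_price_v2(10) == "$10.00"
assert format_gross_price(10) == format_gross_price_v2(10) == "$12.00"
```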
In short, acceptance tests are indicators of the completion of a requirement or feature. When all acceptance tests for a requirement or feature are passing, you know you’re done.
In essence, this means that tests are no longer merely a verification tool, but also an integral part of requirements and specification as well as a medium for customer collaboration. In this section, we’ll go into more detail about these new roles for tests, starting with examining their part in nurturing close collaboration between developers, testers, and customers, and then discussing the use of tests as the shared language facilitating this collaboration.
Acceptance TDD as a technique is not coupled to any specific format for expressing requirements. The same ideas can be applied much to the same effect whether you’re implementing use cases, user stories, or some other equivalent medium for documenting what needs to be done. It’s worth noting, though, that teams using user stories to manage their requirements tend to talk about story test-driven development instead—which is a different name for the same technique.
The test-driven software development cycle starts from use cases (user stories, job stories, etc.). When use cases are documented using acceptance criteria, those criteria become a new shared language between developers, customers, testers, and business analysts.
With acceptance TDD, we are able to collaborate effectively by bringing together the knowledge, skills, and abilities required for doing a good job.
One of the biggest problems of developing software for someone else is the prevalent ambiguity in requirements. It is far from child’s play to express and communicate requirements in such a way that no information is lost and that the original idea is transmitted fully intact. Some would go as far as saying it’s impossible to do so. After all, we can’t read minds
This problem is highlighted when the medium used for this communication is written documentation—a requirements specification, for example—which is far from being a perfect way to transfer information and understanding. If we were able to transform the requirements into executable tests that verify whether the system conforms to each particular aspect of the specification, there would be many fewer problems with ambiguity and less room for interpretation on behalf of developers. This is the wonderful premise of tests as specification.
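To make "tests as specification" concrete, here is a hedged sketch of an acceptance criterion turned into an executable test. Real teams often use tools such as FitNesse or Cucumber for this; the plain-Python Given/When/Then structure and the `Account` class below are my own illustration, not the book's.

```python
# Acceptance criterion (the shared language with the customer):
#   Given an account with a $50 balance,
#   When the owner withdraws $30,
#   Then the withdrawal succeeds and the balance is $20.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Reject overdrafts; otherwise deduct and report success.
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

def test_withdrawal_within_balance():
    account = Account(balance=50)           # Given
    ok = account.withdraw(30)               # When
    assert ok and account.balance == 20     # Then

test_withdrawal_within_balance()
```

When a test like this passes, it verifies one concrete aspect of the specification with no room for interpretation.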