Excerpted from Chapter 9

Back to Basics

We want to do everything we must do to have stable, predictable software development. But we don't have time for anything extra. The four basic activities of development are coding, testing, listening, and designing.

Learning To Drive

Four values--communication, simplicity, feedback, and courage. A double handful of principles. Now we are ready to start building a discipline of software development. The first step is to decide on the scope. What is it that we will try to prescribe? What sorts of problems will we address and what sorts of problems will we ignore?

I remember when I first learned to program in BASIC. I had a couple of workbooks covering the fundamentals of programming. I went through them pretty quickly. When I had done that, I wanted to tackle a more challenging problem than the little exercises in the books. I decided I would write a Star Trek game, kind of like one I had played at the Lawrence Hall of Science in Berkeley, but cooler.

My process for writing the programs to solve the workbook exercises had been to stare at the problem for a few minutes, type in the code to solve it, then deal with whatever problems arose. So I sat down confidently to write my game. Nothing came. I had no idea how to write an application bigger than 20 lines. So I stepped away and tried to write out the whole program on paper before typing it in. I got three lines written before I got stuck again.

I needed to do something beyond programming. But I didn't know what else to do.

So, what if we went back to that state, but in the light of experience? What would we do? We know we can't just "code till we're done." What activities would we add? What would we try to get out of each activity as we experienced it afresh?


Coding

At the end of the day, there has to be a program. So, I nominate coding as the one activity we know we can't do without. Whether you draw diagrams that generate code or you type at a browser, you are coding.

What is it that we want to get out of code? The most important thing is learning. The way I learn is to have a thought, then test it out to see if it is a good thought. Code is the best way I know of to do this. Code isn't swayed by the power and logic of rhetoric. Code isn't impressed by college degrees or large salaries. Code just sits there, happily doing exactly what you told it to do. If that isn't what you thought you told it to do, that's your problem.

When you code something up, you also have an opportunity to understand the best structure for the code. There are certain signs in the code that tell you that you don't yet understand the necessary structure.

Code also gives you a chance to communicate clearly and concisely. If you have an idea and explain it to me, I can easily misunderstand. If we code it together, though, I can see in the logic you write the precise shape of your ideas. Again, I see the shape of your ideas not as you see them in your head, but as they find expression in the outside world.

This communication easily turns into learning. I see your idea and I get one of my own. I have trouble expressing it to you, so I turn to code, also. Since it is a related idea, we use related code. You see that idea and have another.

Finally, code is the one artifact that development absolutely cannot live without. I've heard stories of systems where the source code was lost but they stayed in production. Sightings of such beasts have become increasingly rare, however. For a system to live, it must retain its source code.

Since we must have the source code, we should use it for as many of the purposes of software engineering as possible. It turns out that code can be used to communicate--expressing tactical intent, describing algorithms, pointing to spots for possible future expansion and contraction. Code can also be used to express tests, tests that both objectively test the operation of the system and provide a valuable operational specification of the system at all levels.


Testing

The British Empiricist philosophers Locke, Berkeley, and Hume said that anything that can't be measured doesn't exist. When it comes to code, I agree with them completely. Software features that can't be demonstrated by automated tests simply don't exist. I am good at fooling myself into believing that what I wrote is what I meant. I am also good at fooling myself into believing that what I meant is what I should have meant. So I don't trust anything I write until I have tests for it. The tests give me a chance to think about what I want independent of how it will be implemented. Then the tests tell me if I implemented what I thought I implemented.

When most people think of automated tests, they think of testing functionality--that is, what numbers are computed. The more experience I get writing tests, the more I discover I can write tests for nonfunctional requirements--like performance or adherence of code to standards.
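
A nonfunctional test can take the same automated, push-button form as a functional one. Here is a minimal sketch (the routine, the data sizes, and the time budget are all invented for illustration): a performance budget is written directly into an assertion, so a slow implementation fails just like a wrong one.

```python
import time

def lookup(sorted_items, target):
    # Binary search over a sorted list; the property under test here
    # is speed, not just correctness.
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sorted_items) and sorted_items[lo] == target

def test_lookup_is_fast_enough():
    items = list(range(1_000_000))
    start = time.perf_counter()
    for probe in range(0, 1_000_000, 1000):
        assert lookup(items, probe)          # functional check
    elapsed = time.perf_counter() - start
    # The budget below is part of the test, chosen for illustration.
    assert elapsed < 1.0, f"too slow: {elapsed:.3f}s"

test_lookup_is_fast_enough()
```

Swap the binary search for a linear scan and the functional assertions still pass, but the timing assertion starts to pinch; that is the point of testing the nonfunctional requirement.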

Erich Gamma coined the phrase "Test Infected" to describe a person who doesn't code if they don't already have a test. The tests tell you when you are done--when the tests run you are done coding for the moment. When you can't think of any tests to write that might break, you are completely done.
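
The test-infected rhythm is easy to sketch. In this hypothetical example (the function and its cases are invented for illustration), the test is written first and defines what "done" means; then just enough code is written to make it run.

```python
# The test comes first: it states what "done" means for the moment.
def test_word_count():
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("to be or not to be") == 6

# Only now do we write just enough code to satisfy the test.
def word_count(text):
    return len(text.split())

test_word_count()
```

When `test_word_count` runs clean, you are done coding for the moment; when you can't think of another case that might break, you are completely done.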

Tests are both a resource and a responsibility. You don't get to write just one test, make it run, and declare yourself finished. You are responsible for writing every test that you can imagine won't run immediately. After a while you will get good at reasoning about tests--if these two tests work, then you can safely conclude that this third test will work without having to write it. Of course, this is exactly the same kind of reasoning that leads to bugs in programs, so you have to be careful about it. If problems show up later and they would have been uncovered had you written that third test, you have to be prepared to learn the lesson and write that third test next time.

Most software ships without being developed with comprehensive automated tests. Automated tests clearly aren't essential. So why don't I leave testing out of my list of essential activities? I have two answers, one short-term and one long-term.

The long-term answer is that tests keep the program alive longer (if the tests are run and maintained). When you have the tests, you can make more changes longer than you can without the tests. If you keep writing the tests, your confidence in the system increases over time.

One of our principles is to work with human nature and not against it. If all you could make was a long-term argument for testing, you could forget about it. Some people would do it out of a sense of duty or because someone was watching over their shoulder. As soon as the attention wavered or the pressure increased, no new tests would get written, the tests that were written wouldn't be run, and the whole thing would fall apart. So, if we want to go with human nature and we want the tests, we have to find a short-term selfish reason for testing.

Fortunately, there is a short-term reason to write tests. Programming when you have the tests is more fun than programming when you don't. You code with so much more confidence. You never have to entertain those nagging thoughts of "Well, this is the right thing to do right now, but I wonder what I broke." Push the button. Run all the tests. If the light turns green, you are ready to go to the next thing with renewed confidence.

I caught myself doing this in a public programming demonstration. Every time I would turn from the audience to begin programming again, I would push my testing button. I hadn't changed any code. Nothing in the environment had changed. I just wanted a little jolt of confidence. Seeing that the tests still ran gave me that.

Programming and testing together is also faster than just programming. I didn't expect this effect when I started, but I certainly noticed it and have heard it reported by lots of other people. You might gain productivity for half an hour by not testing. Once you have gotten used to testing, though, you will quickly notice the difference in productivity. The gain in productivity comes from a reduction in the time spent debugging--you no longer spend an hour looking for a bug, you find it in minutes. Sometimes you just can't get a test to work. Then you likely have a much bigger problem, and you need to step back and check whether your tests are right or whether the design needs refinement.

However, there is a danger. Testing done badly becomes a set of rose-colored glasses. You gain false confidence that your system is okay because the tests all run. You move on, little realizing that you have left a trap behind you, armed and ready to spring the next time you come that way.

The trick with testing is finding the level of defects you are willing to tolerate. If you can stand one customer complaint per month, then invest in testing and improve your testing process until you get to that level. Then, using that standard of testing, move forward as if the system is fine if the tests all run.

Looking ahead just a little, we will have two sets of tests. We will have unit tests written by the programmers to convince themselves that their programs work the way they think the programs work. We will also have functional tests written by (or at least specified by) the customers to convince themselves that the system as a whole works the way they think the system as a whole should work.

There are two audiences for the tests. The programmers need to make their confidence concrete in the form of tests so everyone else can share in their confidence. The customers need to prepare a set of tests that represent their confidence, "Well, I guess if you can compute all of these cases, the system must work."
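
The two levels can be sketched side by side. In this toy example (the invoice calculator, its names, and the tax figures are invented for illustration), the unit test expresses the programmer's confidence in one small piece, while the functional test expresses a whole case the customer cares about.

```python
# A toy system: a two-function invoice calculator.
def line_total(quantity, unit_price):
    return quantity * unit_price

def invoice_total(lines, tax_rate):
    subtotal = sum(line_total(q, p) for q, p in lines)
    return round(subtotal * (1 + tax_rate), 2)

# Unit test: the programmer's confidence in one small piece.
def test_line_total():
    assert line_total(3, 2.50) == 7.50

# Functional test, specified by the customer:
# "2 widgets at $10 and 1 gadget at $5, taxed at 8%, comes to $27.00."
def test_invoice_from_customer():
    assert invoice_total([(2, 10.00), (1, 5.00)], 0.08) == 27.00

test_line_total()
test_invoice_from_customer()
```
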


Listening

Programmers don't know anything. Rather, programmers don't know anything that business people think is interesting. Hey, if those business people could do without programmers, they would throw us out in a second.

Where am I going with this? Well, if you resolve to test, you have to get the answers from somewhere. Since you (as a programmer) don't know anything, you have to ask someone else. They will tell you what the expected answers are, and what some of the unusual cases are from a business perspective.

If you are going to ask questions, then you'd better be prepared to listen to the answers. So listening is the third activity in software development.

Programmers must listen in the large, too. They listen to what the customer says the business problem is. They help the customer to understand what is hard and what is easy, so it is an active kind of listening. The feedback they provide helps the customer understand their business problems better.

Just saying, "You should listen to each other and to the customer," doesn't help much. People try that and it doesn't work. We have to find a way to structure the communication so that the things that have to be communicated get communicated when they need to be communicated and in the amount of detail they need to be communicated. Similarly, the rules we develop also have to discourage communication that doesn't help, that is done before what is to be communicated is really understood, or that is done in such great detail as to conceal the important part of the communication.


Designing

Why can't you just listen, write a test case, make it run, listen, write a test case, make it run indefinitely? Because we know it doesn't work that way. You can do that for a while. In a forgiving language you may even be able to do that for a long while. Eventually, though, you get stuck. The only way to make the next test case run is to break another. Or the only way to make the test case run is far more trouble than it is worth. Entropy claims another victim.

The only way to avoid this is to design. Designing is creating a structure that organizes the logic in the system. Good design organizes the logic so that a change in one part of the system doesn't always require a change in another part of the system. Good design ensures that every piece of logic in the system has one and only one home. Good design puts the logic near the data it operates on. Good design allows the extension of the system with changes in only one place.
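
"One and only one home" for each piece of logic can be shown in a few lines. This sketch is invented for illustration (the discount rule and every name in it are hypothetical): first the same rule duplicated in two places, then the same behavior with the rule given a single home.

```python
# Bad design: the member-discount rule lives in two places.
def receipt_total_bad(prices, is_member):
    total = sum(prices)
    if is_member:
        total = total * 0.9        # rule, copy #1
    return total

def shipping_label_bad(prices, is_member):
    total = sum(prices)
    if is_member:
        total = total * 0.9        # rule, copy #2 -- change one, forget the other
    return f"value: {total:.2f}"

# Good design: the rule has one home, so a change happens in one place.
def discounted_total(prices, is_member):
    total = sum(prices)
    return total * 0.9 if is_member else total

def shipping_label(prices, is_member):
    return f"value: {discounted_total(prices, is_member):.2f}"
```

If the discount changes to 15%, the first version requires remembering both copies; the second requires editing exactly one line.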

Bad design is just the opposite. One conceptual change requires changes to many parts of the system. Logic has to be duplicated. Eventually, the cost of a bad design becomes overwhelming. You just can't remember any more where all the implicitly linked changes have to take place. You can't add new function without breaking existing function.

Complexity is another source of bad design. If a design requires four layers of indirection to find out what is really happening, and if those layers don't provide any particular functional or explanatory purpose, then the design is bad.

So, the final activity we have to structure in our new discipline is designing. We have to provide a context in which good designs are created, bad designs are fixed, and the current design is learned by everyone who needs to learn it.

As you'll see in the following chapters, how XP achieves design is quite different from how many software processes achieve design. Design is part of the daily business of all programmers in XP in the midst of their coding. But regardless of the strategy used to achieve it, the activity of design is not an option. It must be given serious thought for software development to be effective.


Conclusion

So you code because if you don't code, you haven't done anything. You test because if you don't test, you don't know when you are done coding. You listen because if you don't listen you don't know what to code or what to test. And you design so you can keep coding and testing and listening indefinitely. That's it. Those are the activities we have to help structure: coding, testing, listening, and designing.