dinsdag 27 december 2011

Stopping the Toy Shop

Last week I had a flash meeting, called by a user (the most important one), about a project I'm on. The project builds a replacement for existing software, modernizing it and adding some new tools. The new input was a bunch of wishes that basically diverged from the whole concept, moving to another style of working. Screenshots of competitors were put on the table.

Fear and panic spread amongst us. Older and a bit wiser, I started analyzing the new requirements with the user present. It turned out that they could fit in with the existing ones nicely, by changing the design slightly and adding a new subclass to an existing factory structure. It took some work to keep calm, see through the initial mood of 'the old stuff is simply not good enough', and get to 'the old stuff is still useful in some parts, but we also want a new way of handling another part'.
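A minimal sketch of what that kind of change can look like - all class and style names here are invented for illustration, not the actual project's code. The point is that the existing factory keeps working, and the new style of working arrives as one extra subclass:

```python
class InputHandler:
    """Base class for the existing styles of working."""
    def handle(self, data):
        raise NotImplementedError

class ClassicHandler(InputHandler):
    """The existing behavior, left untouched."""
    def handle(self, data):
        return "classic:" + data

class NewStyleHandler(InputHandler):
    """The newly requested style, added as one subclass."""
    def handle(self, data):
        return "new-style:" + data

# The factory: for existing code, the only change is one new registration.
_HANDLERS = {
    "classic": ClassicHandler,
    "new": NewStyleHandler,
}

def create_handler(style):
    """Existing callers keep asking for 'classic' and notice nothing."""
    return _HANDLERS[style]()
```

Existing callers are untouched; the new requirement lives entirely in the new subclass and one registration line.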

Avoiding the threat of a sudden Toy Shop situation is a good test for your nerves. But it pays to keep calm and analyze immediately. In this case, the meeting ended peacefully, and everyone went their way happily.

In other situations, you may not be so lucky. Then you often have no other recourse than to pull up an overview of existing commitments - early. Ask what should be canceled to make room for the things just mentioned. That often helps to shield against the rest of the list about to be presented. Because remember: every impossible wish, with its hidden expectations, is an attack on everyone's motivation.

woensdag 21 december 2011

Agile Tools

Every once in a while I get asked what tools we use. Last week, someone said: 'Surely, this command-line stuff is not what you're using normally. You are using some kind of IDE, right?'. Nope. No IDE. Huh? OK, let's take a step back.

Tools are there to support your goals. If your goal is to be able to type, compile and run as fast as possible, then an IDE may be your best bet. Mine happens to be to produce the best software possible given the constraints. Constraints like time available, the interests of the programmer, the wishes of the users, and lots more.

One constraint is high 'manageability'. For that, it is important to know exactly what the role of every piece of software is - and its dependencies on other parts. The role of all the tools that claim to support 'Agile' should be to increase the quality of the agile process. As far as I can see, routinely working in IDEs does not serve that goal. Why not? For one thing, you are not forced to think about your dependencies: everything is always available, everywhere. Another serious problem is that quick compile-and-run doesn't invite thorough thinking about the logic; rather, it invites 'run and see what happens'. Which is disastrous for quality.

On a higher level, there are the 'Unified' methods. This is basically old stuff in a new form, trying to alleviate the consequences of top-down schemes. The funny thing is that they all have the same concept behind them: think on the high level, (almost) generate the low level. Which looks much like ... Waterfall revisited. A prime lesson of Agile is that everything from requirements to high-level design to implementation details is coupled. You can say you want the one-button 'Find oil' application, but the implementation details will simply prove that your requirements are wrong.

What you need to do is consider everything at any time. So you need tools that help you gain insight: they generate designs from code rather than the other way round, and they are very free-form. An important tool for us is the wiki. There you can state initial ideas, order them to make estimates, collect examples, etc. (see, for example, the 'No specs?' post). It is ideal for having clients cooperate, even if the only thing they do is read it and make comments.

Procedures and communication are much more important than tools. For example, I can say that a debugger is a very important tool. Every developer should be extremely comfortable with a good debugger. Nobody should deliver anything that hasn't gone through debugger runs, where you simply inspect all your constructs to see whether everything is exactly as you expect and intend it to be. Thus, if you want to, you can state the procedure as: 'never just let it run and see what happens'. The procedure is much more important than the tool, as there are other things that complement debuggers: dump facilities, debug modes, ...

zaterdag 17 december 2011

Hunting down problems

Our dishwasher had been malfunctioning for about two months. It didn't get hot, and sometimes it stopped in the middle of the program - beeping, with a red tap icon blinking (as in: can't get water supply). So we called in a repairman. He started the machine, took temperatures, ran tests, and concluded: there's nothing wrong with this machine. Cost: 138 euros.

My girlfriend called me about this after the guy had left. I got angry, because not being able to reproduce a problem doesn't mean there *is* no problem. Several things can be said about the behavior of the repairman.

First of all, the fact is that there were problems, and often. You can't get away with just 'I cannot reproduce it'. You need to figure out why, and have a model for the future. Suppose you take your car to the shop, telling them it has trouble starting. They try it, and it starts like a dream. Problem solved, right? No: we need to know why the defect could appear in the first place, because that gives us an idea of the chance of recurrence - and a plan to handle the problem in the future.

The second issue already shines through: it's almost impossible to recreate the exact same circumstances. Thus, not being able to reproduce the problem may actually be a useful diagnostic, telling you that your tests simply miss something. For example, your test machine may have just one or a few processors, while the problem only appears with many processors.

That leads to something related. I often hear things like: 'I ran it through valgrind [an advanced diagnostics tool] and it found no problems'. The remark then is: 'So?'. If valgrind does find something, then it is certain that something is wrong. But the reverse is not true: a clean run doesn't mean there is no problem - you may simply be testing the wrong things. Sounds almost like real science.

When handling problems, it's very important to acknowledge: 'this has happened'. Not being able to reproduce or diagnose it is not an excuse. You can ask for understanding - but only if you acknowledge that the problem was real and you just don't have the resources to help.

donderdag 15 december 2011

Design patterns

Some 15 years ago, design patterns were hot. I myself bought the book, Design Patterns. That book was and is hugely popular, and it was the first attempt to capture what most good designers were already doing.

After some time, people began to see the dangers of patterns. First of all, presenting these things as things-to-do skips the phase of discovering why they are necessary, and with it any feel for their relative importance. You will discover these techniques as you go along anyway, and then you'll use them because you have no other option.

Secondly, it invites over-use. Literally everything can be pattern-ized, even the simplest operations. Software development is always a trade-off, and it's important to know what to use when, and how to evaluate the necessity of a certain technique. A rather hilarious article appeared in JOOP about a developer who forced everyone in his team to rigorously use patterns for every little thing, and began to wonder whether he had done something wrong - now that his team didn't deliver any products anymore.
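To make the over-use concrete, here is an invented (and deliberately silly) example: a Strategy pattern wrapped around an operation that is already a one-liner.

```python
# Pattern-ized: a Strategy class and a context class... for adding two numbers.
class AdditionStrategy:
    def execute(self, a, b):
        return a + b

class Calculator:
    def __init__(self, strategy):
        self._strategy = strategy

    def calculate(self, a, b):
        return self._strategy.execute(a, b)

# The trade-off answer: sometimes the simplest operation should stay simple.
def add(a, b):
    return a + b
```

Both give the same result; the pattern only earns its keep once there is a genuine need for interchangeable behaviors.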

All in all, patterns can be useful, if seen as a way to get wiser and discover new ways of doing things - as long as you remember that following schemes blindly, without true understanding, leads you nowhere.

dinsdag 6 december 2011

Beware of the Toy Shop

It's not just Agile projects that need to watch out for the Toy Shop. But especially Agile developers should be heavily aware of the dangers of letting your users into the Toy Shop.

What Toy Shop? The Toy Shop of all the beautiful things that you could do for your users. We all know that given time, you can make whatever is needed. You're a skilled software developer, and your team can produce anything. You are flexible and good at your work. So, you invite some users and show them what technology can do for them these days. They get all excited and start dreaming about all these exciting new functionalities and how brilliantly they could use that in their work.

After some time, your limbic system begins to activate your body's defensive systems. In defensive mode, you try to point out to the users that of course you cannot implement everything: it would cost the whole team years of work, and you need to do lots of other work too. Sure, the users understand. But their limbic system is also activated by now, albeit in another mode. At the end of the meeting, everybody goes their way. The developers leave with a feeling of helplessness ('so much work, so little time'). The users, well, it depends. If you're lucky, they have experience with this sort of meeting and have developed a skeptical attitude ('we never get what we want anyway'). Even worse is if the users go away in an enthusiastic state ('wow, that looks good!').

Over the next months you experience the true kickback of this meeting. Whenever you deliver something, the users are visibly disappointed with such a basic, un-slick product. You worked your butt off, and all they can think of is that they want much better stuff. This demotivates both your team and the users. Being professionals, your team finishes the project with things that are OK, and better than anyone would have thought at the start. But by then, your good developers are leaving ('bloody spoiled users here') and the users are grumbling ('as always, we get the inferior stuff'). Management says the project hasn't failed, but there will be no extension.

There is, by the way, an alternative route: when you're not such good developers after all. In such environments you see nice screen shots appear in project meetings. Many of the toys seem to be made! Unfortunately, when it comes to delivering the actual software, for some reason the 95%-ready state remains 95% ready - indefinitely.


Agile needs lots of communication with users. Guided communication. No toy shops. Limit brainstorming to what you need to know. As soon as your users start moving in the direction of the toy shop (or worse, come to you with their shopping lists), stop them in any way you can. It's hard to find anything more demotivating, for both developers and users, than the effects of the Toy Shop.

zondag 4 december 2011

Limiting scope

I'm currently implementing some parts of the new OpendTect installer. One of the classes has to get files from a remote HTTP site. An issue there is that the site may be down, or failing. Users may want to quit, or wait a bit longer. And in the presence of mirrors, it's a good idea to let the user switch mirrors when such a thing happens.
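The idea can be sketched roughly as follows. This is not the actual OpendTect code (which is C++), just a hypothetical Python outline of the try-the-next-mirror behavior, with invented names:

```python
import urllib.request
import urllib.error

def fetch_from_mirrors(path, mirrors, timeout=10):
    """Try every mirror for `path` in turn; fail only if all of them fail.

    The caller can catch the final error and ask the user whether to
    quit, wait, or try again with a different mirror list.
    """
    last_error = None
    for base in mirrors:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err  # remember why this mirror failed, try the next
    raise ConnectionError("all mirrors failed for " + path) from last_error
```

The real installer additionally has to deal with user interaction and partial downloads; this only shows the fall-through structure.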

All easy to understand, and reasonable. Now is it so strange to think that similar things could happen to 'local' files too? Local servers go down, can become really slow; you may want to use another server. So why don't we prepare for that too? Why do we simply open a local file and work with it?

I think this is a typical example of the necessity of limiting scope. Imagine us analyzing what was necessary for the first OpendTect version. We'd have a UI toolkit, 3D visualization tools, ... file tools ... Now imagine someone analyzing for 'completeness' (as some advocate). If you really went for it, you'd imagine the problems of servers going down during a file read or write, and you'd need to figure out how to handle them. And yes, we may well have thought about that at the time. But we simply limited the scope immediately to 'whatever the OS handles automatically' and went on to get something out there.

This is a general principle. Building software is not only about what to implement; it's also about what to leave out. Not just "can't do" but also "won't do". Part of the art of good design. So far this is all rather standard stuff. In Agile development, you realize that things can change, and thus it is all the more important to make your choices explicit. In many cases it's very beneficial to go through scenarios and alternative designs relating to these "won't do" issues. That can lead you to a design that stands the test of time more gracefully. Just by making a few choices differently, by spending a bit more time now, you can save yourself tons of trouble later.
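One hedged sketch (all names invented) of what 'making the choice explicit' can look like in code: keep the "won't do" behind a thin interface, so that today's local-only scope doesn't block a remote variant later.

```python
import os

class DataSource:
    """The explicit seam: all reading in the application goes through this."""
    def read_text(self, name):
        raise NotImplementedError

class LocalFileSource(DataSource):
    """Today's scope: plain local files, 'whatever the OS handles'."""
    def __init__(self, root):
        self._root = root

    def read_text(self, name):
        with open(os.path.join(self._root, name)) as f:
            return f.read()

# If priorities shift, a remote subclass of DataSource can be added
# later without touching any of the calling code.
```

The "won't do" decision costs nothing extra to implement this way, but it is now visible in the design instead of buried in scattered `open()` calls.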

Thus, limiting scope is unavoidable, and simply a part of any analysis and design. But try to wire your software in such a way that shifting priorities and new insights are easier to support. It can be done - and unlike in Waterfall environments, where this fact is largely ignored, it is a big topic in Agile projects.