Some 15 years ago, design patterns were hot. I myself bought the book, Design Patterns. That book was and is hugely popular, and it was the first attempt to capture what most good designers were already doing.
After some time people began to see the dangers of patterns. First of all, presenting patterns as ready-made things-to-do doesn't move you through the phase of discovering why they are necessary, so you never develop a feel for their relative importance. You will discover these techniques as you go along anyway, and then you'll use them because you have no other option.
Secondly, it invites over-use. Literally everything can be pattern-ized, even the simplest operations. Software development is always a trade-off, and it's important to know what to use when, and how to evaluate whether a certain technique is really needed. A rather hilarious article appeared in JOOP about a developer who forced everyone on his team to rigorously use patterns for every little thing, and who then began to wonder whether he had done something wrong, now that they didn't deliver any products anymore.
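To make the over-use point concrete, here is a deliberately silly, hypothetical sketch (not from that article) of pattern-izing a trivial operation:

    #include <memory>
    #include <utility>

    // A full Strategy pattern wrapped around what is, in the end, 'a + b'.
    class BinaryOperationStrategy
    {
    public:
        virtual ~BinaryOperationStrategy() {}
        virtual int execute( int a, int b ) const = 0;
    };

    class AdditionStrategy : public BinaryOperationStrategy
    {
    public:
        int execute( int a, int b ) const override { return a + b; }
    };

    class Calculator
    {
    public:
        explicit Calculator( std::unique_ptr<BinaryOperationStrategy> s )
            : strategy_(std::move(s)) {}
        int calculate( int a, int b ) const { return strategy_->execute(a,b); }
    private:
        std::unique_ptr<BinaryOperationStrategy> strategy_;
    };

    // All this ceremony to compute what is simply '1 + 2':
    //   Calculator calc( std::unique_ptr<BinaryOperationStrategy>(
    //                        new AdditionStrategy ) );
    //   calc.calculate( 1, 2 );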
All in all, patterns can be useful if seen as a way to get wiser and discover new ways of doing things, as long as you remember that following schemes blindly, without true understanding, will lead you nowhere.
Thursday, 15 December 2011
Tuesday, 6 December 2011
Beware of the Toy Shop
It's not just Agile projects that need to watch out for the Toy Shop, but Agile developers especially should be aware of the dangers of letting their users into the Toy Shop.
What Toy Shop? The Toy Shop of all the beautiful things you could do for your users. We all know that, given time, you can make whatever is needed. You're a skilled software developer, and your team can produce anything. You are flexible and good at your work. So, you invite some users and show them what technology can do for them these days. They get all excited and start dreaming about all the new functionality and how brilliantly they could use it in their work.
After some time, your limbic system begins to activate your body's defensive systems. In defensive mode, you try to point out to the users that of course you cannot implement everything: it would cost the whole team years of work, and you have lots of other work to do too. Sure, the users understand. But their limbic system is also activated by now, albeit in another mode. At the end of the meeting, everybody goes their own way. The developers already have a feeling of helplessness ('so much work, so little time'). The users, well, it depends. If you're lucky they have experience with this sort of meeting and have developed a skeptical attitude ('we never get what we want anyway'). It's even worse if the users go away in an enthusiastic state ('wow, that looks good!').
In the following months you experience the true kickback of this meeting. Whenever you deliver something, the users are visibly disappointed with such a basic, un-slick product. You worked your butt off, and all they can think is that they want much better stuff. This demotivates both your team and the users. Being professionals, your team finishes the project with things that are OK, and better than anyone would have thought at the start of the project. But by then, your good developers are leaving ('bloody spoiled users here') and the users are grumbling ('as always, we get the inferior stuff'). Management says the project hasn't failed, but there will be no extension.
There is, by the way, an alternative route: when the developers are not so good after all. In such environments you see nice screenshots appear in project meetings. Many of the toys seem to have been made! Unfortunately, when it comes to delivering the actual software, for some reason the 95%-ready state remains 95% ready, indefinitely.
Agile needs lots of communication with users. Guided communication. No toy shops. Limit brainstorming to what you need to know. As soon as your users start moving in the direction of the toy shop (or worse, come to you with their shopping lists), stop them in any way you can. It's hard to find anything more demotivating for both developers and users than the effects of the Toy Shop.
Sunday, 4 December 2011
Limiting scope
I'm currently implementing some things for the new OpendTect installer. One of the classes has to get files from a remote HTTP site. An issue there is that the site may be down or failing. Users may want to quit, or wait a bit longer. When mirrors are available, it's a good idea to let the user switch to another mirror when such a thing happens.
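A minimal sketch of such a retry/switch-mirror loop could look like the following. All names are hypothetical (this is not the actual installer code), and the HTTP transfer itself is abstracted behind a callable so any client library could be plugged in:

    #include <cstddef>
    #include <functional>
    #include <string>
    #include <vector>

    enum class UserChoice { Retry, SwitchMirror, Quit };

    // Hypothetical hooks: the actual HTTP fetch, and a dialog asking the
    // user what to do after a failure.
    using FetchFn   = std::function<bool(const std::string& url)>;
    using AskUserFn = std::function<UserChoice(const std::string& mirror)>;

    bool getFile( const std::vector<std::string>& mirrors,
                  const std::string& relpath,
                  const FetchFn& fetch, const AskUserFn& askuser )
    {
        std::size_t cur = 0;
        while ( true )
        {
            if ( fetch(mirrors[cur] + "/" + relpath) )
                return true;                        // transfer succeeded

            switch ( askuser(mirrors[cur]) )
            {
            case UserChoice::Retry:         break;  // same mirror, try again
            case UserChoice::SwitchMirror:  cur = (cur+1) % mirrors.size(); break;
            case UserChoice::Quit:          return false;
            }
        }
    }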
All easy to understand, and reasonable. Now, is it so strange to think that similar things could also happen to 'local' files? Local servers go down or become really slow, and you may want to use another server. So why don't we prepare for that too? Why is it that we simply open a local file and work with it?
I think this is a typical example of the necessity of limiting scope. Imagine us analyzing what was necessary for the first OpendTect version. We'd have a UI toolkit, 3D visualization tools, ... file tools ... Now imagine someone analyzing for 'completeness' (as some advocate). If you really went for it, you'd be imagining the problems of servers going down during file reads or writes, and you'd need to figure out how to handle all of it. And yes, we may well have thought about this at the time. But we simply limited the scope to 'whatever the OS handles automatically' and went on to get something out there.
This is a general principle. Building software is not only about what to implement, it's also about what to leave out. Not just "can't do" but also "won't do". That is part of the art of good design. Now, this is all rather standard stuff. In Agile development you realize that things can change, so it's even more important to make these choices explicit. In many cases it's very beneficial to walk through scenarios and alternative designs for these "won't do" issues. That can lead you to a design that stands the test of time more gracefully. By making a few choices differently, by spending a bit more time now, you can save yourself tons of trouble later.
Thus, limiting scope is unavoidable, and simply a part of any analysis and design. But try to wire your software in such a way that shifting priorities and new insights are easier to support. It can be done, and contrary to Waterfall environments, where this fact is largely ignored, it is a big topic in Agile projects.
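One way such wiring could look, sketched with illustrative names (not actual OpendTect classes): hide where data comes from behind a small interface. Today only a local-file implementation exists; if remote sources ever move into scope, they slot in without touching client code.

    #include <fstream>
    #include <istream>
    #include <memory>
    #include <string>

    // The "won't do" (remote access, retries, mirrors) stays out of scope,
    // but behind an interface, so the decision is cheap to revisit later.
    class StreamSource
    {
    public:
        virtual ~StreamSource() {}
        virtual std::unique_ptr<std::istream> open( const std::string& nm ) = 0;
    };

    class LocalFileSource : public StreamSource
    {
    public:
        std::unique_ptr<std::istream> open( const std::string& nm ) override
        {
            std::unique_ptr<std::ifstream> strm( new std::ifstream(nm) );
            if ( !strm->good() )
                return nullptr;  // OS-level failure handling only, for now
            return strm;         // a future HttpSource could retry or
        }                        // switch mirror, invisible to callers
    };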
Tuesday, 29 November 2011
Why do software projects fail?
On the radio I heard a couple of those horror stories about large (semi-)government software projects failing. The presenter asked the eternal question: why do these projects fail so often?
Can large projects be done in an Agile way? Without proof, I'm pretty sure they can. All you need is some people with talent and vision. And that's exactly what is lacking on these large projects. I've seen a few in my day. The terror of SDM-like project management tools makes it impossible, even for the good people, to make the project work.
For people who know nothing about making software, this is all a mystery. They compare it to physical building projects, which are conducted exactly Waterfall-style. Why would that approach not work in software? Here's one refutation of this analogy. Here's mine.
Let's say the physical project we're talking about is the installation of a sewage system in part of a city. Planning encompasses re-directing traffic, breaking up roads, getting the old systems out, placing new ones, taking care of cables, and so forth. This is all planned ahead and executed on a tight schedule.
Now consider the computer world. There, things change at enormous speed. Think about whether at the end of the year people will still want sewage at all - maybe destruction-in-place has been introduced. Roads may be gone because everyone's flying by then. Ten times more cables need to be laid than expected. Now imagine you didn't really know what to do in the first place. All your precious large phase-X final reports are useless, and lead to ever more costs and failures. And remember, even the physical building projects often go over budget and past schedule.
Another problem is that in the computer world there is hardly anybody at the top of the government organizations with enough knowledge and experience to really grasp what the computer companies are offering, and whether it is any good. The Cap Geminis of the world simply claim that what they offer is necessary, and who can refute it? Certainly not the government people, who were trained in exactly the same environments.
Lacking true insight, and lacking methods that treat change as a basic principle - it's no wonder these projects fail. Simply adopting authoritarian methods demanding that things be 100% controlled will never guarantee success ...
Friday, 25 November 2011
Simple tests
I just got an e-mail at OpendTect support about not being able to load the simplest SEG-Y file from the Madagascar web site into OpendTect. Between the lines I could read the sender's implication: if anything should be loadable, surely it's the simplest file of all.
That is a common misconception, not just in this case but in general. Very often the simplest cases are very bad test cases. They lack everything needed for good testing, and sometimes the system is 100% correct in rejecting them. Therefore, a good rule is: make the first test case simple, but not simpler than possible.
For test material, the simple 'first test' must comply with the requirements of the system. Other test cases, designed to test error conditions, can be made later. (By the way, you can very often simulate error conditions by changing some variables in the debugger.) Later still, you can add test cases that cover as much of the specs as possible.
This particular SEG-Y file didn't have any positions in it. That may be acceptable for Madagascar, but OpendTect doesn't buy it and tells you that all traces are invalid. Which they are (though there are numerous workarounds).
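To sketch the kind of check involved (hypothetical names, not the actual OpendTect SEG-Y reader): a trace without usable inline/crossline numbers or coordinates simply cannot be positioned, so rejecting it is the correct response.

    #include <cstdint>

    // Minimal stand-in for the positioning part of a SEG-Y trace header.
    struct TraceHeader
    {
        int32_t inl = 0, crl = 0;   // inline/crossline numbers
        double  x = 0.0, y = 0.0;   // world coordinates
    };

    // A trace is only usable if it carries some position information.
    inline bool hasValidPosition( const TraceHeader& th )
    {
        const bool hasic = th.inl != 0 || th.crl != 0;
        const bool hasxy = th.x != 0.0 || th.y != 0.0;
        return hasic || hasxy;   // all zeros -> no position -> invalid trace
    }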
Wednesday, 23 November 2011
Handling complexity
An essential part of Agile software development is using methods that differ greatly from the old-school structured development paradigm. Structured analysis/design/programming has always been about translating a fixed set of specs into executable code. It was never meant to support change. That support is what object- and service-oriented development added.
To get this working right, you have to give a central role to separation of concerns. Every object you make should have a clear task of low to medium complexity. Further, the best objects are designed mainly with the service they deliver to the outside world in mind.
Complexity is unavoidable in any non-trivial system. The way to attack it is to split the complexity up amongst many objects that are each of low complexity. So rather than making one object that handles all the complexity directly, you make a bunch of co-operating objects that handle the different aspects, then blend their services in a higher-level object. Very often this has the added benefit that the separate sub-objects can be used in other contexts.
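A small sketch of that idea, with purely illustrative names: each sub-object handles one aspect, and the higher-level object only blends their services.

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    class LineReader              // one concern: getting raw lines from disk
    {
    public:
        std::vector<std::string> read( const std::string& fnm ) const
        {
            std::vector<std::string> lines;
            std::ifstream strm( fnm );
            std::string line;
            while ( std::getline(strm,line) )
                lines.push_back( line );
            return lines;
        }
    };

    class RecordParser            // one concern: turning a line into a record
    {
    public:
        struct Record { std::string key; double val = 0.0; };
        Record parse( const std::string& line ) const
        {
            Record rec;
            std::istringstream iss( line );
            iss >> rec.key >> rec.val;
            return rec;
        }
    };

    class Importer                // higher-level object blending the services
    {
    public:
        std::vector<RecordParser::Record> import( const std::string& fnm ) const
        {
            std::vector<RecordParser::Record> recs;
            for ( const auto& line : reader_.read(fnm) )
                recs.push_back( parser_.parse(line) );
            return recs;
        }
    private:
        LineReader reader_;       // both sub-objects stay simple and
        RecordParser parser_;     // re-usable in other contexts
    };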
An important principle is to make every service just right in terms of generality. If you make the object too general, then the interface will become huge and difficult to understand. Make it too specific and it may be easy to use, but impossible to re-use in other contexts. As always, this is part of an optimization process, where you weigh all aspects and try to find an optimum.
In the end, the main objective of using advanced design and programming constructs should be to attack the problem of steadily increasing complexity.
In a naive system, complexity grows fast, possibly exponentially. In well-designed systems, complexity is spread evenly over the system and grows linearly. This costs work up front, meaning you will get your first results later than in the naive system. The benefits come later in the life cycle of your software.
In OpendTect, we are now way past the cross-over point of those two curves. There is no way we could ever have reached what we have now without investing daily in complexity reduction - keeping the complexity growth linear.
Saturday, 19 November 2011
Team roles
The more experience you get, the more likely it is that you will pick up more and more management tasks. Getting older, I try to avoid this as much as possible by positioning myself as an internal adviser. But you cannot avoid telling other people what to do; that's one thing managers do. Good managers, though, should mainly be facilitating and motivating, and in good teams that's what they do most of the time.
What does the manager want? He wants his team to do everything that is needed, with a minimum of steering. The more you have to steer the actual working processes, the less you like it. The best situation you can get into is when all the work is delegated and you can relax and do nothing other than keep people motivated and help them every once in a while.
What does the manager fear? That things go wrong. People can do all sorts of things to screw up their work: do the wrong things, maybe at the wrong time, maybe not communicate enough, whatever. You get the picture. What do you want? Feedback. Reporting, asking before taking crucial decisions, that sort of thing. So, you have to keep yourself informed. If you're lucky, this information comes automatically. If you're unlucky, you have to constantly go and beg for it.
For the people being managed, the above creates a certain tension: do I give enough feedback? Or too much? Do I decide too much on my own, or do I ask for guidance too much? Should I try to figure out more things myself, or should I ask for help more? In all of the above you can see that team work is - like so many things in software development - an optimization game. Being very pro-active and self-supporting can also mean being low on feedback and spending a lot of time figuring out things that others could easily help you with. On the other end of the spectrum is the re-active, help-leeching colleague who seems to suck the energy out of all of his or her team members.
In Agile teams, we like people to be pro-active and self-supporting. The manager's task is rather easy compared to traditional teams. But ... keep monitoring.