Tuesday, 27 December 2011

Stopping the Toy Shop

Last week I was called into a sudden meeting by a user (the most important one) about a project I'm on. The project is building a replacement for existing software, modernizing it and adding some new tools. The new input was a bunch of wishes that basically diverged from the whole concept, moving to another style of working. Screen shots of competitors were put on the table.

Fear and panic spread amongst us. Older and a bit wiser, I started analyzing the new requirements with the help of the user present. It turned out that the new requirements could fit in nicely with the existing ones, by changing the design slightly and adding a new subclass to an existing factory structure. It took some work to keep calm, see through the initial mood of 'the old stuff is simply not good enough', and move to a state of 'the old stuff is still useful in some parts, but we also want a new way of handling another part'.
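
To make that last move concrete: here is a minimal, hypothetical sketch (all names invented, not the actual project code) of what 'adding a new subclass to an existing factory structure' can look like. The existing concept stays intact; the new style of working becomes just one more concrete product of the factory.

    #include <iostream>
    #include <memory>
    #include <string>

    // The existing concept: an abstract 'style of working' that the tools build on.
    class WorkStyle
    {
    public:
        virtual             ~WorkStyle()    {}
        virtual std::string name() const    = 0;
        virtual void        run()           = 0;
    };

    // Already present in the software.
    class ClassicWorkStyle : public WorkStyle
    {
    public:
        std::string name() const override  { return "Classic"; }
        void        run() override         { /* existing behavior */ }
    };

    // The new wish: another style of working, added without touching the rest.
    class CompetitorLikeWorkStyle : public WorkStyle
    {
    public:
        std::string name() const override  { return "CompetitorLike"; }
        void        run() override         { /* new behavior */ }
    };

    // Only the factory needs to learn about the new subclass.
    std::unique_ptr<WorkStyle> createWorkStyle( const std::string& key )
    {
        if ( key == "CompetitorLike" )
            return std::make_unique<CompetitorLikeWorkStyle>();
        return std::make_unique<ClassicWorkStyle>();
    }

    int main()
    {
        const auto style = createWorkStyle( "CompetitorLike" );
        std::cout << "Using style: " << style->name() << std::endl;
        return 0;
    }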

Avoiding the threat of a sudden Toy Shop situation is a good test of your nerves. But it pays to keep calm and analyze immediately. In this case, the meeting ended peacefully, and everyone went their way happily.

In other situations, you may not be so lucky. In such cases you often have no other recourse than to pull up an overview of already existing commitments - early. Ask what should be canceled to make room for the things just mentioned. That often helps to shield against the rest of the list that is about to be presented. Because remember, every impossible wish with its hidden expectations is an attack on everyone's motivation.

Wednesday, 21 December 2011

Agile Tools

Every once in a while, I get asked what tools we use. Last week, someone said: 'Surely, this command-line stuff is not what you're using normally. You are using some kind of IDE, right?'. Nope. No IDE. Huh? OK, let's take a step back.

Tools are there to support your goals. If your goal is to be able to type, compile and run as fast as possible, then an IDE may be your best bet. My goal happens to be to produce the best software possible given the constraints. Constraints like the time available, the interests of the programmer, the wishes of the users, and lots more.

One constraint is a high 'manageability'. For that, it is important to know exactly what the role of every piece of software is. And its dependencies on other parts. The role of all sorts of tools that claim to support 'Agile' must be to increase the quality of the agile process. As far as I can see, routinely working in IDEs does not serve that goal. Why not? Because, for one, you are not forced to think about your dependencies: everything is always available, everywhere. Another serious problem is that quick compile-and-run doesn't invite you to think the logic through thoroughly; rather, it invites 'run and see what happens'. Which is disastrous for quality.

On a higher level, there are the 'Unified' methods. These are basically old stuff in a new form, trying to alleviate the consequences of top-down schemes. The funny thing is that they all have the same concept behind them: think on the high level, (almost) generate the low level. Which looks much like ... Waterfall revisited. A prime lesson of Agile is that everything from requirements to high-level design to implementation details is coupled. You can say you want the one-button 'Find oil' application, but the implementation details will simply prove that your requirements are wrong.

What you need to do is consider everything at any time. So you need tools that help you gain insight: they generate designs from code rather than the other way round, and they are very 'free-form'. An important tool for us is the wiki. With it you can state initial ideas, order them to make estimates, collect examples, etc. (see, for example, the 'No specs?' post). It is ideal for getting clients to cooperate, even if the only thing they do is read it and make comments.

Procedures and communication are much more important than tools. For example, I can say that a debugger is a very important tool. Every developer should be extremely comfortable with a good debugger. Nobody should deliver anything that hasn't gone through debugger runs, where you simply inspect all your constructs to see whether everything is exactly as you expect and intend it to be. Thus, if you want to, you can state the procedure as: 'never just let it run and see what happens'. This is much more important than the tool, as there are other things that complement debuggers: dump facilities, debug modes, ...
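
As one hedged illustration of such a complement, here is a hypothetical sketch (not OpendTect code; the class and the environment variable are invented) of a simple dump facility: state you can inspect from the debugger at any moment, or have printed automatically in a debug mode.

    #include <cstdlib>
    #include <iostream>

    // Hypothetical loader class that can dump its own state on request.
    class GatherLoader
    {
    public:
        int nrtraces_ = 0;
        int nrsamples_ = 0;

        void dump( std::ostream& strm ) const
        {
            strm << "GatherLoader: " << nrtraces_ << " traces, "
                 << nrsamples_ << " samples" << std::endl;
        }

        void load()
        {
            nrtraces_ = 240; nrsamples_ = 1001;   // ... the actual work ...
            if ( std::getenv("MYAPP_DEBUG") )     // invented debug-mode switch
                dump( std::cerr );
        }
    };

    int main()
    {
        GatherLoader loader;
        loader.load();
        loader.dump( std::cout );   // the same facility, called explicitly
        return 0;
    }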

Saturday, 17 December 2011

Hunting down problems

Our dishwasher has been malfunctioning for about two months. It didn't get hot, and sometimes it stopped in the middle of the program - beeping, with a red tap icon blinking (as in: can't get water supply). So we called in a repairman. He started the machine, took temperatures, did tests, and concluded: there's nothing wrong with this machine. Cost: 138 euros.

My girlfriend called me about this after the guy had left. I got angry, because not being able to reproduce the problem doesn't mean there *is* no problem. There are several things that can be said about the behavior of the repairman.

First of all, the fact is that there were problems, and often. You can't get away with just 'I cannot reproduce it'. You need to figure out why, and have a model for the future. Suppose you go to the garage telling them that the car has trouble starting. They try it, and it starts like a dream. Problem solved, right? No, we need to know why the defect could appear in the first place, because that will give us an idea of the chance of recurrence. And a plan to handle the problem in the future.

The second issue already shines through: it's almost impossible to recreate the exact same circumstances. Thus, not being able to reproduce the problem may actually be a good diagnostic, telling you that your tests simply miss something. For example, you may be testing on just one or a few processors, while the problem only starts with many processors.

That leads to a related point. I often hear things like: 'I ran it through valgrind [an advanced diagnostics tool] and it found no problems'. My reaction then is: 'So?' It may simply mean that you're testing the wrong things. If valgrind does find something, then it is certain that something is wrong. But the reverse is not true. Sounds almost like real science.
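
A small made-up example of why a clean valgrind run proves little: the function below is memory-clean, so valgrind stays silent, yet the result is plainly wrong.

    #include <iostream>
    #include <vector>

    // Hypothetical example: averaging trace amplitudes.
    double average( const std::vector<double>& vals )
    {
        double sum = 0;
        for ( size_t i = 0; i < vals.size(); ++i )
            sum += vals[i];
        // Logic bug valgrind will never report: dividing by a hard-coded 100
        // instead of vals.size(). The program is memory-clean, yet wrong.
        return sum / 100;
    }

    int main()
    {
        const std::vector<double> vals { 1, 2, 3 };
        std::cout << average(vals) << std::endl;   // prints 0.06, not 2
        return 0;
    }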

When handling problems, it's very important to acknowledge: 'this has happened'. Not being able to reproduce or diagnose it is not an excuse. You can ask for understanding - but only if you acknowledge that the other person's problem was real and you just don't have the resources to help.

Thursday, 15 December 2011

Design patterns

Some 15 years ago, design patterns were hot. I myself bought the book, Design Patterns. That book was and is hugely popular, and it was the first attempt to capture what most good designers were already doing.

After some time people began to see the dangers of patterns. First of all, presenting these things as recipes-to-apply doesn't take you through the phase of discovering their necessity, and hence of getting a feel for their relative importance. You will discover how to do these things as you go along anyway, and then you'll use them because you have no other option.

Secondly, it invites over-use. Literally everything can be pattern-ized, even the simplest operations. Software development is always a trade-off, and it's important to know what to use when, and how to evaluate the necessity of a certain technique. A rather hilarious article appeared in JOOP about a developer who forced everyone in his team to rigorously use patterns for every little thing, and started to wonder whether he had done anything wrong - now that they didn't deliver any products anymore.

All in all, patterns can be useful, if seen as a way to get wiser and discover new ways of doing things. As long as you keep remembering that following schemes blindly, without true understanding, will lead you nowhere.

Tuesday, 6 December 2011

Beware of the Toy Shop

It's not just Agile projects that need to watch out for the Toy Shop. But Agile developers especially should be keenly aware of the dangers of letting their users into the Toy Shop.

What Toy Shop? The Toy Shop of all the beautiful things that you could do for your users. We all know that, given time, you can make whatever is needed. You're a skilled software developer, and your team can produce anything. You are flexible and good at your work. So, you invite some users and show them what technology can do for them these days. They get all excited and start dreaming about all these exciting new functionalities and how brilliantly they could use them in their work.

After some time, your limbic system begins to activate the defensive systems of your body. In defensive mode, you try to point out to the users that of course you cannot implement everything: it would cost years of work for the whole team, and you need to do lots of other work too. Sure, the users understand. But their limbic system is also activated by now, albeit in another mode. At the end of the meeting, everybody goes their way. The developers already have a feeling of helplessness ('so much work, so little time'). The users, well, it depends. If you're lucky, they have experience with this sort of meeting and have developed a skeptical attitude ('we never get what we want anyway'). Even worse is if the users go away in an enthusiastic state ('wow, that looks good!').

In the following months you experience the true kickback of this meeting. Whenever you deliver something, the users are visibly disappointed with such a basic, un-slick product. You worked your butt off, and all they can think of is that they want much better stuff. This demotivates both your team and the users. Being professionals, your team finishes the project with things that are OK, and better than anyone would have thought at the start of the project. But by then, your good developers are leaving ('bloody spoiled users here') and users are grumbling ('as always, we get the inferior stuff'). Management says the project hasn't failed, but there will be no extension.

There is, by the way, an alternative route: when you're not such good developers after all. In such environments you see nice screen shots appear in project meetings. Many of the toys seem to have been built! Unfortunately, when it comes to delivering the actual software, for some reason the 95%-ready state remains 95% ready - indefinitely.


Agile needs lots of communication with users. Guided communication. No toy shops. Limit brainstorming to what you need to know. As soon as your users start moving in the direction of the toy shop (or worse, come to you with their shopping lists), stop them in any way you can. It's hard to find anything more demotivating for both developers and users than the effects of the Toy Shop.

Sunday, 4 December 2011

Limiting scope

I'm currently implementing some stuff for the new OpendTect installer. One of the classes has to get files from a remote HTTP site. An issue there is that the site may be down, or failing. Users may want to quit, or wait a bit longer. In the presence of mirrors it's a good idea to allow the user to switch mirror when such a thing happens.
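
For illustration only, here is a minimal sketch of that retry-and-switch-mirror idea. The helper functions are invented stand-ins, not the actual installer code.

    #include <iostream>
    #include <string>
    #include <vector>

    // Assumed helpers - stubs standing in for whatever the real installer uses.
    bool downloadFile( const std::string& url, const std::string& localpath )
    {
        std::cout << "trying " << url << std::endl;
        return false;                     // stub: pretend the site is failing
    }

    bool askUserRetry( const std::string& msg )
    {
        return false;                     // stub: user chooses not to wait
    }

    // Fetch one file, letting the user wait/retry or fall back to the next mirror.
    bool getRemoteFile( const std::vector<std::string>& mirrors,
                        const std::string& relpath,
                        const std::string& localpath )
    {
        for ( const auto& mirror : mirrors )
        {
            const std::string url = mirror + "/" + relpath;
            while ( true )
            {
                if ( downloadFile(url,localpath) )
                    return true;          // got it
                if ( !askUserRetry("Site seems down. Wait and retry?") )
                    break;                // give up on this mirror, try the next
            }
        }
        return false;                     // all mirrors failed or the user quit
    }

    int main()
    {
        const std::vector<std::string> mirrors
            { "http://main.example.com/rel", "http://mirror.example.com/rel" };
        const bool ok = getRemoteFile( mirrors, "relinfo/README.txt", "README.txt" );
        std::cout << (ok ? "downloaded" : "failed") << std::endl;
        return 0;
    }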

All easy to understand, and reasonable. Now is it so strange to think that similar things could also happen to 'local' files? Local servers go down, can become real slow, you may want to use another server. Now why don't we prepare for that too? Why is it that we simply open a local file and work with it?

I think this is a typical example of the necessity of limiting scope. Imagine us analyzing what was necessary for the first OpendTect version. We'd have a UI toolkit, 3D visualization tools, ... file tools ... Now imagine someone analyzing for 'completeness' (as some advocate). If you really go for it, you'd be imagining the problems of servers going down during file read or write, and you'd need to figure out how to handle that. And yes, we may have thought about it at the time. But we simply limited the scope immediately to 'whatever the OS handles automatically' and went on to get something out there.

This is a general principle. Building software is not only about what to implement, it's also about what has to be left out. Not just "can't do" but also "won't do". Part of the art of good design. Now this is all rather standard stuff. In Agile development, you realize that things can change, and thus it is even more important to make your choices explicit. In many cases it's very beneficial to go through scenarios and alternative designs relating to these "won't do" issues. That can just lead you to a design that stands the test of time more gracefully. Just by making a few choices differently, just by spending a bit more time now, you can save yourself tons of trouble later.

Thus, limiting scope is unavoidable, and simply a part of any analysis and design. But try to wire your software in such a way that shifting priorities and new insights are easier to support. It can be done, and contrary to Waterfall environments, where this fact is largely ignored, it is a big topic in Agile projects.

Tuesday, 29 November 2011

Why do software projects fail?

On the radio I heard a couple of these horror stories of large (semi-)government software projects failing. The presenter asked himself the eternal question: why do these projects fail so often?

Can large projects be done in an Agile way? Without proof, I'm pretty sure they can. All you need is some people with talent and vision. And that's exactly what is lacking on these large projects. I've seen a few in my day. The terror of SDM-like project management tools makes it impossible, even for the good people, to make the project work.

For people who know nothing about making software this is all a mystery. They compare it to physical construction projects, and note that Waterfall-conducted projects look just like those. Why would that approach not work in software? Here's one refutation of this analogy. Here's mine.

Let's say the physical project we're talking about is the installation of a sewage system in a part of a city. Planning encompasses re-directing traffic, breaking up roads, getting the old systems out, placing new ones, taking care of cables, and so forth. This is all planned ahead and executed on a tight schedule.

Now consider the computer world. There, things change at an enormous speed. Think about whether at the end of the year people will still want sewage - maybe destruction-in-place has been introduced. Roads may have gone because everyone's flying by then. Ten times more cables need to be laid than expected. Now imagine you didn't really know what to do in the first place. All your precious large phase-X final reports are useless, and lead to ever more costs and failures. And even physical building projects, which don't face this kind of change, often go over budget and past schedule - imagine what happens here.

Another problem is that very often in the computer world there is hardly anybody at the top of the government organizations with enough knowledge and experience to really grasp what the computer companies are offering, and whether it is OK. The Cap Geminis of the world simply claim that what they offer is necessary, and who can refute it? Certainly not the government people who were trained in exactly the same environments.

Lacking true insight, and lacking methods that have change as a basic principle - it's no wonder these projects fail. Just adopting draconian methods demanding that things be 100% controlled will never guarantee success ...

Friday, 25 November 2011

Simple tests

Just got an e-mail at OpendTect support about not being able to load the simplest SEG-Y file from the Madagascar web site into OpendTect. Between the lines I could read the sender's implication that if anything should be loadable, it is the simplest file of all.

That is a common misconception, not just in this case but in general. Very often the simplest cases are very bad test cases. They lack everything needed for good testing, and sometimes the system is 100% correct in rejecting them. Therefore, a good rule is: make the first test case simple, but not simpler than possible.

For test material, the simple 'first test' must comply with the requirements of the system. Other test cases, designed to test error conditions, can be made later. Very often, by the way, you can simulate error conditions by changing some variables in the debugger. Even later, you can add test cases that cover as much of the specs as possible.

This particular SEG-Y file didn't have any positions in it. Positions may be unnecessary for Madagascar, but OpendTect doesn't buy that and tells you that all traces are invalid. Which they are (though there are numerous workarounds).

Wednesday, 23 November 2011

Handling complexity

An essential part of Agile software development is using methods that are highly different from the old-school structured development paradigm. Structured analysis/design/programming has always been all about translating a fixed set of specs into executable code. It was never meant to support change. That is what was added in object- or service-oriented development.

To get this working right, you have to give a central role to separation of concerns. Every object you make should have a clear task of low to medium complexity. Further, the best objects are designed mainly with the service they deliver to the outside world in mind.

Complexity is unavoidable in any non-trivial system. The way to attack it is to split up the complexity amongst many objects that are each of low complexity. So rather than making one object that handles all the complexity directly, you make a bunch of co-operating objects that handle the different aspects, then blend these services in a higher-level object. Very often this has the added benefit that the separate sub-objects can be used in other contexts.
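
A minimal sketch of this idea - all class names invented for illustration, not taken from any real code base: three small objects each do one thing, and a higher-level object only blends their services.

    #include <string>
    #include <vector>

    // Small co-operating objects, each with one clear, low-complexity task.
    class FileLoader
    {
    public:
        std::vector<double> load( const std::string& fnm )
            { return { 1.0, 2.0, 3.0 }; }              // stub
    };

    class Smoother
    {
    public:
        std::vector<double> apply( const std::vector<double>& in )
            { return in; }                             // stub
    };

    class Plotter
    {
    public:
        void plot( const std::vector<double>& data )   {}
    };

    // The higher-level object only blends the services. It stays simple, and
    // the sub-objects remain re-usable in other contexts.
    class AnalysisRunner
    {
    public:
        void run( const std::string& fnm )
        {
            FileLoader loader; Smoother smoother; Plotter plotter;
            plotter.plot( smoother.apply( loader.load(fnm) ) );
        }
    };

    int main()
    {
        AnalysisRunner().run( "survey_data.txt" );
        return 0;
    }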

An important principle is to make every service just right in terms of generality. If you make the object too general, then the interface will become huge and difficult to understand. Make it too specific and it may be easy to use, but impossible to re-use in other contexts. As always, this is part of an optimization process, where you weigh all aspects and try to find an optimum.

In the end, the main objective of using advanced design and programming constructs should be to attack the problem of steadily increasing complexity. In a naive system, complexity grows fast, possibly exponentially. In well-designed systems, complexity is about the same in every part of the system. This costs work up front, meaning you will get your first results later than in the naive system. The benefits come later in the life cycle of your software.

In OpendTect, we are now way past the cross-over point where that up-front investment starts to pay off. There is no way we could ever have reached what we have now without investing daily in complexity reduction - keeping the complexity growth linear.

Saturday, 19 November 2011

Team roles

The more experience you get, the more likely it is that you will pick up more and more management tasks. Getting older, I'm trying to avoid this as much as possible by positioning myself as an internal adviser. But you cannot avoid telling other people what to do. That's one thing managers do. But good managers should mainly be facilitating and motivating, and in good teams that's what they do most of the time.

What does the manager want? He wants his team to do everything that is needed, with a minimum of steering. The more you have to steer actual working processes, the less you like it. The best situation you can get into is when all the work is delegated and you can relax and do nothing other than keep people motivated and help them every once in a while.

What does the manager fear? That things go wrong. People can do all sorts of things to screw up their work. Do wrong things, maybe at the wrong time, maybe not communicate enough, whatever. You get the picture. What do you want? Feedback. Reporting, asking before taking crucial decisions, that sort of thing. So, you have to keep yourself informed. If you're lucky, this information comes automatically. If you're unlucky, you have to constantly go and beg for it.

For the managed people, the above gives a certain tension: do I give enough feedback? Or too much? Do I decide too much, or do I ask for guidance too much? Should I try to figure out things myself more, or should I ask for help more? In all the above you can see that teamwork is - like so many things in software development - an optimization game. Being very pro-active and self-supporting can also mean being low on feedback and spending a lot of time figuring out things that others could easily help you with. On the other end of the spectrum is the re-active, help-leeching colleague who seems to suck the energy out of all of his/her team members.

In Agile teams, we like people to be pro-active and self-supporting. The manager's task is rather easy compared to traditional teams. But ... keep monitoring.

Tuesday, 15 November 2011

Improving bad stuff

At the moment I'm trying to rescue a project that went bad. The problem is one that is very common:
  • Initial design is not bad, but simple
  • During extension, the design wasn't adapted to new insights
  • Things that were added were mostly badly thought out
In this case, I made the initial simple design/implementation. It highlighted some concepts that seemed to be right. Months later I'm stuck with a project that has gone bad. A product of misplaced trust in the developer.

The question is always: adapt or rebuild? My personal experience says: if in any doubt, rebuild. It's amazing to see how many people are scared to death to do that. What are they thinking of? The extra typing work? The re-design?

The thing is: however bad the current product is, it always gives more insight into what is needed, and how certain things can be done. See it as a (bad) prototype. Re-do the basic design and fill in the new framework - partly by cut-and-paste of things that can be taken from the old stuff. As for the other part, well, typing is the easiest part. Lots of people seem to think that you have to cut-and-paste almost everything. But here too: if in doubt, simply type the stuff again - probably just a bit differently, better. My guess is that during programming, 5% of the time is spent typing the code. The rest is thinking, checking, switching, debugging, testing, ...

The nice thing is that I made a new specs wiki in a matter of hours (the old one wasn't a specs wiki at all, with useless stuff all over the place). Now I just need to get a new basic implementation in place, and then go through the checklist to get the whole thing into a better state. One in which the product will actually work, and can be improved to become a great product. Luckily, this is an internal project; if this had happened in a project for a client, there would have been some painful explaining to do ...

Saturday, 12 November 2011

Fear of Methodologies

In 1988 I graduated in geophysics at Utrecht University. During my studies, I was always drawn to programming. In the years after my graduation I worked as a field geophysicist, and when work was slow I also did some programming. In 1990, my two-year contract was not extended, mainly because the company had five field geophysicists and only one crew in the field (a time of crisis in the oil business). After a few months I decided I'd make the switch to software development, and started at Cap Gemini.

This was my first confrontation with the horrors of the Waterfall model. Millions of guilders (now euros) were poured into almost bottomless pits of top-down nightmare projects. The company was strongly involved with SDM (System Development Methodology) - a methodology that still exists today, and is still making life difficult for legions of poor software slaves around the world.

I learned a lot in that year (actually a bit less than a year - I couldn't stand working there any longer), and in many ways I transformed myself into a real software developer there. Of course there's a lot of useful knowledge in the fat books that go with such grand methodologies. When I left Cap Gemini for a geophysical software company, I knew much more about the whole process of making software, and also much more about how things should obviously not be done. We are talking about 1992; the term Agile wouldn't be around for a long time yet. After this, I went to TNO (one of the Netherlands' national research organizations). In the three years there I read tons of books by the gurus of OO, and years' worth of JOOP and C++ Report. I got quite familiar with Object-Oriented software development.

At that point in time I could see essentially three ways in which people go about attacking a project:
  1. The top-down, waterfall, project-management-driven way
  2. Priority-driven, first-things-first solution-oriented
  3. Unstructured hacking, technology-driven
At Cap Gemini, I had seen the blatant failure of (1), and at TNO, (3) was 'tested' many times. TNO asked themselves: why are we so bad at making software? They had traditionally been on path (3) a lot, and they had hired people to try (1), which led to even higher costs and even less success. I myself had seen (1) fail miserably in the past, and was being put on projects where (3) wasn't leading to any real product. So I started working, on instinct, along some sort of route (2). This led to some actual successful products in the Oil&Gas software group, something they were not familiar with (and not prepared for, by the way). The last one was the 'GeoProbe' project, which Paul and I took with us to found dGB.

At that point in time, newsgroups became popular, and I spent lots of time on comp.object. There, you could see something happening, although most of us could not foresee the Agile manifesto coming. I remember discussions like the one about 'cowboy coders' (later renamed to 'cowhand coders' under politically correct pressure). I also remember repeatedly advocating things like the influence of architecture on the process of analysis, requirements negotiation, implementation-centered working, and so forth. Always opposed especially by a guy called Elliott (don't forget the second 't', at the risk of being flamed for weeks), who was (and is, AFAIK) always talking about a 'holistic' approach. I thought our camp would never amount to anything, because the Waterfall people were in power and nothing would ever change.

In 1998 I had major surgery because of a leaking heart valve. Because of that, I completely missed the Agile manifesto and everything around it. Moreover, I decided to no longer invest my precious time in people like Elliott - people who think that making software equals making the perfect analysis, with the rest just a semi-automatic road down to testing. So I quit the newsgroups completely.

In 2003 I came across the term 'Agile development' by chance. When I followed the links (search engines were there by then) I could see: this is in fact the way we're working (by then, I wasn't the only software developer in the company anymore; we had a real team!). So I signed the manifesto, and from then on dTect (later OpendTect) was made using Agile methods. We did actually learn new things, but in general we carried on as we did before.

Looking back, one of the major allergies I retained is for anything that calls itself a 'methodology', or that behaves like one. And I don't get intimidated by big words anymore, especially after my second and even larger surgery in 2006 that nearly killed me (I now have two artificial heart valves) - I simply try to avoid everything that looks like the old stuff in a new package. That is exactly what the project-management-driven people do. They know how to market their product, and simply claim to comply with the latest trend. Now who can guess which new trendy methodology I think does just that?

Wednesday, 9 November 2011

Deployment of software

When you start making software, you will learn to program. Lots of people think they can make software when all they have done is learn to program. There is much more to it than just hacking a few programs together. From early needs surveying to late maintenance and abandonment of software systems, there is a whole new world to discover. You can spend your life studying just one of the phases.

For many programmers it comes as a surprise that deployment of a software system can be hard, and a job often well worth defining a project for. Rarely can you just release and sit back. Getting a system going is one of the hardest parts in terms of control. Issues like continuity of work, training of people, pre-testing hardware compliance, documentation, timetables, temporary re-wiring, plans for the subsequent maintenance phase - these and many more have to be taken into account.

What is different in Agile environments is that problems with the introduction of the system can often be preempted by changing the software itself. Foreseeing deployment issues can even have an impact on the basic specs of the product. For example, knowing that the users must be able to use the system continuously, and alongside the old solutions, may result in totally different designs than for simple, fresh applications in new areas.

Planning the deployment looks surprisingly like a design activity. You have to keep a thorough eye on the sequence of things, and be aware of what can happen concurrently and what cannot. This makes it a candidate for the tools that we already know from our normal Agile work.

We currently have a couple of projects that have a heavy deployment 'load'. Nageswara sent me an e-mail yesterday with his deployment plan for our new bug database access tool. It was rather short, just a couple of bullets. That couldn't be right. So I suggested that he would probably be helped a lot by simply putting the plan, in an orderly way, into a wiki. This makes things more reviewable. A wiki plan:
  • Forces thinking about many aspects
  • Enables concrete discussion with everyone involved
  • Can act as a simple script
I'm looking forward to his first 'product'.

Monday, 7 November 2011

Work flows vs specs

In the informatics world, we always talk about 'the requirements'. Agile handles these differently from Waterfall, because we realize that the requirements can seldom all be known up front. One of the things we do is collect 'Use cases'. For normal human beings like geoscientists (our users), this is still way too abstract. Use cases come with roles, actors and all that stuff. We have to realize that ordinary people think in 'Work flows'.

Work flows map very closely to use cases. Every deliverable we make enables one or more work flows. It's very important that we are aware of this - users need the tools to get their work done; that means getting their work flows supported.

Work flows are part of what we call the 'problem domain'. In the 'solution domain', these result in specs. Here's the thing. In large projects, you will probably need a repository for both work flows and specs. For smaller projects, the specs are almost always more important than the work flows. Huh? OK, in the grand scheme of things everything is important, but collecting unlimited amounts of work flows will not get you very far when building the tools. Not only because work flows are very often quite obvious. Even when they're not, they are often hopelessly numerous.

The specs, on the other hand, are much more orderly; they can more easily be listed. They are therefore essential for checking whether you have everything covered. So where the many branches of work flows would result in lengthy descriptions (often speaking for themselves), the specs are always finite, allow ticking off, and make it much easier to ensure you won't forget a thing.

Time for an example. Let's say we want to enable some kind of analysis on some kind of data. For this, the user will have data from some source, get results, may want to plot them, etc. The point is that if you just say 'this type of analysis must be supported', then very often there are many possible work flows, and they kind of speak for themselves. But the actual implementation has to enable all the branches that can be taken. The user wants to see a diagram somewhere halfway, save results, set input parameters, estimate coefficients, etc. The specs list this; they show what is (going to be) supported and what is not. Developers and users can check and say what they think is still missing (they generate work flows on the fly, in their minds - how cool is that?). Now imagine trying to capture all requirements/work flows instead. You sit there, sweating, trying to think of each and every use case that may appear. And sure as hell, you will miss a lot. When making the specs, things flow much more easily. Specify a 'store intermediate results' item? Then sure as hell there must be some place where these are read back. If you think about what places those could be, then you have enabled all the work flows that need the implementation of these specs.

Humans are bad at keeping accounts, but great at imagining work flows. Thus, we need some help with the specs, meaning we have much more need for tools to uncover them. Yes, for testing you will have to cover the most significant work flows. And the fun thing is, once you have the tools made, it's much easier to see what those 'significant' work flows are. But while making the tool, your backbone helper is the repository of specs.

Sunday, 6 November 2011

Teamwork

Teamwork is the all-time favorite buzzword of Agile. And rightly so. The team is the unit of work; without a good team, nothing happens, and then only a Waterfall approach can get you to a low-quality result (for a lot of sweat, money and trouble - and oh, it's probably late).

A nice example of how this works happened last week. I had read the cepstrum post from Matt Hall on the Agile Geosciences blog. It was clear that this would be nice for OpendTect: we have the tools, we have the people to do it.

So I Skyped Satyaki (what a great tool Skype is for Agile development), pointing him to Matt's post. Before I knew it, he had made a button on the normal Amplitude Spectrum window and asked me to check it out. I immediately visited Bruno, because we don't have spectra for well logs yet, and I think these cepstra are particularly interesting for well logs. In a moment, Bruno and Satyaki were discussing it, I added some small design tips, and off everything went. I'm pretty sure Bruno will deliver something this week. Mind you, all besides their main project work.

The big things here are:
  • Personal Skills
  • Willingness to Help (team members, but also users)
  • Interest in the work and topic
  • Ability and willingness to Communicate
Our team is a SHIC team (just invented that, here and now). Making plans for, ehh, team meetings (every day - is that enough?), sprints (how fast can you code?), masters (who are the slaves?) and whatever else the apparatchiks of the various methodologies prescribe - suit yourself, whatever. The team itself, its building and nurturing - now that is the important part.

Thursday, 3 November 2011

No specs?

"Agile developers just hop in before creating specs. This is because they know that the requirements will not be known up front, and have to be discovered during development." Is that true? Not entirely. What we do is refine the sketch that we make in the beginning of the project. Little by little, these specs become more and more complete, and we get more and more insight as to the true extent of the project.

At OpendTect, we keep the specs in wikis. These are nicely free-form, and it's easy to annotate items with states like 'done' or 'still need weeks to get this right'. Time estimates also go in there naturally. If something gives more trouble than expected, break it up into sub-bullets.


[Screenshot of one of these specs wiki pages. This particular example shows everything in green = done. My work. Hehe.]

The nice thing is that there's always an overview of where we stand, and signs of problems can be spotted rather easily. Also, it's very reviewable. Together with the executable system, which should be updated as often as possible, you have a nice overview of the project: where are we now, what is still left. Prioritization is also easy to do. Just add a little tag to items - the fun thing is that you can do it on every level you want. You can tag a whole sub-tree as [Priority: High], or [Leave till the end].
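
A made-up fragment (all items invented for illustration) of what such a page can look like, with status, estimates and priority tags:

    * Well module  [Priority: High]
      * Import logs from LAS files  (done)
      * Display amplitude spectrum of a log  (2 days)
        * Re-use the seismic spectrum display  (done)
        * Choose windowing parameters  (still need to get this right)
    * Batch processing of multiple wells  [Leave till the end]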

Of course, in the end, the executable deliverable is the core of Agile work. Claiming something is one thing; the proof of the pudding is in the eating. But having a good backbone for the approach is a very important ingredient in getting there reliably.

Tuesday, 1 November 2011

The birth of a project

Most often at OpendTect, we don't do a lot of formal stuff to embark on new developments. We discuss wishes from our (internal) users, they prioritize, and the seniors distribute the work amongst the team members. Priorities from users are driven by usability for their projects. The commercial people, of course, want a larger user base.

We also make paid stuff for external parties (mostly oil companies). In these projects, we now have a sort of standard way to handle the process of going from the initial idea to a real, concrete project.

Muy bonita!

Last night I was investigating what it is we actually do. Why? Because I was trying out a really nice piece of Open Source software: Bonita Open Solution, a Business Process Modeling (BPM) tool.

I think the key difference from the traditional way of handling things is the central role of design/architecture. Check out the process I modeled in it; the separate activities are listed below.

In the tool you can define actors (client, team, managers, ...) and add variables (for example project name = text), which makes the model executable. I'm sure you can make everything really beautiful if you use Java or XML to get your data, and use the model as a real executable. Good stuff in general, but nobody will ever start a project using such a tool, so I won't go further into that here.

The ingredients

Let's take a look at the separate 'activities':
  1. Initial ideas. These are generated by literally everybody: team members, clients, users, family members, ... anyone.
  2. Gathering information is essential. Figuring out what exactly they mean, and very importantly what not. Traditional exploratory analysis - but ...
  3. What information do you gather? The focus is set by the design/architecture environment. In our case, usually what is already available in OpendTect. In theory you could investigate every possible implementation, but we know we have limits. We need to map everything on the structure of our solution space. This is central. It determines what we can offer at what 'price' (time cost, but also things like: how much synergy is there with the current environment?, are we willing to support this for years to come?, ...)
  4. Using the design/architecture mapping we can now think about the extent and therefore time cost of all parts of the solution. We usually keep these estimates in a wiki page.
  5. When you're satisfied you have a realistic set of possibilities, decision makers can start pondering what they *really* want, now that they have the overview with the cost and extent of everything.
  6. Usually, there's a 'Go', very often with everything proposed, but sometimes large parts disappear. Well, disappear - they stay there as 'sleeping' items that may get added if there's time left. Sometimes clients/users discover they've made the wrong choices - during development. In Agile projects, this is not a problem. We just shift our focus because of the new insights. Very often, at the end, we see that we couldn't have foreseen half of what is eventually implemented - to everybody's satisfaction.

Why?

All in all you can say: Agile is meant to tackle the problem of unforeseeability, but you have to start somewhere. And why should the start be very different from the way we always work?

Saturday, 22 October 2011

The big difference

If you asked me what the greatest of all advantages of Agile development is, I would go for ... Hmmm. It's always difficult to single out one specific item, but I'd say Risk Reduction is the main thing.

The funny thing is that the Requirements-Specs-Analysis-Design-Implementation-Test-Introduction-Maintain (aka 'Waterfall model') people try to claim this, but they're dead wrong; it's exactly the opposite of what they claim. Waterfall procedures are almost a guarantee of overruns and blatant failure. And they always produce mediocre results (if any).

Value levels

Agile methods are much better suited to reducing risk, especially to avoiding all-out failures. By concentrating on 'value levels' you can always work towards useful products - in terms of user or developer value. Value levels translate into milestones. Very commonly, we get the following milestones:

  1. Initial demonstration model (executable!)
  2. First actually usable 'system'
  3. First acceptable but not-so-luxury product
  4. Satisfactory, worthy product
  5. Great software
In Agile you need a development toolbox that allows taking the software to higher levels all the time. Where Waterfall produces series of build-and-dump prototypes, Agile just goes on with the same executable model. This requires a re-shuffling of activities.

Trust is key here. The users have to trust that the developers will always seek to optimize user value. Developers must make sure they always live up to that code of ethics. This is a question of integrity. For Agile team managers, the integrity of the team is one of the main issues. Team members with 'hidden agendas' or other sneaky ideas about how to work need to be expelled at all cost. I can't stress that point enough.

Risk reduction

Back to risk reduction. By striving to get these milestones ready as early as possible (usually whilst releasing early and often), you always head for systems that at least do something useful. There is always maximum user value at each point in time. After (1) you have a great tool to discuss with users what's needed, how things should 'look & feel', and so forth. For the software developers, it is even more important: everything present should work with a final system in mind. This gives the blueprint of the design/architecture concepts used for the product. After (2), users can do actual work with the system (in a Spartan sort of way). Next is a product that you could introduce without problems, albeit with a feeling of unease among the parties. Getting stuck at this level is often a question of budget/funding. After that, the only way is up.

Conclusion

Working in terms of value levels gives great risk reduction. People say that half of the software built anywhere never makes it into actual use. Agile is the way to minimize that risk. No guarantees can be given up front other than 'we will make it as good as possible'. Live by that rule; always have the maximum user value in mind - that's what Agile is all about.

Thursday, 20 October 2011

Pro-active vs re-active

If people have to tell you time and again that they don't like what they are getting, then you're not on the right track.

Can it happen? Of course. You can't figure out user satisfaction with certainty until you show them the goods. One of the fundamentals of Agile development. The thing is, lots of developers have trouble with two important ingredients: listening and preparing.

  • Listening: Rather than the narrow definition (using ears only), I also use the term for 'reading carefully' and 'paying attention to signs'. Not only handling every detail, but also trying to understand why the user has come up with something. In e-mails, first of all make sure you react to everything brought up by the user. The 'old style' e-mail response works really well (interleaved quoting and answers). But good listening is much more than that.
  • Preparing: Agile requires lots of interaction with users. On the other hand, a lot of user interaction is simply unnecessary - not just because you could have figured it out yourself; very often it's harmful. There are many things in life that need preparation, and interaction with users sure as hell needs careful preparation. I can think of hardly any case where I would just go over to the users and start with 'please tell me what you want'. Do they know? They may think they do. Do you know what you have to offer? Probably not. Investigate.

Sunday, 16 October 2011

Following standards

Hey, I don't like standard methodologies. That doesn't mean I'm against technical standards.

Just answered an e-mail from a student at the University of Manchester. He had tried to export a SEG-Y file in IEEE format from Petrel. The amplitudes were screwed up, and he sent me the file. I was amazed to see that Petrel simply has the byte order wrong. Yes, the byte order is specified in Revision 1. And Rev. 1 is almost 10 years old now! (the official first version is from May 2002 - whatever). The byte order for IEEE (and IBM) floating point data is ... big endian. Not little endian. Live with it. For those who doubt it, here's the doc:

[Excerpt from the SEG-Y Rev 1 document, stating that the byte ordering for all binary values is big endian.]

... how clear is that?
SEG-Y sucks. It's incredibly old-fashioned, outdated, inefficient and [fill in your favorite bad word here]. But with standards it's simple: follow them 100% or don't even pretend to follow them. As long as there is no real alternative, at least let's stick to the standard ... please ...?