Too Busy For Words - the PaulWay Blog

Thu 7th Apr, 2011

The short term fallacy

There are a couple of things that I'm butting my head up against these days that all seem to be aspects of the same general problem, which I mentally label the 'short term fallacy'. This fallacy generally states that there's no point planning for something to survive a long time, because any legacy problems can simply be solved by starting again. Examples of this are file systems that can't be shrunk once created, daemons that can't be replaced without an outage, kernels that can't be upgraded without a reboot, and data fields sized only for today's needs.

Every time one of these 'short term' solutions is proposed, no matter how reasonable the assumption is that "no-one could ever need to do $long_term_activity for more than $time_period", it seems to be proved wrong in the long run. Then, inevitably, there's this long, gradually worsening process of fixes, workarounds, kludges and outright loss of service. Straight out of classic game theory, the cost of each workaround is compared against the cost of redoing the whole thing and found to be less, even as the total cost of all workarounds exceeds the cost of the correct long-term solution.
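
To put some toy numbers on that game-theory point, here's a little Python sketch. The figures are invented, but the shape of the trap isn't: every individual workaround really is cheaper than the rewrite it's compared against, and the total still sails past it.

    # Toy model of the workaround trap.  At every decision point the next
    # workaround looks cheaper than a rewrite, so it always gets chosen -
    # yet the running total quietly overtakes the rewrite cost.
    # All figures are invented for illustration.

    REWRITE_COST = 100                                 # one-off cost of the proper fix
    workaround_costs = [10, 12, 15, 18, 22, 27, 33]    # each kludge a bit dearer than the last

    total = 0
    for i, cost in enumerate(workaround_costs, start=1):
        # The comparison that actually gets made: this one fix vs. a full rewrite.
        assert cost < REWRITE_COST, "each workaround is, on its own, the cheaper option"
        total += cost
        print(f"workaround {i}: cost {cost:3d}, running total {total:3d}")

    print(f"rewrite would have cost {REWRITE_COST}; the workarounds cost {total}")
    # The total (137 here) exceeds the rewrite, even though no single decision
    # along the way looked wrong at the time it was made.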

Yes, these problems are hard. Yes, limits have to be set - processors will use a certain number of bits for storing a register and so forth. Yes, sometimes it's impossible to predict the things that will change in your system - where your assumptions will be invalidated. But we exist in a world that moves on, changing constantly, and we have to acknowledge that the system we start with will not be the system we end up using. The only thing worse than building in a limitation is building it in so that there's no way to upgrade or cope with change later. Limitations exist, but preventing change is just stupid.
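
As a sketch of what 'building in a limit without welding it shut' can look like, here's a made-up little record format (mine, not anything real) that declares its own version and field width, so a later reader can cope with data written under yesterday's assumptions:

    import struct

    # Hypothetical record header: magic, format version, and the width (in bytes)
    # of the ID field.  The limit still exists - IDs have a size - but it's
    # declared in the data rather than hard-coded into every reader.
    MAGIC = b"PWAY"
    HEADER = struct.Struct("<4sHB")      # magic, version, id_width

    def write_record(version, id_width, record_id):
        return HEADER.pack(MAGIC, version, id_width) + record_id.to_bytes(id_width, "little")

    def read_record(blob):
        magic, version, id_width = HEADER.unpack_from(blob)
        if magic != MAGIC:
            raise ValueError("not one of our records")
        if version > 2:
            raise ValueError("written by a future version; upgrade the reader")
        # The ID width is whatever the writer said it was, not whatever we assume today.
        return int.from_bytes(blob[HEADER.size:HEADER.size + id_width], "little")

    # A record written with 4-byte IDs is still readable after the format grows
    # to 8-byte IDs, without touching (or rewriting) any of the old data.
    old = write_record(version=1, id_width=4, record_id=42)
    new = write_record(version=2, id_width=8, record_id=10**15)
    assert read_record(old) == 42 and read_record(new) == 10**15

The same idea scales up: version fields in protocols, feature flags in superblocks and length-prefixed records are all ways of admitting up front that the format will change.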

And the real annoyance here is that there are plenty of examples of other, equivalent systems coping with change perfectly. LVM can move the contents of one disk to another without the user even noticing (let alone having to stop the entire system). Tridge and Rusty have demonstrated several methods of replacing an old daemon with a newer version without dropping a single packet - even if the old program wasn't designed for it in the first place. File systems whose developers insist that shrinking is impossible are shown up by file systems of similar performance that, again, can do it without blocking a single IO. You don't even have to reboot for a kernel upgrade if you're using ksplice (thanks to Russell Coker for reminding me).
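
I'm not claiming this is the technique Tridge and Rusty demonstrated, but to show that the daemon-replacement trick isn't magic, here's one common approach, sketched in Python with made-up names: the old process re-execs the new program and hands over its already-bound listening socket, so the kernel keeps queueing connections throughout and nothing gets dropped.

    import os
    import socket
    import sys

    LISTEN_FD_ENV = "LISTEN_FD"      # variable name invented for this sketch

    def get_listener(port=8080):
        if LISTEN_FD_ENV in os.environ:
            # We're the new version: adopt the socket the old version passed us.
            fd = int(os.environ[LISTEN_FD_ENV])
            return socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=fd)
        # First start: bind and listen as normal.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", port))
        sock.listen(128)
        return sock

    def upgrade_in_place(listener):
        # Re-exec the (presumably newer) program, keeping the listening fd open
        # across exec so connections queued by the kernel are never lost.
        os.set_inheritable(listener.fileno(), True)
        os.environ[LISTEN_FD_ENV] = str(listener.fileno())
        os.execv(sys.executable, [sys.executable] + sys.argv)

Passing the descriptor over a Unix socket with SCM_RIGHTS works too, and handles the case where the new program is a different process entirely; the point is just that continuity of service is a design decision, not an impossibility.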

It's possible to do; sometimes it's even elegant. I can accept that some things will have a tradeoff - I don't expect the performance of a file system that's being defragmented to be the same as if it were under no extra load. But simply saying "we can't shrink your filesystem" invites the question "why not?", and the answer will reveal where you limited your design. In the long run, it will always cost more to keep propping up a legacy system than it would have to future-proof it in the first place.
