There are a couple of things that I'm butting my head up against these days
that all seem to be aspects of the same general problem, which I mentally
label the 'short term fallacy'. This fallacy generally states that there's
no point planning for something to survive a long time, because any legacy
problems can be solved simply by starting again. Examples of this:
- A building in Brisbane that I worked in - Mineral House - was supposed
to have a 25-year lifespan. Components in it were not designed to last much
longer than that, because in theory the building would be pulled down and
replaced with a larger, better one later.
- File systems that don't support defragmenting - when you ask about
defragmenting, the experts say "just blow it away and restore from backups"
(or, even more ridiculous, claim that "ext3 doesn't fragment").
- Programs that leak memory - even those that are only supposed to be
run for a short time (there's a small sketch of how this goes wrong after
this list).
- Networking people who don't want to support IPv6 because it's easier to
just add another layer of NAT, churn addresses faster through DHCP, or
throw in more proxies.
- Databases that let you add data files to a tablespace but won't let
you remove one without destroying every table that used that file, provide
no way to migrate the data in that file to free space elsewhere, give you
no way to tell the database not to allocate new objects to that file, and
offer little information about which objects are even in a given file.
- People requiring certain older technology - PCs with ISA sockets, for
example - in order to support the old RS-422 data adapters, rather than
spending the money on rewriting the software to be adapter independent
(sketched after this list) or using more modern equipment.
- Protocols and systems written with no version information, so that
version A cannot work out whether version B's data is correct or even
compatible. This category also includes programs that flatly refuse to read
any other version, without any knowledge of what differences those revisions
might actually have made to the data being read or written (the versioned
header sketched after this list is the obvious fix).
- Programs written with the assumption that no-one else is ever going
to need to use them, so things like documentation, data recovery and
error checking don't have to be done.
- Employing a person whose job it is to walk through your data centre
and press the reset button on servers that have hung - and whose entire
eight-hour day is taken up with this.
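To make the memory-leak item concrete, here's a minimal C sketch of how the "it only runs for a second" assumption dies. The function and the work item are hypothetical, purely for illustration:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical request handler: allocates a working buffer and never
 * frees it - "harmless", because the program is only supposed to run
 * for a moment and then exit. */
static void process_request(const char *input)
{
    char *buf = malloc(strlen(input) + 1);
    if (buf == NULL)
        return;
    strcpy(buf, input);
    printf("handled: %s\n", buf);
    /* no free(buf): the leak that "doesn't matter" */
}

int main(void)
{
    /* The assumption dies the day someone wraps the tool in a loop or
     * turns it into a daemon - then every iteration leaks. */
    for (int i = 0; i < 1000000; i++)
        process_request("some work item");
    return 0;
}
```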
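For the adapter-dependence item, the usual cure is an indirection layer. A minimal sketch, again with entirely hypothetical names - the point is that the application codes against the operations, not against one card's registers:

```c
#include <stddef.h>

/* A hypothetical adapter-independent serial interface. Swapping the
 * old RS-422 hardware for something modern then means writing one new
 * ops table, not rewriting the application. */
struct serial_ops {
    int  (*open)(const char *device);
    int  (*read)(int handle, void *buf, size_t len);
    int  (*write)(int handle, const void *buf, size_t len);
    void (*close)(int handle);
};

/* Each adapter supplies its own table, e.g.:
 *   extern const struct serial_ops isa_rs422_ops;
 *   extern const struct serial_ops usb_serial_ops;
 * and the application only ever calls ops->write(...) and friends. */
```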
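And the version-information item has a well-known fix: put a magic number and a version field at the front of your data, and have every reader check both. A minimal sketch, assuming a made-up format called DATA - none of these names or values come from a real protocol:

```c
#include <inttypes.h>
#include <stdio.h>

/* Hypothetical header for stored or transmitted data. */
#define DATA_MAGIC   0x44415441u   /* "DATA" */
#define DATA_VERSION 2u

struct data_header {
    uint32_t magic;    /* identifies the format at all */
    uint32_t version;  /* lets version A reason about version B's data */
    uint32_t length;   /* payload length in bytes */
};

/* Returns 0 if the data can be read, -1 if not - and says why. */
static int check_header(const struct data_header *h)
{
    if (h->magic != DATA_MAGIC) {
        fprintf(stderr, "not our data at all\n");
        return -1;
    }
    if (h->version > DATA_VERSION) {
        /* Newer than this program: refuse politely rather than
         * silently misreading it. */
        fprintf(stderr, "version %" PRIu32 " is newer than we understand\n",
                h->version);
        return -1;
    }
    /* Older versions: because we know which version wrote the data, we
     * know exactly what changed, and can upgrade it on read instead of
     * rejecting it outright. */
    return 0;
}

int main(void)
{
    struct data_header old = { DATA_MAGIC, 1u, 0u };
    return check_header(&old) == 0 ? 0 : 1;
}
```

The useful property is the asymmetry: newer data is refused with an explanation, older data is understood and upgraded, and nothing is ever read blind.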
Every time one of these 'short term' solutions is proposed, no matter how
reasonable the assumption is that "no-one could ever need to do $thing
for more than $time_period", it seems to be
proved wrong in the long run. Then, inevitably, there's this long, gradually
worsening process of fixes, workarounds, kludges and outright loss of service.
Straight out of classic game theory, the cost of each workaround is compared
against the cost of redoing the whole thing and found to be less, even as the
total cost of all the workarounds exceeds the cost of the correct long-term
solution.
Yes, these problems are hard. Yes, limits have to be set - processors will
use a certain number of bits for storing a register and so forth. Yes,
sometimes it's impossible to predict the things that will change in your
system - where your assumptions will be invalidated. But we exist in a world
that moves on, changing constantly, and we must acknowledge that there is no
way that the system we start with will be the same as the system we end up
using. The only thing that's worse than building in limitations is building
them in such a way that there's no way to upgrade or cope with change.
Limitations will always exist, but preventing change is just stupid.
And the real annoyance here is that there are plenty of examples of other,
equivalent systems coping with change perfectly. LVM can move the contents of
one disk to another without the user even noticing (let alone having to stop
the entire system). Tridge and Rusty have demonstrated several methods of
replacing an old daemon with a newer version without even dropping a single
packet - even if the old program wasn't designed for it in the first place.
File systems whose developers insist that shrinking is impossible are shown
up by file systems with similar performance that, again, can shrink without
even blocking a single IO. You don't even have to reboot for a kernel upgrade
if you're using ksplice (thanks to
Russell Coker for reminding me).
It's possible to do; sometimes it's even elegant. I can accept that some
things will have a tradeoff - I don't expect the performance of a file system
that's being defragmented to be the same as if it were under no extra load.
But simply saying "we can't shrink your filesystem" invites the question
"why not?", and the answer will reveal where you limited your design. In the
long run, it will always cost more to support a legacy system than it would
have cost to future-proof yourself.