Too Busy For Words - the PaulWay Blog

Mon 3rd Jun, 2013

New file system operations

Many many years ago I thought of the idea of having file operations that effectively allowed you to insert and delete, as well as overwrite, sections of a file. So if you needed to insert a paragraph in a document, you would simply seek to the byte in the file just before where you wanted to insert, and tell the file to insert the required number of bytes. The operating system would then be responsible for handling that, and it could then seamlessly reorganise the file to suit. Deleting a paragraph would be handled by similar means.

Now, I know this is tricky. Once you go smaller than the minimum allocation unit size, you have to do some fairly fancy handling in the file system, and that's not going to be easy unless your file system discards block allocation and goes with byte offsets. The pathological case of inserting one byte at the start of a file is almost certainly going to mean rewriting the entire file on any block-based file system. And I'm sure it offends some people, who would say that the operations we have on files at the moment are just fine and do everything one might efficiently need to do, and that this kind of chopping and changing is up to the application programmer to implement.
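
To make the cost concrete, here is a minimal Python sketch of what "leave it to the application programmer" means today (the function names are mine, and a real tool would copy in chunks rather than reading the whole tail into memory): every byte after the edit point has to be rewritten, which is exactly the work a file-system-level insert or delete could avoid.

    def insert_bytes(path, offset, data):
        with open(path, "r+b") as f:
            f.seek(offset)
            tail = f.read()              # everything after the insertion point
            f.seek(offset)
            f.write(data + tail)         # rewrite the whole shifted tail

    def delete_bytes(path, offset, length):
        with open(path, "r+b") as f:
            f.seek(offset + length)
            tail = f.read()              # everything after the deleted range
            f.seek(offset)
            f.write(tail)
            f.truncate()                 # drop the leftover bytes at the end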

That, to me, has always seemed something of a cop-out. But I can see that having file operations that only work on some file systems is a limiting factor - adding specific file system support is usually done after the application works as is, rather than before. So there it sat.

Then a while ago, when I started writing this article, I found myself thinking of another set of operations that could work with the current crop of file systems. I was thinking specifically of the process that rsync has to do when it's updating a target file - it has to copy the existing file into a new, temporary file, add the bits from the source that are different, then remove the old file and substitute the new. In many cases we're simply appending new stuff to the end of the old file. It would be much quicker if rsync could simply copy the appended stuff into a new file, then tell the file system to truncate the old file at a specific byte offset (which would have to be rounded to an allocation unit size) and concatenate the two files in place.

This would be relatively easy for existing file systems to do - once the truncate is done the inodes or extents of the new file are simply copied into the table of the old file, and then the appended file is removed from the directory. It would be relatively quick. It would not take up much more space than the final file would. And there are several obvious uses - rsync, updating some types of archives - where you want to keep the existing file until you really know that it's going to be replaced.
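
As a sketch of the observable behaviour - written at the application level in Python, with a made-up name, since the whole point is that the real operation would splice the appendix's blocks or extents into the target's metadata instead of copying any data:

    def truncate_and_concatenate(target, appendix, offset):
        # Keep the unchanged prefix of the old file, then copy the new data on.
        # The proposed file system operation would skip this copy entirely.
        with open(target, "r+b") as dst, open(appendix, "rb") as src:
            dst.truncate(offset)
            dst.seek(offset)
            while True:
                chunk = src.read(1 << 20)
                if not chunk:
                    break
                dst.write(chunk)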

And then I thought: what other types of operations are there that could use this kind of technique? Splitting a file into component parts? Removing a block or inserting a block - i.e. the block-wise alternative to my byte offset operations above? All those would be relatively easy - rewriting the inode or offset map isn't, as I understand it, too difficult. Even limited to operations that are easy to implement in the file system, there are considerably more operations possible than those we currently have to work with.

I have no idea how to start this. I suspect it's a kind of 'chicken and egg' problem - no-one implements new operations for file systems because there are no clients needing them, and no clients use these operations because the file systems don't provide them. Worse, I suspect that there are probably several systems that do weird and wonderful tricks of their own - like allocating a large chunk of file as a contiguous extent of disk and then running their own block allocator on top of it.

Yes, it's not POSIX compliant. But it could easily be a new standard - something better.

Last updated: | path: tech / ideas | permanent link to this entry

Mon 1st Apr, 2013

Preventing patent obscurity

One of the problems I see with the patent system is that patents are often written in obscure language, using unusual and non-standard jargon, so as to both apply as broadly as possible and not show up as "obvious" inventions.

So imagine I'm going to try to use a particular technology, or I'm going to patent a new invention. As part of my due diligence, I have to provide a certified document that shows what search terms I used to search for patents, and why any patents I found were inapplicable to my use. Then, when a patent troll comes along and says "you're using our patent", my defence is, "Sorry, but your patent did not appear relevant in our searches (documentation attached)."

If my searches are considered reasonable by the court, then I've proved I've done due diligence and the patent troll's patent is unreasonably hard to find. OTOH, if my searches were unreasonable I've shown that I have deliberately looked for the wrong thing in the hopes that I can get away with patent infringement, so damages would increase. If I have no filing of what searches I did, then I've walked into the field ignorant and the question then turns on whether I can be shown to have infringed the patent or whether it's not applicable, but I can be judged as not taking the patent system seriously.

The patent applicant should be the one responsible for writing the patent in the clearest, most useful language possible. If not, why not use Chinese? Arpy-Darpy? Gangster Jive? Why not make up terms: "we define a 'fnibjaw' to be a sequence of bits at least eight bits long and in multiples of eight bits"? Why not define operations in big-endian notation where the actual use is in little-endian notation, so that your constants are expressed differently and your mathematical operations look nothing like the actual ones performed but your patent is still relevant? The language of patents is already obscure enough, and even if you did want to actually use a patent it is already hard enough with some patents to translate their language into the standard terms of art. Patent trolls rely on their patents being deliberately obscure so that lawyers and judges have to interpret them, rather than technical experts.

The other thing this does is to promote actual patent searches and potential usage. If, as patent proponents say, the patent system is there to promote actual use and license of patents before a product is implemented, then they should welcome something that encourages users to search and potentially license existing patents. The current system encourages people to actively ignore the patent system, because unknowing infringement is seen as much less of an offence than knowing infringement - and therefore any evidence of actually searching the patent system is seen as proof of knowing infringement. Designing a system so that people don't use it doesn't say a lot about the system...

This could be phased in - make it apply to all new patents, and give a grace period where searches are encouraged but not required to be filed. Make it also apply so that any existing patent that is used in a patent suit can be queried by the defendant as "too obscure" or "not using the terms of art", and require the patent owner to rewrite it to the satisfaction of the court. That way a gradual clean-up of the current mess of incomprehensible patents that have been deliberately obfuscated can occur.

If the people who say patents are a necessary and useful thing are really serious in their intent, then they should welcome any effort to make more people actually use the patent system rather than try to avoid it.

Personally I'm against patents. Every justification of patents appeals to the myth of the "home inventor", but they're clearly not the beneficiaries of the current system as is. The truth is that, far from it being necessary to encourage people to invent, you can't stop people inventing! They'll do it regardless of whether they're sitting on billion-dollar ideas or just a better left-handed cheese grater. They're inventing and improving and thinking of new ideas all the time. And there are plenty of examples of patents not stopping infringement, and plenty of examples of companies with lots of money just steamrollering the "home inventor" regardless of the validity of their patents. Most of the "poster children" for the "home inventor" myth are now running patent troll companies. Nothing in the patent system is necessary for people to invent, and its stated objectives bear little resemblance to how it works in practice.

I love watching companies like Microsoft and Apple get hit with patent lawsuits, especially by patent trolls, because they have to sit there with a stupid grin on their face and still admit that the system that is screwing billions of dollars in damages out of them is the one they also support because of their belief that patents actually have value.

So introducing some actual utility into the patent system should be a good thing, yeah?

Last updated: | path: tech / ideas | permanent link to this entry

Thu 7th Apr, 2011

The short term fallacy

There are a couple of things that I'm butting my head up against these days that all seem to be aspects of the same general problem, which I mentally label the 'short term fallacy'. This fallacy generally states that there's no point planning for something to survive a long time because if there are legacy problems they can be solved simply by starting again. Examples of this are:

Every time one of these 'short term' solutions is proposed, no matter how reasonable the assumption is that "no-one could ever need to do $long_term_activity for more than $time_period", it seems to be proved wrong in the long run. Then, inevitably, there's this long, gradually worsening process of fixes, workarounds, kludges and outright loss of service. Straight out of classic game theory, the cost of each workaround is compared against the cost of redoing the whole thing and found to be less, even as the total cost of all workarounds exceeds the cost of the correct long-term solution.

Yes, these problems are hard. Yes, limits have to be set - processors will use a certain number of bits for storing a register and so forth. Yes, sometimes it's impossible to predict the things that will change in your system - where your assumptions will be invalidated. But we exist in a world that moves on, changing constantly, and we must acknowledge that there is no way that the system we start with will be the same as the system we end up using. The only thing that's worse than building in limitations is to insert them in such a way that there is no way to upgrade or cope with change. Limitations exist, but preventing change is just stupid.

And the real annoyance here is that there are plenty of examples of other, equivalent systems coping with change perfectly. LVM can move the contents of one disk to another without the user even noticing (let alone having to stop the entire system). Tridge and Rusty have demonstrated several methods of replacing an old daemon with a newer version without even dropping a single packet - even if the old program wasn't designed for it in the first place. File systems that insist that it's impossible to shrink are shown up by file systems with similar performance that, again, can do so without even blocking a single IO. You don't even have to reboot for a kernel upgrade if you're using ksplice (thanks to Russell Coker for reminding me).

It's possible to do; sometimes it's even elegant. I can accept that some things will have a tradeoff - I don't expect the performance of a file system that's being defragmented to be the same as if it was under no extra load. But simply saying "we can't shrink your filesystem" invites the question "why not", and the answer will reveal where you limited your design. The cost, in the long run, will always be higher to support a legacy system than to future-proof yourself.

Last updated: | path: tech / ideas | permanent link to this entry

Sat 12th Mar, 2011

CodeCave 2011 Update 1

Just a quick update to my readers to say that CodeCave 2011 is definitely going ahead and will be on the 3rd to 5th of June. Cost will be about $80 per person for the accommodation. I have about three or four more places, depending on a few factors. I haven't worked out a cost or menu for the meals for the weekend, but it will probably be fairly reasonable - $40 for the weekend is the figure I'm aiming for. Please email me if you'd like to come along!

Last updated: | path: tech / ideas | permanent link to this entry

Tue 15th Feb, 2011

Codecave Init

After talking with Peter Miller at CodeCon, and other people over the last year or two, I've decided to put together a similar style of event. For a weekend, we all go off into the bush, far from the internet and other quotidian distractions, and write code, eat and drink well, and share great ideas. What's the difference? It's in a cave.

Well, not literally. The location that I'm aiming for is the Yarrangobilly Caves, an area of limestone caves and other scenic delights about seventy kilometres due south-west of Canberra, although it's about two hours by car because the direct route goes over the Brindabella mountain range. As well as the caves, there are bushwalks and (perhaps most importantly) a thermal spring pool to soak in after a hard day's slaving over the laptop. We would be staying in the Yarrangobilly Caves House, a historic homestead of the region offering bedrooms for up to sixteen people, kitchen, dining room, lounge, verandahs, and (also importantly) power.

Interstate visitors who didn't want to drive all the way could be picked up from Canberra airport or bus stations and ferried to Yarrangobilly on Friday evening, coming back to Canberra on Sunday in time for flights home or other onward travel. For those that wanted it, interstate or local, I would do catering for the whole weekend at a fixed price and with a roster for jobs. If we had the whole place booked optimally it would be about $60 per person for the weekend, the complication being that the rooms are not all single - there are some bunk beds and some doubles. They also only book by entire wings (9 or 7 people), so the fewer people the more it would cost per person, in certain ratios depending on requirements. At this stage the earliest we could get a booking for a weekend is late May or early June.

If you're interested in coming to this, please drop me an email. I really need to get firm bookings, preferably by the end of February, to have any chance of getting the accommodation booked and the pricing finalised. I wouldn't run the event if the cost was more than $100 per person for the accommodation, which means that it won't run with less than five people. There's also no way to accommodate more than sixteen people, so bookings would be limited in date order. Please also email me if you've got suggestions, because a lot of the planning is flexible at this stage.

This will also be posted to the CLUG list and the Linux Australia email list.

(P.S. Sorry if the environment.nsw.gov.au links don't work correctly, they seem to require session tokens which stuff direct linking up.)

Last updated: | path: tech / ideas | permanent link to this entry

Tue 9th Feb, 2010

File system sequences

I recently had the occasion to create a new filesystem on a partition:

mkfs -T largefile4 /dev/sdc1

This creates copies of the superblock on a bunch of blocks across the disk, which can be used for recovering the superblock of the disk should something tragic happen to the main one (such as overwriting the first megabyte of a disk by accident). A useful tip here is that one can run the same command with the '-n' option to see which blocks it would write the superblock copies to, without actually reformatting the partition, in order to then provide a copy of a superblock to fsck:

mkfs -n -T largefile4 /dev/sdc1

In my case, these copies were written to these offsets:

Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
	102400000, 214990848

What determines these magic numbers? Well, you can see from 163840 and 819200 that they're multiples of 32768. If we express each offset as a multiple of the first one (32768), we get:

98304 = 3 * 32768
163840 = 5 * 32768
229376 = 7 * 32768
294912 = 9 * 32768
819200 = 25 * 32768
884736 = 27 * 32768
1605632 = 49 * 32768
2654208 = 81 * 32768
4096000 = 125 * 32768
7962624 = 243 * 32768
11239424 = 343 * 32768
20480000 = 625 * 32768
23887872 = 729 * 32768
71663616 = 2187 * 32768
78675968 = 2401 * 32768
102400000 = 3125 * 32768
214990848 = 6561 * 32768

Hmm. 3, 5, 7, eh? Then 9, which is 3 squared; then 25, which is 5 squared. Interesting. The 27 throws us for a second before we realise that that's 3 cubed, and it comes between 5 squared and 7 squared. And, sure enough, there's 81 (3^4) and 125 (5^3) ... it seems to be the sequence of successive squares, cubes, etc. of 3, 5 and 7. It's a sequence of successive powers.
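
A few lines of Python reproduce the list from the powers of 3, 5 and 7, times the block group size of 32768 (the very first backup, at block 32768, is simply group 1):

    GROUP = 32768
    LIMIT = 214990848 // GROUP      # the highest group in the mkfs output above

    def powers(base):
        p = base
        while p <= LIMIT:
            yield p
            p *= base

    groups = sorted(set(powers(3)) | set(powers(5)) | set(powers(7)))
    print([g * GROUP for g in groups])
    # [98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, ...]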

Why? Well, the whole object here is to make sure that a copy of the superblock survives if some tragedy happens to the disk. There are two broad kinds of disaster scenario here - destroying a contiguous block of disk, and destroying multiples of a specific sector offset across the disk (e.g. 0, 10, 20, 30, 40...). For the first, we can see the successive powers method quickly generates fairly large numbers without leaving any obvious large gaps - the ratio between one number and the next never goes higher than 3. For the second situation, you can fairly quickly see from elementary number theory that multiples of N will increasingly rarely intersect with the successive powers series, and only when N is (a multiple of) 105 will it intersect all three sequences.

It's perhaps arguable here that drive technology has made some of this irrelevant - ATA block replacement changes the mapping between logical and physical block numbers - and in fact the types of disaster scenarios this scheme of superblock copies addresses aren't really reflected in real-world usage. For example, if you're striping blocks across two disks then all your superblock copies are going to start on one disk (even if they then get striped across the second disk) because the successive power series always generates odd numbers. But as a way of avoiding some of the more obvious failure modes, it makes a lot of sense.

Another little bit of trivia explained.

Last updated: | path: tech / ideas | permanent link to this entry

Fri 27th Nov, 2009

The new age of programming

I gave a lightning talk at OSDC this year and thought I'd write my thoughts up into my blog. It was the confluence of a number of ideas, technologies and thoughts gradually merging, and I think it's going to be an increasingly important issue in the future.

Most laptops now have at least two cores in them. It's hard to get a desktop machine without at least two. The same chips for ordinary x86-architecture machines will soon have six, eight and twelve cores. The Niagara architecture has at least this many and quite possibly more. The Cell architecture allows for up to sixty-four cores on-chip, with a different architecture and instruction set between the PPE and SPE cores. The TileGX architecture includes one variant with a hundred 64-bit cores, connected to three internal DDR-3 memory interfaces and four internal 10-gigabit ethernet interfaces.

The future, it can therefore be said, is in parallel processing. No matter what new technologies are introduced to decrease the size of the smallest on-die feature, it's now easier to include more cores than it is to make the old one faster. Furthermore, other parts of our computers are now hefting considerable computation power of their own - graphics cards, network cards, PhysX engines, video encoder cards and other peripherals are building in processors of no mean power themselves.

To harness these requires a shift in the way we program. The people who have grown up with programming in the last thirty years have, by and large, been working on small, single-processor systems. The languages we've used have been designed to work on these architectures - parallel processing is either supported using third-party libraries or just plain impossible in the language. There have been parallel and concurrent programming languages, but for the most part they haven't had anywhere near the popularity of languages like Basic, C, Pascal, Perl, Python, Java, and so forth.

So my point is that we all need to change our way of thinking and programming. We need to learn to program in small units that can be pipelined, streamed, scattered and distributed as necessary. We need larger toolkits that implement the various semantics of distributed operation in the best way, so that we don't have people reinventing thread processing badly all the time. We need to make languages, toolkits, and operating systems that can easily share processing power across multiple processors, distributed across cores, chips, and computers. We need to help each other understand how things interact better, rather than each of us controlling our own little environment and trying to optimise it in isolation.
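
As a token illustration of the 'small units' point, here's a sketch using nothing more exotic than Python's standard multiprocessing module to scatter independent work units across whatever cores are present:

    from multiprocessing import Pool, cpu_count

    def work(chunk):
        # a stand-in for some independent unit of computation
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        chunks = [range(i * 100000, (i + 1) * 100000) for i in range(64)]
        with Pool(cpu_count()) as pool:
            results = pool.map(work, chunks)   # scattered across every core
        print(sum(results))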

I think it's going to be great.

Last updated: | path: tech / ideas | permanent link to this entry

Tue 4th Aug, 2009

Understanding the Chinese Room

The Chinese Room argument against strong AI has always bothered me. It's taken me a while to realise what I dislike about the argument and to put it into words, though. For those of you who haven't read up on this, it's worth perusing the article above and others elsewhere to familiarise yourself with it, as there's a great deal of subtlety in Searle's arguing position.

Firstly, he's established that the computer program, as it stands, comfortably passes the Turing Test, so we know it's at least an artificial intelligence by that standard. Then he posits that he can perform the same program by following the same instructions (thus still passing the Turing Test), even though he himself "doesn't understand a word of Chinese". Then he proposes that he can memorise that set of instructions to pass the Turing Test in Chinese in his head, and still doesn't understand Chinese. If he can do that while not understanding Chinese, then the machine passing the Turing Test doesn't "understand" Chinese either.

So. Firstly, let's skip over the obvious problem: that the human trying to perform the computer program will do it millions of times slower. This speed is fairly important to the Turing Test, as we're judging the computer based on its ability to interact with us in real time - overly fast or slow responses can be used to identify the computer. A human that's learnt all the instructions by rote and follows them as a computer would still, I'd argue, be identifiably slow. We're assuming here that the person doesn't understand Chinese, so they have to follow the instructions rather than respond for themselves.

And let's skip over the big problem of what you can talk about in a Turing Test. Any system that can pass that has to be able to carry on a dialogue with quite a bit of stored state, has to be able to answer fairly esoteric questions about their history or their current state that a human has and a computer doesn't (e.g. what did you eat last, what sex are you, etc). I'm skipping that question because it's an even call as to whether this is in or out for current Turing Test practice: if an AI was programmed with an invented personality it might be able to pass this in ways a pure 'artificial intelligence' would not. It's a problem for the Chinese Room, because that too has to hold a detailed state in memory and have a life outside the questioning, and the example Searle gives is of a person simply answering questions and not actually carrying on some external 'life'. ("Can I tell you a secret later?" is the kind of thing that a human will remember to ask about later but the Chinese Room doesn't say anything about).

It's easy to criticise the Chinese Room at this point as being fairly stupid. You're not talking to the person inside the room, you're talking to a person inside the simulation. And the person executing all those instructions, even if they're in a high-level language, would have to be superhumanly ... something in order to merely execute those instructions rather than try to understand them. It's like expecting a person to take the numbers from one to a million in random order and sort them via bubble sort in their head, whilst forbidding them from just saying "one, two, three..." because they can see what the sequence is going to be.

To me the first flaw in Searle's argument is the assumption that his person in the room could somehow execute all those instructions without ever trying to understand what they mean. If nothing else, trying to learn Chinese is going to make the person's job considerably easier - she can skip the whole process of decoding meaning and go straight to the 'interact with the meaning' rules. Any attempt by Searle to step in here and say, no, you're not allowed to do that, undermines his own premise - if he makes her too simple to ever understand a language, then how does she read the books; if he makes her incapable of learning, then how did she learn to carry out this process in the first place? So Searle's judgement that the AI doesn't really "understand", because the person in the room doesn't "understand", rests on the sophistry that such a person could exist in the first place.

But, more than this, the fundamental problem I have is that any process of trying to take statements / questions in a language and give responses to them in the same (or any other) language is bound to deal with the actual meaning and intelligence in the original question or statement. It's fairly counterintuitive to make an AI capable of interacting in a meaningful way in Chinese without understanding what makes a noun and a verb, understanding its rules of tense and plurality, or understanding its rules of grammar and structure and formality. If Searle would have us assume that we've somehow managed to create an AI that can pass the Turing Test without the programmers building these understandings of the actual meaning behind the symbols into the program, then I think he's constructed somewhat of an artificial (if you'll forgive the pun) situation.

To try and put this in context, imagine the instructions for the person in the room have been written in English (rather than in Python, for example). The obvious way to write this Chinese Room program, therefore, is by having big Chinese-English and English-Chinese dictionaries and a book of rules by which the person pretends that there's another person (the AI) answering the questions based on the English meaning of the words. I argue here that any attempt to obfuscate the process and remove the use of the dictionaries is not only basically impossible but would stop the Chinese Room being able to pass the Turing Test. It's impossible to remove the dictionaries because you're going to need some kind of mapping between each Chinese symbol and the English word that the instructions deal with, if for no other reason than that Chinese has plenty of homographs - symbols which have two different meanings depending on context or inflection - and you need a dictionary to distinguish between them. No matter how you try to disguise that verb as something else, you'll need to put it in context so that the person can answer questions about it, which is therefore to make it meaningful.

So once you have a person capable of learning a language, in a room where symbols are given meaning in that language, you have a person that understands (at some level) the meaning of the symbols, and therefore understands Chinese.

Even if you introduce the Python at this point, you've only added an extra level of indirection to the equation. A person reading a piece of Python code will eventually learn what the variables mean no matter how obscurely the code is written - if we're already positing a person capable of executing an entire program literally then they are already better than the best maintenance programmer. If you take away this ability to understand what the variables mean, then you also (in my view) take away the ability for the person to learn how to interpret that program in the first place.

Searle's argument, therefore, is based on two fallacies. Firstly, that it's possible to have a human that can successfully execute a computer program without trying to learn the process. Secondly, that the program will not at some point deal with the meaning of the Chinese in a way that a person would make sense of. So on both counts Searle's "Chinese Room" is no argument against a machine intelligence "understanding" in the same way we understand things.

What really irritates me about Searle's argument here - and it does not change anything in my disproof above - is that it's such an arrogant position. "Only a real *human* mind can understand Chinese, because all those computer thingies are really just playing around with symbols! I'm so clever that I can never possibly learn Chinese - oh, wait, what was that?" He's already talking about an entity that can pass the Turing Test - and the first thing I would argue about that test is that people look for understanding in their interlocutors - and then says that "understanding" isn't there because it's an implementation detail? Give me a break!

And then it all comes down to what "understand" means, and any time you get into semiotics it means that you've already lost.

Last updated: | path: tech / ideas | permanent link to this entry

Tue 6th Feb, 2007

The Linux Ads 1

The Linux Australia email list has been alive with questions about video ads that promote Linux as a usable alternative to other closed-source, proprietary, costly operating systems and software. I had a series of three ads in mind.

All three show two people using an ordinary PC side-by-side. We see brief snippets of the software they use in their everyday work and play. Each time, the one on the left is using Windows; the one on the right is using Linux. In the bottom corner, we see a constantly-updating price of the software they use, where the package name and cost appear and disappear and the running total is left on the screen. The voice-over describes what the people are doing, what operating systems they're using, and finishes up with a conclusion that varies per ad.

The first ad shows the two people using the same free, open-source packages: firefox, thunderbird, OpenOffice, gaim, inkscape, gimp, and so on. The price tag on the left shows the cost of the version of Windows, and each time a package is used it's shown as free. At the end, the voice-over points out that they've been able to use exactly the same software, but the second person hasn't had to pay for their operating system - it's free. And they can give copies to their friends, but this is a minor point in this ad.

The second ad shows the two people using different software. On the right the Linux person is using the same software as before; on the left, their proprietary equivalents: IE, outlook, Microsoft Office, MSN, Photoshop, Illustrator. The price tag quickly goes up into the thousands of dollars. The voice-over points out that not only can the person on the right use the files generated by the person on the left, but they haven't had to pay for the software. And they can still give it away.

The third ad shows the two people using the same list of software as in the second ad. But this time, the price tag for the person on the left comes up as PIRATED each time. Just as the demo ends, two police officers appear beside the person on the left and take them away. The voice-over points out that copying proprietary software is illegal and you can face criminal penalties, but that Linux and all its software is still free and legal to share with your friends and family as well.

I'm sure there are a few variants on this theme: you can play many games such as Quake 4 natively, use Wine or Cedega to run Windows software you can't do without, be protected by industry-proven firewalls and security technology, have the latest GUI wobbly window whiz-bangery, and so on. But I really like the idea of comparing the actual cost side-by-side, and showing that you can still do all the stuff you want to in Linux, without paying a cent, and you're allowed to share it with your family and friends. That's one of the key things that I think we overlook in the open software world - it's so obvious that we never think how much of a revolutionary change it is to people bound into proprietary software licensing.

Last updated: | path: tech / ideas | permanent link to this entry

Mon 6th Nov, 2006

Binary Lump Compatibility

I was thinking last night, as I vainly searched for sleep, about a long-standing idea of mine: the Blockless File System. If you imagine the entire disk as just a big string of bytes, then several problems go away. You don't need to have special ways of keeping small files or tail-ends of files in sub-parts of blocks (or, for that matter, waste the half-block (on average) at the end of files that aren't a neat multiple of the block length). The one I'm really excited about is the ability to insert and delete arbitrary lengths within a file. Or prepend to the start of a file. And what about file versioning?

Aaaanyway, for some reason I was thinking of what would go in the superblock. I have only done a tiny bit of study into what goes into the superblock in modern file systems, so I'm not speaking from the point of view of a learned expert. But I thought one idea would be for a header that could detect which endian-ness the file system was written in. A quick five seconds of thought produced the idea that the block 'HLhl' would not only be fairly easy to recognise in a binary, bit-oriented way, but would make it really easy for a human to check that it was big-endian ('HLhl'), little-endian ('lhLH'), middle-endian ('hlHL') or any of the twenty-one other combinations of 32-bit endianness the computing world has yet to explore.

It would also mean that the reader could simply translate whatever they read into their own endian-ness and then write that straight to disk (including setting the HLhl header in their native endian-ness). So writes are not penalised and reads are only penalised once. If the entire disk was written this way, it would mean that you could take the device to a machine with different endian-ness and it would behave almost exactly like normal - almost no loss of speed in writing or reading little-endian defined inode numbers or what-have-you. The entire disk could be a mix of endian-ness styles and it would still be perfectly readable.
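
A sketch of how cheap the check would be for the reader - look at the first four bytes and pick the matching permutation (the labels follow the combinations above; the function name is mine):

    ORDERS = {
        b"HLhl": "big-endian",
        b"lhLH": "little-endian",
        b"hlHL": "middle-endian",
    }

    def detect_endianness(superblock):
        # The first four bytes are the marker as written by whoever formatted
        # or last rewrote this part of the disk.
        return ORDERS.get(superblock[:4], "one of the other 21 permutations")

    print(detect_endianness(b"lhLH" + b"\x00" * 60))   # -> little-endian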

Of course, it's not really going to matter if you don't take your disks to foreign architectures, or if those architectures support some sort of SWAB instruction to swap endianness with little slowdown, or if the users of disks that have been taken to foreign architectures don't mind a bit of slowdown reading the file system structure.

This is probably one to chalk up in the "Paul solves the problems of the early 1970s computer industry - in 2006" board.

Last updated: | path: tech / ideas | permanent link to this entry

Wed 7th Jun, 2006

The Self-Adjusting Interface Idea

I love how ideas 'chain'. I was listening to my LCA 2006 Arc Cafe Night second mix (Goa and Hard Trance) and thinking of the fun I had while mixing it. I managed to get away with using proprietary, closed-source, for-money software at LCA somehow, but though I'm absolutely dead keen to have a free, Open Source program that could do what MixMeister does, I have neither the skills nor the time for such a large project.

Still, I was thinking about the way I use the program. It has a main window divided up into three parts - the top half is divided into the catalogue of songs you have, and the playlist of songs actually in the mix. Then the bottom half is the graphical display of the mix and is where you tweak the mix so that the beats line up and the fade-in and fade-out happens correctly and so forth. The key problem with this in my view is that sometimes you want a lot of space for the catalogue so you can very quickly scan through the songs looking for something that has the right BPM and key signature and that you recognise as being a track that will blend smoothly with the current track, and sometimes you want a lot of space for your graphical mixing display.

So why not have an interface that watches what you're doing and gradually adjusts? The more time you spend in the mixing window, the more the top half gradually shrinks down to some minimum size. When you go back to choosing songs, the catalogue expands back to its median setting fairly quickly (over perhaps five seconds or so) and then gradually expands if you're spending more time there. In a way it's mimicking the actions you do with the grab bars to change the size of the window panes anyway; it's just doing it smoothly and unobtrusively in the background. You don't want it moving too quickly or changing your targets as you're using them, so all changes should happen over tens of seconds and any change in direction (from growing smaller to getting larger) should be preceded by a pause to check that the user hasn't just strayed into that area by accident. Even the relatively speedy 'return to median' would happen over a long enough period that if you were able to pick something quickly and move back to your work area then it wouldn't involve too much of a wait for the windows to return to where they just were.
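
Here's a sketch of the adjustment rule, with every constant invented purely for illustration: the target split drifts towards wherever the user's time is being spent, and the visible split eases towards the target slowly enough that nothing moves under the pointer (the 'pause before changing direction' refinement is left out):

    def update_split(split, target, focus_in_mix_pane, dt,
                     lo=0.15, hi=0.85, drift=0.005, ease=0.05):
        # 'split' is the catalogue's share of the window height (0..1).
        # Drift the target down while the user works in the mixing pane,
        # up while they're back in the catalogue, within sane bounds...
        target += (-drift if focus_in_mix_pane else drift) * dt
        target = max(lo, min(hi, target))
        # ...and ease the visible split toward it over tens of seconds.
        split += (target - split) * min(1.0, ease * dt)
        return split, target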

Of course this would take a lot of engineering to apply to an application. Or would it? We've got Devil's Pie, an application that can procedurally apply windowing effects to application windows. Could something similar be taught to adjust the controls within an application? The possibilities are endless, but I have no idea at all how to go about doing it...

Seems to be the story of my life, really...

Last updated: | path: tech / ideas | permanent link to this entry

Sat 29th Apr, 2006

The CanberraNet idea

I had a very nice afternoon drinking beer and eating at Das Kleinhaus (as long as I've used the correct gender - I don't know) on Saturday with Rainer, Chris and Matthew from Kororaa with brief appearances of a very tired Pascal. (We shared a brief complaint about how non-Canberrans, Sydneysiders especially, feel a need to disparage Canberra, then both dismiss any attempt at rebuttal with disdain and get all defensive about their native city as if no person in their right mind could question the urge to live in Sydney. Thank you, Hypocrisy Central.)

We talked about the idea of a wireless mesh network in Canberra; specifically, a network that existed separately to the internet (which avoids many of the legal problems that you get embroiled in if you look like an ISP). My concept here is that this mesh would duplicate many features of the internet; it would have its own IP range (possibly using IPv6), DNS TLD, and enthusiastic contributors could provide search engines, web pages, VOIP, Jabber, and so forth. Because the same basic structure and technology that powers the internet would be used in the mesh, it would be covered by the same laws: which means that publishing unauthorised copyrighted material is illegal but the network is not held responsible for enforcing that.

I know there's been a similar proposal hanging around here (and elsewhere) for years. I don't know the specifics, and to my mind it gets hung up on the whole "how do I know what people are using my internet connection for" problem that's implied when you talk about making the mesh join the internet. I think there are deeper technical issues such as routing and address spaces and such that also need to be solved in that case. This is why I think that any successful mesh needs to aim solely at providing an extra backbone for data transfer on its own network that's completely independent of the internet. But this in itself is not a compelling reason for anyone individually to set it up.

There are two problems here: having useful content available to actually make it interesting, and having a way for end users to find that content. Again, these problems have been solved on the internet - we now have lots of people putting all sorts of interesting stuff there, and search engines go around and find out what's there and index it for easy finding later. The real problem is content; and to complicate it is the issue of why put anything on the mesh if you're not going to put it on the internet (and, vice versa, why put anything on the mesh that's already on the internet). There may be some things, like live video and audio or high-quality voice chat, that can be done better in the mesh than through the internet - but why reinvent the wheel?

Last updated: | path: tech / ideas | permanent link to this entry

Fri 28th Apr, 2006

More research...

Hmmm - maybe I don't want bluetooth, maybe I want Wireless USB. Of course, it's more pie-in-the-sky than existing technology at this stage, but the bandwidth (480Mbit/sec unwired!) is easily enough to just send a 48kHz 16-bit stereo pair unencoded across the wire. Hell, the microphone and the palm player could both be talking to the WUSB base station simultaneously and it wouldn't sweat at all. (I could send multiple HDTV streams across it and it would be only marginally irritated).

Of course, the distributor here in Australia doesn't have the headphone kit that I want for Kate in stock; they only have the more irritating 'clip onto the ears and attach with a loose cord' style. And they want $200 for them. I wonder what their return policy is...

Last updated: | path: tech / ideas | permanent link to this entry

Thu 27th Apr, 2006

The Dance Caller's Gadget

I, as some people may have noticed, teach Irish Set Dance [1] - er, Dance. To do this without developing a voice that can kill at ten paces, I bought a PA system with a built-in wireless microphone, CD player and echo (!), which can run off its own internal batteries and which, when combined with my music player, means that I can go pretty much anywhere [2] to do a dance.

Of course, the Karma is plugged in via line-in, and it doesn't have a remote control, so I have to race back to the player to turn it off, all the while avoiding getting too close to the speaker to set up a quick bout of ear-pulverising feedback. I've got a belt-pack and somewhat uncomfortable headset to wear, which has some niggling internal fault that causes its gain control to not work, meaning that 2.11 on the dial on the back of the PA system is too soft to hear, and 2.13 is dangerously loud. I also occasionally have to carry around an index card with the notes for the dance on it, due to Irish Set Dance's one consistency: there's no rule about how a particular movement is done that isn't broken by at least one historical set. (Occasionally I even have to refer back to the book because something isn't quite clear in my notes, and sometimes even the book doesn't clarify it perfectly...). If I had a remote control, that'd be another thing to carry.

A while ago I started work on the Irish Set Dancing Markup Language, my "what's a DTD?" attempt at writing an XML specification for encoding set dance notes. (As an aside, here, I think that Irish Set Dancing and American Contra are the two forms of dancing that programmers and techies grok best: they involve keywords that code for a set of specific, usually standardised movements, they have recursive and iterative structure, and you almost always get to dance with members of the opposite sex.) The idea was that, with an appropriate browser on a palm computer, you could get the entire instructions for a dance in a modest size; and you could increase or decrease the complexity using some simple controls. You might have "{A&R, HHW}x2", but at the click of a button it turns into "Advance and Retire (in waltz hold) once, then House Half Way (in waltz hold), and repeat those two to get back to place". Sometimes all you need for the same instruction is "Slides". The ISDML was to try and give a short description at each level, so that each subgroup would have an abstract when 'rolled up'. If someone better at speaking XML and with time on their hands could email me, then I'd appreciate it.
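
To make the 'rolled up' idea concrete, here's a toy sketch - in Python rather than XML, with all names and structure invented for the example - of a figure that carries descriptions at several levels of detail:

    MOVES = {
        "A&R": ("A&R", "Advance and Retire", "Advance and Retire (in waltz hold)"),
        "HHW": ("HHW", "House Half Way", "House Half Way (in waltz hold)"),
    }

    def render(figure, level):
        # level 0 gives the rolled-up abstract; higher levels expand each move.
        if level == 0:
            return figure["abstract"]
        detail = min(level - 1, 2)
        parts = [MOVES[m][detail] for m in figure["moves"]]
        return ", ".join(parts) + " x%d" % figure["repeat"]

    figure = {"abstract": "Slides", "repeat": 2, "moves": ["A&R", "HHW"]}
    print(render(figure, 0))   # Slides
    print(render(figure, 1))   # A&R, HHW x2
    print(render(figure, 3))   # the fully expanded, waltz-hold version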

And now that palm computers can play standard media files like mp3s (and possibly oggs), we can start to construct a palm device that can act as a reminder card and a music controller. I think what I need next is a bluetooth audio interface - a bluetooth device that provides an audio plug (two RCA sockets, a 1/4" jack, a 1/8" stereo jack, whatever), so that the palm computer can send its audio across the room wirelessly to my PA system. If the palm computer could also simultaneously have a connection to a bluetooth phone headset - i.e. a wireless microphone - then I'd throw all the other stuff away. Hell, half of this my Nokia 6230i (link not shown because it requires MacroDictator Flash 8) could do.

I'd be willing to pay $1000 for software that could do all this, and would open source it with whatever license you want. Anyone interested?

[1]: While 'Safe For Work', this picture may cause involuntary vomiting and the uncontrollable desire to poke one's eyes out. Be careful. And don't ask why all eight men in the set are dressed up as women. It's safer not to know.

[2]: A set dance weekend away camp at Katoomba YHA, where I'm told that they have a big wooden floor just right for doing set dancing on? Why, whatever made you think of that idea?

Last updated: | path: tech / ideas | permanent link to this entry

Bluetooth Audio Devices

Regarding my quest for bluetooth devices that can send and receive audio, I've found the Taiwanese manufacturer BlueTake, with their distributor in Australia being IP Depot (IP here standing for Innovative Products, natch). The bluetooth headphone pair BT-420EX with BT430 transmitter dongle looks particularly interesting, as Kate has been wanting a set of wireless headphones for TV watching that aren't heavy, ugly things like the Dick Smith monstrosities (those are the best we've seen so far, and they've still been uncomfortable, heavy, and bulky).

Time to try the idea out on Kate...

Last updated: | path: tech / ideas | permanent link to this entry

BeOS, Haiku, what else?

First there was BeOS;
From its ashes came Haiku.
Where do we go now?

I played around with the free distribution of BeOS R3. It was cool, although (like any shift to a new and quite different User Interface) it took a while to get used to how things were done. There was this feeling inside it (or inside me) that an "out with the old, in with the new" approach to Operating Systems was needed in the industry around 1998, as the kludge of Windows 3.1 on DOS 6, and Windows 95, and the growing snarl of problems that was Mac OS System 7, threatened to choke user interfaces in the legacy of their own dim, dark pasts.

One of my lecturers in 1994 talked about a plan (by HP, I think) to have processors of 1GHz, with ten cores per processor, with ten processors per machine, by 2010. We can see the tip of it now, with processor speeds peaking at around 2GHz to 4GHz and more work being done on making multi-core chips and multi-processor boards. For Be to say in 1991 that they were making a full multi-processor OS for consumers, complete with multi-processor hardware to run it, was daring and inspirational. To talk about getting rid of the legacy of single processors and dedicated eight-bit hardware and kludgy file system designs could only be a step forward.

Cut to now. Be doesn't exist. A group of committed enthusiasts are working on Haiku, an attempt to build the BeOS that was hinted at in R5, working not from any Be source code but from the release of the BeOS APIs as codified in R5 Personal Edition. Blue Eyed OS and Cosmoe are other projects attempting the same thing but Haiku seems to have the most support. They can do this because BeOS was modular, so as they write each unit they can put it into the rest of the OS and see how it behaves. (Try doing that with a more expensive OS.)

Certainly one of the things that strikes me about the current state of play with commercial and free OSes is that the common thing they have is legacy code and legacy APIs, and in some cases legacy hardware, to support. Apple got caught in that trap back in the 1990s, where System 7 had to support the possibility of running on a Mac Plus and a Mac IIfx, which were quite different processor architectures. Now they only support a small group of relatively similar architectures. Microsoft is caught in a similar trap, with people trying to install Windows XP on Pentium IIs with 64MB of memory. I'd feel sorry for them both if it wasn't for the fact that Linux can run on most of these architectures almost equally well.

As far as I can see, this is because the Open Source community surrounding the GNU-Linux Kernel and the various distributions on top of it are relatively quick to take in new ideas and throw away old systems if the new one is better in some tangible way. Rather than some manager calling a meeting and starting a three month process to evaluate the stakeholders and maintain shareholder value, someone with a better way of doing something comes along and writes the code to do it. If this is seen to be better, it's included. These days they register a domain name and put up a web site and make it easy for other people to contribute - acknowledging that, although they might be expert in the field they're working on, other people are too.

But I still wonder, looking at the other OSes out there, if there are still legacy bits of code in GNU-Linux that are slowing things down. I can't help looking at the horrible experience I've had trying to get printing to work on my brand new install of Fedora Core 5, or the hassle I have trying to get Bluetooth to do anything more complex than find out the equivalent of a MAC address on my phone, and wonder what's holding these things up. Programmer time, to be sure. But are there people being told "No, we can't just scrap the old Berkeley LP system, we've got to work on top of it?" or "You have to integrate bluetooth into a system designed for 2400 baud modems"? Is Fedora, or Debian, or Ubuntu, being held back in producing an OS that comprehensively and without question whips Microsoft's and Apple's arses to a bleeding pulp because mailing lists and IRC channels and web forums are clogged with old command-line hackers who refuse to grant anyone the ability to use a mouse or talk to their new mobile phone because "arr, in my day we din't 'ave none of that fancy wireless stuff, we had to toggle the opcodes of the boot loader in by hand, uphill both ways, and we enjoyed it!".

Fah.

Please mail me (or, as my fingers originally typed in a subconscious forecast of doom, 'maul me') with your opinions. I'm interested to know what you think holds GNU-Linux up from real World Domination.

Last updated: | path: tech / ideas | permanent link to this entry

Mon 27th Mar, 2006

More window effects.

I'm fired up to see what's involved in writing plugins for Compiz and Xgl. It seems to be a pretty good interface - something that's easy to add new effects into. I reckon there's a lot of visual coolness still to be written, and having this kind of environment, as well as the Open Source model to make it easy to learn from the ways of others, will mean that Compiz and Xgl have much more cool effects available than their proprietary Operating System 'competitors'.

One area to explore is the view of the workspaces. Sure, the default out-of-the-box configuration has four workspaces side-by-side, and this neatly maps onto the four side faces of a cube. But what about if you have more than four in a row? What if you have two by two? At one place I worked I had three across and two down, and I've seen some people that have four-by-four. So how does that map?

The shape we need to follow is a toroid. The first idea is to have the workspaces as panes tangential to the toroid. Two workspaces across get duplicated into four faces: you see the front of the current workspace and, behind it, a 'back-to-front' view of the same workspace. Three workspaces across become a triangular formation, and more horizontal workspaces cause more faces to appear. Two workspaces vertically get seen as a 'two-sided' pane, like a picture with two sides, that gets flipped vertically when you move 'down' or 'up' a workspace. Three vertically is a triangular prism, and so on. There are no 'top' or 'bottom' faces like in a cube, so you can see across to the other workspace on the far side from where you are.

The second idea is to map the workspaces directly onto the toroid, by making them toroidal surface segments. This actually makes the display rules easier, as we don't have to have special rules for handling two workspaces vertically or horizontally. So a two-by-two workspace would have the workspace you just viewed as the nearest outer half of the toroid, the next horizontal space being the furthest outer half, the one 'below' where you were as being the nearest inner half, and the remaining workspace being the further inner half. The transformation animation between flat workspace and toroid surface segment is fairly easy to imagine.

At first glance I thought it would be a bicubic antialiased son of a bastard to do this, but then I realised that what's probably going to happen is that you break the screen up into subsegments and map them into positions on the surface of the toroid. Their location in space is actually fairly easy to calculate, even when they're flying into position to or from the torus. The user could specify the subdivision factor - less than two shouldn't be allowed, to keep the object looking nominally like a torus - and more subdivisions would make the object look more like a toroid at the expense of rendering speed and/or computation power.
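
The 'fairly easy to calculate' part is just the standard torus parametrisation - a sketch, with the radii picked arbitrarily:

    from math import cos, sin, tau

    def torus_point(u, v, R=3.0, r=1.0):
        # u runs around the ring of workspaces, v around the tube;
        # both are angles in radians (0..tau).
        x = (R + r * cos(v)) * cos(u)
        y = (R + r * cos(v)) * sin(u)
        z = r * sin(v)
        return x, y, z

    # A workspace in column c of C and row w of W maps its sub-segments into
    # u in [tau*c/C, tau*(c+1)/C] and v in [tau*w/W, tau*(w+1)/W]; animate
    # each sub-segment between its flat screen position and torus_point(u, v).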

And this is only the start of the possibilities. Hooray, Open Source and plug-in architectures!

Last updated: | path: tech / ideas | permanent link to this entry

Wed 22nd Mar, 2006

Prove that you wrote that - ten years ago

In the lab that I work in, I'm supposed to write down everything I do, and everything I think of, and date it, in a big red book. Should there ever be a time where someone else informs me that my idea is not original and they were first to think of that, this book is theoretically going to settle the matter. Of course, I might still be wrong, but I wouldn't be just me saying so. And, of course, it looks better if I've got it in a book and they haven't.

Of course, I'm lazy, I type much faster than I write, and I don't have ideas that can be neatly mapped out on a page. But files - ah, files are untrustworthy. Discs are untrustworthy. You can fiddle with their contents whenever you like. It doesn't take much work to make a file look like it was created on the day I was born in 1971 - although going back past the Unix epoch (or your local filesystem's equivalent) is a bit more difficult. And since any challenge like this happens over the course of weeks, not in midnight raids, the temptation is there to fudge things a little and make it look like you came up with that idea for a method of delivering formatted content through the internet two years before Tim Berners-Lee ever thought of HTML.

So what one needs in this situation is a trustworthy repository with an audit trail. It must be a trusted third party, so you can deny any direct involvement. The audit trail must itself be untamperable. The third party has to also prove to you that your files, and their record thereof, haven't been tampered with - that no-one else has submitted work claiming to be you. And, most importantly, it must use fairly simple mechanisms that can act on a day-to-day (or even more frequent?) basis, so that I don't have to wait until a CD is filled before mailing it off to the escrow agency.

Now, all this isn't too hard. Public key cryptography allows you to sign your work in ways that are provably hard to forge or tamper with; it also allows them to sign their logs in such a way that you can verify with their public key that the log is a true and correct account, even if you can't play with it directly. Rsync and other methods provide a simple way to copy your work to a remote location with a minimal transfer, also using a secure transport (ssh). There are other methods - scp, webdav, version control systems; it doesn't need to be one single protocol for moving the files to the remote location, just one that lets you be confident that your work has been received and that no-one else can tamper with it.

At the remote site, I would imagine a system where every change - and using rsync or diff here would make a lot of sense - would be written to a log. Each entry in that log would be digitally signed by the secret key of the logging system. This then gets written to a CD-R, or some other permanent "can't be changed again" system - stone tablets, for example - stored in enough locations that it's too difficult for an attacker to change or destroy them all. And because your changes were signed with your private key, you can now prove that the online log of your changes agrees with what you say happened.
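
A minimal sketch of such a tamper-evident log, with a plain SHA-256 hash chain standing in for the digital signatures: each entry commits to the one before it, so altering anything already written breaks every later entry.

    import hashlib, json, time

    def entry_hash(body):
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def append_entry(log, author, change):
        body = {"time": time.time(), "author": author, "change": change,
                "prev": log[-1]["hash"] if log else "0" * 64}
        log.append(dict(body, hash=entry_hash(body)))

    def verify(log):
        prev = "0" * 64
        for e in log:
            body = {k: e[k] for k in ("time", "author", "change", "prev")}
            if e["prev"] != prev or entry_hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True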

In fact, if you use the model of open source backups (i.e. "Real men don't make backups, they just upload their work to a public FTP server and call it the Linux Kernel."), you could create a system similar to FreeNet where people hosted small chunks of this growing data corpus, hashed and encrypted and distributed to such an extent that no-one knew what was in the blocks that they held. If you had to do that to access the system, then while you might be saving your own competitors' information, at least you're improving your own security in doing so. And obviating the need for stone tablets is a Good Thing.

Now I just need to invent the system to protect this document, so that in a year's time when someone wants to do this I can say "Ah, but I invented it first! Your solution must be open source and free for everyone!"

Last updated: | path: tech / ideas | permanent link to this entry


All posts licensed under the CC-BY-NC license. Author Paul Wayper.


Main index / tbfw/ - © 2004-2023 Paul Wayper