Too Busy For Words - the PaulWay Blog

Sat 16th Mar, 2013

Recording video at LCA

A couple of people have asked me about the process of recording the talks at Linux Conference Australia, and it's worth publishing something about it so more people get a better idea of what goes on.

The basic process of recording each talk involves recording a video camera, a number of microphones, the video (and possibly audio) of the speaker's laptop, and possibly other video and audio sources. For keynotes we recorded three different cameras plus the speaker's laptop video. In 2013 in the Manning Clark theatres we were able to tie into ANU's own video projection system, which mixed together the audio from the speaker's lapel microphone, the wireless microphone and the lectern microphone, and the video from the speaker's laptop and the document scanner. Llewellyn Hall provided a mixed feed of the audio in the room.

Immediately the problems are: how do you digitise all these things, how do you get them together into one recording system, and how do you produce a final recording of all of these things together? The answer to this at present is DVswitch, a program which takes one or more audio and video feeds and acts as a live mixing console. The sources can be local to the machine or available on other machines on the network, and the DVswitch program itself acts as a source that can then be saved to disk or mixed elsewhere. DVswitch also allows some effects such as picture-in-picture and fades between sources. The aim is for the room editor to start the recording before the start of the talk and cut each recording after the talk finishes so that each file ends up containing an entire talk. It's always better to record too much and cut it out later rather than stop recording just before the applause or questions. The file path gives the room and time and date of recording.

The current system then feeds these final per-room recordings into a system called Veyepar. It uses the programme of the conference to match the time, date and room of each recording with the talk being given in the room at that time. A fairly simple editing system then allows multiple people to 'mark up' the video - choosing which recorded files form part of the talk, and optionally setting the start and/or end times of each segment (so that the video starts at the speaker's introduction, not at the minute of setup beforehand).

When ready, the talk is marked for encoding in Veyepar and a script then runs the necessary programs to assemble the talk title, the credits and the files that form the entire video into one single entity and produce the desired output files. These are stored on the main server, uploaded via rsync to mirror.linux.org.au, and are then mirrored or downloaded from there. Veyepar can also email the speakers, tweet the completion of video files, and do other things to announce their existence to the world.

There are a couple of hurdles in this process. Firstly, DVswitch only deals with raw DV files recorded via FireWire. These consume about 13 gigabytes per hour of video, per room - the whole of LCA's raw recorded video for a week comes to about 2.2 terabytes. These are recorded to the hard drive of the master machine in each room; from there they have to be rsync'ed to the main video server before any actual mark-up and processing in Veyepar can begin. It also means that a preview must be generated of each raw file before it can be watched normally in Veyepar, a further slow-down to the process of speedily delivering raw video. We tried using a file sink on the main video server that talked to the master laptop's DVswitch program and saved its recordings directly onto the disk in real time, but despite this process having worked perfectly when we tested it in November 2012, during the conference it tended to produce a new file every second or three even when the master laptop was recording single, hour-long files.

Most people these days are wary of "yak shaving" - starting a series of dependent side-tasks that become increasingly irrelevant to solving the main problem. We're also wary of spending a lot of time doing something by hand that can or should be automated. In any large endeavour it is important to strike a balance between these two behaviours - one must work out when to stop work and improve the system as a whole, and when to keep using the system as is because improving it would take too long or risk breaking things irrevocably. I fear in running the AV system at LCA I have tended toward the latter too much - partly because of the desire within the team (and myself) to make sure we got video from the conference at all, and partly because I sometimes prefer a known irritation to the unknown.

The other major hurdle is that Veyepar is not inherently set up for distributed processing. In order to have a second Veyepar machine processing video, one must duplicate the entire Veyepar environment (which is written in Django) and point both at the same database on the main server. Due to a variety of complications this was not possible without stopping Veyepar and possibly having to rebuild its database from scratch, and the team and I lacked the experience with Veyepar to know how to set it up easily in this configuration. I didn't want to start setting up Veyepar on other machines and find myself shaving a yak, looking for a piece of glass to mount a piece of 1000-grit wet-and-dry sandpaper on to sharpen the razor correctly.

Instead, I wrote a separate system that produced batch files in a 'todo' directory. A script running on each 'slave' encoding machine periodically checked this directory for new scripts; when it found one it would move it to a 'wip' directory, run it, and move it and its dependent file into a 'done' directory when finished. If the processes in the script failed it would be moved into a 'failed' directory and could be resumed manually without having to be regenerated. A separate script (already supplied in Veyepar and modified by me) periodically checked Veyepar for talks that were set to "encode", wrote their encode script and set them to "review". Thus, as each talk was marked up and saved as ready to encode, it would automatically be fed into the pipeline. If a slave saw multiple scripts it would try to execute them all, but would check that each script file existed before trying to execute it in case another encoding machine had got to it first.
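A minimal sketch of that slave loop, assuming hypothetical directory names (`todo`, `wip`, `done`, `failed`) under a shared NFS mount - the real scripts differed:

```shell
#!/bin/sh
# One pass over a shared batch-script queue. Each encoding machine runs
# this periodically; the directory layout here is an assumption for
# illustration, not the exact paths used at LCA.
process_queue_once() {
    queue=$1
    for job in "$queue"/todo/*.sh; do
        # Glob matched nothing, or another slave claimed it already.
        [ -e "$job" ] || continue
        name=$(basename "$job")
        # The mv is the claim: within one filesystem it is a rename,
        # so only one slave can end up owning the file in 'wip'.
        mv "$job" "$queue/wip/$name" 2>/dev/null || continue
        if sh "$queue/wip/$name"; then
            mv "$queue/wip/$name" "$queue/done/$name"
        else
            # Failed jobs are kept so they can be rerun by hand
            # without being regenerated.
            mv "$queue/wip/$name" "$queue/failed/$name"
        fi
    done
}

# Each slave would run something like:
#   while true; do process_queue_once /mnt/video/queue; sleep 30; done
```

Because the claim is a single rename rather than a lock file or a work-queue daemon, two slaves polling the same `todo` directory cannot both end up running the same job; the loser's `mv` simply fails and it moves on.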

That system took me about a week of gradual improvements to refine. It also took giving a talk at the CLUG programming SIG on parallelising work (and the tricks thereof) to realise that instead of each machine trying to allocate work to itself in parallel, it was much more efficient to make each slave script do one thing at a time and then run multiple slave scripts on each encoder to get more parallel processing, thus avoiding the explicit communication of a single work queue per machine. It relies on NFS handling the file move atomically, so that one slave script cannot execute a script that another has already moved into work in progress; at this granularity of work the window for overlap is very small.

I admit that, really, I was unprepared for just how much could go wrong with the gear during the conference. I had actually prepared; I had used the same system to record a number of CLUG talks in the months leading up to the conference; I'd used the system by myself at home; I'd set it up with others in the team and tested it out for a weekend; I've used similar recording equipment for many years. What I wasn't prepared for was that things I'd previously tested and found to work perfectly would break in unexpected ways.

The other main problem that galls me is that there are inconsistencies in the recordings that I could have fixed if I'd been aware of them at the time. Some rooms are very loud, others quite soft. Some rooms cut the recording at the start of the applause, so I had to join the next segment of recording on and cut it early to include the applause that the speaker deserved. There were a few recordings that we missed entirely, for reasons I don't know; I was busy trying to sort out all the problems with the main server. I was immensely proud of and thankful for the team of Matt Franklin, Tomas Miljenovic, Leon Wright, Euan De Koch, Luke John and Jason Nicholls, who got there early, left late, worked tirelessly, and leapt - literally - up to fix a problem when it was reported. Even with a time machine some of those problems would never be fixed - I consider it both rude and amateur to interrupt a speaker to tell them that we need them to start again due to some glitch in the recording process.

But the main lesson for me is that the only way to find all the problems, and learn how to avoid them, is to practise setting the system up, using it, packing it up, and trying again with something different. The 2014 team were there in the AV room and they'll know all of what we faced, but they may still find their own unique problems arising from their location and technology.

There's a lot of interest and effort being put in to improve what we have. Tim Ansell has started producing gstswitch, a GStreamer-based program similar to DVswitch which can cope with modern, high-definition, compressed media. There's a lot of interest from the LCA 2014 team and from other people in producing a better video system, one better suited to distributed processing, distributed storage and cloud computing. I'm hoping to be involved in this process, but my time is already split between many different priorities and I don't have the raw knowledge of the technologies to easily lead or contribute greatly to such a process. All I can do is contribute my knowledge of how this particular LCA worked, and what I would improve.

Last updated: | path: tech / lca | permanent link to this entry

Thu 27th Jan, 2011

Saying sorry and moving on

Today's keynote speech at LCA is from Eric Allman, the person who wrote 'sendmail', the main mail transfer agent that moves mail on the internet. There are many things in sendmail that have caused problems in the past, and its configuration syntax is known to cause brave men to wet themselves in fear. So I wanted to see how, or if, Eric would address these things in his talk.

Not only did he do this but he acknowledged the mistakes he had made. He talked about what he would do differently. He talked about the decisions he'd made that were forced by machine limitations, complete lack of standardisation of email address formats, and various other constraints. It's easy in hindsight to criticise some of these decisions but when you're starting out on a new system the horizon is wide open and you don't realise and sometimes can't even determine the scope of the consequences of your decisions.

Interestingly, he pointed to the Postel principle - "be conservative in what you send and liberal in what you accept" - as perhaps one of these mistakes. In his defence he said that there were many, often completely incompatible, email address formats and exchange methods, and professors get incredibly irate when they find out that their grant application wasn't received. But the principle allows badly-written and incompletely-compatible systems to live and thrive, and I think we've seen this with HTML and other things - by rendering badly-written HTML vaguely correctly, Netscape allowed IE to prosper.

But the thing I really appreciate is someone who will say "yeah, in hindsight that was a bad move". We all have reasons at the time, but there are a lot of bad decisions that are perpetuated - especially by large companies - because no-one is willing to admit that they made a mistake. It takes a lot of guts to stand up in front of eight hundred people who've all at one time or another struggled with sendmail and say "Yeah, M4, I don't think that was such a great idea". And yet it means we can now say "OK, well, let's get on with it anyway" and stop trying to blame sendmail for all our email problems.

His "takeaway" ideas were also really great, and I think they validate Eric's experience as a programmer and system architect. The one I would amplify is documentation: if you don't have documentation for your project, you'll never get any users.


Mon 24th Jan, 2011

LCA 2013 bid process opens - Canberra at the ready!

For the last several months, a small group of people in Canberra including myself have been preparing a bid for LCA 2013. This is not just to give us more time to make the conference the most awesome, mind-pummelling LCA you've ever been to. No - 2013 is also the centenary of the founding of Canberra as the nation's capital. It's a very significant year for us and we'd all be thrilled if we could show the attendees of LCA our great city and Canberrans the great work the FOSS community does to improve everyone's lives.

So we're really stoked that the bidding process is going to be opened early, and I think it'll lead to a really interesting competition that will result, whoever wins, in the best LCA ever!

If you're interested in being a part of the team putting this event together, email me!


Thu 15th Jul, 2010

Proposals submitted...

The Linux Conference Australia call for papers is now out, and I've submitted two papers - one for a talk and one for a tutorial. Now the waiting begins...

In 2009 I got accepted to give a talk on writing good user documentation. I'd submitted several papers before then but never got accepted; the chief reason was that I had submitted papers about stuff I was interested in but was not actually a key contributor to. LCA is crazy hard to get to speak at, but is totally worth it because they really treat speakers well. And to me it's addictive - I loved it so much in 2009 I wanted to do awesome things just to get a place in 2010.

That didn't work out for me; mainly because I'm a neophile. I tend to be interested in a whole bunch of things but only shallowly - occasionally (such as when I decided to write the doco for LMMS) I dip in but I rarely seem to be able to sustain that involvement before the next thing comes along and lures me away. But I'm more hopeful I can get a speakership at 2011 because I'm putting forward two proposals for things that I'm actually really involved in and know about.

Ah well. Now for three months of anticipation. Better keep on working on my electric motorbike then...


Sun 24th Jan, 2010

The device will submit!

I arrived a bit early for the Southern Plumbers miniconference at LCA 2010 and ended up watching people trying to work out why the projection system wasn't working - staring at various devices, switching things off and on, sulking, calling for other knowledgeable advisors, opening cupboards, etc. It was rather like that scene in The Diamond Age where Doctor X is trying to get his nanotech working.

And I realised that, with virtually any other conference, if the projection system had stopped working there it would have been "Sorry, everyone, we can't get the projection system working, we're going to have to move". But here at LCA you have so many knowledgeable, analytical, people - people for whom a piece of technology working is almost a personal affront - the problem won't resist for long. The problem will submit.

That's what makes it such a fun conference.

(We do need to realise, overall, that sometimes we need to take a step back and ask whether a problem is worth solving, or whether we should even solve it at all. I put it that if some engineers were told to open the gates of hell and let the unholy minions out upon the earth, they would try to work out how to do it rather than ask whether it was a good idea. But generally I think fixing things so they work - and knowing enough to fix things - is better than relying on someone else to do it.)


Wed 28th Jan, 2009

LCA flies by

In certain circumstances, bringing an airplane, the sun and some clouds into the proper relationship will show you an interesting phenomenon - a ring of brighter cloud, centred on the shadow of the plane. This happens at the angle where the crystals in clouds perfectly reflect the incident light back to you, and I'd love some optics physicist to explain it to me one day. But it has the unusual property, if you are close enough to the clouds, of focussing that small band - every detail of that area stands out. Individual filaments of cloud are shown to you before you swiftly move on to the next. If you watch one bit it fades into dullness and its detail is lost, but if you keep your eye moving every part of the cloud has its own delicate, infinitely detailed beauty.

I found myself in just such a conjunction of plane, sun and cloud on my flight back from Hobart to Melbourne after Linux Conference Australia, still dazed by the early morning start to get to the six o'clock plane. In this contemplation-conducive state, I thought the image above was a good metaphor for the conference overall - each little bit brilliant but fading when compared to the next bit of brilliance, and the overall brilliance only capturable in the human mind, where the individual experiences can be overlaid rather than replaced and forgotten as in a movie.

I'll stop trying to wax lyrical and instead note down some highlights of the whole week of fun.

While it was a bit of a slog up the hill to the college from the Uni, it wasn't too hard and certainly got a few of us a bit fitter, myself included. The rooms were very nice, and despite being shunted out of my original room with other Canberrans I got to meet a bunch of new people which I always enjoy. Special thanks to Ian Beardslee for whiskey and perspective.

The venues were pretty good, but the fact that speakers had to hold radio mikes up to their faces led to a lot of pretty variable audio. Some people, like Tridge, Jeff Waugh, and Rusty already know how to project well - others were a bit shyer and/or uncertain how to speak to a microphone. The trick is to have it up near your chin - close enough to pick up every sound, but out of the direct breath path so that your 'P' sounds don't pop. The main point is that you are trying to get your spoken words across to everyone in the room and on the video, and that is much more important than feeling embarrassed. And never, ever blow into the microphone to test if it's on - tap it or scrape the mesh on the top instead. There's much less chance of damaging the pickup that way, or having an audio professional decapitate you with your own shirt for maltreating their equipment.

Being a speaker for the first time, I was really blown away with how well they treat speakers at LCA. You get picked up at the airport, you get your own (speakers) dinner and you get to go to the Professional Delegates Networking Session. So not only did I get to go to two very nice places to eat and see some of the attractions around Hobart, but I also got to pretend to be a professional. Being a part of the process that makes LCA great - the talks - is pretty awesome too. And having people talk to and email you afterward about the topic and ask more questions and have more discussion is even better. Still very happy with that.

However. In order to really rock as a speaker giving a "here's the coding project I've been working on" talk, I think you need one simple thing: results. There were a couple of talks - the High Def H.264 decoding in Intel GPU talk for example - that gave an overview one might give to technical management and showed us almost nothing in the way of actual code or working software. Compare this with the CELT talk, where Tim not only demonstrated why the code was so clever and why low latency was important, but demonstrated it right there. I don't really need a working demo, but I do need to see that the code is in use by real live people, not still on the drawing board. If drawing-board projects were the criterion for a good talk I would be occupying my own day at LCA. :-)

The conference dinner was very good - buffet style wins! The fund raising was also pretty awesome - although I'm not a big fan of the whole 'auction' thing when pretty quickly it has got out of the reach of any single person in the audience, I still think that it's an excellent example of why Open Source really does rule when we can raise over $40,000 for a charity from essentially a bunch of individuals with one tangible and a few intangible prizes (pictures in the kernel, people's integrity, etc.). If anything, the guy who spoke about the disease could have talked more about the research - most of the table I was sitting with was pretty bored through the 'here's some pictures of bad stuff' part but were riveted when it came to the 'and here's why it's a technically interesting problem' part.

The laptop case cover was well received but needs some work to straighten it out and stop it from cracking. It no longer attaches to the laptop - the tension on the outer surface simply pulls the catches back off again.

A judicious balance between coffee, V and water is what kept me going for most of the conference. I've found the 700ml Nudie bottles are light, easy to use, and contain enough water to keep you hydrated. It took me most of Monday to really feel like I was fully compos mentis.

I met lots of nice people in the LUG Comms meeting and more nice people in the LinuxChix lunch. I now owe Jon Corbet two beers, as part of a "I must buy you a drink for your excellent Linux Weekly News" plan gone horribly wrong, and Steve Walsh, Cafuego, James Purser and others need to be pinned down in a bar somewhere so I can buy them beers. Jon Oxer and Flame (who really should be called Black Flame) were excellent value, the keysigning was underpopulated but still worthwhile, and the sheer quantity of BOFs happening in spare rooms, in corridors, up trees and elsewhere were just too much for me.

The MythTV miniconference was a highlight - giving my talk at it was a lowlight because I should really have had much more technical detail; the lesson is "if you see someone suggesting a miniconference, only volunteer to talk on the subject if you have something that is at the generally high quality of Linux Conference talks". There were a few other MythTV talks that left me wanting a bit more detail, but there's no feeling quite like realising that all the technical people have left the room for your talk, and the only developer remaining is working on his presentation....

Overall, the quality of LCAs is still high, and I have no doubt that Wellington will pull out all the stops for a top-quality LCA too. If they can get their videos up a bit quicker than this year...


Fri 23rd Jan, 2009

LCA - the conference that keeps on giving

I haven't really been in the right frame of mind to blog more regularly about LCA. But my current criterion for any new employer is whether they consider it important enough to my employment to be interested in sending me to LCA -


Sat 2nd Feb, 2008

LCA 2008 Google Party Mix

The day finally came, and though I was a ball of sweaty clothing from giving my Lightning Talk I was ready to do some mixing for the LCA 2008 Google Party. Thanks to some pre-prepared scripts, I put the mix up on my torrent server pretty soon afterward. If you want it, you can download the mix via BitTorrent or read the track listing. All the music is Creative Commons licensed and therefore my mix is also similarly licensed; I'll work out the exact license code when I've looked at the licenses on all the music, but for now I will release the mix under a Creative Commons 3.0 By-NC-SA license.

Thank you to Peter Lieverdink and the LCA 2008 team for allowing me to mix at LCA - I had a great time doing it. And my collection hat (thank you Stewart Smith) raised $24.30 to donate to the artists. I reckon that's pretty good for something completely voluntary where most people hadn't been really getting into the music much (that I could see). Now to work out how to donate it...


Thu 31st Jan, 2008

Network Interactionativity

For some reason, on certain access points at LCA - for instance the one in the St. Mary's common room - I need to set my MTU to 1000 (i.e. down from 1500) in order to get Thunderbird to do secure POP. Everything else works fine, but Thunderbird just sits there timing out. I discovered this by watching the Wireshark log and noticing packet fragments disappearing (i.e. some packets where the TCP fragment analysis couldn't find parts of the packet to reassemble). Hopefully this isn't also causing Steve Walsh to pick up his specially sharpened LAN cable and hunt me down...
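For reference, the MTU change itself is a one-liner; this assumes the wireless interface is called `eth1` (substitute your own, and note it needs root):

```shell
# Drop the interface MTU from the default 1500 to 1000 so that large
# packets from the secure POP session aren't silently lost en route.
ifconfig eth1 mtu 1000
# or equivalently, with iproute2:
ip link set dev eth1 mtu 1000
```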


On to other things

After spending four hours or so working on my hackfest entry, I was less than optimistic. My entry had yet to even be compiled on the test machines, and it still had huge areas of code that were completely unimplemented. When I went into the common room at St Mary's, Nick from OzLabs recognised me and helpfully mentioned that someone else not only had their code completely running but was in the process of optimising it. I promptly resigned.

I say "helpfully" sincerely there. It is a bit of a pity that my ideas won't see the light of day this hackfest, and that I won't be in the running to win whatever prizes they might offer. But since I don't have a snowball's chance in a furnace of winning anyway that's hardly a real disappointment. And I can go to bed with a clear head and prepare for my lightning talk and the Irish Set Dancing and mixing I plan to do at the Google party, which realistically are much higher priorities.

I do hope that we get to see the winning solutions, though...


Hackfesty?

I've decided to have a more serious look at entering the hackfest, since I'm familiar with processing fractals with parallel algorithms. Downsides are that I've only done it with PVM, I haven't done anything with the Cell architecture and there's all these other really cool talks to go to. That and I need to have my eyes stop glazing over when I start reading anything more detailed than the "Fire hydrant and hose reel" sign opposite me.


Wed 31st Jan, 2007

At last you know what you're getting

I've finally mangled up the track listings for the Flashing Google Badge Mix and the Wired Kernel Hacker Mix. Now you know what you're listening to!


Thu 18th Jan, 2007

The "solving things" conference

Last year at LCA 2006 I had two guys take five minutes of their time to help me get my work laptop on the wifi, a process which included one of them lending me his PCMCIA card (that didn't require firmware) so that I could download the firmware for my inbuilt card. I've forgotten your names, whoever you were, but you guys rocked. And I've told the story many times to illustrate why groups of hackers getting together can achieve things that would take a single person a lot of work to troubleshoot.

This year I've had similar experiences. The first one was getting my DVD drive set up to use DMA. The Intel 82801 ICH7 family of ATA bridges supplies a SATA interface for the hard disk and a PATA interface for the DVD drive. Unfortunately, the standard ATA driver doesn't handle this combination correctly and doesn't enable DMA on the DVD drive (or allow you to set it via hdparm). To fix this, put the following incantation on the end of your kernel line in your GRUB configuration file (for me on Fedora Core 6, that's /boot/grub/grub.conf):

kernel ... combined_mode=libata hdc=noprobe

The other was finally getting CPU frequency scaling working on my Intel Core 2 Duo. It's an unfortunate but now well-known bug that the Fedora Core 6 anaconda installer will not correctly work out what type of chip this is. It therefore thinks that you need the Pentium (i586) kernel rather than the Pentium II and later (i686) kernel. Since Pentiums didn't come with frequency scaling, the kernel package doesn't include the necessary kernel objects for speed stepping. You'll know if this applies to you with the following command:

rpm -q kernel --queryformat "%{NAME} %{RELEASE} %{ARCH}\n"

The third column will have the architecture - standard rpm and rpm -qi commands won't tell you this. uname -a will tell you i686 even if the kernel is i586, so don't believe it. To download the new kernel version, use:

yum install kernel.i686

I think that you have to do some special magic to get it to install the i686 architecture of the same version. As of my writing, it picked up the 2.6.18-1.2868 version of the i686 and installed that beside the 2.6.18-1.2869 version already installed. Yum won't correctly replace the i586 architecture version with the i686 architecture version if it's the same release number, as far as I know. I don't know what you do in this case.

Of course, while you're still running the current working kernel, download all your kernel-specific packages for things like wireless networking support. For these you have to download the RPMs from your local mirror and install them manually, because yum will only install packages for the currently running kernel. And if your ipw3945 driver is compiled from source, you'll have to make clean and compile it and the ieee80211 module from scratch again. Take it from me, there's some weird voodoo to get this working that took me a day to incant correctly.

Then you should have an acpi_cpufreq.ko module installed and be able to use one of the CPU speed regulator daemons. I think I have both installed somehow, which means they're probably fighting it out or something. Go me. Still, I can blog about it, which hopefully means that Google will index it and someone else will learn from my mistakes. That's the only reason I'm doing this, you know.


Rad GNOME Presenters

Andrew Cowie and Davyd Madeley put on a good show on how to start writing GNOME applications in C and Java. Andrew in particular is an enthusiastic speaker, and understands that it's very difficult to choose which talk/tutorial to go to, and that sitting for ninety minutes listening to one topic is sometimes difficult. I totally appreciate that. And, by Torvalds' Trousers, he's fast at using UIs - watching him work in Eclipse makes you realise how good programmers can churn out a fully implemented file browser in an evening.

I'm going to have to kidnap one or both of them and bring them to the CLUG Programmer's SIG meeting. This talk was exactly what the people at the PSIG that I've been talking to have asked for.


I'm on the shirt that killed River Phoenix

In crowds, there's always someone heading vaguely toward you but heading somewhere else entirely. There are a whole lot of little protocols - not meeting their eye, negotiating a slightly different course - that allow a certain social space. So it's always disconcerting to have someone stride directly up to you - when they actually do mean to meet you and you've now been pointedly ignoring them. He pointed to my LinuxChix Miniconf "Standing out from the crowd" T-shirt and said:

"Where can I get one of those?"

I gave him Mary Gardiner's email, and whatever other methods I could remember of how to get in contact with her. But though it's a long sleeved shirt and it's a warm day, I'm totally chuffed to have got one now. LinuxChix roxxors!

(BTW, the title is a reference to TISM's popular song (He'll Never Be An) Ol' Man River, of course.)

Last updated: | path: tech / lca | permanent link to this entry

Wed 17th Jan, 2007

Kernels meeting in the middle?

Andy Tanenbaum's talk on microkernels was, IMO, really cool. The interesting thing to me was that it almost exactly mirrored Van Jacobson's talk at LCA 2006 on speeding up network access by moving the network drivers out of the kernel. Not only did this speed network access up, but it also removed a whole bunch of ugly locking code from the kernel, improving its quality as well. Another side benefit was that you could now run half a dozen network processes instead of one. With architectures like Sun's Niagara, Intel's quad cores and many other systems getting many cores onto the same chip, this is going to deliver an increasing speed-up.

It occurs to me that this is the other good thing about Minix. The disk driver, the network driver and the screen driver can all run at full speed because they each get 100% of their own CPU time. Separating these out into separate processes that can run on separate CPUs will deliver better scaling than a bloated kernel that has every driver and every subsystem bundled together. To me, this is not really a problem for Linux - we already have proof that these trends are happening. Linux might have a larger kernel, but we're meeting the microkernels in the middle.

For Windows, though, I'd say that it will become increasingly obvious that it just can't compete on reliability and scaling in the area that they so desperately want to get into: the server market. The annoying thing about this is that it won't really matter, because Microsoft knows who to market to (the upper management who don't read technical journals) and have the budget to make anything look good. The fight is still on, but it's still not between Linux and Minix. Sorry, Marc, stirring that particular pot again does not get you any kudos.

Last updated: | path: tech / lca | permanent link to this entry

Tue 16th Jan, 2007

Submitting patches and watching devices

Two more excellent talks at the LinuxChix miniconf - how to work on open source if you're not a programmer, and how to understand PCI if you're not a hardware hacker. It was amusing to see that the small room the miniconf was given has been constantly full, with people often having to sit on side tables or stand in order to watch. For the latter talk in particular, a huge contingent of guys turned up to listen and strained the capacity of a room that had already been boosted with lots of extra chairs. Very cool.

One of the key elements that has come out of the LinuxChix miniconf (in my opinion) is that social networking is just as important as digital networking. Part of this is meeting and greeting, something that would still be just as awesome even if LCA were twice as big. Another part is the smoothing of feathers, the shaking of hands, the stroking of egos - the little things that sometimes you have to do to get patches accepted or problems resolved. One trick which Val Henson mentioned is to submit a patch with one or two obvious errors (like submitting it in the wrong format) - the developers can then feel all important and tell you you did it wrong, you quietly submit the corrected patch, and everyone feels happy.

Logically, it shouldn't have to be this way. Open Source prides itself on the idea that anyone can modify, anyone can help. But this, as Sulamita Garcia (the first LinuxChix speaker) pointed out, is a fiction - the reality is flame wars, shouting matches, and sexist comments. Getting patches accepted can often be as much about knowing who to talk to as about what format to submit them in. A woman going along to a LUG meeting for the first time can find it, as Sulamita described it, akin to the scene in a spaghetti western where the stranger walks into the bar and everything stops. This must change if we're to be anywhere near as equal and egalitarian as we claim to be.

And certainly for men it's sometimes a huge struggle. I think of myself as a feminist and consciously support equality and fairness, yet I still make the same mistakes as all the other guys - the very mistakes I personally shrink away from. And even after this example, when you'd think I should have put a cork in my mouth, I was still putting my foot in instead.

For the last session of the LinuxChix miniconf we went to the library lawn to sit in the dappled sunlight and talk about how difficult it is to get a fair rate of pay. This followed on from Val Henson's talk on negotiation and knowing how to get what you deserve, which was excellent and (I feel) applies to the wider community of computing workers. Mary Gardiner organised us into small groups and specifically cautioned the men in the groups not to talk too much (which would have been a good idea even if it wasn't a LinuxChix miniconf). So we start introducing ourselves, and what do I do?

Go into a long and tedious ramble about the pains of one of my previous jobs.

Mary, the lady organising our group, gently interrupted me and moved on, and I realised my error. Andre Pang, who was also in the group, was much better than I at keeping quiet and letting the women[1] talk. I silently made the motions of putting a cork in my mouth and managed, I think, to restrain myself.

Why must my urge to speak and be heard fight with my desire to be fair and equal?

[1] - Women? Ladies? Girls? Females? Whatever term I choose, I hit the age-old problem of them having social connotations.

Last updated: | path: tech / lca | permanent link to this entry

The invisible macho danger

I worked out that there were 38 women and 12 men for the first session of the LinuxChix miniconf. In the question time, it came out that the FOSSPOS study (I've yet to find it on the intarweb) showed that FOSS and Linux has an order of magnitude fewer women compared to the rest of the IT industry. And yet, when I asked my question "why is this?" Val Henson pointed out to us that, even with that proportion of women in the room, all of the questions up to and including mine had been asked by men.

Ouch.

Last updated: | path: tech / lca | permanent link to this entry

Getting your hands on a child's laptop

Chris Blizzard's talk today about the OLPC covered the question that everyone from the FOSS world (apparently) asks: can I have one? It's very true that even if you got 50,000 people wanting an OLPC (or whatever the actual thing is called), that's peanuts compared to delivering 20 times that number to one nation alone. However, the 50,000 number is being bandied around - what if there was a website for people to register their interest? And they could say how many they wanted? The more that people spread the word, the more people might find more uses for them. Entire classes or schools in first-world countries could sign up, whereas currently they would be denied. That's got to be good, right? Things like PledgeBank make it easy to get a good feel for how many people are interested in doing something - why not do something like that for the OLPC and see what the real interest from the people is?

Last updated: | path: tech / lca | permanent link to this entry

Mon 15th Jan, 2007

Odd hackery across operating systems

So the next step in getting MixMeister working under WINE seems to be to sort out a bunch of SELinux context problems. The command to do this is chcon -t textrel_shlib_t <file> (note -t, which sets the type, not -r) - it allows the file to be loaded as a shared object library. I must remember that. This got MixMeister to show its front screen, but it still complains that WMVCore.DLL is missing.
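As a rough sketch of that relabelling step (the library name here is a made-up stand-in, and the fallback message is mine):

```shell
# Create a stand-in for the freshly built Wine library.
touch fake-lib.so

# chcon's -t flag sets the SELinux *type*; textrel_shlib_t marks a
# shared library that is allowed to contain text relocations.
# On a filesystem without SELinux labelling support this simply fails.
chcon -t textrel_shlib_t fake-lib.so 2>/dev/null \
  || echo "chcon failed (no SELinux labelling support here)"

# Show the resulting label, where ls -Z is supported.
ls -Z fake-lib.so 2>/dev/null || true
```

Worth remembering that a chcon label doesn't survive a filesystem relabel (restorecon will put the default context back), which can make the problem mysteriously reappear later.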

Aside: my fallback, if this WINE hackery wasn't going to work, was going to be starting up in Windows (I still have an XP Home license installed on a smallish partition, since I'm loath to throw away something that costs money when I'm given it and I can find a use for it.) Resigning myself to not using Linux, I restarted in Windows. And then discovered two problems. One is that I only have a demo version of MixMeister Studio 7 that I was trying out a while ago. I'd have to grab the install files and my registration key off my home server - not impossible despite its 34MB installer size.

The second problem was that my portable drive, which regular readers will recall I specifically formatted into FAT32 (yes, using the -F 32 switch to mkfs -t vfat, which otherwise will give you a 12 or 16 bit FAT) in order to make it accessible under Windows should I have to go to this fallback position, is not recognised under Windows. It sees the drive, and the partition, and can even determine that it's a 32-bit FAT partition and see how much free space there is. But it just remains greyed out, and there is no option active in the context menu apart from "Delete Partition". Strangely enough, I don't want to do that.
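As an aside on why the -F 32 switch matters: the FAT variant isn't arbitrary - it's determined by the count of data clusters on the volume, with thresholds taken from Microsoft's FAT specification, and mkfs picks whichever fits unless you force a width. A tiny illustrative sketch (the function name is mine):

```python
def fat_type(cluster_count: int) -> str:
    """Return the FAT variant implied by the number of data clusters.

    Thresholds are from Microsoft's FAT specification: fewer than
    4085 clusters means FAT12, fewer than 65525 means FAT16, and
    anything larger is FAT32. mkfs applies the same kind of logic
    by default unless you force a width with -F.
    """
    if cluster_count < 4085:
        return "FAT12"
    elif cluster_count < 65525:
        return "FAT16"
    return "FAT32"

# A small partition naturally lands in FAT12/16 territory, which is
# why forcing -F 32 was needed to get a FAT32 filesystem.
print(fat_type(4084))     # FAT12
print(fat_type(60000))    # FAT16
print(fat_type(2000000))  # FAT32
```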

Of course, Windows won't bother the user with such useless information as why it won't allow me to see that partition. Or what I can do about it. Or why it sees my LVM partition and assigns it a drive letter without being able to read it in the slightest. That's totally useless information to the user. Oooh, sorry, I said too much: that's totally useless. That's all that I need to say.

So, it's back to Linux again to see what I can do. Using Windows has reminded me that I have a perfectly working copy of Windows on my hard disk, with MixMeister installed and working on it. This means that there's a fully working copy of WMVCore.DLL in there somewhere. And thanks to prescience, I have already loaded the kernel NTFS drivers and can mount NTFS partitions. A bit of finding later, I've copied the WMVCore.DLL and another one it seemed to need (wmasf.dll) over to my WINE \windows\system32 directory and given them the necessary permissions. And MixMeister is now no longer complaining about it missing DLL files, or producing SELinux audit messages in the system message log.

Instead, it's just crashing with the message "wineserver crashed, please report this."

*sigh*

Another thing to try and figure out. Another thing to wade into, blindly trying to find out any information I can about what's going wrong. Another thing to stab haphazardly at, pressing buttons at random just to see if anything changes. I'm sure at this point more clueful people just give up, knowing smiles on their faces, saying "only a true lunatic with far more time on their hands than is good for them would ever bother to try and work out what's going wrong at this point."

Maybe this would be a good Lightning Talk.

So, I find wineserver and find out how to run it in debug mode (-d2 -f). That didn't really help - nothing in the debug output was any different between a good run (with the unpatched server) and a bad run, except that the bad run just cuts out with the helpful message (null)( ). Antti Roppola, looking over my shoulder at one point, suggested running wineserver under strace, and this revealed that wineserver is getting a segfault. Now to put a bit of debugging into the things I've added to see if that can tell me why.
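The strace trick generalises: a process that dies silently still shows the fatal signal in its syscall trace. A stand-in demonstration (a deliberately self-segfaulting shell takes the place of wineserver here; -f follows forks, -o logs the trace to a file):

```shell
# Trace a process that kills itself with SIGSEGV, standing in for the
# crashing wineserver. strace records the fatal signal even though
# the process itself reports nothing useful.
strace -f -o trace.log sh -c 'kill -SEGV $$' || true

# The trace ends with the signal that killed the process.
grep -c SIGSEGV trace.log
```

For the real thing the equivalent would be strace -f -o trace.log wineserver -d2 -f, then searching the log for the faulting call.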

Last updated: | path: tech / lca | permanent link to this entry

Conserving power

With my laptop on 32% charge (not quite a record for me) and my own internal batteries needing a bit of charging, I looked at the programme after afternoon tea. There wasn't anything that really stood out for me as a must-see, and as Jeff said in the opening speech it's important to conserve one's energy and pace oneself at LCA. So I enjoyed a quiet walk back down the slope to Shalom College, to see if they have Wifi here yet.

I have to say that I do like the UNSW campus. It has the same style as QUT in Brisbane - fairly closely packed, and a mixture of the old and the new with little nooks and lawns of greenery amongst it all to enjoy as one goes past. Passing the John Lions garden outside the Computing Science building is particularly poignant given last year's fundraiser. And, while some people might complain a bit about the walk uphill to the conference in the morning, and even I end up feeling unfit and slightly out of breath after tackling it, it's very pleasant to walk down in the afternoon.

After Jeff's talk was a talk on Conduit, a GNOME subsystem designed to make synchronising data between a source and one or more destinations easy. It looked absolutely awesome, because the guy who gave the talk understood the problem space well and had solved it in a way that allowed both 'headless' sync to happen behind the scenes and fully GUI-enabled ad-hoc sync with conflict resolution all beautifully handled. This allows any application that uses DBus to facilitate syncing of data, without having to be an expert in asynchronous synchronisation (not, in this case, a contradiction in terms). Jeff's talk was about enabling social networking and putting GNOME on a phone (so to speak), and this talk was about making synchronising that data seamless and easy. Cool!

And now to my afternoon's entertainment: getting my WINE patch to work. I've got all the code working, including the two bits I'd commented out because I didn't quite understand where the data was coming from. It compiles with no errors and only one or two warnings, which as far as I can see aren't caused by my code. Now I have to go through and manually label my build directory so that it has the right SELinux contexts to have execmod permissions. Presumably Fedora's SELinux configuration assumes that a big bunch of libraries compiled quite recently in your home directory isn't guaranteed to be trustworthy enough to use as libraries. Fine by me.

Maybe I'll find out if Wifi is enabled in Shalom, so I can post this and Google for solutions to the execmod problem.

Update: Nope, Wifi is not working in Shalom. A quick call to my front man Steve revealed that it hasn't worked today, will be ready when the gods have been appeased and the troubleshooters have shaken their voodoo sticks over it, and is only likely to be sporadic even then. I realise that this is as much a question of getting bandwidth down here as getting the time to set stuff up, and I least of all people want to hassle the network guys with requests that they're already trying to handle. But it's not a great way to end the day for everyone staying on campus.

Last updated: | path: tech / lca | permanent link to this entry

Why are wii here too?

Just a quick clarification before dozens of Jdub and GNOME fanboys jump on me and toast me to a crisp: the overall idea is sound. Social networking and making nice interfaces and embedded GNOME, yay. But demonstrating this with half an hour of rambling Wii play was not my idea of a good way to get this across.

Last updated: | path: tech / lca | permanent link to this entry

Why are wii here?

Here at the GNOME miniconf we've been watching Jeff play with his Wii. The talk's called "connecting the dots", but as far as I can see it should be titled "watch Jeff play with his Wii and get absolutely nothing done". We've got some guy down from the audience and are creating a new 'Mii' character for him, but I've already switched off. And the repeats of the music are going from mildly irritating to annoying, and in another five minutes it'll be insane rage time. Sorry, Jdub, but this doesn't cut it as a talk.

Still, I've got another inconsistency with my WINE patch sorted out, and I've just discovered another, so I've got something to do.

Last updated: | path: tech / lca | permanent link to this entry

It's not just for sharing music?

First talk: virtualisation, with Jon Oxer talking about trying to manage Xen clients across multiple machines without twelve layers of abstraction and SAN thrashing. Very good stuff, and while a friend of mine commented that it was "welcome to the 1980s" as far as IBM and large-scale mainframes are concerned, I think Jon's got a lot of good ideas to bring this to the Open Source world. My tip to speakers from that talk: assume that you get about half of your time to talk and half of your time to answer questions. Oh, and manage your questions - the microphones are being passed around so that questions can be recorded; take the question from the person with the microphone.

Second talk: Avahi. I'd noticed this a while back when I was using Rhythmbox at work - all these extra playlists would turn up in my sources list. I soon realised that these were the iTunes (DAAP) music shares of other people just broadcasting themselves on the network. Learning about how these things work - and learning some of the tools for adding Avahi functionality to programs - is pretty cool. Now to create my Avahi Mandelbrot Set calculator that grabs whatever processors it can find around the network. World domination, here we come!

Aside: I had one of those embarrassing moments that seem common at LCAs: meeting a friend and not quite remembering where I'd seen him. On the one hand, I'd gone to sell him my old car, so I should have remembered him instantly. On the other hand, he'd changed his beard (again), grown taller than I remembered, and my memory was tricked into thinking he was a person I knew from 2001 or so, to whom I still (logically) owe a CD of Renaissance music. My memory is weird.

Jeff confirmed that they're still working out the programme for the Conference Party, and will get back to me tomorrow regarding whether I can mix there. I'm good with that.

Now, with a good spicy Beef Rendang in me, a lead recharging my laptop and a can of V (bought along with seven others and a block of 70% cocoa chocolate on Sunday afternoon) ready to go, it's time to decide what new coolness I'll see this afternoon. Probably Jeff's GNOME talk.

Last updated: | path: tech / lca | permanent link to this entry

LCA Day 1 - so far so good

Waking up was as easy as ever at LCA conferences - at 7AM the alarm rang and I cursed. My tossing and turning in the new, hard and somewhat scratchy bed hadn't made for an easy sleep. Getting to bed at half past midnight after coming home from the Lowenbrau Keller hadn't helped either. But, strangely, I was energised - rather than dropping back into sleep I was ready for the first day of LCA. Shower. Dress. Eat breakfast. Head on up to the top of the campus. Get connected to the Wifi, courtesy of a random Seven team person who was keen to find out whether the network was working. All good.

I note that the Programme has been updated, and that my 'request' to do some mixing at the Conference Party hasn't. All good. I'd better start practising mixing again, then. I don't think my patches to WINE are going to come good, and VMware is having trouble getting installed. It's going to be back to closed-source sinning...

Last updated: | path: tech / lca | permanent link to this entry


All posts licensed under the CC-BY-NC license. Author Paul Wayper.


Main index / tbfw/ - © 2004-2016 Paul Wayper