Too Busy For Words - the PaulWay Blog

Fri 27th Nov, 2009

The new age of programming

I gave a lightning talk at OSDC this year and thought I'd write it up here on my blog. The talk was the confluence of a number of ideas and technologies that have been gradually coming together, and I think the subject is going to become increasingly important in the future.

Most laptops now have at least two cores in them. It's hard to get a desktop machine without at least two. Chips for ordinary x86-architecture machines will soon have six, eight and twelve cores. Sun's Niagara architecture already has eight cores, each running multiple hardware threads. The Cell architecture allows for up to sixty-four cores on-chip, with different architectures and instruction sets for the PPE and SPE cores. The TILE-Gx architecture includes one variant with a hundred 64-bit cores, connected to three internal DDR3 memory interfaces and four internal 10-gigabit Ethernet interfaces.

The future, it can therefore be said, is in parallel processing. No matter what new technologies are introduced to shrink the smallest on-die feature, it is now easier to add more cores than to make a single core faster. Furthermore, other parts of our computers now carry considerable computational power of their own: graphics cards, network cards, PhysX engines, video encoder cards and other peripherals are all building in processors of no mean power themselves.

Harnessing all of this requires a shift in the way we program. The people who have grown up with programming in the last thirty years have, by and large, been working on small, single-processor systems. The languages we've used have been designed for those architectures - parallel processing is either supported through third-party libraries or just plain impossible in the language. There have been parallel and concurrent programming languages, but for the most part they haven't had anywhere near the popularity of languages like Basic, C, Pascal, Perl, Python, Java, and so forth.

So my point is that we all need to change the way we think and program. We need to learn to write programs as small units that can be pipelined, streamed, scattered and distributed as necessary. We need richer toolkits that implement the various semantics of distributed operation well, so that we don't have people reinventing thread handling badly all the time. We need languages, toolkits and operating systems that can easily share processing power across multiple processors, distributed across cores, chips and computers. And we need to help each other understand how things interact, rather than each of us controlling our own little environment and trying to optimise it in isolation.
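By way of illustration, here's roughly the style I mean, sketched in Python using nothing more than the standard-library multiprocessing module: the work is a small, self-contained function with no shared state, and a pool of workers (one per core) takes care of scattering the calls and gathering the results. The cost function and the list of jobs are only stand-ins for real work.

    from multiprocessing import Pool, cpu_count

    def cost(n):
        """One small, self-contained unit of work with no shared state."""
        return sum(i * i for i in range(n))

    if __name__ == '__main__':
        jobs = [100000, 200000, 300000, 400000]   # stand-ins for real work items
        # The pool scatters the calls across one worker process per core
        # and gathers the results in order; cost() never needs to know
        # how or where it is being run.
        with Pool(processes=cpu_count()) as pool:
            print(pool.map(cost, jobs))

The point isn't this particular module. It's that the unit of work and the machinery for distributing it are cleanly separated, so the same small function could just as easily be streamed through a pipeline or farmed out across a network.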

I think it's going to be great.




