It's a duck pond.
Why aren't there any ducks?
I don't know. There's never any ducks.
Then how do you know it's a duck pond?
Wednesday, June 16
When I was ten, I wanted to be a writer.
Young Pixy: You mean they pay people to make up stories?!
Teacher: Yes, child. It's called journalism.
When I was twelve, I wanted to be a teacher.
Teacher: You don't want to be a teacher! You're too smart for that; it would be a waste of your talents.
When I was fourteen, I knew I was going to be a computer programmer.
Young Pixy: Guk.
What I wanted to be, of course, was a wizard, but there's no money in wizarding.
And programming is the next best thing. In fact, if you think about it, it's the same thing, except that it sometimes works. You say the secret words, and if you get it right, the result is magic. If you get it wrong, of course, you get dragged off to hell by demons. (Don't try to tell me otherwise. I had to sign off on a major Y2K project. I saw the contract.)
Programming allows you to work in a medium that is almost infinitely tractable. It's not like, say, sculpture, where one slip of the chisel and -
Oops. That's got to hurt.
If you make a mistake, you can go back and do it again. And again and again. And you can make multiple copies of your work at the touch of a button, and compare them, and make changes, and keep the good and reject the bad.
No, she's meant to have no nose. It's, like, allegorical.
It got bitten off by one, then?
The problem with that is that the results are limited only by your skill and your patience. More so now than fifteen or twenty years ago, when the absolute limits of the computer hardware put a clearly marked boundary around most projects. If you only have one megabyte of memory, and the feature list would require two megabytes of memory, then some of the features have to go. When you have eight gigabytes of memory, you can no longer make this argument.
Instead, today it's more a case of, Yes, you can have everything you want. If you can think of it, it can be done. Now which features did you want first?
It's a sort of a Limited Omnipotence. We can do anything, we just can't do everything. (And of course, we don't always know the consequences of what we do.)
And when it works, it's magic. Take Google, for example. If I wanted to learn about, say, magnetohydrodynamics, I can just type in the word (assuming I know how to spell it) and hit enter, and in three-tenths of a second (or to touch on something I'll come back to later, no time at all) I have the first ten of over forty-four thousand results. Bing! The demons of Jack Vance's Dying Earth books were never this helpful.
And the reason that it's magical is that we can't see how it works. Unless you already know how Google works, there's not much you can determine from using it. You can work out some of what it does, the way it ranks pages, for example, but those are just the rules that it follows, not the reason it follows those rules. It's kind of like our understanding of the atom before the discovery of subatomic particles - we can describe and predict how atoms behave, but we don't know why.
One of the most useful ways of finding out how something works is to look at a broken one - or indeed, to break one deliberately and see what happens. There's a lot of information in failure modes. That's why scientists built atom smashers, for example. And if not for a faulty connector, I would never have known that Cityrail ticket machines use EBCDIC.
And that's also why, I think, we get so mad when software doesn't work. There's no feedback on the internal operations when it does work, unlike the familiar machinery that surrounds us with its clanking and grinding and whirring. When a car is about to break down, it usually makes horrible noises first, and then makes a really horrible noise just as it fails. And even if you can't tell from the noise that the flange sprocket has worn through the fairing pin and fallen into the gearbox, you can at least tell that something has actually, physically, broken. And that replacing the broken thing will make the car work again, and that although this will cost lots of money, you can at least be reasonably assured that upon payment of said money, your car will be repaired.
But with computers...
The problem is, we are wizards, near enough. We're just not good at it.
Yesterday, when I clicked this, it retrieved my email. Today, it doesn't work. I don't get any errors, it just doesn't do anything. (My crystal ball doesn't work!)
Have you tried rebooting? (Did you start again from page one in the Codex Emailulorum?)
Yes, that's the first thing I tried. (Yes, that's the first thing I tried!)
Did it help? (Did it help.)
No. (No. And can you do anything about these sales pitches for time-shares in Hell I keep getting?)
(End of Part 1)
Okay, now that I have that off my chest...
I was reading the latest catalogue from Software Warehouse on the way home tonight, and I noticed that they are selling some new tape drives, including the Sony SAIT. Now, I'd never heard of SAIT before, so I thought I'd do a quick Google to find some details, only I ended up at Google News instead and it took me half an hour to finally stop cursing and pry myself loose.
Anyway, one downside of the 12-month doubling period for hard disk densities I mentioned in the previous episode was that the capacity of backup tapes wasn't growing nearly as fast, so instead of (as it was in the old days) one tape backing up multiple disks, it took multiple tapes to back up one disk. Which was rather less convenient than the other way round.
Sony have come up with an interesting solution to the problem with SAIT: They cheated.*
Check out the physical specifications of the drive:
5.25" Full Height Extended (5.8"W x 3.3"H x 12"D)
I haven't seen a full height 5¼-inch drive for years, and 12 inches deep?! Okay, so it can store 500GB compared to AIT-4's 200GB, but AIT-4 would actually fit inside a normal computer! AIT drives are usually half-height 3½-inch devices, about 4" by 6" by 1.6"... So about one sixth the size of the SAIT.
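That "one sixth" figure checks out, comparing the two volumes (the AIT dimensions above are approximate, so this is rough arithmetic, not a spec):

```python
sait_volume = 5.8 * 3.3 * 12.0   # SAIT: ~229.7 cubic inches
ait_volume = 4.0 * 6.0 * 1.6     # typical AIT drive: ~38.4 cubic inches

print(round(sait_volume / ait_volume, 1))  # ~6.0
```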
Bring back 9-track tapes, I say. At least you could watch them spin while the blinkenlights blinked...
* By the way, Sony guys and girls, your website lists "Desktops Computers" as a destination in the menu. Either they're awfully big or you've got a typo.
Tuesday, June 15
Things aren't looking that great either. From the invention of the hard disk, up until about 1998, storage densities had been doubling roughly every 18 months. Then for a little while things kicked into high gear, with densities doubling every year.
However, since the introduction of the 80GB 3½-inch platter late in 2002, things haven't moved at all. We're now not just behind the fast 12-month curve, but behind the older 18-month curve as well.
Seagate have just announced a new range of drives, including one with a capacity of 400GB across 3 platters. However, it's taken 18 months to bring about an increase of just 66%, which makes the doubling time more than two years.
(Hitachi already has a 400GB drive available, but it uses five 80GB platters, so it doesn't represent any new technology. Also, the last desktop drive to use five platters was IBM's ill-fated Deskstar 75GXP, which was so unreliable that it landed IBM with a class-action suit, leading the company to sell off its disk-drive division... To Hitachi.)
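That "more than two years" figure falls out of simple exponential arithmetic: a 66% increase over 18 months implies a doubling time of 18 × ln 2 / ln 1.66 months. A quick check:

```python
import math

def doubling_time(growth_factor: float, period_months: float) -> float:
    """Implied doubling time, assuming smooth exponential growth."""
    return period_months * math.log(2) / math.log(growth_factor)

# 400GB over 3 platters vs the old 80GB platters: ~133GB per platter,
# a growth factor of about 1.66 in 18 months.
print(round(doubling_time(400 / 240, 18), 1))  # ~24.4 months
```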
Sunday, June 13
[I wrote most of this last weekend, but didn't post it then because it clearly needs an edit. I don't know when that's going to happen, though, so I decided that I'd post it anyway. This is a blog, after all, not Communications of the ACM — Pixy]
I've written recently on the untimely death of Moore's Law and on one of the first side-effects of the faltering and failure of that law. But, being somewhat dead myself, I didn't have the time or energy to go into any detail, and probably left my less-geeky readers saying something along the lines of Huh?
But this is important, so I'm going to give it another try.
Way back in 1965, just four years after the first integrated circuit was built, Gordon Moore, then working at Fairchild, made an observation and a prediction.
His observation was that the number of components in an integrated circuit was increasing, while the cost of each component was decreasing; his prediction was that this trend would continue. Intel has made his original paper available for you to read. It's a little bit complicated; Moore is talking about trends in the number of elements in an integrated circuit required to achieve the minimum cost per component - efficiencies of scale, in other words.
Reduced cost is one of the big attractions of integrated electronics, and the cost advantage continues to increase as the technology evolves toward the production of larger and larger circuit functions on a single semiconductor substrate. For simple circuits, the cost per component is nearly inversely proportional to the number of components, the result of the equivalent piece of semiconductor in the equivalent package containing more components. But as components are added, decreased yields more than compensate for the increased complexity, tending to raise the cost per component. Thus there is a minimum cost at any given time in the evolution of the technology. At present, it is reached when 50 components are used per circuit. But the minimum is rising rapidly while the entire cost curve is falling (see graph below). If we look ahead five years, a plot of costs suggests that the minimum cost per component might be expected in circuits with about 1,000 components per circuit (providing such circuit functions can be produced in moderate quantities.) In 1970, the manufacturing cost per component can be expected to be only a tenth of the present cost.
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.
What he's saying is that by 1975, it would be cheaper to build a single integrated circuit with 65,000 components than to build two 32,500-component circuits - and, by comparison, a 130,000-component circuit (if such a thing could be built) would cost more than twice as much.
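The extrapolation itself is just compound doubling. Starting from roughly 64 components in 1965 (an illustrative round number; Moore's own graph starts a little lower) and doubling once a year:

```python
def components_for_min_cost(start: int, start_year: int, year: int) -> int:
    """Components per circuit at minimum cost, doubling once per year."""
    return start * 2 ** (year - start_year)

print(components_for_min_cost(64, 1965, 1975))  # 65536 - Moore's "65,000"
```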
Events since then have proved him right (and happily he is still around to enjoy it). And more right than he imagined because not only have the components been getting smaller and cheaper, but at the same time they have been getting faster and using less power. And this has been going on, following a curve where (to take the most widely noted example) processing power has been doubling every 18 months. For my entire life processing power has been doubling roughly every 18 months.
My first computer, which I bought as a teenager, saving pocket money every week until the day of the Big! Christmas! Sale!, was a Tandy (Radio Shack to many) Colour Computer. It had 16k of ROM (which contained the BASIC interpreter; there was no operating system as such) and 16k of RAM. It was powered by a Motorola 6809 processor and a 6847 video chip. It had a maximum resolution of 256 by 192 - in black and white - or 16 lines of 32 columns in text mode.
It ran at 895kHz.
Yes, boys and girls, kilohertz. It was an 8-bit chip (with a few 16-bit tricks up its sleeve, admittedly); it could execute, at most, one instruction each cycle, and it ran at less than a megahertz. (Also, it had no disk drives at all; everything was stored on cassette tape, which fact is directly responsible for the irretrievable loss of my version of Star Trek and the completely original game Cheese Mites.)
Not quite twenty years on, I'm typing this on a system with a 2.6 gigahertz 32-bit processor that can execute as many as three instructions per cycle, some of which can perform multiple operations like doing 4 16-bit multiply-accumulates all at once. It has more level-one cache than my Colour Computer had total memory. Its front-side bus is eight times as wide and nearly a thousand times as fast. My display is running at 1792 by 1344 in glorious 24-bit colour. And it has six hundred and fifty gigabytes of disk.*
It cost a bit more, it's true. My 1984 Colour Computer cost me $199.95, and Kei, my 2003 Windows XP box, cost me around $2000. The best I can do today for $199.95 (ignoring for the moment two decades of inflation and the fact that this now represents a morning's earnings rather than a year's) is a Nintendo Gamecube. The Gamecube only runs at 485MHz (achieving a measly 1125 MIPS); it only has 40MB of memory; it only has 1.5GB of storage. Its peak floating-point performance is a mere 10.5 GFLOPS, compared to the Colour Computer's... I don't know, exactly, since the CoCo had no floating-point hardware at all, and I doubt that the software emulation achieved so much as 10.5 kiloFLOPS.
So, depending on exactly what you wish to measure, 20 years of innovation has given us somewhere between a thousand and a million times better value for money.
And here it is again: This has been going on for my entire life. Every year, tick tick tick, new and better and faster and cheaper. You buy the latest and greatest and it's obsolete before you get home from the mall. It's so much a part of our lives that it's a joke, a cliche.
The death of Moore's Law has been predicted many times, not least by Moore himself, but when you get IBM's Chief Technology Officer saying
Scaling is already dead but nobody noticed it had stopped breathing and its lips had turned blue.
you know something's up. Particularly when he's not making a prediction, but talking about what's happening right now.
And everything was planned so neatly too. 90 nanometres was to come on line late '03, ramping up this year; 65 nanometres was to be the big thing of '05, followed by 45 nanometres in '07. Now, beyond that, at 30 nanometres and 20 nanometres, things were less clear, and beyond 20 nanometres not clear at all, but at least the path was marked out from the old 130 nanometre stuff down to 45, giving us 9 times the transistors and 3 times the speed. Only someone forgot to check with the laws of physics.
Wired: How long will Moore's Law hold?
It'll go for at least a few more generations of technology. Then, in about a decade, we're going to see a distinct slowing in the rate at which the doubling occurs. I haven't tried to estimate what the rate will be, but it might be half as fast - three years instead of eighteen months.
Wired: What will cause the slowdown?
We're running into a barrier that we've run up against several times before: the limits of optical lithography. We use light to print the patterns of circuits, and we're reaching a point where the wavelengths are getting into a range where you can't build lenses anymore. You have to switch to something like X rays.
So, what exactly is the problem? It's not, as Moore and others predicted, a question of actually building the circuits - that's still working fine. IBM, Intel, AMD and others have all produced working chips at 90 nanometres. The problem is leakage. Each of the millions of transistors in a chip is a tiny switch, turning on and off at incredible speeds. Each time you turn the transistor on, or off, you need to use a little bit of electricity to do so. That's okay, and it's expected, because you don't get anything for free. The problem is that the transistors are now so small, and the layers of insulation - the dielectric - so thin, that they leak. There's a partial short-circuit, and so instead of only using power when the switch switches, it's using power all the time.
So what? Electricity is cheap. Well, the so what is heat. Modern microprocessors use as much electricity as a light bulb, and that means they produce just as much heat. If they didn't have huge heat sinks and fans bolted onto them, they'd very quickly overheat and fail - a fact that some people have inadvertently discovered.
Until now, each new generation of scaling, each new node, has brought smaller, faster, cheaper and cooler transistors. At 90 nanometres, transistors are smaller, cheaper, probably faster again - but they run hotter. And the competition in the processor market has already driven power consumption (and heat generation) about as high as it can go. So when the new generation was discovered to increase the heat rather than decrease it, the whole forty-year process of accelerating change ran head-first into a wall.
Back at the end of 2002, I made the following set of predictions for the coming year. I felt pretty comfortable in all of them, the first no less than any of the others:
My predictions for 2003:
1. Microprocessors will hit 4GHz by the end of the year. Marketers will try and largely fail to convince the public to buy them.
2. A major scientific breakthrough will lead to a new and deeper understanding of something.
3. A major political scandal will result in a huge media kerfuffle and only die down when someone resigns.
4. There will be a war.
5. Bad weather will affect the lives of millions of people.
6. There will not be any major, civilisation-destroying meteor impacts.
7. Astronomers will find new and interesting things in the sky.
8. Spam, pop-ups and viruses will continue to plague us. The Internet will fail to collapse under the strain. Pundits will predict that this will now happen in 2004.
9. A rocket will explode either on the launch pad or early in its flight, destroying its expensive payload - which will turn out to be uninsured.
10. Cod populations in European waters will continue to fall, and the European parliament will fail to act to prevent this.
11. A new species of mammal will be discovered.
12. A species of reptile or amphibian will be reported as extinct.
But not only did we not see 4GHz processors in 2003, it's doubtful that we'll see them in 2004 either. (I was wrong about number 3, too. No-one resigned, and the media moved on to the next scandal. Rinse, repeat.)
Now, assuming you're not a hard-core computer gamer, hanging out for the release of Doom 3 and Half-Life 2, why should you care?
Well, if you have broadband internet, or a mobile phone, or a DVD player, or a PDA, or a notebook computer, or a digital camera (or a digital video camera), or you use GPS on your camping trips, or you enjoy the low cost of long-distance phone calls these days, if you download anime or the latest episode of Angel off the net, if you take your iPod with you everywhere you go, if your job or your hobby involves using e-mail or looking things up on the Web, you can thank Moore's Law for it.
Modern communications depend critically on advanced signal processing techniques, performed by specialised chips called Digital Signal Processors, or DSPs. These things are everywhere - every modem, every mobile or cordless phone, every digital camera, every TV or VCR or DVD player, every stereo, every disk drive. It's the relentless advance of Moore's Law that has made DSPs fast enough and cheap enough to do all this, and made them efficient enough to run on batteries so well that your mobile phone might last a week between charging. (My first mobile was lucky to make it through the day.) Disk drives demand high-speed DSPs to sort out the signals coming from the magnetic patterns on the disk and turn them back into the original data. DVD players need them to turn the tiny pits pressed into the aluminium surface into a picture. The entire global telephone network, mobile and fixed, depends on DSPs. And any advances in any of these areas will require more and faster and cheaper DSPs and - uh-oh.
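The multiply-accumulate deserves a closer look, since it's the operation all of those DSPs are built around. A FIR filter - the workhorse behind modems and disk-drive read channels - is nothing but a long chain of them. A toy sketch in plain Python (a real DSP performs each multiply-accumulate in a single cycle, often several at once):

```python
def fir_filter(samples, coeffs):
    """FIR filter: each output is a sum of multiply-accumulates
    over the most recent len(coeffs) input samples."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]  # the multiply-accumulate
        out.append(acc)
    return out

# A 2-tap averaging filter smoothing a step from 0 to 4.
print(fir_filter([0, 0, 4, 4], [0.5, 0.5]))  # [0.0, 0.0, 2.0, 4.0]
```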
And there's more: The advances in computers and communications over the past four decades have been the primary driver of the global economy. The economy has been growing all that time, even though we have made no fundamental breakthroughs in finding new resources or new materials. If you're better off than your parents, you can thank Moore's Law for a big chunk of that - if not the effort you put in, then the new opportunities it opened up.
And it just died.
I don't think the financial markets have a clue yet what's going on, but in any case it's going to be a soft landing. All of the processor manufacturers have been in a mad rush over the last decade to produce faster chips at the expense of pretty much anything else. The funny thing is that they've been pushing so hard, they've left a lot of things behind. Take a look at this chart:
You don't have to understand exactly what this means, but the first number relates to "integer" performance, which is important for things like word processing and web browsing and databases, and the second number relates to "floating-point" performance, which is important for games. (Well, and other things too.)
Int   FP    Processor
1076  763   Pentium M 1.6GHz
805   635   Pentium M 1.1GHz
237   148   C3 1.0GHz (C5XL)
398   239   Celeron 1.2GHz (FSB100)
543   481   Athlon XP Barton 1.1GHz (FSB100 DDR)
581   513   Athlon XP Thoroughbred-B 1.35GHz (FSB100 DDR)
1040  909   Athlon XP 3200+ (Barton 2.2GHz, FSB200 DDR)
1276  1382  Pentium 4 3.0E GHz Prescott (FSB800), numbers from spec.org
1329  1349  Pentium 4 3.2E GHz Prescott (FSB800)
560   585   Athlon 64 3200+ 0.8GHz 1MB L2
1257  1146  Athlon 64 3200+ 2GHz 1MB L2
The Pentium M is a modified version of the Pentium III, customised for notebook computers. Since notebook computers run off batteries, and batteries don't hold much power at all, the Pentium M has been tweaked to provide as much speed as possible while using as little power as possible. The Pentium 4, on the other hand, is designed for speed at the expense of everything else. And what we find is that the 3.2GHz Pentium 4, despite having twice the clock speed of the 1.6GHz Pentium M, is just 25% faster on integer (useful work) and 75% faster on floating point (games).
And - here's the tricky bit, and the cause of Intel's recent and dramatic change in direction - the Pentium 4 uses four times as much power as the Pentium M. So if, instead of putting one Pentium 4 onto a chip, you put four Pentium Ms, it would use the same amount of power and produce the same amount of heat, but it would run up to three times as fast... Overall.
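The arithmetic behind that "three times as fast", using the integer scores from the chart above (and assuming, optimistically, that the workload splits cleanly across all four cores):

```python
pentium_m_int = 1076  # Pentium M 1.6GHz integer score
pentium_4_int = 1329  # Pentium 4 3.2E GHz integer score

# Four Pentium Ms fit in one Pentium 4's power budget.
aggregate = 4 * pentium_m_int
print(round(aggregate / pentium_4_int, 1))  # ~3.2x
```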
Which is great and wonderful if you can use four processors at once. I can, quite happily, and more than that. A word processor can't, not easily, but then word processors already run pretty well. Games, and other graphics-intensive stuff like Photoshop or 3D animation software certainly can, though most games haven't been written to do so. Not yet.
But they will. That's the next paradigm shift for programming, by necessity: Everything will be multithreaded. And it won't stop at two threads, or four. AMD has just announced the new Geode NX. It's a 1GHz processor that runs on just 6 watts of power, around a tenth as much as the power-hungry monsters inside today's high-end desktops... Which run at around 3GHz, and would be stomped into the dirt, aggregate-performance-wise, by a chip with ten Geode NX cores on it.**
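What "multithreaded" means in practice is carving one job into pieces that can run simultaneously. A minimal sketch in Python (illustrative only - the function names are mine, and CPython's interpreter lock means real speedups need native code or multiple processes):

```python
import threading

def partial_sum(start, stop, results, idx):
    """Worker thread: sum one slice of the range."""
    results[idx] = sum(range(start, stop))

def threaded_sum(n, workers=4):
    """Sum 0..n-1, with the work divided across several threads."""
    results = [0] * workers
    bounds = [n * i // workers for i in range(workers + 1)]
    threads = [threading.Thread(target=partial_sum,
                                args=(bounds[i], bounds[i + 1], results, i))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

print(threaded_sum(1000))  # 499500 - the same answer a single thread gives
```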
Apart from more cores, we can also expect cores that do more in one cycle. We've already started to see this with Intel's MMX and SSE, Motorola's Altivec and AMD's 3DNow, all of which are designed to take a 64-bit or 128-bit register and use it to perform multiple 8-bit, 16-bit, or 32-bit operations in one go.
The advantage of these instructions is that many DSP algorithms for video and audio applications - like MP3 files, or DVD video - only require 8- or 16-bit values, but modern processors are designed with 64-bit registers for doing floating-point arithmetic. If you can subdivide that register and do eight 8-bit calculations at once, you can get through the work eight times as fast - or you can run at one eighth the clock speed, and use a fraction of the power.
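You can sketch the idea even without special hardware, using the old "SIMD within a register" trick: pack four 8-bit lanes into one 32-bit integer, and mask the top bit of each lane so carries can't spill between them. (This is a simplified, slow-motion version of what MMX and friends do in one instruction; real SIMD also gives you packed multiplies, saturation, and more.)

```python
def packed_add8(a: int, b: int) -> int:
    """Add four 8-bit lanes packed into 32-bit ints, with no
    carry propagation between lanes (each lane wraps mod 256)."""
    high = 0x80808080                    # the top bit of each lane
    low_sum = (a & ~high) + (b & ~high)  # add the low 7 bits of every lane
    return (low_sum ^ ((a ^ b) & high)) & 0xFFFFFFFF  # fold top bits back in, carry-free

# Lanes [0x01, 0x02, 0x03, 0x04] + [0x0A, 0x14, 0x1E, 0x28]
print(hex(packed_add8(0x01020304, 0x0A141E28)))  # 0xb16212c
```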
The Intrinsity Fastmath LP, like the Geode NX, runs at 1GHz and draws 6 watts of power. Unlike the Geode, it is a single-issue in-order core, which makes it smaller and simpler, but also slower.
On the other hand, it has a 4x4 matrix of 32-bit arithmetic units, each of which can hold two 16-bit elements. It can perform 16 billion multiply-and-add operations (the core of many DSP algorithms) per second - which puts it equal to a dual 2GHz G5 Macintosh. (And Intrinsity have a 2.5GHz version of the Fastmath too, only it uses more than 6 watts.)
I have a Sony Vaio mini-notebook; it has a 733MHz Transmeta Crusoe processor. It's kind of slow, and when I try to play the opening sequence of Jungle wa Itsumo Hale nochi Guu on it, it pretty much freezes up. That could be fixed by using a big, power-hungry Pentium 4 processor (like my desktop), but then I'd have a battery life of about five minutes. If instead Transmeta included a matrix processing unit like Intrinsity's - and someone wrote a video codec that used it - I could watch the whole video without dropping frames, and without being tethered to my desk by a power cord.
The chip for Sony's upcoming Playstation 3, known as the Cell, takes this even further - judging from the patent applications, anyway; little technical information has been released. It has four cores on the chip; four effectively independent processors. Each of these cores has eight vector units attached to it, and each of those vector units is capable of processing 128 bits at a time - four 32-bit calculations, or 8 16-bit ones, or 16 8-bit ones. And you can be pretty sure that it can multiply-and-add in one go. So in a single cycle, it can perform as many as 16 times 8 times 4 times 2 = 1024 operations.
Which is rather a lot.
What's more, it's called the Cell because it's designed to be hooked up to other Cells in large networks, all working together. Which should make Dead or Alive 5 visually impressive, to say the least.
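That 1024 is just the product of the numbers above (keeping in mind that it all rests on reading the patent applications correctly):

```python
cores = 4          # independent processor cores on the chip
vector_units = 8   # vector units attached to each core
lanes = 16         # 8-bit elements processed per 128-bit vector unit
mac_ops = 2        # a multiply-and-add counts as two operations

print(cores * vector_units * lanes * mac_ops)  # 1024 operations per cycle
```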
There's an article on lithography in April's Scientific American, and it plots the trend for CPU speeds forwards as far as 2020... Assuming that the trend continues as it has. Unfortunately, that doesn't seem like it will happen any more, and we won't be seeing 50GHz processors after all, at least not from conventional silicon chips.
Which is bad news for the people making those conventional silicon chips. But it's good news for designers of unusual devices like the Fastmath. And it's good news for programmers, because all those single-threaded applications are going to have to be re-written.
One of the regulars on the newsgroup comp.arch noted some time ago that even if Moore's Law failed tomorrow, we'd still have a factor of ten in performance improvements up our sleeve, because today's processors are designed to make it easy for programs to run fairly quickly, rather than to simply deliver the maximum theoretical performance. It's a trade-off, and it's been the right choice until now.
And now it's time to roll up our sleeves.
* That's dedicated disk; we'll set aside the terabyte or so living in the file server.
** And the Geode NX is a full Athlon core too, so you're not losing anything: It's still packed with 3-issue out-of-order-execution goodness.
Thursday, June 10
Darn security patches...
Thursday, June 03
I got INN to work! Look out, Usenet, I'm back in business!
(INN has to have one of the most god-awful configuration systems on the planet. Okay, so it's an order of magnitude better than Sendmail, but that still leaves it about three orders of magnitude short of "adequate".)
Wednesday, June 02
What's that Lassie?
You say that NTFS.SYS has got corrupted on the backup system, while the main system is down due to disk failure, leaving me with 650GB of files that may or may not be any good, and no easy way to tell, and what's more, the Windows XP install disk, the only copy I have with Service Pack 1a built in, and hence the only copy that will boot on this machine, has some sticky gunk on it and can no longer read the NTFS.SYS file when I try to use it in rescue mode?
But everything is backed up on DVD-R?
On 175 DVD-Rs to be precise? Well, I must admit that's slightly better than no backups at all, but still...
Woof woof bark!
And little Timmy's fallen down the well again? Sucks to be him, doesn't it?