CAN I BE OF ASSISTANCE?

Wednesday, May 25

Geek

Situations Vacant

At my day job, we're looking for two or three sysadmin / programmer types.  The field is real-time social network and web analytics.

Location: Sydney, San Francisco, New York

Required: Solid working knowledge and at least two years practical experience with Linux system administration, Python, and databases.

Useful: MySQL, Redis, Cassandra, Xapian, RabbitMQ, PHP, Apache, Nginx, memcached, Mercurial, networking, virtualisation, system monitoring tools.

Other knowledge: Statistics, parallel processing & scalability.

Not of interest: Java, .Net, Microsoft platform in general

CS or similar degree is valuable but not critical if you have the equivalent practical experience.

Training will be provided at our main office in Sydney; we'll fly you out here for a few weeks if you're based in SF or NY.

Good salary and benefits (commensurate with experience and talent), flexible working hours, opportunity to telecommute part time.  Will be on an on-call roster for system outages.

Email me (use help@mee.nu) if you're interested or have questions and I'll put you in touch with the right people.  Oh, and I'll probably be doing the technical interview. wink

Posted by: Pixy Misa at 02:42 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 179 words, total size 1 kb.

Monday, May 16

Geek

Another Story

One of the drives in Nagi (my Windows box) is on the way out, as evidenced by system freezes that leave the drive light solidly on but nothing happening.  If I don't mess around too much I can still move my mouse and maybe switch to an application that doesn't want to access the disk right now.

This is not a good thing. 

/images/NotGood.jpg
Not good.

So I went out to get some external drives to back everything up.  My friendly local electronics and computer stuff store offered several options: A 1TB Western Digital MyBook Essential drive for $129.99, a 2TB Western Digital MyBook Essential drive for $129, and a 3TB Western Digital MyBook Essential drive for $269.

I love that kind of pricing; it makes decisions so easy.

Anyway, I bought four of them.

I actually have more than 8TB of internal disk across all my computers, quite a bit more, but a fair chunk of that is backups, and backups of backups, and really bad anime that I'll never ever watch, and about a terabyte of Steam content which I can download again with one click (and two weeks of waiting).

So I'm running backups.

/images/RunningBackups.jpg
Running backups.

I have enough spare disks sitting around to replace all the drives in Nagi, mainly because I bought them with the intention of replacing all the drives in Nagi.*  So once the backups are done that's probably what I'll do. 

But first I'm going to get me a USB 3 card, because as things are, just restoring my C drive would take me more than 24 hours.
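
Back-of-the-envelope, with guessed numbers (how much actually lives on C and what these drives sustain in practice will vary):

    # Rough restore-time estimate - the data size and throughput figures are guesses.
    def restore_hours(data_gb, mb_per_sec):
        return data_gb * 1024 / mb_per_sec / 3600

    c_drive_gb = 2000        # assume a couple of terabytes to restore
    usb2_mb_s = 30           # typical real-world USB 2.0 throughput
    usb3_mb_s = 100          # roughly what a single external spindle sustains over USB 3

    print(f"USB 2: {restore_hours(c_drive_gb, usb2_mb_s):.0f} hours")   # ~19 hours
    print(f"USB 3: {restore_hours(c_drive_gb, usb3_mb_s):.0f} hours")   # ~6 hours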

The drives themselves are quite small and neat, certainly smaller and neater and much less flaky than my previous Western Digital MyBook experience.  That was a 500GB drive that I bought for $250 not all that long ago.**  It worked, for a while, but then it would go into death sleep (rather like a ferret) from which the only way it could be awoken was to unplug and replug the power cord (rather like a ferret). 

/images/MyBookFerret2.jpg
Western Digital MyBook Essential (left), ferret (right).

Not terribly convenient.  It never actually lost any data or failed while it was actively in use, but it was annoying enough that I ended up just filing it away in a drawer.

So far my new MyBooks are working flawlessly.  Which is good, because there's fundamentally only two ways a disk drive can work: Flawlessly and not at all.

* And that is because all the drives in Nagi are the infamous death-by-ring-buffer Seagate 7200.11.  The gist of the story is this: The drives have a ring buffer in non-volatile memory to store the last 256 SMART alerts.  But there's a bug such that if you power on while the pointer is on the last entry (255), rather than going back to zero, it increments to 256 and overwrites the drive firmware.  In other words, if you have a drive that's not quite perfect - even running a little warm - then every time you turn your computer on there's a 0.4% chance that your drive will brick itself.  As an added bonus, Seagate's first patch for the problem also bricked your drive.  So far my drives have survived unpatched and unbricked.
** But several centuries in computer years.
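
The off-by-one in that first footnote is simple enough to sketch in a few lines.  This is just a toy model of the failure as described, not Seagate's actual firmware:

    # Toy model of the 7200.11 boot-time bug described above (not real firmware).
    RING_SIZE = 256   # slots 0..255 hold the last 256 SMART event log entries

    def next_slot_buggy(pointer):
        # The bug: no wraparound check, so slot 255 advances to "slot 256",
        # which lands on the firmware region instead of back at slot 0.
        return pointer + 1

    def next_slot_fixed(pointer):
        # What it should do: wrap back to the start of the ring.
        return (pointer + 1) % RING_SIZE

    # If the pointer happens to be parked on the last entry at power-on...
    print(next_slot_buggy(255))   # 256 -> writes over the firmware, drive bricks
    print(next_slot_fixed(255))   # 0   -> business as usual

    # With 256 slots, that's roughly a 1-in-256 (~0.4%) chance per power-on,
    # as per the footnote.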

Posted by: Pixy Misa at 07:02 PM | Comments (12) | Add Comment | Trackbacks (Suck)
Post contains 553 words, total size 4 kb.

Geek

Improvements

USB 3.0 is full-duplex, catching up with serial ports of, oh, 1970 or thereabouts.

I was curious as to how close it is to PCIe 2.0.  The low-level encoding is the same (8b/10b), as is the raw speed (5Gb/s).  Beyond that there doesn't seem to be much detail floating around unless you download the entire specification.

So I did.

I was wondering whether the weird connectors on my new external drives* were standard or some proprietary Western Digital nonsense.  Turns out they're standard micro-USB-3 connectors.  Which is good and bad; good because they're standard; bad because the standard is horrible.

Turns out USB 3, to maintain backwards compatibility with old-and-busted USB, includes old-and-busted USB.

/images/BearckwardsCompatibility.jpg
Bearckwards compatibility....  Sorry.

That is, the cable and plugs and sockets and controllers all provide the two differential pairs for USB 3 transmit and receive (four wires total) plus the original USB 1/2 bus (two wires) plus power and ground (two wires).  Which makes the connectors twice the size (except for the standard A-type connector (the flat one) which sneakily hides the four new contacts) and the cables twice as thick.
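
Written out as data, that's everything the cable has to carry (just restating the description above; I'm ignoring shielding and connector-ID details):

    # What a USB 3 cable carries, per the paragraph above.
    usb3_cable = {
        "superspeed_tx": ["SSTX+", "SSTX-"],   # differential pair, USB 3 transmit
        "superspeed_rx": ["SSRX+", "SSRX-"],   # differential pair, USB 3 receive
        "legacy_bus":    ["D+", "D-"],         # the original USB 1/2 half-duplex pair
        "power":         ["VBUS", "GND"],      # power and ground
    }

    print(sum(len(pins) for pins in usb3_cable.values()))   # 8 conductors, vs 4 for plain USB 2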

I can understand the need to switch from a turnaround bus to a proper full-duplex point-to-point connection.  I'm surprised USB 2 even works as well as it does, having to turn around the connection constantly at 480Mb/s.

/images/USBTurnaround.jpg
USB turnaround.

But this sort of kitchen-sink compatibility never turns out well.

On the other hand...  5Gb/s.
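
To put a number on that (a quick sketch that only accounts for the 8b/10b line coding; real throughput also loses some to protocol overhead):

    # Raw line rate vs usable data rate after 8b/10b coding - same sums apply to PCIe 2.0.
    raw_gbps = 5.0                    # 5 Gb/s signalling rate
    data_gbps = raw_gbps * 8 / 10     # every 8 data bits go out as 10 line bits
    data_mb_s = data_gbps * 1000 / 8  # bits to bytes

    print(data_gbps)    # 4.0 Gb/s of actual data
    print(data_mb_s)    # 500 MB/s ceiling, before protocol overhead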

* Another story.

Posted by: Pixy Misa at 01:43 PM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 241 words, total size 2 kb.

Sunday, May 15

Geek

Ohhhhhh

It's a Javascript thing.

IE lets Javascript copy from a field to the clipboard.  Other browsers don't.

That means that unless you're running IE, the cut/copy buttons won't show up in the editor.

The paste-from-Word button isn't showing in Firefox with the new editor either, and that is supposed to work.  (Edit: Fixed!)

Also, that nasty habit of inserting a line break into the More field has returned, only now it's slightly different.  (Edit: Fixed!)

Posted by: Pixy Misa at 02:22 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 76 words, total size 1 kb.

Thursday, May 12

Geek

Sneaky Buggers, But In A Good Way

There are two kinds of flash memory: The expensive enterprise kind, called SLC, and the cheap crappy kind that everyone actually uses, called MLC.

The difference is that SLC stores one bit in each memory cell, while MLC stores two or even three.  MLC does this trick by varying the voltage...  Or is it charge...  The something levels of the cell, so where an SLC cell is either on or off, an MLC cell has four or eight distinct levels of onness or offness.

The good thing about this (which is why everyone does it) is that you get two (or three) times as much storage in a given amount of silicon.

The bad thing about this is that it's two (or four) times less robust.  Actually, in practice, MLC is twenty or thirty times less robust than SLC.
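
The trade-off falls straight out of the level count.  A sketch, where the "window" is just the idealised slice of the cell's range each level gets:

    # More bits per cell = more levels to tell apart = a smaller window per level.
    for name, bits in [("SLC", 1), ("MLC (2-bit)", 2), ("MLC (3-bit)", 3)]:
        levels = 2 ** bits
        window = 1.0 / levels   # idealised: each level gets 1/n of the usable range
        print(f"{name}: {levels} levels, window {window:.3f} of full range")

    # SLC:         2 levels, window 0.500
    # MLC (2-bit): 4 levels, window 0.250  (the "two times less robust")
    # MLC (3-bit): 8 levels, window 0.125  (the "four times")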

/images/AChannelSLCvsMLC.jpg
SLC vs. MLC.  MLC (right) has already suffered data corruption.


All flash memory cells have a limit to how many times they can be erased and overwritten before they get clogged up with discarded electrons and stop working.  With SLC, this is on the order of 100,000 times.  With current MLC, it's on the order of 3,000.

This isn't a huge problem with file storage, because you tend to write a file to disk and leave it there.  Every so often you'll delete a bunch of old files and create a bunch of new files, but the turnover isn't huge.

With database storage, things are completely different.  Every time you update something in a database, the updates have to be written back to disk, along with any changes to the indexes.  A single new record can easily trigger a dozen disk writes.  A busy database can fry a standard MLC SSD.
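
A crude way to see the difference (made-up but plausible workload numbers; real drives muddy this with wear levelling, over-provisioning, and write amplification):

    # How long until the cells hit their erase limit?  Crude model, invented workload.
    drive_gb = 120
    writes_gb_per_day = 500           # busy database: records, indexes, logs, all day long

    for name, pe_cycles in [("MLC", 3000), ("SLC", 100000)]:
        total_writable_gb = drive_gb * pe_cycles
        years = total_writable_gb / writes_gb_per_day / 365
        print(f"{name}: worn out in about {years:.0f} years")

    # MLC: worn out in about 2 years
    # SLC: worn out in about 66 years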

But, because MLC drives are cheaper, everyone buys those, and economies of scale kick in and MLC gets even cheaper.  SLC drives now run around $12 per gigabyte, and MLC less than $2, even though SLC only really costs twice as much to produce.

How do you resolve this problem?  Particularly when you want to move your entire server to SSD, but might only need 10% of the disk to be enterprise-database quality SSD?

Well, if you're Toshiba and Sandisk, what you do is make the flash memory block-configurable between MLC and SLC.  Well, pseudo-SLC.  Rather than writing only 1 or 0, for specifically selected blocks you can only write 11 or 00.  If that cell is flaking out and shifts down to the 10 level or up to the 01 level, not only can you detect and correct that at read time, you can mark that block as bad and allocate a spare block in its place.  So you have improved margins and better error detection.
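
In code, the read side of that trick looks something like this (a sketch of the idea only, not anyone's actual controller firmware):

    # Pseudo-SLC as described above: only the extreme levels (00 and 11) are ever written,
    # so a one-step drift is both detectable and correctable at read time.
    def read_pseudo_slc(raw_level):
        """Return (corrected value, block is suspect) for a 2-bit cell."""
        if raw_level in ("00", "11"):
            return raw_level, False
        if raw_level == "01":          # drifted up from 00
            return "00", True          # correct it, but flag the block
        if raw_level == "10":          # drifted down from 11
            return "11", True
        raise ValueError("not a 2-bit cell level")

    print(read_pseudo_slc("10"))   # ('11', True) -> remap this block to a spare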

/images/AnoHanaSLCvsMLC.jpg
Toshiba representatives demonstrate their new block-configurable flash devices.


Micron and Intel have announced something called eMLC - enterprise MLC - but have been short on details so far.  I'll be surprised if it's not something very much like this.  I'm hoping also that it will be twice the price of regular MLC, rather than six times.

Toshiba's block-configurable trick is even better, but there's not even the ghost of a standard for how to configure different parts of a single storage device to provide different density/reliability tradeoffs, so it will be a while before that idea hits the general storage arena.

Pictures from A Channel and Ano Hana via RandomC.

Posted by: Pixy Misa at 11:27 PM | Comments (2) | Add Comment | Trackbacks (Suck)
Post contains 576 words, total size 4 kb.

Geek

Self-Similar Loads And The Deaths Of Cloud Computing

The recent collapse of not one, but multiple entire Amazon Availability Blooples* into a smoking crater caused a certain amount of buzz in the webosphere.  It would have caused more of a buzz if it hadn't reduced a fair chunk of the webosphere to a smoking crater along with it.

What happened?

Well, someone at Amazon threw the wrong switch during a network upgrade.  Effectively, instead of rerouting traffic onto a carefully planned detour, they rerouted traffic onto the sidewalk.

This did not go over terribly well with all the servers trying to send data to their storage pigs* further along the sidewalk.  Since there's significant variability in the performance of Amazon storage pigs*, many servers were set up to take any slowdown as an indication of a bad pig* and automatically try to set up a new pig* to replace it.  To do this, the data had to be replicated....

Along the sidewalk, which was already jammed beyond capacity.

To say that the problem snowballed at this point would be to waste a perfectly good video involving mousetraps and ping-pong balls.



You see, the idea of setting up a huge hosting cloud thingy like Amazon has done is that most servers run mostly idle most of the time.  (Ours, for example, has 12 cores and uses, on average, slightly less than one.)

So if you aggregate a whole lot of servers together into one huge bloople* you can get far more sites running on far less hardware and make a huge amount of money in the process.  Until someone drops a ping-pong ball; once that happens there's no way to stop the process.  It's far too big and complicated to control manually.  The entire bloople* is set to burn down, fall over, and sink into the swamp and all you can do is watch.

/images/AChannelCloudFailure1.jpg
All you can do is watch...


Because traffic (and hence load) doesn't neatly average out when you aggregate lots of different services together.  Instead, it piles up.  Internet activity levels are self-similar - everything everywhere tending to follow the same pattern of spikes and dips at the same time.

When one service spikes, it's likely that everything else is spiking at exactly the same moment.  And since cloud computing gains efficiency by eliminating the huge amount of headroom you would traditionally plan into a dedicated server (or server farm, depending on how many shoestrings you have to throw around), this leads to everyone looking for extra capacity at the same moment.  And that puts more strain on everything right when it's at its busiest, and....

Splat.*
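
Here's a toy simulation of the difference - a thousand mostly-idle tenants, capacity provisioned against the average, with the spike numbers pulled out of thin air:

    # Toy model: 1000 tenants, each idle (load 1) 95% of the time and spiking (load 20)
    # the other 5%.  Capacity is provisioned against the *average* total load.
    import random
    random.seed(1)

    TENANTS, IDLE, SPIKE, P_SPIKE = 1000, 1, 20, 0.05
    average_total = TENANTS * (IDLE * (1 - P_SPIKE) + SPIKE * P_SPIKE)   # 1950

    def total_load(correlated):
        if correlated:
            spiking = random.random() < P_SPIKE    # self-similar: everyone spikes together
            return TENANTS * (SPIKE if spiking else IDLE)
        # Independent spikes: the law of large numbers smooths the total out.
        return sum(SPIKE if random.random() < P_SPIKE else IDLE for _ in range(TENANTS))

    for label, corr in [("independent", False), ("correlated", True)]:
        worst = max(total_load(corr) for _ in range(1000))
        print(f"{label}: worst sample is {worst / average_total:.1f}x the provisioned average")

    # independent: worst sample is ~1.2x the provisioned average - thin headroom looks fine
    # correlated:  worst sample is ~10x the provisioned average - splat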

In Amazon's case, the splat* was triggered by someone dropping a ping-pong ball.  But that's just the proximate cause.  People drop ping-pong balls every day.  It's only a drama if you happen to have covered every level surface of your home including the ceiling with fully-armed spring-loaded ping-pong ball launchers.

But that's what every cloud provider, almost without exception, has done.  That's the entire business model.  It is cheap, but it's intrinsically flaky.

/images/AnoHanaCloudFailure.jpg
Intrinsically flaky.


It's no accident either that the piece of the puzzle - uh, bloople* - that flaked out in this case was the flakiest flake of all, the network-attached storage.  Amazon's EBS gives you disks attached across a network.

Disks suck.  There's no gentler way to put it.  At my day job, we have SSDs all over the place, because we'd be dead without them.  (We know, because we tried that at the start.  We died.  Then we went out and bought a bunch of SSDs and tried again.)  Disk access is on the order of ten million times slower than CPUs, and modern servers typically have more CPUs than disks.
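
That ratio isn't hyperbole.  Round numbers (exact figures vary with the hardware, obviously):

    # One random seek on a spinning disk vs one CPU operation, in round numbers.
    seek_seconds = 10e-3       # ~10 ms for a random read
    cpu_op_seconds = 1e-9      # ~1 ns per operation on a modern core

    print(int(seek_seconds / cpu_op_seconds))   # 10,000,000 operations' worth of waiting per seek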

Even so, when your disks are right there in your server, at least you can see how busy they are (too busy) and who's using them (you).  When the disks are abstracted away to free-roaming data pigs*, all you have is an end result.  Pig* too slow?  Don't try to investigate the problem.  You can't investigate the problem; it's been abstracted to such a degree that there's simply no information available.  People tried mounting new pigs* because that was the only thing they could do.  They were throwing gasoline onto a bonfire, but when you build a bonfire and hand everyone a free can of gasoline, you really shouldn't be surprised at the result.

So, how do we fix this?

Well, first, everyone everywhere who has anything to do with anything at all should be nailed to the floor and forced to read J. B. S. Haldane's On Being the Right Size.

Second, anyone planning to deploy a new server with disks used for anything other than backups and log files should be lightly shot.

Third, watch Ano Hana.

* The technical term.

Pictures from A Channel and Ano Hana via RandomC.

Posted by: Pixy Misa at 10:11 PM | Comments (3) | Add Comment | Trackbacks (Suck)
Post contains 815 words, total size 6 kb.
