Say Weeeeeee!
Ahhhhhh!

Thursday, July 14

Geek

Zambezled

According to this handy chart, AMD's new FX-8170P CPU (Order Orochi, Family Zambezi) will have 8 cores running at 4.2GHz base speed, 4.7GHz in turbo mode.

That looks like a worthwhile upgrade for my current 2.4GHz quad core.  Well over three times the compute power.  And because AMD has maintained a sensible continuity in their platform, I can build a system now with the latest AM3+ socket, drop my current AM3 CPU into it, swap in the octocore goodness when it lands, and use the spare CPU to upgrade my AM2 Linux box.  With Intel you'd be faced with three different pin counts.
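The back-of-envelope maths, assuming throughput scales with cores times clock - which ignores any per-clock (IPC) differences between the two cores, so it's only a rough upper bound:

```python
# Aggregate compute as cores x base clock (GHz).  Crude, but good
# enough for a "worthwhile upgrade?" sanity check.
current = 4 * 2.4   # existing quad core at 2.4GHz
fx8170p = 8 * 4.2   # FX-8170P: 8 cores at 4.2GHz base

ratio = fx8170p / current
print(f"{ratio:.2f}x")  # 3.50x - well over three times
```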

I really want to see the server versions of these chips now.  We're building a cluster of AMD-based servers at my day job, and we're using the cheapest current CPUs with the plan to swap them out for the newer models when they arrive.  I was expecting more cores at a slower clock speed, but based on what they've achieved on the desktop I could get more cores and a higher clock speed.  That would be very nice.

Posted by: Pixy Misa at 12:55 PM | Comments (30) | Add Comment | Trackbacks (Suck)
Post contains 182 words, total size 1 kb.

Tuesday, July 12

Anime

Mashimarium

An exotic atom with a nucleus comprising three cutinos and a chaon, orbited by a solitary oneeon.

Posted by: Pixy Misa at 02:15 PM | Comments (6) | Add Comment | Trackbacks (Suck)
Post contains 18 words, total size 1 kb.

Monday, July 11

Geek

So Close You Can Almost Download It...

centos.mirror.nexicom.net/6.0/

Not official yet, but clearly on its way.  Thanks for all your hard work, CentOS peeps.

Bimped: It's here!

That's one of the blockers for the new Minx platform rollout fixed.  The others include a stable release of OpenVZ for RedHat/CentOS 6, and Intel's 710 series SSDs.  The latter are expected this month.

Oh, and me getting time to do some work on it.  That's much more likely to happen now than it was six weeks ago, since we have now filled all our situations vacant at my day job, and I'm hoping to see my hours drop from ~60 to ~35 a week.

Posted by: Pixy Misa at 11:25 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 112 words, total size 1 kb.

World

Rebecca And The Great Glass Elevator

A tip for guys: Don't proposition women you don't know in hotel elevators at 4AM if you don't want to come off as kind of creepy.

Which seems like a simple enough rule, and not one it had ever crossed my mind to breach.

Posted by: Pixy Misa at 11:03 PM | Comments (6) | Add Comment | Trackbacks (Suck)
Post contains 50 words, total size 1 kb.

Saturday, July 09

Geek

Pitafied

A while back, in between houses falling on me, I was working on a database written in Python, which I called Pita.  I actually got it working, enough to start doing some performance tests...

At which point I shelved the project, because (a) I was absurdly busy what with the houses and all and (b) even though it had pluggable low-level storage engines, the overhead of the Python layer made it significantly slower than just using MySQL.

What Pita could do, which was nice, was (a) offer a choice of in-memory or on-disk tables using identical syntax and selectable semantics and (b) provide a log-structured database that did sequential writes for random updates.  Cassandra also has this trick.  The advantage here is that it (a) can cope with a huge volume of incoming data, and (b) doesn't fry consumer-grade SSDs the way MySQL would.
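The sequential-write trick can be sketched in a few lines of Python - a toy append-only store with an in-memory index, not Pita's actual code:

```python
import os

class LogStructuredStore:
    """Toy log-structured store: every update, however random the key,
    becomes a sequential append to the log file; an in-memory index maps
    each key to the offset of its latest record.  Illustrative only -
    a real engine adds compaction, checksums, and crash recovery."""

    def __init__(self, path):
        self.index = {}                  # key -> (offset, length)
        self.log = open(path, "ab+")     # append-only data file

    def put(self, key, value):
        record = f"{key}\t{value}\n".encode()
        offset = self.log.seek(0, os.SEEK_END)   # always the tail
        self.log.write(record)
        self.log.flush()
        self.index[key] = (offset, len(record))  # newest wins

    def get(self, key):
        offset, length = self.index[key]
        self.log.seek(offset)
        return self.log.read(length).decode().rstrip("\n").split("\t", 1)[1]
```

Random updates land as pure sequential I/O, which is exactly the access pattern SSDs (and spinning disks, for that matter) like best.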

Unfortunately, Cassandra is a bit of a cow.  Undeniably useful, but indubitably bovine.

Redis with AOF can offer similar performance, but only so long as your data fits in memory, because it's simply snapshot+log persistence (like Pita) and single-threaded (unlike Pita), so it can't cope with I/O delays.  This makes Redis and its support for data structures beyond simple records (hashes, lists, sets, sorted sets) great for your hot data but no use for your long tail - if, say, you've been running a blogging service for 8 years.

What you could do in that situation is use Redis for your hot data (great performance, easy backups, easy replication) and stick your cold data in a key-value store.

Like Keyspace, except that's dead.
Or Cassandra, except that's a cow.
Or MySQL, except that defeats the purpose.
Or MongoDB, except that you'd like to keep your data.

Or Kyoto Tycoon, which has pluggable APIs (don't like REST - use RPC or memcached protocol) and pluggable storage engines...  Like Google's LevelDB.  Kyoto Tycoon running Kyoto Cabinet uses snapshot+log for backups, but the database itself is a conventional B+ tree, so it needs to do random writes.  LevelDB, on the other hand, uses log-structured merge trees - sequential writes, even for the indexes.

So Redis and Kyoto Tycoon with LevelDB both provide:
  • Key-value store
  • Range lookups
  • Sequential writes (SSD friendly)
  • Snapshot+log backups (bulletproof)
  • Instant replication (just turn it on, unlike MySQL replication, which is a pain)
  • Lua scripting (not yet in mainstream Redis, but coming)
  • Key expiry (for caching)
Redis also provides:
  • Data structures
    • Hashes
    • Lists (which can be used to provide stacks, queues, and deques)
    • Sets
    • Sorted sets
    • Bitfields
    • Bytestrings (update-in-place binary data)
  • Pub/Sub messaging
And Kyoto Tycoon provides:
  • Support for databases larger than memory
  • Very fast data loads
Together they make a very powerful team.
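A sketch of how that team might fit together - read-through tiering with promotion on access.  Plain dicts stand in for Redis (hot) and Kyoto Tycoon (cold) here, and the class and method names are my own invention, not any real client API:

```python
class TieredStore:
    """Read-through tiering sketch: the hot tier answers first; on a
    miss we fall back to the cold tier and promote the record so the
    next read is fast.  In production the hot dict would be Redis and
    the cold dict Kyoto Tycoon over LevelDB."""

    def __init__(self, hot, cold):
        self.hot = hot      # fast, memory-limited
        self.cold = cold    # slow(er), bigger than memory

    def get(self, key):
        value = self.hot.get(key)
        if value is None:
            value = self.cold.get(key)
            if value is not None:
                self.hot[key] = value   # promote the long-tail hit
        return value

    def put(self, key, value):
        self.hot[key] = value
        self.cold[key] = value          # write-through to the cold tier
```

Write-through keeps the cold tier authoritative, so evicting anything from the hot tier is always safe.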

Posted by: Pixy Misa at 03:11 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 450 words, total size 3 kb.

Cool

Miracle Day

Torchwood is back!

10 episode run, starting...  Well, starting yesterday. :)

Posted by: Pixy Misa at 02:27 PM | Comments (5) | Add Comment | Trackbacks (Suck)
Post contains 13 words, total size 1 kb.

Tuesday, July 05

Life

How Windy Is It?

It's so windy, a light bulb just popped out of the ceiling.

No, really.

Posted by: Pixy Misa at 12:42 PM | No Comments | Add Comment | Trackbacks (Suck)
Post contains 18 words, total size 1 kb.

Saturday, July 02

Geek

Cool Fusion

I'll say this up front: I think AMD's new Fusion processors are some of the most important integrated circuits since Signetics' 555.

Why?  Let's start at the low end and work our way up.

The C-50 model provides two dual-issue, out-of-order x64 cores (codenamed Bobcat) at 1GHz, an 80-shader GPU at 280MHz (44 gigaflops), 1MB of cache, and a 1066MHz 64-bit memory bus.  That's enough hardware to make my SGI O2 look sad, and it has a total power consumption of 9 watts in a 40nm process.  The C-60 refresh due this quarter enables a turbo mode that can increase CPU speed by 33% and GPU speed by 44% when that fits within the power and thermal envelope, still with the same 9 watts draw.

The E-350 has the same architecture, but bumps the CPU clock to 1.6GHz and the GPU to 500MHz (80 gigaflops).  The power consumption goes up to 18 watts, but that's still pretty modest, less than a single-core 500MHz AMD K6-2, which lacked most of the features of these new chips and was obviously much, much slower.  (But a solid little workhorse in its day.)  An E-450 version is due out this quarter with a modest CPU speed bump and a 20% GPU and 25% memory speed increase.

They're small and cheap to produce, too - 75mm² on a 40nm process, which is in itself not leading-edge.

The second half of AMD's Fusion range for 2011 is the Llano family, the A-series.  Where the C and E-series chips target netbooks, ultralight notebooks and embedded designs, the A-series are aimed at full-feature laptops and low-to-mid-range desktops.

These don't have a new CPU core; they're based on the K10.5 core, a derivative of the long-lived K7 Athlon.  But they deliver the goods nonetheless.

The A8-3500M is a notebook chip: 4 cores running at 1.5GHz standard, and up to 2.5GHz in turbo mode - if you are only using one of the cores right now, it will instantly shut off the other three to save power and speed up the one that is actually in use.  4MB of cache, a GPU with 400 shaders at 444MHz (355 gigaflops) and a 128-bit 1333MHz memory bus.  Maximum power consumption is 35 watts.

The A8-3800 is its desktop counterpart.  The 4 cores run at 2.4GHz and up to 2.7GHz in turbo mode; the 400 shaders at a zippy 600MHz (480 gigaflops), the memory bus at up to 1866MHz.  Total power draw is 65 watts.
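Those gigaflop figures are consistent with shaders × clock × 2 FLOPs per cycle (one multiply-add per shader per clock), if you want to check them:

```python
def gflops(shaders, clock_ghz):
    # One fused multiply-add per shader per cycle = 2 FLOPs.
    return shaders * clock_ghz * 2

print(gflops(80, 0.280))   # C-50:      44.8
print(gflops(80, 0.500))   # E-350:     80.0
print(gflops(400, 0.444))  # A8-3500M: 355.2
print(gflops(400, 0.600))  # A8-3800:  480.0
```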

That is, it's as fast as my current desktop CPU, uses 30% less power, and throws in half the performance of my 110 watt graphics card for free.*

Or to look at it another way, AMD's new budget desktop solution offers twice the graphics performance of an Xbox 360 or Playstation 3, while costing no more and using less power than their existing CPUs alone.

Okay, so technically all very nice.  Now, why do I think they're so important?

Well, consider the Amiga.  Brilliant piece of work, but the fastest production model ever made was a 25MHz 68040.  The slowest of the Fusion chips can emulate an entire Amiga without breaking a sweat.  Want an Amiga?  C-50, Linux, emulator.  Job done.

Or the Be Box.  Neat concept, neat OS, ran out of money and died, but not before BeOS was ported to x86.  Want a Be Box?  C-50.

Want a game machine that can knock over any of the current-generation consoles?  A8-3500M or A8-3800.  No chip design, no integration hassles, your job is done.

Want a solid little desktop for Windows or Linux?  A8-3800, 16GB of cheap RAM, and you're set.  Okay, you won't want to play Civ 5 on a 30 inch monitor with that, but at 1920x1080 it should actually work pretty well.

Intel's Sandy Bridge chips (their current low-end desktop CPUs) have better single-threaded CPU performance, but suffer from truly second-rate GPUs.  With AMD's Fusion chips you don't have to compromise on graphics: Their new embedded GPUs are genuinely good.

The performance that any of these chips can deliver would make high-end workstation designers of a decade ago turn green, and they're just dirt cheap.  We live in a world of riches unimagined.

* Radeon 4850.  Still a solid card.

Posted by: Pixy Misa at 03:59 AM | Comments (1) | Add Comment | Trackbacks (Suck)
Post contains 708 words, total size 5 kb.

Friday, July 01

Geek

To Worry Or Not To Worry

Or, Much Ado About Random Write Endurance

Intel's 320-series 300GB SSD has a quoted 4KB random write endurance - that is, the minimum total volume of data you can write to it in individual 4KB randomly located blocks before it begins to fail - of 30TB.

30TB may sound like a lot to you.  The primary MySQL server at my day job does 2.5TB of writes per day (and it's only one of several database servers).  MySQL writes tend to be random-ish, so you might at first glance expect the abovementioned drive under those conditions to burn out in 12 days.  For that reason (and the fact that the database is rather larger than 300GB), we don't use a 320-series SSD; we use a RAID-50 array of 20 enterprise drives each with about 60x the quoted write endurance.  Based on the quoted numbers and measured load, we should be good for at least 10 years.
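The arithmetic behind that twelve-day figure:

```python
endurance_tb = 30        # quoted 4KB random write endurance, 320-series 300GB
writes_tb_per_day = 2.5  # measured MySQL write volume at my day job

print(endurance_tb / writes_tb_per_day)  # 12.0 days to burn out
```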

The question is, though, what is the real-world longevity of SSDs under heavy random write conditions?  I've been very conservative about SSD deployment - for mee.nu I've used the more expensive enterprise SLC drives as well (and RAID-5 at that) even though our write activity is a couple of orders of magnitude lower.  The only MLC drives I've deployed in a production environment have been in applications where reads are random and writes are sequential - some of the Cassandra and Xapian databases at my day job fit this description.

However, this paper, presented at last year's Hot Storage conference, suggests that things might not be nearly so bad.  The authors examine a model of flash cell burnout, and note that if cells are given time to rest between write/erase cycles, their endurance can be expected to increase significantly.

How significantly?  Let's take our 300GB SSD and hit it with 2.5TB of data a day.  Let's assume a worst-case scenario on two aspects - all of that is individual 4KB random writes, and there's no write-combining done by the OS or RAID controller.  Let's assume a best-case scenario on the other aspects - write amplification is 1.0 (that is, no blocks need to be moved to allow for the updates) and wear-levelling is perfect across the drive (all blocks are updated evenly).  (All of these assumptions are completely implausible, but the idea is that they'll kind of balance out until I can get more precise data.)

That means that every block on the drive is updated every three hours.  A little less than three hours, but near enough.  That paper suggests that with a 10,000 second - a little less than three hours - recovery period between write/erase cycles, write endurance of MLC cells can be expected to be 90 times the worst-case situation the manufacturers cite.

That is, rather than two weeks, the drive would last for three years.  And then drop dead all at once given our rather unreasonable scenario.
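The sums, under the perfect-wear-levelling assumption above:

```python
seconds_per_day = 86400
drive_gb = 300
writes_gb_per_day = 2500   # 2.5TB/day

# With perfect wear-levelling, every block is rewritten once per:
rewrite_interval = seconds_per_day * drive_gb / writes_gb_per_day
print(rewrite_interval)    # 10368.0 seconds - a little under 3 hours,
                           # right at the paper's 10,000s rest period

# The paper's ~90x endurance factor applied to the 12-day worst case:
print(12 * 90 / 365)       # roughly 3 years
```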

Which is a completely different picture from what the manufacturer's worst-case numbers might suggest.  And with a RAID controller with battery-backed write-back cache, the number of writes that actually hit the SSD can be significantly less.

The problem is, this is a simulation.  It's a very careful simulation based on the known physical properties of the semiconductor materials used in flash fabrication, but it's still a simulation.  I'm hoping I can get a couple of SSDs solely for the purpose of killing them, because I haven't seen anyone else publish good data on that.

The reason all this matters is that where a 300GB Intel MLC drive costs $600, 300GB of Intel SLC enterprise SSD storage comes to five drives totalling $4000.  The point may become somewhat moot when Intel's 710 MLC-HET drives launch.  The HET, I would guess, stands for something like high-endurance technology; these drives are based on cheaper MLC flash but optimised for reliability rather than capacity.  They will likely (based on reports in the trade press) cost twice as much as the regular MLC drives, but offer 20 to 40 times the endurance - nearly as good as SLC.  If the price and endurance turn out that way, then there will be 3x less reason to risk your data on a statistical model and a consumer drive.

Another thing: Intel's 320 series (unlike the earlier M-series) implement internal full-chip parity in the spare area, so even if one of the flash chips dies completely, the drive will continue operation unaffected.

Posted by: Pixy Misa at 03:56 AM | Comments (7) | Add Comment | Trackbacks (Suck)
Post contains 746 words, total size 5 kb.
