Thursday, March 12

Geek

Ah, Bind Mounts

Of course.

The new mu.nu and mee.nu servers - probably arriving next month, depending on Intel and the weather - are going to be running on a virtualised platform.  Current contenders are Xen and XenServer (which provide better isolation between virtual nodes) and OpenVZ and Virtuozzo (which provide better efficiency).*

The way you set up either platform is pretty similar: You allocate a big bucket of disk space (which had better be RAID, or you risk losing everything at once), and then you create your virtual environments in that bucket, granting them certain amounts of disk, memory, and CPU resources.
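
For OpenVZ, that initial carve-up is just a handful of commands.  A rough sketch - the container ID, template name, and sizes here are made up, and the exact flag syntax is from memory, so check the OpenVZ docs before copying anything:

    # Create a container from an OS template, then grant it a slice of disk
    # and CPU out of the big bucket (ID 101 and the template are examples)
    vzctl create 101 --ostemplate centos-5-x86_64 --config vps.basic
    vzctl set 101 --diskspace 20G:22G --save        # soft:hard disk quota
    vzctl set 101 --cpus 2 --cpuunits 1000 --save   # CPU share
    vzctl start 101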

Which is easy to configure and works fine for a basic setup.  But another new thing about the new servers is that I'm going to be installing SSDs - Intel X25-E SLC drives, to be precise - which deliver 3,000 write IOPS and 30,000 read IOPS, a whole bunch faster than anything we have at the moment.

The SSDs will be used only for databases; they're far too expensive for general storage.  But if the general storage for the virtual nodes is allocated from the big storage bucket, how do I point databases at the SSDs?

The answer - at least for OpenVZ and Virtuozzo - is something called bind mounts.  This is a Linux kernel trick that lets you mount any existing directory at a second location, so it appears as a filesystem elsewhere on the server.  With OpenVZ, that lets me mount a particular directory on the SSD as a filesystem within a particular virtual node - exactly what I need.
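
At its simplest, a bind mount is a single command.  A sketch with invented paths - not our actual layout:

    # The contents of /ssd/mysql now also appear at /mnt/dbtest
    mount --bind /ssd/mysql /mnt/dbtest

The OpenVZ version is a per-container mount script - something like /etc/vz/conf/101.mount for container 101 - that runs on the host whenever the node is mounted.  The details below are from memory (the OpenVZ wiki has the canonical form), but the idea is to point a directory on the SSD at the database path inside the node:

    #!/bin/bash
    # vzctl provides VE_CONFFILE; sourcing the configs gives us VE_ROOT,
    # the path where this container's filesystem is mounted on the host
    source /etc/vz/vz.conf
    source ${VE_CONFFILE}
    mount -n --bind /ssd/mysql/101 ${VE_ROOT}/var/lib/mysql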

So I can, as needed, split off a particular blog (or group of blogs) into its (their) own virtual server with its own specific configuration of Linux and Apache and MySQL and so on. 

The only catch is that CPanel costs $12 a month per virtual server.**  I can run as many sites as I like under each virtual server, but each server that needs CPanel is another $12.

Minx doesn't need CPanel, of course; it doesn't even use Apache.  Because the mee.nu server is specifically set up for Minx, though, I couldn't put Protein Wisdom on there while the other server was being fixed, so it ended up overloading the main mu.nu server.  Having virtual nodes will make it hugely easier for me to move things around like that.

Virtuozzo is the commercial version of OpenVZ, and it offers some nice extra features, including integration with the Plesk control panel (a competitor to CPanel, and a pretty good one).  The problem is, OpenVZ is free, while Virtuozzo is licensed per virtual server per month.  A three-VPS*** license runs $60 a month; a ten-VPS license $100, which is more reasonable per node, but not exactly cheap.

A 100-user VPS license for Plesk is $10 per month, compared to $30 per month for a hardware server license.  But only if the VPS is running on Virtuozzo, whereas the CPanel license is the same regardless of what virtualisation platform you're on.  And while Virtuozzo is nice and offers a control panel integrated with Plesk, I don't really have any users who need that.

So right now it looks like it'll be OpenVZ.  This weekend I'll be setting up a test server to play around with it; there are some issues with both RedHat 5 (the kernel is fairly old) and Fedora 10 (some libraries are too new) which caused me problems when I first tried it a couple of months ago.  I need to get all that sorted out quickly so that we can move forward into our shiny virtual future.

* That's a trade-off.  If you want to say, this 2GB of memory belongs to this virtual node and no-one else can use it, then that means that node is protected from memory contention from other nodes.  But if it only uses 1GB of memory, the other gig is wasted.  Xen is oriented more toward isolation, OpenVZ towards efficiency.
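
In OpenVZ that trade-off turns up as a guarantee versus a ceiling on each node.  Something like the following - the values are illustrative and the flags are from memory, since real tuning involves a pile of other UBC parameters:

    # Guarantee node 101 1GB of memory, but let it use up to 2GB if it's free
    vzctl set 101 --vmguarpages 1G --save   # guaranteed allocation
    vzctl set 101 --privvmpages 2G --save   # upper limit on allocation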

** Including Fantastico.

*** Virtual private server.

Posted by: Pixy Misa at 01:10 AM | Comments (2)

1 There was recently an article showing the Intel SSDs slow down significantly after a few months of use.

Posted by: JV at Friday, March 13 2009 07:14 AM (uCUPt)

2 Yep.  That does happen for sequential I/O, and is expected.

But that's a result of low-level fragmentation (and a regular defrag program will have no effect on an SSD).  For a database, all you're interested in is random I/O performance, and fragmentation doesn't change that.

Posted by: Pixy Misa at Friday, March 13 2009 10:24 AM (PiXy!)
