Sunday, May 26
Expanding Your ZFS Attached Storage On DigitalOcean
So, you're off to see DigitalOcean and you want to use ZFS because of the wonderful things it does.
You can't - yet - use ZFS on your root partition, at least not without a lot of faffing around. The root partition does get automatically backed up as long as you're paying the extra $1 per month, though, so we'll focus on external volumes.
There are three ways to expand your ZFS storage on external volumes.
The Quick and Easy Way that May Destroy All Your Data at Some Unspecified Later Time for Basically No Reason
First, attach a second storage volume to your server. This will be
/dev/sdb
Second, run
zpool add platypus /dev/sdb
where platypus
is the name of your ZFS pool. That's it. Done. No reboots; the extra space is available to all your ZFS filesystems instantly.
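For reference, the whole dance looks something like this - a sketch, assuming your pool is called platypus and you want to sanity-check it before and after (adjust names to taste):
# Check the pool is healthy before touching it
zpool status platypus
# Stripe the new volume into the pool - this is the quick-and-dirty bit
zpool add platypus /dev/sdb
# The extra capacity shows up immediately
zpool list platypus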
But you now have two attached drives that are basically RAID-0. If one of them fails to attach, you go splat. How likely that is, I don't know.
But as you add more drives the chance of something going wrong increases, and DigitalOcean limits the number of attached drives both per server and per account, so it doesn't scale.
The More Complicated Way that May Destroy All Your Data Immediately If You Get it Wrong but Probably Scales Better
Before anything else, use the DigitalOcean control panel or API to take a snapshot of your attached disk. This is the big advantage of this method - because it uses a single attached disk, the snapshot tool provides consistent backups without having to shut down your server first.
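If you'd rather script that than click around the control panel, the snapshot can be taken through the v2 API - something along these lines, where the token and volume ID are placeholders you'd fill in yourself:
# Snapshot the attached volume before doing anything scary
curl -X POST "https://api.digitalocean.com/v2/volumes/$VOLUME_ID/snapshots" \
  -H "Authorization: Bearer $DO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "pre-resize-backup"}'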
First, increase the size of your attached drive. Probably best to do this in decent-sized chunks, rather than every time you need an extra 10GB.
Also, you can't shrink attached drives - but you can't shrink ZFS pools either, so that doesn't matter.
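The resize itself can also be done from the control panel or via the API. I believe the resize action looks roughly like this - treat the exact field names (particularly whether region is required) as an assumption to check against the API docs:
# Grow the volume to 200GB - volumes can only grow, never shrink
curl -X POST "https://api.digitalocean.com/v2/volumes/$VOLUME_ID/actions" \
  -H "Authorization: Bearer $DO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type": "resize", "size_gigabytes": 200, "region": "nyc1"}'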
Second, run
gdisk /dev/sda
Select w and exit without doing anything.
Third, run
gdisk /dev/sda
again, select n, and create a new partition. All the defaults should be correct. Select w and exit. If you try to do this without the previous step, all the defaults will be wrong and nothing will work.
Fourth, run
partprobe
to tell Linux that new partitions have magically appeared.
And finally, run
zpool add platypus /dev/sda2
(or whatever partition number you created in gdisk). Again, all is now working, no reboots. (Without
partprobe
you would probably need to reboot before ZFS could use the new partition.)
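If you want the whole sequence as one copy-pasteable lump, a non-interactive equivalent using sgdisk (gdisk's scriptable sibling) should look roughly like this - a sketch, assuming the pool is platypus, the volume is /dev/sda, and the new partition comes out as number 2:
# Move the backup GPT header to the new end of the (now larger) disk
sgdisk -e /dev/sda
# Create a new partition in the freed space, accepting all the defaults
sgdisk -n 0:0:0 /dev/sda
# Tell the kernel about the new partition table
partprobe /dev/sda
# Add the new partition to the pool
zpool add platypus /dev/sda2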
The advantage of this method is you only have one network-attached drive. It's either all there, or not. This allows you to use DigitalOcean's own snapshot tool to take a complete, consistent backup.
Though I suspect that's stored on the same Ceph cluster as your own data and ZFS snapshots, and if the whole thing goes bang you're out of luck anyway. So it protects you from mistakes, but for catastrophic failure you'll want to take your ZFS snapshots and
rsync
them off to a remote location.
Update: There might be a better way - just resizing the existing partition rather than adding new ones to the pool. Going to try that next.
Update: That seems to work, but I'll need to try it with something running live on the filesystem to make sure.
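For the curious, that resize-in-place approach would presumably go something like the following - my guess at the steps rather than anything lifted from testing, assuming a single partition /dev/sda1 backing the pool and the growpart tool from cloud-utils installed:
# Grow the existing partition to fill the resized volume
growpart /dev/sda 1
partprobe /dev/sda
# Tell ZFS to expand the vdev into the new space
zpool online -e platypus /dev/sda1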
The Other Way Which I Haven't Actually Tried
If you have a RAID-Z array in ZFS, you can replace the volumes one-by-one with larger devices. So you can have three 100GB attached drives, detach one, increase it to 150GB, wait for RAID-Z to finish checking everything, and repeat for the other two drives.
This is safer than the first option, but frankly sounds like a pain in the bum.
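If you do go that route, each round presumably looks something like this - a sketch, assuming a pool called platypus built from sdb, sdc, and sdd, and again something I haven't actually tried:
# Do this once: let the pool expand automatically once every member has grown
zpool set autoexpand=on platypus
# For each drive in turn: take it offline, resize its volume in the
# DigitalOcean control panel, then bring it back and let it resilver
zpool offline platypus /dev/sdb
# ... resize the volume in the control panel ...
zpool online -e platypus /dev/sdb
# Wait for the resilver to finish before touching the next drive
zpool status platypus
# The extra capacity only appears once every drive in the RAID-Z vdev is larger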
Posted by: Pixy Misa at 05:34 PM