extending VPS disk space with sshfs and a loop mount
I rent a Xen virtual private server, pi.nipl.net. It is excellent value for money at $11/month, and it has more than enough RAM and CPU power for my needs (even when sharing the virtual server with several other people). However, the 12GiB of disk space can be a bit tight.
I have a shell login on another server, galactus.nipl.net, which has a huge amount of disk space - but I don't control that server, and I don't have root access. The ping time from pi to galactus is only 10ms, so I decided to try mounting some of galactus' disk space on pi.
I was hoping to use 9p for this (the Plan 9 filesystem protocol), but it turns out that the available 9p clients and servers are fairly slow and inefficient for this task. So I tried sshfs instead. sshfs works extremely well: it caches and reads ahead.
A difficulty with sshfs (or any network filesystem) is user identification - the users on galactus are different and differently numbered from my users on pi. I got around this by creating a sparse 10GiB ext3 image over sshfs to galactus, and mounting that image on pi. The ext3 image holds its own filesystem, which is only used on pi, with pi's users and uids.
If you want to try something like this yourself, here is what I did. You only need ssh on the disk server, no fancy tools, so you can use any cheap shared server account with ssh access. All these commands are executed on pi, the VPS, as root.
mkdir -p /n/galactus /ext # mount points
sshfs firstname.lastname@example.org:/home/samwatkins /n/galactus
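# sshfs takes extra options if needed; I believe -o reconnect (re-establish a
# dropped connection) and -C (compression) can help, though the defaults
# already cache and read ahead nicely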
dd of=/n/galactus/pi-ext.img bs=1024 seek=$((10*1024*1024)) </dev/null
# creates a 10GiB sparse file / hole, which takes up no disk space yet
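# optional: confirm the image really is sparse by checking how much space it
# actually uses on galactus itself
ssh firstname.lastname@example.org du -h pi-ext.img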
mkfs.ext3 /n/galactus/pi-ext.img # answer 'y' to query
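# (mkfs.ext3 -F skips the "not a block special device" prompt, if you prefer)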
mount /n/galactus/pi-ext.img /ext -o loop
mkdir /ext/sam ; chown sam:sam /ext/sam
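At this point /ext behaves like any other local filesystem; df shows the space in the 10GiB image rather than on the VPS disk:
df -h /ext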
Then I added lines for the mounts to fstab:
email@example.com:/home/samwatkins /n/galactus fuse.sshfs defaults,noauto 0 0
/n/galactus/pi-ext.img /ext ext3 defaults,loop,noauto 0 0
I decided to mark them "noauto" in case the disk server is unavailable - otherwise the boot might be delayed while it tries to connect. Since noauto entries are skipped at boot (and by `mount -a`), I mount them by name when I need them.
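The order matters: the sshfs mount has to be up before the image inside it can be loop mounted.
mount /n/galactus
mount /ext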
I found this really does work remarkably well, almost as fast as the local filesystem.
I use an ssh key so I don't have to enter a password for sshfs:
ssh firstname.lastname@example.org 'mkdir -p .ssh; cat >>.ssh/authorized_keys' <~/.ssh/id_rsa.pub
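If there is no key yet, ssh-keygen creates one (id_rsa and id_rsa.pub are its default filenames for an RSA key):
ssh-keygen -t rsa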
I also set short hostnames and other options in .ssh/config:
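# an illustration only - adjust the alias, user and host to suit
Host galactus
    HostName example.org
    User firstname.lastname
    ServerAliveInterval 15
With an alias like that, plain "galactus" works in the ssh and sshfs commands above in place of the full user@host form.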