Today's quest: move 40 GiB of disk space (a logical volume used as a virtual machine's file space) over 100 Mbit/s network infrastructure from one server machine to another.
Over a 100 Mbit/s network the average speed of the data stream is roughly 10 MByte per second. So this leads to:
- 100 MByte -- 10 secs
- 1,000 MByte -- 100 secs
- 40,000 MByte -- 4,000 secs (i.e. ~67 minutes)
How can this be sped up?
On the target host (this command has to be issued first):
$ nc -l <port> | gunzip -f | pv | dd of=/dev/<volume-group>/<target-logical-volume>
On the source host:
$ dd if=/dev/<volume-group>/<source-logical-volume> | gzip -f --fast | pv | nc <target-host-name> <port>
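For illustration, here are the same two pipelines with made-up values plugged in: volume group vg0, logical volume vm-disk, port 12345 and a target host named kvmhost2 (all of these are hypothetical placeholders). Also note that, depending on the netcat flavour installed, the listening side may have to be written as "nc -l -p <port>" instead of "nc -l <port>".

On the target host:

$ nc -l 12345 | gunzip -f | pv | dd of=/dev/vg0/vm-disk

On the source host:

$ dd if=/dev/vg0/vm-disk | gzip -f --fast | pv | nc kvmhost2 12345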
Depending on the data on the logical volume, this makes the transfer two to three times as fast.
What is neat about injecting the pv tool into the pipe: on the target host it shows the uncompressed data stream (i.e. the data that is actually written to the LVM volume), whereas on the source host it shows the compressed data that actually goes over the network. If you play with those commands, note the difference.
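If you want to watch both rates at once, pv can be inserted twice, using its -c and -N options to keep the two progress displays apart (the names and labels below are made up, as above). A sketch for the source host:

$ dd if=/dev/vg0/vm-disk | pv -cN raw | gzip -f --fast | pv -cN network | nc kvmhost2 12345

The "raw" counter then shows how much of the logical volume has been read so far, while the "network" counter shows the compressed stream that actually goes over the wire.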
/me is getting delighted by such simple things...
light+love
sunweaver
use _parallel_ gzip
Hi Mike,
I also invested some time in optimizing the dumping of logical volumes.
I'm using pigz¹ on my KVM host, which utilizes all cores while writing to an NFS4 share:
$ pv -cN Backup /dev/vg00/cfengine2 | pigz >/mnt/backup/ImagesVM/cfengine2/cfengine2-$(date -I).img.gz
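Just as a sketch (reusing the made-up host name and port from the post above), pigz should also drop straight into the nc pipeline, with pigz -d handling decompression on the receiving side:

On the target host:

$ nc -l 12345 | pigz -d | pv | dd of=/dev/vg00/cfengine2

On the source host:

$ dd if=/dev/vg00/cfengine2 | pigz -1 | pv | nc kvmhost2 12345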
best wishes
/thorsten
[1] http://packages.debian.org/stable/pigz