Description
Nova instances can be booted from a volume, analogous to EBS-backed instances in EC2.
We construct a bootable volume, then fire up an instance backed by this volume.
Setup
We assume that an instance has already been booted in a previous test, and we use this as a builder to facilitate the creation of a bootable volume.
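If no such instance is available, booting a throwaway builder (named, say, builder) with a command along these lines should suffice; the flavor and image name here are placeholders rather than part of this plan, and the instance just needs to be reachable over ssh:
$> nova boot --flavor 1 --image <image name> builder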
Capture the instance name, user name, and IP address as environment variables:
$> INSTANCE=<instance name>
$> USER_NAME=<user name>
$> IP_ADDR=$(nova show $INSTANCE | awk '/private network/ {print $5}')
We also need a rootfs-style image, which may be downloaded from:
$> wget http://images.ansolabs.com/cirros-0.3.0-x86_64-rootfs.img.gz
How to test
Create a 1 GB volume, which we will make bootable:
$> nova volume-create --display_name=bootable_volume 1
$> VOLUME_ID=$(nova volume-list | awk '/bootable_volume/ {print $2}')
and wait for the volume to become available:
$> watch "nova volume-show bootable_volume | grep status"
Temporarily attach the volume to your builder instance; this will allow us to copy image data into the volume:
$> nova volume-attach $INSTANCE $VOLUME_ID /dev/vdb
Wait for the volume status to show as in-use:
$> watch "nova volume-show bootable_volume | grep status"
Format and mount the volume at a staging mount point:
$> ssh -o StrictHostKeyChecking=no $USER_NAME@$IP_ADDR << EOF
set -o errexit
set -o xtrace
sudo mkdir -p /tmp/stage
# create an ext3 filesystem on the attached volume (1048576 1 KB blocks, i.e. 1 GB) and mount it
sudo mkfs.ext3 -b 1024 /dev/vdb 1048576
sudo mount /dev/vdb /tmp/stage
# pre-create the image file so it can be scp'd in as the unprivileged user
sudo touch /tmp/stage/cirros-0.3.0-x86_64-rootfs.img.gz
sudo chown $USER_NAME /tmp/stage/cirros-0.3.0-x86_64-rootfs.img.gz
EOF
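As an optional sanity check (not part of the original steps), confirm that the volume is actually mounted on the staging directory before copying anything onto it:
$> ssh -o StrictHostKeyChecking=no $USER_NAME@$IP_ADDR "df -h /tmp/stage"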
Copy the image to the staging directory on the builder instance:
$> scp -o StrictHostKeyChecking=no cirros-0.3.0-x86_64-rootfs.img.gz $USER_NAME@$IP_ADDR:/tmp/stage
Unpack the image into the volume (don't worry if the final unmount fails):
$> ssh -o StrictHostKeyChecking=no $USER_NAME@$IP_ADDR << EOF
set -o errexit
set -o xtrace
cd /tmp/stage
sudo mkdir -p /tmp/image
# decompress the rootfs image and mount it so its contents can be copied out
sudo gunzip cirros-0.3.0-x86_64-rootfs.img.gz
sudo mount cirros-0.3.0-x86_64-rootfs.img /tmp/image
# copy the root filesystem onto the volume, then flush and unmount
sudo cp -pr /tmp/image/* /tmp/stage/
cd
sync
sudo umount /tmp/image
# this unmount may fail if the device is still busy, hence the || true
sudo umount /tmp/stage || true
EOF
Detach the volume from the builder instance:
$> nova volume-detach $INSTANCE $VOLUME_ID
and wait for the volume status to show as available:
$> watch "nova volume-show bootable_volume | grep status"
Now snapshot the bootable volume we just created:
$> nova volume-snapshot-create --display_name bootable_snapshot $VOLUME_ID
and wait for the snapshot to become available:
$> nova volume-snapshot-show bootable_snapshot
$> SNAPSHOT_ID=$(nova volume-snapshot-list | awk '/bootable_snapshot/ {print $2}')
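If the snapshot is slow to appear, the same watch pattern used for the volume status works here too (this invocation is just a convenience, not part of the original plan):
$> watch "nova volume-snapshot-show bootable_snapshot | grep status"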
Now we can boot from the bootable volume. We use the same image as the builder instance, but only in order to retrieve the image properties (the kernel and ramdisk IDs):
$> IMAGE_ID=$(nova show $INSTANCE | awk '/image/ {print $5}' | sed 's,(\(.*\)),\1,')
$> nova boot --flavor 1 --image $IMAGE_ID --block_device_mapping vda=${SNAPSHOT_ID}:snap::0 volume_backed
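To follow the volume-backed instance coming up, the usual status watch applies (added here for convenience):
$> watch "nova show volume_backed | grep status"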
Expected Results
You should be able to ssh into the volume-backed instance.
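For example, mirroring the IP-address capture used for the builder instance (the variable name VB_IP_ADDR is purely illustrative, and we assume the same user name is valid on the new instance):
$> VB_IP_ADDR=$(nova show volume_backed | awk '/private network/ {print $5}')
$> ssh -o StrictHostKeyChecking=no $USER_NAME@$VB_IP_ADDR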
Note that an additional snapshot now exists to back the image:
$> nova volume-snapshot-list
Also note that for the volume-backed instance you've fired up, there is a volume cloned from the corresponding snapshot:
$> nova volume-list