= OpenStack in EPEL =

Note: OpenStack Folsom has been retired from EPEL 6. Please visit the [https://www.rdoproject.org/ RDO project] for running OpenStack on EL platforms.

= Basic Setup =

These steps will set up OpenStack nova, glance, and keystone to be accessed by the OpenStack dashboard web UI on a single host, and walk through launching our first instance (virtual machine).

Many of the examples here require 'sudo' to be properly configured; please see [[Configuring Sudo]] if you need help.
== Initial Installation ==

First, enable the EPEL repository:

 $> sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-7.noarch.rpm

Then pull in OpenStack and some optional dependencies:
 $> sudo yum --enablerepo=epel-testing install \
    openstack-nova openstack-glance openstack-keystone openstack-quantum \
    openstack-swift\* openstack-dashboard openstack-utils memcached qpid-cpp-server \
    mysql-server avahi

Ensure <code>auth=no</code> is set in <code>/etc/qpidd.conf</code>.
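For example, a minimal way to enforce this (a sketch; it removes any existing auth line and appends the setting):

 $> sudo sed -i '/^auth=/d' /etc/qpidd.conf
 $> echo "auth=no" | sudo tee -a /etc/qpidd.conf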
Set SELinux to permissive mode:

 $> sudo setenforce permissive

Otherwise you will hit issues like https://bugzilla.redhat.com/show_bug.cgi?id=734346

 /usr/bin/nova-dhcpbridge: No such file or directory

If your system is based on RHEL 6.2:

 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release False

If it is based on RHEL 6.3:

 $> sudo yum install dnsmasq-utils # from the Red Hat '''optional''' channel
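If you're not sure which minor release you are on, a quick way to check:

 $> cat /etc/redhat-release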
Run the helper script to get MySQL configured for use with openstack-nova. If <code>mysql-server</code> is not already installed, this script will install it for you.

 $> sudo openstack-db --init --service nova

Similarly, run the helper script to get MySQL configured for use with openstack-glance:

 $> sudo openstack-db --init --service glance
Nova requires the QPID messaging server to be running:

 $> sudo service qpidd start && sudo chkconfig qpidd on

Nova also requires the libvirtd server to be running:

 $> sudo service libvirtd start && sudo chkconfig libvirtd on

Next, enable the Glance API and registry services:

 $> for svc in api registry; do sudo service openstack-glance-$svc start; sudo chkconfig openstack-glance-$svc on; done
The openstack-nova-volume service requires an LVM volume group called <code>nova-volumes</code> to exist. We simply create this using a loopback sparse disk image:

 $> sudo dd if=/dev/zero of=/var/lib/nova/nova-volumes.img bs=1M seek=20k count=0
 $> sudo vgcreate nova-volumes $(sudo losetup --show -f /var/lib/nova/nova-volumes.img)
If you are testing OpenStack in a virtual machine, you need to configure nova to use qemu without KVM and hardware virtualization.
The second command relaxes SELinux to allow this mode of operation (https://bugzilla.redhat.com/show_bug.cgi?id=753589).
The last two commands work around a libvirt issue fixed in RHEL 6.4.
Note that nested virtualization will be the much slower TCG variety, and you should provide lots of memory to the top-level guest,
as the guests OpenStack creates default to 2GB RAM with no overcommit.

 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
 $> sudo setsebool -P virt_use_execmem on # This may take a while
 $> sudo ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
 $> sudo service libvirtd restart
If you intend to use guest images that don't have a single partition (like the Fedora 16 image linked below),
then allow libguestfs to inspect the image so that files can be injected, by setting:

 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_inject_partition -1
Now you can start the various nova services:

 $> for svc in api objectstore compute network volume scheduler cert; do sudo service openstack-nova-$svc start; sudo chkconfig openstack-nova-$svc on; done

Check that all the services started up correctly and look in the logs in <code>/var/log/nova</code> for errors. If there are none, then Nova is up and running!
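One quick way to do both checks (a sketch, using the same service list as above):

 $> for svc in api objectstore compute network volume scheduler cert; do sudo service openstack-nova-$svc status; done
 $> sudo grep -i error /var/log/nova/*.log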
== Initial Keystone setup ==

Keystone is the OpenStack identity service, providing a central place to
set up OpenStack users, groups, and accounts that can be shared across all
other services. This deprecates the old-style user accounts manually set
up with nova-manage.

Setting up Keystone is required for using the OpenStack dashboard.

* Configure the Keystone database, similar to how we did it for nova:
 $> sudo openstack-db --init --service keystone
* Set up a keystonerc file with a generated admin token and various passwords:
 $> cat > keystonerc <<EOF
 export ADMIN_TOKEN=$(openssl rand -hex 10)
 export OS_USERNAME=admin
 export OS_PASSWORD=verybadpass
 export OS_TENANT_NAME=admin
 export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
 EOF
 $> . ./keystonerc

* Set the administrative token in the config file:
 $> sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
* Start and enable the Keystone service:
 $> sudo service openstack-keystone start && sudo chkconfig openstack-keystone on

* Create sample tenants, users and roles:
 $> sudo ADMIN_PASSWORD=$OS_PASSWORD SERVICE_PASSWORD=servicepass openstack-keystone-sample-data

* Test that the Keystone CLI is working:
 $> keystone user-list
 +----------------------------------+---------+-------------------+-------+
 |                id                | enabled |       email       | name  |
 +----------------------------------+---------+-------------------+-------+
 | 05742d10109540d2892d17ec312a6cd9 | True    | admin@example.com | admin |
 | 25fe47659d6a4255a663e6add1979d6c | True    | admin@example.com | demo  |
 +----------------------------------+---------+-------------------+-------+
== Configure nova to use keystone ==

* Change the nova configuration to use keystone:
 sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name service
 sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
 sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password servicepass
 sudo openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
 for svc in api compute; do sudo service openstack-nova-$svc restart; done

* Verify that nova can talk with keystone (requires the OS_* exports from the previous keystone section):

 $> nova flavor-list
 +----+-----------+-----------+------+----------+-------+-------------+
 | ID |    Name   | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
 +----+-----------+-----------+------+----------+-------+-------------+
 | 1  | m1.tiny   | 512       |      | 0        | 1     | 1.0         |
 | 2  | m1.small  | 2048      |      | 10       | 1     | 1.0         |
 | 3  | m1.medium | 4096      |      | 10       | 2     | 1.0         |
 | 4  | m1.large  | 8192      |      | 10       | 4     | 1.0         |
 | 5  | m1.xlarge | 16384     |      | 10       | 8     | 1.0         |
 +----+-----------+-----------+------+----------+-------+-------------+
== Configure glance to use keystone ==

* Change the glance configuration to use keystone:
 sudo openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
 sudo openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
 sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_tenant_name service
 sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_user glance
 sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_password servicepass
 sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_tenant_name service
 sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_user glance
 sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_password servicepass
 sudo service openstack-glance-api restart
 sudo service openstack-glance-registry restart

* Verify that glance can talk with keystone (requires the OS_* exports from the previous keystone section):

 $> glance index
== Nova Network Setup ==

To create the network, do:

 $> sudo nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=demonetbr0

NB: the network range here should *not* be one used on your existing physical network. It should be a range dedicated to the network that OpenStack will configure. So if 10.0.0.0/24 clashes with your local network, pick another range.
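For example, if 10.0.0.0/24 does clash, a dedicated range like the following (purely illustrative) could be used instead:

 $> sudo nova-manage network create demonet 192.168.100.0/24 1 256 --bridge=demonetbr0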
== Register an Image ==

To run an instance, you are going to need an image. There are prebuilt Fedora 16 JEOS (Just Enough OS) images that can be downloaded.
Note that this will download a 200MB image (without a progress bar):

 $> glance add name=f16-jeos is_public=true disk_format=qcow2 container_format=ovf \
    copy_from=http://berrange.fedorapeople.org/images/2012-02-29/f16-x86_64-openstack-sda.qcow2
== Launch an Instance ==

Create a keypair:

 $> nova keypair-add mykey > oskey.priv
 $> chmod 600 oskey.priv

Configure the key injection mode, to allow guestfs to inject into multiple guest types:

 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_inject_partition -1
 $> sudo service openstack-nova-compute restart
Launch an instance:

 $> nova boot myserver --flavor 2 --key_name mykey \
    --image $(glance index | grep f16-jeos | awk '{print $1}')

Then observe the instance running, observe the KVM VM running, and SSH into the instance:

 $> sudo virsh list
 $> nova list
 $> ssh -i oskey.priv root@10.0.0.2
 $> nova console-log myserver
 $> nova delete myserver
== Configure the OpenStack Dashboard ==

The OpenStack dashboard is the official web user interface for OpenStack. It should mostly work out of the box, as long as keystone has been configured properly.

* Install the dashboard:
 $> sudo yum install openstack-dashboard

* Make sure httpd is running:
 $> sudo service httpd restart
 $> sudo chkconfig httpd on

* If SELinux is enabled, you will have to allow httpd to access other network services (the dashboard talks to the HTTP API of the other OpenStack services):
 $> sudo setsebool -P httpd_can_network_connect=on

The dashboard can then be accessed with a web browser at http://localhost/dashboard . The account and password are the ones you configured during the keystone setup.
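As a quick check that the dashboard is being served (optional; just verifies that httpd answers):

 $> curl -I http://localhost/dashboard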
To open up the firewall ports for HTTP and HTTPS:

 $> sudo lokkit -p http:tcp
 $> sudo lokkit -p https:tcp
== Configure swift with keystone ==

These are the minimal steps required to set up a swift installation on RHEL with keystone authentication. This wouldn't be considered a working swift system, but at the very least it will provide you with a working swift API to test clients against. Most notably, it doesn't include replication, multiple zones or load balancing.

Ensure the keystone environment variables are still set up from the previous steps.
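If in doubt, re-source the file and check (assuming the keystonerc file created earlier):

 $> . ./keystonerc
 $> env | grep OS_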
We need to create five configuration files:

 $> cat > /tmp/swift.conf <<- EOF
 [swift-hash]
 swift_hash_path_suffix = randomestringchangeme
 EOF
 $> sudo mv /tmp/swift.conf /etc/swift/swift.conf
 $> cat > /tmp/proxy-server.conf <<- EOF
 [DEFAULT]
 bind_port = 8080
 workers = 8
 user = swift
 [pipeline:main]
 pipeline = catch_errors healthcheck cache authtoken keystone proxy-server
 [app:proxy-server]
 use = egg:swift#proxy
 account_autocreate = true
 [filter:keystone]
 paste.filter_factory = keystone.middleware.swift_auth:filter_factory
 operator_roles = admin, swiftoperator
 [filter:authtoken]
 paste.filter_factory = keystone.middleware.auth_token:filter_factory
 auth_port = 35357
 auth_host = 127.0.0.1
 auth_protocol = http
 admin_token = ADMINTOKEN
 # ??? Are these needed?
 service_port = 5000
 service_host = 127.0.0.1
 service_protocol = http
 auth_token = ADMINTOKEN
 [filter:healthcheck]
 use = egg:swift#healthcheck
 [filter:cache]
 use = egg:swift#memcache
 memcache_servers = 127.0.0.1:11211
 [filter:catch_errors]
 use = egg:swift#catch_errors
 EOF
 $> sudo mv /tmp/proxy-server.conf /etc/swift/proxy-server.conf
 $> cat > /tmp/account-server.conf <<- EOF
 [DEFAULT]
 bind_ip = 127.0.0.1
 workers = 2
 [pipeline:main]
 pipeline = account-server
 [app:account-server]
 use = egg:swift#account
 [account-replicator]
 [account-auditor]
 [account-reaper]
 EOF
 $> sudo mv /tmp/account-server.conf /etc/swift/account-server.conf
 $> cat > /tmp/container-server.conf <<- EOF
 [DEFAULT]
 bind_ip = 127.0.0.1
 workers = 2
 [pipeline:main]
 pipeline = container-server
 [app:container-server]
 use = egg:swift#container
 [container-replicator]
 [container-updater]
 [container-auditor]
 EOF
 $> sudo mv /tmp/container-server.conf /etc/swift/container-server.conf
 $> cat > /tmp/object-server.conf <<- EOF
 [DEFAULT]
 bind_ip = 127.0.0.1
 workers = 2
 [pipeline:main]
 pipeline = object-server
 [app:object-server]
 use = egg:swift#object
 [object-replicator]
 [object-updater]
 [object-auditor]
 EOF
 $> sudo mv /tmp/object-server.conf /etc/swift/object-server.conf
So that swift can authenticate tokens, we need to set the keystone admin token in the swift proxy file:

 $> sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_token $ADMIN_TOKEN
 $> sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_token $ADMIN_TOKEN
Create the storage device for swift. These instructions use a loopback device, but a physical device or logical volume can be used instead:

 $> truncate --size=20G /tmp/swiftstorage
 $> DEVICE=$(sudo losetup --show -f /tmp/swiftstorage)
 $> sudo mkfs.ext4 -I 1024 $DEVICE
 $> sudo mkdir -p /srv/node/partitions
 $> sudo mount $DEVICE /srv/node/partitions -t ext4 -o noatime,nodiratime,nobarrier,user_xattr

 $> cd /etc/swift
Create the rings, with 1024 partitions (a partition power of 10, only suitable for a small test environment) and 1 zone:

 $> sudo swift-ring-builder account.builder create 10 1 1
 $> sudo swift-ring-builder container.builder create 10 1 1
 $> sudo swift-ring-builder object.builder create 10 1 1
Add a device for each of the account, container and object services:

 $> sudo swift-ring-builder account.builder add z1-127.0.0.1:6002/partitions 100
 $> sudo swift-ring-builder container.builder add z1-127.0.0.1:6001/partitions 100
 $> sudo swift-ring-builder object.builder add z1-127.0.0.1:6000/partitions 100

Rebalance the rings (this allocates partitions to devices):

 $> sudo swift-ring-builder account.builder rebalance
 $> sudo swift-ring-builder container.builder rebalance
 $> sudo swift-ring-builder object.builder rebalance

Make sure swift owns the appropriate files:

 $> sudo chown -R swift:swift /etc/swift /srv/node/partitions
Add the swift service and endpoint to keystone:

 $> SERVICEID=$(keystone service-create --name=swift --type=object-store --description="Swift Service" | grep "id " | cut -d "|" -f 3)
 $> echo $SERVICEID # just making sure we got a SERVICEID
 $> keystone endpoint-create --service_id $SERVICEID --publicurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --adminurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --internalurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s"
Start the services:

 $> sudo service memcached start
 $> for srv in account container object proxy; do sudo service openstack-swift-$srv start; done

Test the swift client and upload files:

 $> swift list
 $> swift upload container /path/to/file
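If the upload succeeded, statting the account should show non-zero container and object counts:

 $> swift stat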
= Additional Functionality =

== Using Eucalyptus tools ==

Set up an rc file for EC2 access (this expects a prior keystone configuration):

 $> . ./keystonerc
 $> USER_ID=$(keystone user-list | awk '/admin / {print $2}')
 $> ACCESS_KEY=$(keystone ec2-credentials-list --user $USER_ID | awk '/admin / {print $4}')
 $> SECRET_KEY=$(keystone ec2-credentials-list --user $USER_ID | awk '/admin / {print $6}')
 $> cat > novarc <<EOF
 export EC2_URL=http://localhost:8773/services/Cloud
 export EC2_ACCESS_KEY=$ACCESS_KEY
 export EC2_SECRET_KEY=$SECRET_KEY
 EOF
 $> chmod 600 novarc
 $> . ./novarc
You should now be able to launch an image:

 $> euca-run-instances f16-jeos -k nova_key
 $> euca-describe-instances
 $> euca-get-console-output i-00000001
 $> euca-terminate-instances i-00000001
== Images ==

Rather than the prebuilt Fedora 16 JEOS image referenced above, there are other image options:

# Building a Fedora 16 JEOS image using [http://aeolusproject.org/oz.html Oz]
# Downloading ttylinux-based minimal images used by OpenStack developers for testing

=== Building Fedora 16 JEOS Images With Oz ===

You can very easily build an image using Oz. First, make sure it's installed:

 $> sudo yum install /usr/bin/oz-install
Create a template definition file called <code>f16-jeos.tdl</code> containing:

 <nowiki>
 <template>
   <name>fedora16_x86_64</name>
   <description>My Fedora 16 x86_64 template</description>
   <os>
     <name>Fedora</name>
     <version>16</version>
     <arch>x86_64</arch>
     <install type='url'>
       <url>http://download.fedoraproject.org/pub/fedora/linux/releases/16/Fedora/x86_64/os/</url>
     </install>
   </os>
   <commands>
     <command name='setup-rc-local'>
 sed -i 's/rhgb quiet/console=ttyS0/' /boot/grub/grub.conf

 cat >> /etc/rc.local &lt;&lt; EOF
 if [ ! -d /root/.ssh ]; then
     mkdir -p /root/.ssh
     chmod 700 /root/.ssh
 fi

 # Fetch public key using HTTP
 ATTEMPTS=10
 FAILED=0
 while [ ! -f /root/.ssh/authorized_keys ]; do
     curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/aws-key 2>/dev/null
     if [ \$? -eq 0 ]; then
         cat /tmp/aws-key >> /root/.ssh/authorized_keys
         chmod 0600 /root/.ssh/authorized_keys
         restorecon /root/.ssh/authorized_keys
         rm -f /tmp/aws-key
         echo "Successfully retrieved AWS public key from instance metadata"
     else
         FAILED=\$((\$FAILED + 1))
         if [ \$FAILED -ge \$ATTEMPTS ]; then
             echo "Failed to retrieve AWS public key after \$FAILED attempts, quitting"
             break
         fi
         echo "Could not retrieve AWS public key (attempt #\$FAILED/\$ATTEMPTS), retrying in 5 seconds..."
         sleep 5
     fi
 done
 EOF
     </command>
   </commands>
 </template>
 </nowiki>
Then simply do:

 $> sudo oz-install -d4 -u f16-jeos.tdl

Once built, you simply have to register the image with Glance:

 $> glance add name=f16-jeos is_public=true container_format=bare disk_format=raw < /var/lib/libvirt/images/fedora16_x86_64.dsk
 $> glance index

The last command should return a list of the images registered with the Glance image registry.
=== Downloading Existing Images ===

If you don't need a functioning Fedora 16 and want the smallest possible images, just download this set of images commonly used by OpenStack developers for testing, and register them with Glance:

 $> mkdir images
 $> cd images
 $> curl http://images.ansolabs.com/tty.tgz | tar xvfzo -
 $> glance add name=aki-tty disk_format=aki container_format=aki is_public=true < aki-tty/image
 $> glance add name=ami-tty disk_format=ami container_format=ami is_public=true < ami-tty/image
 $> glance add name=ari-tty disk_format=ari container_format=ari is_public=true < ari-tty/image

Then to start an instance from the image:

 $> euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k mykey
== Volumes ==

If you use the Chrome browser, kill it before embarking on this section, as it has been [https://bugzilla.redhat.com/show_bug.cgi?id=727925 known] to cause the lvcreate command to fail with 'incorrect semaphore state' errors.

Start the SCSI target daemon:

 $> sudo systemctl start tgtd.service
 $> sudo systemctl enable tgtd.service

Create a new 1GB volume:

 $> VOLUME=$(euca-create-volume -s 1 -z nova | awk '{print $2}')

View the status of the new volume, and wait for it to become 'available':

 $> watch "euca-describe-volumes | grep $VOLUME | grep available"
Re-run the previously terminated instance if necessary:

 $> INSTANCE=$(euca-run-instances f16-jeos -k mykey | grep INSTANCE | awk '{print $2}')

or:

 $> INSTANCE=$(euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k mykey | grep INSTANCE | awk '{print $2}')

Make the storage available to the instance (note: -d is the device name on the compute node):

 $> euca-attach-volume -i $INSTANCE -d /dev/vdc $VOLUME

Then ssh to the instance and verify that the vdc device is listed in /proc/partitions:

 $> cat /proc/partitions
If /dev/vdc is not already present, make the device node available:

 $> mknod /dev/vdc b 252 32

Create and mount a file system directly on the device:

 $> mkfs.ext3 /dev/vdc
 $> mkdir /mnt/nova-volume
 $> mount /dev/vdc /mnt/nova-volume

Display some file system details:

 $> df -h /dev/vdc

Create a temporary file:

 $> echo foo > /mnt/nova-volume/bar
Terminate and re-run the instance, then re-attach the volume and re-mount it within the instance as above. Your temporary file will have persisted:

 $> cat /mnt/nova-volume/bar

Unmount the volume again:

 $> umount /mnt/nova-volume

Exit from the ssh session, then detach and delete the volume:

 $> euca-detach-volume $VOLUME
 $> euca-delete-volume $VOLUME
== Floating IPs ==

You may carve out a block of public IPs and assign them to instances.

First, you need to make sure that nova is configured with the correct public network interface. The default is eth0, but you can change it, e.g.:

 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT public_interface em1
 $> sudo systemctl restart openstack-nova-network.service

Then you can do, e.g.:

 $> sudo nova-manage floating create 172.31.0.224/28
 $> euca-allocate-address
 $> euca-associate-address -i i-00000012 172.31.0.224
 $> ssh -i nova_key.priv root@172.31.0.224
 $> euca-disassociate-address 172.31.0.224
 $> euca-release-address 172.31.0.224
== VNC access ==

To set up VNC access to guests through the dashboard:

nova-novncproxy reads some parameters from the /etc/nova/nova.conf file.
First you need to configure your cloud controller to enable VNC:

<pre>novncproxy_host = 0.0.0.0
novncproxy_port = 6080</pre>

and on the nova compute nodes you need something like this:

<pre>novncproxy_base_url=http://NOVNCPROXY_FQDN:6080/vnc_auto.html
vnc_enabled=true
vncserver_listen=COMPUTE_FQDN
vncserver_proxyclient_address=COMPUTE_FQDN</pre>

You should also make sure that openstack-nova-consoleauth has been started on the controller node:

<pre>
$ controller> sudo /etc/init.d/openstack-nova-consoleauth restart</pre>

After restarting the nova services on both nodes, newly created machines will run qemu-kvm with the parameter -vnc compute_fqdn:display_number.
Then, after starting the novncproxy and connecting to the dashboard, it will discover the host, point to the novncproxy with the appropriate values, and connect to the VM.

Note: ensure that the iptables entries for the VNC ports (5900+DISPLAYNUMBER) are allowed.
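For example, the compute-node settings above could be applied with openstack-config, and the VNC port range opened with lokkit (a sketch; substitute your real FQDNs):

 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://NOVNCPROXY_FQDN:6080/vnc_auto.html
 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled true
 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen COMPUTE_FQDN
 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address COMPUTE_FQDN
 $> sudo lokkit -p 5900-5999:tcp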
= Deployment =

== Adding a Compute Node ==

Okay, everything so far has been done on a single node. The next step is to add another node for running VMs.

Let's assume the machine you've set up above is called 'controller' and the new machine is called 'node'.

First, open the qpid, MySQL, Nova API and iSCSI ports on controller:

 $ controller> sudo lokkit -p 3306:tcp
 $ controller> sudo lokkit -p 5672:tcp
 $ controller> sudo lokkit -p 9292:tcp
 $ controller> sudo lokkit -p 3260:tcp
 $ controller> sudo service libvirtd reload
Then make sure that ntp is enabled on both machines:

 $> sudo yum install -y ntp
 $> sudo service ntpd start
 $> sudo chkconfig ntpd on

Install libvirt and nova on node:

 $ node> sudo yum install --enablerepo=epel-testing openstack-nova python-keystone openstack-utils
 $ node> sudo service libvirtd start
 $ node> sudo chkconfig libvirtd on
 $ node> sudo setenforce 0
Configure nova so that node can find the services on controller:

 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@controller/nova
 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_servers controller:9292
 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT iscsi_ip_prefix 172.31.0.107
 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

(The <code>iscsi_ip_prefix</code> value is the IP address of the controller node.)
Configure the network interfaces.

The bridge name should match the one you used in the nova-manage command on the controller:

 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eth0

Wait, that's the wrong key; set the bridge name like this:

 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge demonetbr0

Set the device which should be moved onto the bridge (nova will set up this bridge; once it is done, you can view it with the brctl command):

 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eth0
 $ controller> sudo openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eth0

 $ node> brctl show

Enable the compute service:

 $ node> sudo service openstack-nova-compute start

Now everything should be running as before, except that the VMs are launched either on controller or node. Note that you will only be able to ping/ssh to the VMs from the controller node.
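To confirm that the new compute node has registered with the controller (a quick check; live services are shown with a ':-)' state):

 $ controller> sudo nova-manage service list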
== Manual Setup of MySQL ==

As of <code>openstack-nova-2011.3-9.el6</code> and <code>openstack-nova-2011.3-8.fc16</code>, <code>openstack-nova</code> is now set up to use MySQL by default. If you're updating an older installation or prefer to set up MySQL manually instead of using the <code>openstack-nova-db-setup</code> script, this section shows how to do it.

First install and enable MySQL:

 $> sudo yum install -y mysql-server
 $> sudo service mysqld start
 $> sudo chkconfig mysqld on
Set a password for the root account and delete the anonymous accounts:

 $> mysql -u root
 mysql> update mysql.user set password = password('iamroot') where user = 'root';
 mysql> <nowiki>delete from mysql.user where user = ''</nowiki>;

Create a database and user account specifically for nova:

 mysql> create database nova;
 mysql> create user 'nova'@'localhost' identified by 'nova';
 mysql> create user 'nova'@'%' identified by 'nova';
 mysql> grant all on nova.* to 'nova'@'%';

(If anyone can explain why nova@localhost is required even though the anonymous accounts have been deleted, I'd be very grateful :-)
Then configure nova to use the DB and install the schema:

 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@localhost/nova
 $> sudo nova-manage db sync

As a final sanity check:

 $> mysql -u nova -p nova
 Enter password:
 mysql> select * from migrate_version;
= Miscellaneous =

== Smoke Tests ==

Nova comes with a selection of fairly basic smoke tests which you can run against your installation. It can be useful to use these to sanity check your configuration.

First off, you need the nova-adminclient python library, which isn't yet packaged:

 $> sudo yum install python-pip
 $> sudo pip-python install nova-adminclient
Then you need a user and project both named admin:

 $> sudo nova-manage user admin admin
 $> sudo nova-manage project create admin admin
 $> sudo nova-manage project zipfile admin admin
 $> unzip nova.zip
 $> . ./novarc

Make sure you have the tty images imported as described above. You also need a block of floating IPs created, again as described above; both checks are sketched below.
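A quick way to double-check both prerequisites (a sketch; 'floating list' shows the pool created in the Floating IPs section):

 $> glance index | grep tty
 $> sudo nova-manage floating list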
Then run the tests from a fedpkg checkout:

 $> fedpkg clone openstack-nova
 $> cd openstack-nova
 $> fedpkg switch-branch f16
 $> fedpkg prep
 $> cd nova-2011.3/smoketests
 $> python ./run_tests.py
All the tests should pass.

If you run into import errors such as:

 ImportError: No module named nose

or:

 ImportError (No module named paramiko)

simply install the missing dependency as follows:

 $> sudo yum install -y python-nose.noarch
 $> sudo yum install -y python-paramiko.noarch
== Cleanup ==

While testing OpenStack, you might want to delete everything related to OpenStack and start testing with a clean slate again.

Here's how. First, make sure to terminate all running instances:

 $> euca-terminate-instances ...

Double check that you have no lingering VMs, perhaps saved to disk:

 $> sudo virsh list --all
 $> sudo virsh undefine <instance>
 $> sudo rm -f /var/lib/libvirt/qemu/save/instance-00000*
Then stop all the services:

 $> for iii in /usr/lib/systemd/system/openstack-*.service; do sudo systemctl stop $(basename $iii); done

Delete all the packages:

 $> sudo yum erase python-glance python-nova* python-keystone* openstack-swift* memcached

Delete the nova and keystone databases from MySQL:

 $> mysql -u root -p -e 'drop database nova;'
 $> mysql -u root -p -e 'drop database keystone;'
Delete the nova-volumes VG:

 $> sudo vgchange -an nova-volumes
 $> sudo losetup -d /dev/loop0
 $> sudo rm -f /var/lib/nova/nova-volumes.img

Take down the bridge and kill dnsmasq:

 $> sudo ip link set demonetbr0 down
 $> sudo brctl delbr demonetbr0
 $> sudo kill -9 $(cat /var/lib/nova/networks/nova-demonetbr0.pid)

Remove all directories left behind by the packages:

 $> sudo rm -rf /etc/{glance,nova,swift,keystone,openstack-dashboard} /var/lib/{glance,nova,swift,keystone} /var/log/{glance,nova,swift,keystone} /var/run/{glance,nova,swift,keystone}

Remove the swift storage device (if you don't want the data):

 $> sudo umount /srv/node/partitions
 $> sudo losetup -d $DEVICE
 $> rm /tmp/swiftstorage
Finally, restart iptables to clear out all the rules added by Nova. You also need to reload libvirt's iptables rules:

 $> sudo service iptables restart
 $> sudo service libvirtd restart

[[Category:OpenStack]]