On a CentOS 7 system:
[hamzy@oscloud5 ~]$ lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.3.1611 (Core)
Release:        7.3.1611
Codename:       Core
[stack@oscloud5 ~]$ uname -a
Linux oscloud5.stglabs.ibm.com 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Unfortunately, the "Environment setup for baremetal environment" documentation does not seem to explain how to install the undercloud. There are three machines in this scenario:
arch    | use               | portname1  | MAC1              | IP1           | portname2  | MAC2              | IP2           |
x86_64  | undercloud        | eno2       | 6c:ae:8b:29:2a:02 | 9.114.219.30  | eno4       | 6c:ae:8b:29:2a:04 | 9.114.118.98  |
ppc64le | overcloud control | enP3p9s0f0 | 6c:ae:8b:6a:74:14 | 9.114.219.134 | enp1s0     | 34:40:b5:b6:ea:bc | 9.114.118.50  |
ppc64le | overcloud compute | enP3p5s0f2 | 00:90:FA:74:05:52 | 9.114.219.49  | enP3p5s0f3 | 00:90:FA:74:05:53 | 9.114.118.154 |
So, following the Undercloud installation documentation, I perform the following:
[hamzy@oscloud5 ~]$ sudo useradd stack
[hamzy@oscloud5 ~]$ sudo passwd stack
[hamzy@oscloud5 ~]$ echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
[hamzy@oscloud5 ~]$ sudo chmod 0440 /etc/sudoers.d/stack
[hamzy@oscloud5 ~]$ sudo su - stack
[stack@oscloud5 ~]$ sudo hostnamectl set-hostname oscloud5.stglabs.ibm.com
[stack@oscloud5 ~]$ sudo hostnamectl set-hostname --transient oscloud5.stglabs.ibm.com
[stack@oscloud5 ~]$ sudo curl -L -o /etc/yum.repos.d/delorean.repo https://trunk.rdoproject.org/centos7-master/current-passed-ci/delorean.repo
[stack@oscloud5 ~]$ sudo curl -L -o /etc/yum.repos.d/delorean-deps.repo https://trunk.rdoproject.org/centos7/delorean-deps.repo
[stack@oscloud5 ~]$ sudo yum install -y python-tripleoclient
[stack@oscloud5 ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
[stack@oscloud5 ~]$ cat << '__EOF__' > instackenv.json
{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "mac": [ "6c:ae:8b:6a:74:14" ],
      "cpu": "16",
      "memory": "1048576",
      "disk": "1000",
      "arch": "ppc64le",
      "pm_password": "update",
      "pm_addr": "9.114.219.133"
    },
    {
      "pm_type": "pxe_ipmitool",
      "mac": [ "00:90:fa:74:05:53" ],
      "cpu": "16",
      "memory": "1048576",
      "disk": "1000",
      "arch": "ppc64le",
      "pm_password": "update",
      "pm_addr": "9.114.118.155"
    }
  ]
}
__EOF__
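A quick sanity pass over the inventory file can catch typos before `openstack overcloud node import` does. The following is a hedged sketch, not an official TripleO validator: it writes a one-node sample (fields copied from the instackenv.json above) to /tmp and checks the keys this walkthrough relies on. Note that pm_user is deliberately absent, which is exactly what trips the bug patched later on.

```shell
# Hedged sketch, not a TripleO tool: write a one-node sample (fields copied
# from the instackenv.json above) and verify the keys used here are present.
cat << '__EOF__' > /tmp/instackenv-sample.json
{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "mac": [ "6c:ae:8b:6a:74:14" ],
      "cpu": "16",
      "memory": "1048576",
      "disk": "1000",
      "arch": "ppc64le",
      "pm_password": "update",
      "pm_addr": "9.114.219.133"
    }
  ]
}
__EOF__

python3 - << '__PY__'
import json

with open('/tmp/instackenv-sample.json') as f:
    nodes = json.load(f)['nodes']

# pm_user is intentionally not in this list: these nodes omit it.
required = ('pm_type', 'mac', 'pm_addr', 'pm_password')
for node in nodes:
    missing = [key for key in required if key not in node]
    print('%s missing: %s' % (node['pm_addr'], ', '.join(missing) or 'none'))
__PY__
```

A parse failure or a "missing:" line here is far cheaper to fix than a failed enrollment through Mistral.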
I transfer over the built overcloud images:
[hamzy@pkvmci853 ~]$ (OCB=$(dig @192.168.122.1 -4 +short Overcloud.virbr0); UC=9.114.118.98; ssh-keygen -f ~/.ssh/known_hosts -R ${UC}; ssh-keyscan ${UC} >> ~/.ssh/known_hosts; scp -3 hamzy@${OCB}:~/*{initrd,initramfs,kernel,vmlinuz,qcow2}* stack@${UC}:~/)
I then modify undercloud.conf as follows:
[stack@oscloud5 ~]$ cat << __EOF__ | patch -p0
--- undercloud.conf.orig	2017-08-25 12:04:54.935063830 +0000
+++ undercloud.conf	2017-08-25 12:05:17.561063576 +0000
@@ -17,21 +17,25 @@
 # defined by local_interface, with the netmask defined by the prefix
 # portion of the value. (string value)
 #local_ip = 192.168.24.1/24
+local_ip = 9.114.118.98/24
 
 # Network gateway for the Neutron-managed network for Overcloud
 # instances. This should match the local_ip above when using
 # masquerading. (string value)
 #network_gateway = 192.168.24.1
+network_gateway = 9.114.118.98
 
 # Virtual IP or DNS address to use for the public endpoints of
 # Undercloud services. Only used with SSL. (string value)
 # Deprecated group/name - [DEFAULT]/undercloud_public_vip
 #undercloud_public_host = 192.168.24.2
+undercloud_public_host = 9.114.118.98
 
 # Virtual IP or DNS address to use for the admin endpoints of
 # Undercloud services. Only used with SSL. (string value)
 # Deprecated group/name - [DEFAULT]/undercloud_admin_vip
 #undercloud_admin_host = 192.168.24.3
+undercloud_admin_host = 9.114.118.98
 
 # DNS nameserver(s) to use for the undercloud node. (list value)
 #undercloud_nameservers =
@@ -74,6 +78,7 @@
 # Network interface on the Undercloud that will be handling the PXE
 # boots and DHCP for Overcloud instances. (string value)
 #local_interface = eth1
+local_interface = eno4
 
 # MTU to use for the local_interface. (integer value)
 #local_mtu = 1500
@@ -82,18 +87,22 @@
 # instances. This should be the subnet used for PXE booting. (string
 # value)
 #network_cidr = 192.168.24.0/24
+network_cidr = 9.114.118.0/24
 
 # Network that will be masqueraded for external access, if required.
 # This should be the subnet used for PXE booting. (string value)
 #masquerade_network = 192.168.24.0/24
+masquerade_network = 9.114.118.0/24
 
 # Start of DHCP allocation range for PXE and DHCP of Overcloud
 # instances. (string value)
 #dhcp_start = 192.168.24.5
+dhcp_start = 9.114.118.220
 
 # End of DHCP allocation range for PXE and DHCP of Overcloud
 # instances. (string value)
 #dhcp_end = 192.168.24.24
+dhcp_end = 9.114.118.225
 
 # Path to hieradata override file. If set, the file will be copied
 # under /etc/puppet/hieradata and set as the first file in the hiera
@@ -112,12 +121,14 @@
 # doubt, use the default value. (string value)
 # Deprecated group/name - [DEFAULT]/discovery_interface
 #inspection_interface = br-ctlplane
+inspection_interface = br-ctlplane
 
 # Temporary IP range that will be given to nodes during the inspection
 # process. Should not overlap with the range defined by dhcp_start
 # and dhcp_end, but should be in the same network. (string value)
 # Deprecated group/name - [DEFAULT]/discovery_iprange
 #inspection_iprange = 192.168.24.100,192.168.24.120
+inspection_iprange = 9.114.118.230,9.114.118.235
 
 # Whether to enable extra hardware collection during the inspection
 # process. Requires python-hardware or python-hardware-detect package
__EOF__
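The sample conf warns that inspection_iprange must not overlap the dhcp_start..dhcp_end range, yet must sit on the same network. A quick arithmetic check of the values chosen above (illustration only, not a TripleO utility) confirms they satisfy both constraints:

```shell
# Illustration only: verify the DHCP and inspection ranges chosen above are
# on the same /24 and do not overlap, as the sample conf comments require.
python3 - << '__PY__'
import ipaddress

net = ipaddress.ip_network('9.114.118.0/24')
dhcp_start, dhcp_end = (ipaddress.ip_address(a)
                        for a in ('9.114.118.220', '9.114.118.225'))
insp_start, insp_end = (ipaddress.ip_address(a)
                        for a in ('9.114.118.230', '9.114.118.235'))

same_net = all(a in net for a in (dhcp_start, dhcp_end, insp_start, insp_end))
overlap = not (insp_start > dhcp_end or insp_end < dhcp_start)

print('same network: %s' % same_net)    # expect True
print('ranges overlap: %s' % overlap)   # expect False
__PY__
```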
And install the undercloud:
[stack@oscloud5 ~]$ time openstack undercloud install 2>&1 | tee output.undercloud.install
...
Undercloud install complete.
...
There is a bug where a user ID (pm_user) is required for machines using IPMI even when the BMC does not need one, which has to be patched around:
[stack@oscloud5 ~]$ (cd /usr/lib/python2.7/site-packages/tripleo_common/utils/; cat << __EOF__ | sudo patch -p0)
--- nodes.py.orig	2017-08-24 15:54:07.614226329 +0000
+++ nodes.py	2017-08-24 15:54:29.699440619 +0000
@@ -105,7 +105,7 @@
         'pm_user': '%s_username' % prefix,
         'pm_password': '%s_password' % prefix,
     }
-    mandatory_fields = list(mapping)
+    mandatory_fields = ['pm_addr', 'pm_password'] # list(mapping)
     if has_port:
         mapping['pm_port'] = '%s_port' % prefix
__EOF__
[stack@undercloud ~]$ (for SERVICE in openstack-mistral-api.service openstack-mistral-engine.service openstack-mistral-executor.service; do sudo systemctl restart ${SERVICE}; done)
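To make the intent of that one-line change concrete, here is a simplified mirror of the mapping logic (an illustration, not the actual tripleo_common module): before the patch every mapped field, including pm_user, is mandatory, so the instackenv.json above, which omits pm_user, is rejected; after it only pm_addr and pm_password are enforced.

```shell
# Simplified mirror of the validation in tripleo_common/utils/nodes.py
# (illustration only): the patch shrinks mandatory_fields so nodes without
# pm_user pass validation.
python3 - << '__PY__'
prefix = 'ipmi'
mapping = {
    'pm_addr': '%s_address' % prefix,
    'pm_user': '%s_username' % prefix,
    'pm_password': '%s_password' % prefix,
}

node = {'pm_addr': '9.114.219.133', 'pm_password': 'update'}  # no pm_user

for label, mandatory in (('before patch', sorted(mapping)),
                         ('after patch', ['pm_addr', 'pm_password'])):
    missing = [field for field in mandatory if field not in node]
    print('%s: missing %s' % (label, missing or 'nothing'))
__PY__
```

This prints that pm_user is missing before the patch and nothing is missing after it, which is why the import succeeds once the patch is applied.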
I then go through the process of installing the overcloud:
[stack@oscloud5 ~]$ source stackrc
(undercloud) [stack@oscloud5 ~]$ time openstack overcloud image upload
...
(undercloud) [stack@oscloud5 ~]$ time openstack overcloud node import --provide instackenv.json 2>&1 | tee output.overcloud.node.import
...
(undercloud) [stack@oscloud5 ~]$ openstack overcloud profiles list
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| 032a8e33-e371-44e3-8513-04028a4de95b |           | available       | None            |                   |
| 612b49a6-1407-42cd-bb41-d10bd5173712 |           | available       | None            |                   |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
(undercloud) [stack@oscloud5 ~]$ ironic node-update 032a8e33-e371-44e3-8513-04028a4de95b replace properties/capabilities=profile:compute,boot_option:local
(undercloud) [stack@oscloud5 ~]$ ironic node-update 612b49a6-1407-42cd-bb41-d10bd5173712 replace properties/capabilities=profile:control,boot_option:local
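The value passed to `ironic node-update` above is a flat comma-separated `key:value` string; a malformed entry is accepted silently but then never matches any flavor. A small sketch (illustration only, values taken from the commands above) of how it decomposes:

```shell
# Illustration: decompose the capabilities string set by "ironic node-update"
# above into its key:value pairs.
python3 - << '__PY__'
caps = 'profile:compute,boot_option:local'
parsed = dict(kv.split(':', 1) for kv in caps.split(','))
print('profile=%s boot_option=%s' % (parsed['profile'], parsed['boot_option']))
__PY__
```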
And now do the deploy:
(undercloud) [stack@oscloud5 ~]$ openstack overcloud deploy --debug --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml --control-scale 1 --compute-scale 1 --control-flavor control --compute-flavor compute 2>&1 | tee output.overcloud.deploy
...
"GET /v1/defc4cff35d84851a464fa86eeb2db61/stacks/overcloud/51cde441-da64-4bae-81f9-40483e8a6b14/events?marker=2bb2862d-19fb-4c6f-8808-8bae7c8806d1&nested_depth=2&sort_dir=asc HTTP/1.1" 200 676
RESP: [200] Date: Tue, 29 Aug 2017 19:39:12 GMT Server: Apache x-openstack-request-id: req-abf7eb19-a3dd-477a-a096-c66ae85d639c Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 676 Keep-Alive: timeout=15, max=86 Connection: Keep-Alive Content-Type: application/json
RESP BODY: {"events": [
  {"resource_name": "NovaCompute", "event_time": "2017-08-29T19:39:07Z",
   "links": [{"href": "http://9.114.118.98:8004/v1/defc4cff35d84851a464fa86eeb2db61/stacks/overcloud-Compute-cfy62xy2sabd-0-d7ybc653r3wm/bba9cd17-3971-4a81-83ac-bb568e401842/resources/NovaCompute/events/5bc4af09-b5e3-4fdf-ba3e-889b362bb9e8", "rel": "self"},
             {"href": "http://9.114.118.98:8004/v1/defc4cff35d84851a464fa86eeb2db61/stacks/overcloud-Compute-cfy62xy2sabd-0-d7ybc653r3wm/bba9cd17-3971-4a81-83ac-bb568e401842/resources/NovaCompute", "rel": "resource"},
             {"href": "http://9.114.118.98:8004/v1/defc4cff35d84851a464fa86eeb2db61/stacks/overcloud-Compute-cfy62xy2sabd-0-d7ybc653r3wm/bba9cd17-3971-4a81-83ac-bb568e401842", "rel": "stack"},
             {"href": "http://9.114.118.98:8004/v1/defc4cff35d84851a464fa86eeb2db61/stacks/overcloud/51cde441-da64-4bae-81f9-40483e8a6b14", "rel": "root_stack"}],
   "logical_resource_id": "NovaCompute", "resource_status": "CREATE_FAILED",
   "resource_status_reason": "ResourceInError: resources.NovaCompute: Went to status ERROR due to \"Message: No valid host was found. There are not enough hosts available., Code: 500\"",
   "physical_resource_id": "eb26985a-b7bb-441b-8bf1-3abc97de0e42", "id": "5bc4af09-b5e3-4fdf-ba3e-889b362bb9e8"},
  {"resource_name": "NovaCompute", "event_time": "2017-08-29T19:39:07Z", "links": [...],
   "logical_resource_id": "NovaCompute", "resource_status": "DELETE_IN_PROGRESS", "resource_status_reason": "state changed",
   "physical_resource_id": "eb26985a-b7bb-441b-8bf1-3abc97de0e42", "id": "e05b423d-9081-4d87-8bf6-2ca5db416a9f"},
  {"resource_name": "NovaCompute", "event_time": "2017-08-29T19:39:09Z", "links": [...],
   "logical_resource_id": "NovaCompute", "resource_status": "DELETE_COMPLETE", "resource_status_reason": "state changed",
   "physical_resource_id": "eb26985a-b7bb-441b-8bf1-3abc97de0e42", "id": "0ada35f8-adbc-47cc-b51a-667deaac91a6"},
  {"resource_name": "NovaCompute", "event_time": "2017-08-29T19:39:13Z", "links": [...],
   "logical_resource_id": "NovaCompute", "resource_status": "CREATE_IN_PROGRESS", "resource_status_reason": "state changed",
   "physical_resource_id": "eb26985a-b7bb-441b-8bf1-3abc97de0e42", "id": "68b6d02c-533e-49cb-9a15-062d26e0b01b"}]}
...
(undercloud) [stack@oscloud5 ~]$ sudo cat /var/log/nova/nova-compute.log
...
2017-08-29 18:34:54.833 13444 ERROR nova.compute.manager [req-f9963802-86b5-4791-a6d1-1a88bc2104fe - - - - -] No compute node record for host oscloud5.stglabs.ibm.com: ComputeHostNotFound_Remote: Compute host oscloud5.stglabs.ibm.com could not be found.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 123, in _object_dispatch
    return getattr(target, method)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper
    result = fn(cls, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/objects/compute_node.py", line 437, in get_all_by_host
    use_slave=use_slave)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 235, in wrapper
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/objects/compute_node.py", line 432, in _db_compute_node_get_all_by_host
    return db.compute_node_get_all_by_host(context, host)
  File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 297, in compute_node_get_all_by_host
    return IMPL.compute_node_get_all_by_host(context, host)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 280, in wrapped
    return f(context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 713, in compute_node_get_all_by_host
    raise exception.ComputeHostNotFound(host=host)
ComputeHostNotFound: Compute host oscloud5.stglabs.ibm.com could not be found.
...
(undercloud) [stack@oscloud5 ~]$ openstack compute service list
+----+----------------+--------------------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host                     | Zone     | Status  | State | Updated At                 |
+----+----------------+--------------------------+----------+---------+-------+----------------------------+
| 11 | nova-conductor | oscloud5.stglabs.ibm.com | internal | enabled | up    | 2017-08-30T03:04:26.000000 |
| 13 | nova-scheduler | oscloud5.stglabs.ibm.com | internal | enabled | up    | 2017-08-30T03:04:27.000000 |
| 14 | nova-compute   | oscloud5.stglabs.ibm.com | nova     | enabled | up    | 2017-08-30T03:04:28.000000 |
+----+----------------+--------------------------+----------+---------+-------+----------------------------+
(undercloud) [stack@oscloud5 ~]$ openstack hypervisor list
+----+--------------------------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname                  | Hypervisor Type | Host IP      | State |
+----+--------------------------------------+-----------------+--------------+-------+
|  1 | 03e1c11c-177e-4adc-b0e1-f650825c8822 | ironic          | 9.114.219.30 | up    |
|  2 | d46794ed-9371-4a2d-bf55-9aefe48f2fb8 | ironic          | 9.114.219.30 | up    |
+----+--------------------------------------+-----------------+--------------+-------+
(undercloud) [stack@oscloud5 ~]$ openstack compute agent list --hypervisor 03e1c11c-177e-4adc-b0e1-f650825c8822
(undercloud) [stack@oscloud5 ~]$ openstack compute agent list --hypervisor d46794ed-9371-4a2d-bf55-9aefe48f2fb8
(undercloud) [stack@oscloud5 ~]$ sudo mysql
...
MariaDB [(none)]> use nova;
...
MariaDB [nova]> select uuid from compute_nodes where host = 'oscloud5.stglabs.ibm.com';
+--------------------------------------+
| uuid                                 |
+--------------------------------------+
| e569d3c8-599b-480c-82a9-a16d684dcea8 |
| a44c64ca-cf21-4c37-9f1c-68c7034cb109 |
+--------------------------------------+
2 rows in set (0.00 sec)
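The "No valid host was found" failure above means the scheduler ended up with zero candidate nodes for the request. As a toy model of the capability matching involved (my simplification, not nova's actual scheduler filters), a node qualifies only when the flavor's required capabilities are a subset of the node's capability tags set earlier:

```shell
# Toy model of the matching behind "No valid host was found" (a
# simplification, not nova's real filters): a flavor's required capabilities
# must all appear in a node's capability tags for the node to qualify.
python3 - << '__PY__'
def parse(caps):
    return dict(kv.split(':', 1) for kv in caps.split(','))

# Capability tags set by the "ironic node-update" commands earlier.
nodes = {
    '032a8e33': parse('profile:compute,boot_option:local'),
    '612b49a6': parse('profile:control,boot_option:local'),
}
flavor_wants = {'profile': 'compute', 'boot_option': 'local'}

matches = [name for name, caps in nodes.items()
           if all(caps.get(k) == v for k, v in flavor_wants.items())]
print('candidates for the compute flavor: %s' % matches)
__PY__
```

In a healthy run this yields exactly one candidate per flavor; in the failing deploy above something evidently disqualified both nodes, which is what the database queries are trying to narrow down.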