On a CentOS 7 system:
[hamzy@oscloud5 ~]$ lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.3.1611 (Core)
Release:        7.3.1611
Codename:       Core
[stack@oscloud5 ~]$ uname -a
Linux oscloud5.stglabs.ibm.com 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Unfortunately, the "Environment setup" documentation for a baremetal environment does not seem to explain how to install the undercloud. There are three machines in this scenario:
arch    | use               | portname1  | MAC1              | IP1           | portname2  | MAC2              | IP2
--------+-------------------+------------+-------------------+---------------+------------+-------------------+---------------
x86_64  | undercloud        | eno2       | 6c:ae:8b:29:2a:02 | 9.114.219.30  | eno4       | 6c:ae:8b:29:2a:04 | 9.114.118.98
ppc64le | overcloud control | enP3p9s0f0 | 6c:ae:8b:6a:74:14 | 9.114.219.134 | enp1s0     | 34:40:b5:b6:ea:bc | 9.114.118.50
ppc64le | overcloud compute | enP3p5s0f2 | 00:90:fa:74:05:52 | 9.114.219.49  | enP3p5s0f3 | 00:90:fa:74:05:53 | 9.114.118.154
So, following Undercloud installation, I perform the following:
[hamzy@oscloud5 ~]$ sudo useradd stack
[hamzy@oscloud5 ~]$ sudo passwd stack
[hamzy@oscloud5 ~]$ echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
[hamzy@oscloud5 ~]$ sudo chmod 0440 /etc/sudoers.d/stack
[hamzy@oscloud5 ~]$ sudo su - stack
[stack@oscloud5 ~]$ sudo hostnamectl set-hostname oscloud5.stglabs.ibm.com
[stack@oscloud5 ~]$ sudo hostnamectl set-hostname --transient oscloud5.stglabs.ibm.com
[stack@oscloud5 ~]$ sudo curl -L -o /etc/yum.repos.d/delorean.repo https://trunk.rdoproject.org/centos7-master/current-passed-ci/delorean.repo
[stack@oscloud5 ~]$ sudo curl -L -o /etc/yum.repos.d/delorean-deps.repo https://trunk.rdoproject.org/centos7/delorean-deps.repo
[stack@oscloud5 ~]$ sudo yum install -y python-tripleoclient
[stack@oscloud5 ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
[stack@oscloud5 ~]$ cat << '__EOF__' > instackenv.json
{
  "nodes": [
    {
      "pm_type": "agent_ipmitool",
      "mac": [ "34:40:b5:b6:ea:bc" ],
      "cpu": "16",
      "memory": "1048576",
      "disk": "1000",
      "arch": "ppc64le",
      "pm_password": "update",
      "pm_addr": "9.114.118.51"
    },
    {
      "pm_type": "agent_ipmitool",
      "mac": [ "00:90:fa:74:05:53" ],
      "cpu": "16",
      "memory": "1048576",
      "disk": "1000",
      "arch": "ppc64le",
      "pm_password": "update",
      "pm_addr": "9.114.118.155"
    }
  ]
}
__EOF__
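Since a malformed instackenv.json only fails much later during node import, with an opaque error, it is worth sanity-checking the file right after writing it. A minimal sketch using plain Python (check_instackenv is a hypothetical helper, not part of TripleO; the required-key list mirrors the fields used above):

```shell
# Validate an instackenv.json: well-formed JSON, and every node carries the
# fields the import path needs (pm_addr, pm_password, mac).
check_instackenv() {
    python - "$1" << 'EOF'
import json, sys

with open(sys.argv[1]) as f:
    env = json.load(f)  # raises ValueError on malformed JSON

for node in env['nodes']:
    for key in ('pm_addr', 'pm_password', 'mac'):
        assert key in node, 'node missing %s' % key

print('%s: %d node(s) OK' % (sys.argv[1], len(env['nodes'])))
EOF
}

# e.g.: check_instackenv instackenv.json
```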
I transfer over the built overcloud images:
[hamzy@pkvmci853 ~]$ (OCB=$(dig @192.168.122.1 -4 +short Overcloud.virbr0); UC=9.114.118.98; ssh-keygen -f ~/.ssh/known_hosts -R ${UC}; ssh-keyscan ${UC} >> ~/.ssh/known_hosts; scp -3 hamzy@${OCB}:~/*{initrd,initramfs,kernel,vmlinuz,qcow2}* stack@${UC}:~/)
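The images are large, so an integrity check after the copy is cheap insurance. A sketch (verify_image is a hypothetical helper; compare against sha256sum output taken on the build host):

```shell
# Verify one transferred image against a checksum computed on the build host.
# Usage: verify_image <file> <expected-sha256>
verify_image() {
    local file=$1 expected=$2 actual
    actual=$(sha256sum "${file}" | awk '{print $1}')
    if [ "${actual}" = "${expected}" ]; then
        echo "${file}: OK"
    else
        echo "${file}: MISMATCH (got ${actual})"
        return 1
    fi
}

# e.g.: verify_image overcloud-full.qcow2 <sha256 printed on the build host>
```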
I then modify undercloud.conf as follows:
[stack@oscloud5 ~]$ cat << __EOF__ | patch -p0
--- undercloud.conf.orig	2017-08-25 12:04:54.935063830 +0000
+++ undercloud.conf	2017-08-25 12:05:17.561063576 +0000
@@ -17,21 +17,25 @@
 # defined by local_interface, with the netmask defined by the prefix
 # portion of the value. (string value)
 #local_ip = 192.168.24.1/24
+local_ip = 9.114.118.98/24
 
 # Network gateway for the Neutron-managed network for Overcloud
 # instances. This should match the local_ip above when using
 # masquerading. (string value)
 #network_gateway = 192.168.24.1
+network_gateway = 9.114.118.98
 
 # Virtual IP or DNS address to use for the public endpoints of
 # Undercloud services. Only used with SSL. (string value)
 # Deprecated group/name - [DEFAULT]/undercloud_public_vip
 #undercloud_public_host = 192.168.24.2
+undercloud_public_host = 9.114.118.98
 
 # Virtual IP or DNS address to use for the admin endpoints of
 # Undercloud services. Only used with SSL. (string value)
 # Deprecated group/name - [DEFAULT]/undercloud_admin_vip
 #undercloud_admin_host = 192.168.24.3
+undercloud_admin_host = 9.114.118.98
 
 # DNS nameserver(s) to use for the undercloud node. (list value)
 #undercloud_nameservers =
@@ -74,6 +78,7 @@
 # Network interface on the Undercloud that will be handling the PXE
 # boots and DHCP for Overcloud instances. (string value)
 #local_interface = eth1
+local_interface = eno4
 
 # MTU to use for the local_interface. (integer value)
 #local_mtu = 1500
@@ -82,18 +87,22 @@
 # instances. This should be the subnet used for PXE booting. (string
 # value)
 #network_cidr = 192.168.24.0/24
+network_cidr = 9.114.118.0/24
 
 # Network that will be masqueraded for external access, if required.
 # This should be the subnet used for PXE booting. (string value)
 #masquerade_network = 192.168.24.0/24
+masquerade_network = 9.114.118.0/24
 
 # Start of DHCP allocation range for PXE and DHCP of Overcloud
 # instances. (string value)
 #dhcp_start = 192.168.24.5
+dhcp_start = 9.114.118.245
 
 # End of DHCP allocation range for PXE and DHCP of Overcloud
 # instances. (string value)
 #dhcp_end = 192.168.24.24
+dhcp_end = 9.114.118.248
 
 # Path to hieradata override file. If set, the file will be copied
 # under /etc/puppet/hieradata and set as the first file in the hiera
@@ -112,12 +121,14 @@
 # doubt, use the default value. (string value)
 # Deprecated group/name - [DEFAULT]/discovery_interface
 #inspection_interface = br-ctlplane
+inspection_interface = br-ctlplane
 
 # Temporary IP range that will be given to nodes during the inspection
 # process. Should not overlap with the range defined by dhcp_start
 # and dhcp_end, but should be in the same network. (string value)
 # Deprecated group/name - [DEFAULT]/discovery_iprange
 #inspection_iprange = 192.168.24.100,192.168.24.120
+inspection_iprange = 9.114.118.249,9.114.118.250
 
 # Whether to enable extra hardware collection during the inspection
 # process. Requires python-hardware or python-hardware-detect package
__EOF__
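Because the sample file is mostly comments, it is easy to miss a hunk that did not apply. A small helper to show only the active settings (a sketch; effective_conf is not part of TripleO):

```shell
# Print only the non-comment, non-blank lines of a config file, i.e. the
# settings that are actually in effect.
effective_conf() {
    grep -Ev '^[[:space:]]*(#|$)' "$1"
}

# e.g.: effective_conf ~/undercloud.conf
```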
And install the undercloud:
[stack@oscloud5 ~]$ time openstack undercloud install 2>&1 | tee output.undercloud.install
...
Undercloud install complete.
...
There is a bug where a userid is required for machines using IPMI; it needs to be patched around.
[stack@oscloud5 ~]$ (cd /usr/lib/python2.7/site-packages/tripleo_common/utils/; cat << __EOF__ | sudo patch -p0)
--- nodes.py.orig	2017-08-24 15:54:07.614226329 +0000
+++ nodes.py	2017-08-24 15:54:29.699440619 +0000
@@ -105,7 +105,7 @@
         'pm_user': '%s_username' % prefix,
         'pm_password': '%s_password' % prefix,
     }
-    mandatory_fields = list(mapping)
+    mandatory_fields = ['pm_addr', 'pm_password']  # list(mapping)
 
     if has_port:
         mapping['pm_port'] = '%s_port' % prefix
__EOF__
[stack@undercloud ~]$ (for SERVICE in openstack-mistral-api.service openstack-mistral-engine.service openstack-mistral-executor.service; do sudo systemctl restart ${SERVICE}; done)
Ironic needs some different settings to be able to support agent_ipmitool for ppc64le:
[stack@oscloud5 ~]$ (cd /etc/ironic; cat << '__EOF__' | sudo patch -p0)
--- ironic.conf.orig	2017-09-11 18:35:37.847711251 +0000
+++ ironic.conf	2017-09-12 15:45:18.699206643 +0000
@@ -30,7 +30,7 @@
 # "ironic.drivers" entrypoint. An example may be found in the
 # developer documentation online. (list value)
 #enabled_drivers = pxe_ipmitool
-enabled_drivers=pxe_drac,pxe_ilo,pxe_ipmitool
+enabled_drivers=pxe_drac,pxe_ilo,pxe_ipmitool,agent_ipmitool
 
 # Specify the list of hardware types to load during service
 # initialization. Missing hardware types, or hardware types
@@ -1511,6 +1511,9 @@
 
 [glance]
 
+temp_url_endpoint_type = swift
+swift_temp_url_key = secretkey
+
 #
 # From ironic
 #
@@ -1644,6 +1647,7 @@
 # "endpoint_url/api_version/[account/]container/object_id"
 # (string value)
 #swift_account = <None>
+swift_account = AUTH_e1cd79d92e1649de805e91c062c65a17
 
 # The Swift API version to create a temporary URL for.
 # Defaults to "v1". Swift temporary URL format:
@@ -1657,6 +1661,7 @@
 # "endpoint_url/api_version/[account/]container/object_id"
 # (string value)
 #swift_container = glance
+swift_container = glance
 
 # The "endpoint" (scheme, hostname, optional port) for the
 # Swift URL of the form
@@ -1667,6 +1672,7 @@
 # will be appended. Required for temporary URLs. (string
 # value)
 #swift_endpoint_url = <None>
+swift_endpoint_url = http://9.114.118.98:8080
 
 # This should match a config by the same name in the Glance
 # configuration file. When set to 0, a single-tenant store
@@ -1706,6 +1712,7 @@
 # The secret token given to Swift to allow temporary URL
 # downloads. Required for temporary URLs. (string value)
 #swift_temp_url_key = <None>
+swift_temp_url_key = secretkey
 
 # Tenant ID (string value)
 #tenant_id = <None>
@@ -3512,6 +3519,7 @@
 # configuration per node architecture. For example:
 # aarch64:/opt/share/grubaa64_pxe_config.template (dict value)
 #pxe_config_template_by_arch =
+pxe_config_template_by_arch = ppc64le:$pybasedir/drivers/modules/pxe_config.template
 
 # IP address of ironic-conductor node's TFTP server. (string
 # value)
@@ -3551,10 +3559,12 @@
 # Bootfile DHCP parameter per node architecture. For example:
 # aarch64:grubaa64.efi (dict value)
 #pxe_bootfile_name_by_arch =
+pxe_bootfile_name_by_arch = ppc64le:config
 
 # Enable iPXE boot. (boolean value)
 #ipxe_enabled = false
-ipxe_enabled=True
+#ipxe_enabled=True
+ipxe_enabled = false
 
 # On ironic-conductor node, the path to the main iPXE script
 # file. (string value)
__EOF__
[stack@oscloud5 ~]$ source stackrc
(undercloud) [stack@oscloud5 ~]$ (AUTH=$(openstack object store account show -f shell | grep account | sed -r 's,^account="([^"]*)",\1,'); sudo sed -i 's,AUTH_e1cd79d92e1649de805e91c062c65a17,'${AUTH}',' /etc/ironic/ironic.conf)
(undercloud) [stack@oscloud5 ~]$ openstack object store account set --property Temp-URL-Key=secretkey
(undercloud) [stack@oscloud5 ~]$ for I in openstack-ironic-conductor.service openstack-ironic-inspector.service openstack-ironic-inspector-dnsmasq.service; do sudo systemctl restart ${I}; done
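After the restarts, "openstack baremetal driver list" shows whether agent_ipmitool was actually loaded. The config side can be double-checked with a small helper (a sketch; has_driver is hypothetical, and reading /etc/ironic/ironic.conf may need sudo):

```shell
# Check that a given driver appears in enabled_drivers.
# Usage: has_driver <driver> [path-to-ironic.conf]
has_driver() {
    local driver=$1 conf=${2:-/etc/ironic/ironic.conf}
    grep -E '^enabled_drivers' "${conf}" | grep -q "${driver}" \
        && echo "${driver}: enabled" \
        || { echo "${driver}: missing"; return 1; }
}

# e.g.: has_driver agent_ipmitool
```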
I then go through the process of installing the overcloud:
(undercloud) [stack@oscloud5 ~]$ time openstack overcloud image upload
...
(undercloud) [stack@oscloud5 ~]$ time openstack overcloud node import --provide instackenv.json 2>&1 | tee output.overcloud.node.import
...
There is a bug involving ironic, swift, and swift_temp_url_key: a glance container has to be created, and the image uploaded into it, by hand. There is another bug where the kernel_id and ramdisk_id cannot both be set in glance if a full-disk image is to be deployed.
(undercloud) [stack@oscloud5 ~]$ (if ! openstack container list -f value | grep --quiet glance; then openstack container create glance; fi)
(undercloud) [stack@oscloud5 ~]$ (FILE="overcloud-full.qcow2"; UUID=$(openstack image list -f value | grep 'overcloud-full ' | awk '{print $1;}'); openstack image delete ${UUID}; openstack image create --container-format bare --disk-format qcow2 --min-disk 0 --min-ram 0 --file ${FILE} --public overcloud-full)
(undercloud) [stack@oscloud5 ~]$ (export OS_AUTH_URL="http://9.114.118.98:5000/v3"; swift post -m "Temp-URL-Key:secretkey") # A (works?)
(undercloud) [stack@oscloud5 ~]$ (FILE="overcloud-full.qcow2"; export OS_AUTH_URL="http://9.114.118.98:5000/v3"; UUID=$(openstack image list -f value | grep 'overcloud-full ' | awk '{print $1;}'); echo "${FILE} in glance is ${UUID}"; cat ${FILE} | swift upload glance --object-name ${UUID} -) # B (fails?)
(undercloud) [stack@oscloud5 ~]$ (FILE="overcloud-full.qcow2"; UUID=$(openstack image list -f value | grep 'overcloud-full ' | awk '{print $1;}'); echo "${FILE} in glance is ${UUID}"; if ! openstack object list glance -f value | grep --quiet ${UUID}; then echo "Uploading to swift..."; openstack object create --name ${UUID} glance ${FILE}; fi)
(undercloud) [stack@oscloud5 ~]$ openstack overcloud profiles list
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| f42a71b9-a8fd-4bae-aeab-eac3b2ddb211 |           | available       | None            |                   |
| 2a66ce3b-1bbc-4142-b9c2-8c2346748c7e |           | available       | None            |                   |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
(undercloud) [stack@oscloud5 ~]$ (COMPUTE=""; CONTROL=""; while IFS=$' ' read -r -a PROFILES; do if [ -z "${COMPUTE}" ]; then COMPUTE=${PROFILES[0]}; ironic node-update ${COMPUTE} replace properties/capabilities=profile:compute,boot_option:local; continue; fi; if [ -z "${CONTROL}" ]; then CONTROL=${PROFILES[0]}; ironic node-update ${CONTROL} replace properties/capabilities=profile:control,boot_option:local; continue; fi; done < <(openstack overcloud profiles list -f value))
(undercloud) [stack@oscloud5 ~]$ openstack overcloud profiles list
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| f42a71b9-a8fd-4bae-aeab-eac3b2ddb211 |           | available       | compute         |                   |
| 2a66ce3b-1bbc-4142-b9c2-8c2346748c7e |           | available       | control         |                   |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
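The one-liner above is dense; unrolled, the logic is just "first node gets the compute profile, second gets control". A standalone sketch that echoes the ironic commands instead of running them, so it can be dry-run:

```shell
# Tag the first node UUID on stdin as compute and the second as control.
# This only prints the ironic commands; pipe the output through "sh" to run.
assign_profiles() {
    local COMPUTE="" CONTROL=""
    local UUID REST
    while read -r UUID REST; do
        if [ -z "${COMPUTE}" ]; then
            COMPUTE=${UUID}
            echo "ironic node-update ${COMPUTE} replace properties/capabilities=profile:compute,boot_option:local"
        elif [ -z "${CONTROL}" ]; then
            CONTROL=${UUID}
            echo "ironic node-update ${CONTROL} replace properties/capabilities=profile:control,boot_option:local"
        fi
    done
}

# e.g.: openstack overcloud profiles list -f value | assign_profiles
```

The ironic client spelling was current at the time; on later releases the same update can be written as "openstack baremetal node set <uuid> --property capabilities=profile:compute,boot_option:local".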
And now do the deploy:
(undercloud) [stack@oscloud5 ~]$ time openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml --control-scale 1 --compute-scale 1 --control-flavor control --compute-flavor compute 2>&1 | tee output.overcloud.deploy
...
2017-09-18 23:33:06Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_COMPLETE  state changed
2017-09-18 23:43:08Z [overcloud.ComputeAllNodesValidationDeployment.0]: SIGNAL_IN_PROGRESS  Signal: deployment 70e82773-5598-4aaa-951e-d82faa7d7550 failed (1)
2017-09-18 23:43:09Z [overcloud.ComputeAllNodesValidationDeployment.0]: CREATE_FAILED  Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 1
2017-09-18 23:43:09Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_FAILED  Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 1
2017-09-18 23:43:09Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_FAILED  Error: resources.ComputeAllNodesValidationDeployment.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2017-09-18 23:43:09Z [overcloud]: CREATE_FAILED  Resource CREATE failed: Error: resources.ComputeAllNodesValidationDeployment.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1

Stack overcloud CREATE_FAILED

overcloud.ComputeAllNodesValidationDeployment.0:
  resource_type: OS::Heat::StructuredDeployment
  physical_resource_id: 70e82773-5598-4aaa-951e-d82faa7d7550
  status: CREATE_FAILED
  status_reason: |
    Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 1
  deploy_stdout: |
    ...
    Ping to 172.16.0.18 failed. Retrying...
    Ping to 172.16.0.18 failed. Retrying...
    Ping to 172.16.0.18 failed. Retrying...
    Ping to 172.16.0.18 failed. Retrying...
    Heat Stack create failed.
    Heat Stack create failed.
    Ping to 172.16.0.18 failed. Retrying...
    Ping to 172.16.0.18 failed. Retrying...
    Ping to 172.16.0.18 failed. Retrying...
    Ping to 172.16.0.18 failed. Retrying...
    Ping to 172.16.0.18 failed. Retrying...
    FAILURE
    (truncated, view all with --long)
  deploy_stderr: |
    172.16.0.18 is not pingable. Local Network: 172.16.0.0/24
(undercloud) [stack@oscloud5 ~]$ openstack stack resource list -f value overcloud | grep CREATE_FAILED
ComputeAllNodesValidationDeployment 6a821729-41e6-4ee6-9699-e83b15a9c004 OS::Heat::StructuredDeployments CREATE_FAILED 2017-09-18T23:05:45Z
(undercloud) [stack@oscloud5 ~]$ openstack stack resource show overcloud ComputeAllNodesValidationDeployment -f shell
attributes="{u'deploy_stderrs': None, u'deploy_stdouts': None, u'deploy_status_codes': None}"
creation_time="2017-09-18T23:05:45Z"
description=""
links="[{u'href': u'http://9.114.118.98:8004/v1/82d04ccc57604a159ee4cffa1ceaa8d7/stacks/overcloud/41d35f51-6b0a-4214-8aae-7d822e321a7d/resources/ComputeAllNodesValidationDeployment', u'rel': u'self'}, {u'href': u'http://9.114.118.98:8004/v1/82d04ccc57604a159ee4cffa1ceaa8d7/stacks/overcloud/41d35f51-6b0a-4214-8aae-7d822e321a7d', u'rel': u'stack'}, {u'href': u'http://9.114.118.98:8004/v1/82d04ccc57604a159ee4cffa1ceaa8d7/stacks/overcloud-ComputeAllNodesValidationDeployment-dtq7w7f2qlci/6a821729-41e6-4ee6-9699-e83b15a9c004', u'rel': u'nested'}]"
logical_resource_id="ComputeAllNodesValidationDeployment"
physical_resource_id="6a821729-41e6-4ee6-9699-e83b15a9c004"
required_by="[u'AllNodesExtraConfig']"
resource_name="ComputeAllNodesValidationDeployment"
resource_status="CREATE_FAILED"
resource_status_reason="Error: resources.ComputeAllNodesValidationDeployment.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1"
resource_type="OS::Heat::StructuredDeployments"
updated_time="2017-09-18T23:05:45Z"
(undercloud) [stack@oscloud5 ~]$ openstack baremetal node list -f value
835d9888-c05d-4b8b-a8b4-b3c09cd50a4e None 771d8913-f979-45c4-81ca-b14143ef4653 power on active False
8f1d29d2-e855-4c7e-b926-32a3d73e907a None abc10ccd-13b2-4440-b385-e9342e680972 power on active False
(undercloud) [stack@oscloud5 ~]$ openstack server list -f value
abc10ccd-13b2-4440-b385-e9342e680972 overcloud-controller-0 ACTIVE ctlplane=9.114.118.247 overcloud-full control
771d8913-f979-45c4-81ca-b14143ef4653 overcloud-novacompute-0 ACTIVE ctlplane=9.114.118.248 overcloud-full compute
(undercloud) [stack@oscloud5 ~]$ openstack subnet list
+--------------------------------------+---------------------+--------------------------------------+----------------+
| ID                                   | Name                | Network                              | Subnet         |
+--------------------------------------+---------------------+--------------------------------------+----------------+
| 02e2adf9-cfe9-4452-bbaf-5ee09226d3dd | tenant_subnet       | 6a0be84a-26af-4383-a65a-fe14d97d941a | 172.16.0.0/24  |
| 10413389-bbf3-4fdd-b0de-f6785b9feba1 | ctlplane-subnet     | cc7f9bc7-fb14-49d1-9616-7cfbb952167e | 9.114.118.0/24 |
| 5c8a5786-f5e0-4b8f-b3f6-e05986e65322 | storage_subnet      | 07e9d6e5-f3f6-4c79-983c-6a5a9ef2cfc5 | 172.18.0.0/24  |
| 8ccfc932-6ea6-490a-8622-b17df00d4216 | internal_api_subnet | 2fcb7275-82a7-4250-b793-5c96169c6889 | 172.17.0.0/24  |
| a29a82ad-d56a-4ec3-a277-c76289162db5 | external_subnet     | 8a64269e-710c-4541-864a-24f9231a48ca | 10.0.0.0/24    |
| dfd98786-9c19-4ea4-8ae9-abd08f87fcf9 | storage_mgmt_subnet | cb797f88-5778-4b3a-9b73-191488721ff7 | 172.19.0.0/24  |
+--------------------------------------+---------------------+--------------------------------------+----------------+
(undercloud) [stack@oscloud5 ~]$ sudo cat /var/log/heat/heat-engine.log
...
2017-09-18 20:56:52.509 25230 INFO heat.engine.resource [req-bffba8de-f530-4968-84e0-f8ca12537e72 - admin - default default] CREATE: ServerUpdateAllowed "NovaCompute" [480cabc4-e375-4b0d-95cc-2cf9f8b3e339] Stack "overcloud-Compute-ugqpl4cckxfw-0-fj7r7oj5ffr7" [9f6a369e-5227-4a02-a5e7-88f3fb429039]
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource Traceback (most recent call last):
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 831, in _action_recorder
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource     yield
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 939, in _do_action
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource     yield self.action_handler_task(action, args=handler_args)
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/scheduler.py", line 351, in wrapper
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource     step = next(subtask)
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 890, in action_handler_task
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource     done = check(handler_data)
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resources/openstack/nova/server.py", line 869, in check_create_complete
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource     check = self.client_plugin()._check_active(server_id)
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/clients/os/nova.py", line 238, in _check_active
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource     'code': fault.get('code', _('Unknown'))
2017-09-18 20:56:52.509 25230 ERROR heat.engine.resource ResourceInError: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
...
(undercloud) [stack@oscloud5 ~]$ (IP=$(openstack server list --name overcloud-novacompute-0 -f value | sed -rn -e 's,^.*ctlplane=([^ ]*).*$,\1,p'); ssh-keygen -f ~/.ssh/known_hosts -R ${IP}; ssh-keyscan ${IP} >> ~/.ssh/known_hosts; ssh -t heat-admin@${IP})
[heat-admin@overcloud-novacompute-0 ~]$ sudo grep 172.16.0.18 /etc/hosts
172.16.0.18 overcloud-controller-0.tenant.localdomain overcloud-controller-0.tenant
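For reference, the failing validation amounts to a ping-with-retries from each node to the other nodes' tenant IPs. Something equivalent can be run by hand from overcloud-novacompute-0 while debugging (a sketch mimicking the output above, not the actual validation script):

```shell
# Ping an address up to N times, printing retry messages like the
# AllNodesValidation output. Usage: ping_retry <ip> [tries]
ping_retry() {
    local ip=$1 tries=${2:-10} i
    for i in $(seq 1 "${tries}"); do
        if ping -c 1 -W 2 "${ip}" > /dev/null 2>&1; then
            echo "${ip} is pingable"
            return 0
        fi
        echo "Ping to ${ip} failed. Retrying..."
    done
    echo "${ip} is not pingable."
    return 1
}

# e.g. on the compute node: ping_retry 172.16.0.18 5
```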