Fedora 19 is the first release in which OpenShift Origin is available as a Fedora feature.
This page shows how to set up OpenShift Origin on Fedora 19 using the packages in Fedora, as opposed to the packages published upstream. These steps are written out to be done by hand. Yes, they could be scripted and/or puppetized, but they are written out so that people can see each step and fine-tune it.
Goal: By the end of this, you should have two machines: a broker machine and one node machine. You should be able to create applications, which will be placed on the node machine, check the status of those applications, and point your web browser at their URLs.
Note: There is no web console in Fedora 19. That will be in Fedora 20.
These instructions were created mostly from the following two sources.
- https://www.openshift.com/wiki/build-your-own
- https://www.openshift.com/forums/openshift/fedora-18-openshift-origin-setup-steps-and-testing
Initial Setup of Broker and Node Machines
ON BOTH BROKER AND NODE
# Start with a Fedora 19 minimal install
yum -y update
# avoid clock skew
yum -y install ntp
/bin/systemctl enable ntpd.service
/bin/systemctl start ntpd.service
ON BROKER
export DOMAIN="example.com"
export BROKERIP="$(nm-tool | grep Address | grep -v HW | awk '{print $2}')"
export BROKERNAME="broker.example.com"
export NODEIP="--- IP Address from Node machine ---"
export NODENAME="node.example.com"
# Here is the IP Address from Broker machine
nm-tool | grep Address | grep -v HW | awk '{print $2}'
ON NODE
export DOMAIN="example.com"
export BROKERIP="--- IP Address from Broker machine ---"
export BROKERNAME="broker.example.com"
export NODEIP="$(nm-tool | grep Address | grep -v HW | awk '{print $2}')"
export NODENAME="node.example.com"
# Here is the IP Address from Node machine
nm-tool | grep Address | grep -v HW | awk '{print $2}'
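To make the `nm-tool` pipeline above less mysterious, here is the same grep/awk chain run against sample output. This is only an illustration: the sample text is an assumption of what `nm-tool` prints on Fedora 19, not captured output.

```shell
# Sample of what "nm-tool" output looks like (fabricated for illustration).
sample_output='  IPv4 Settings:
    Address:         192.168.122.220
    Prefix:          24 (255.255.255.0)
  HW Address:        52:54:00:AA:BB:CC'

# "grep Address" keeps both the IP and HW lines; "grep -v HW" drops the
# hardware (MAC) line; awk then prints the second field, the dotted quad.
ip_addr="$(printf '%s\n' "$sample_output" | grep Address | grep -v HW | awk '{print $2}')"
echo "$ip_addr"
```

On systems without `nm-tool`, `ip -4 addr show` provides the same information, though the grep/awk would need adjusting to its output format.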
Setup and Configure Broker
Broker: Bind DNS
yum -y install bind bind-utils
KEYFILE=/var/named/${DOMAIN}.key
cd /var/named/
dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom ${DOMAIN}
KEY="$(grep Key: K${DOMAIN}*.private | cut -d ' ' -f 2)"
cd -
rndc-confgen -a -r /dev/urandom
echo $KEY
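The `grep Key: | cut` step above pulls the TSIG secret out of the `.private` file that dnssec-keygen writes. A minimal sketch against a fabricated sample file (the key material below is not a real secret, and the filename only imitates the `K<domain>.+157+NNNNN.private` pattern):

```shell
# Create a scratch copy of a dnssec-keygen .private file (sample content).
tmpdir="$(mktemp -d)"
cat > "$tmpdir/Kexample.com.+157+12345.private" <<'EOF'
Private-key-format: v1.3
Algorithm: 157 (HMAC_MD5)
Key: c2VjcmV0LXNhbXBsZS1rZXk=
EOF

# Same extraction as in the instructions: grab the "Key:" line, take the
# second space-separated field (the base64 secret).
KEY="$(grep Key: "$tmpdir"/Kexample.com.*.private | cut -d ' ' -f 2)"
echo "$KEY"
rm -rf "$tmpdir"
```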
restorecon -v /etc/rndc.* /etc/named.*
chown -v root:named /etc/rndc.key
chmod -v 640 /etc/rndc.key
echo "forwarders { 8.8.8.8; 8.8.4.4; } ;" >> /var/named/forwarders.conf
restorecon -v /var/named/forwarders.conf
chmod -v 755 /var/named/forwarders.conf
rm -rvf /var/named/dynamic
mkdir -vp /var/named/dynamic
cat <<EOF > /var/named/dynamic/${DOMAIN}.db
\$ORIGIN .
\$TTL 1	; 1 seconds (for testing only)
${DOMAIN}		IN SOA	ns1.${DOMAIN}. hostmaster.${DOMAIN}. (
				2011112904 ; serial
				60         ; refresh (1 minute)
				15         ; retry (15 seconds)
				1800       ; expire (30 minutes)
				10         ; minimum (10 seconds)
				)
			NS	ns1.${DOMAIN}.
			MX	10 mail.${DOMAIN}.
\$ORIGIN ${DOMAIN}.
ns1			A	127.0.0.1
EOF
cat <<EOF > /var/named/${DOMAIN}.key
key ${DOMAIN} {
  algorithm HMAC-MD5;
  secret "${KEY}";
};
EOF
cat /var/named/dynamic/${DOMAIN}.db
cat /var/named/${DOMAIN}.key
chown -Rv named:named /var/named
restorecon -rv /var/named
mv /etc/named.conf /etc/named.conf.openshift
cat <<EOF > /etc/named.conf
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
	listen-on port 53 { any; };
	directory "/var/named";
	dump-file "/var/named/data/cache_dump.db";
	statistics-file "/var/named/data/named_stats.txt";
	memstatistics-file "/var/named/data/named_mem_stats.txt";
	allow-query { any; };
	recursion yes;

	/* Path to ISC DLV key */
	bindkeys-file "/etc/named.iscdlv.key";

	// set forwarding to the next nearest server (from DHCP response)
	forward only;
	include "forwarders.conf";
};
logging {
	channel default_debug {
		file "data/named.run";
		severity dynamic;
	};
};
// use the default rndc key
include "/etc/rndc.key";
controls {
	inet 127.0.0.1 port 953
	allow { 127.0.0.1; } keys { "rndc-key"; };
};
include "/etc/named.rfc1912.zones";
include "${DOMAIN}.key";
zone "${DOMAIN}" IN {
	type master;
	file "dynamic/${DOMAIN}.db";
	allow-update { key ${DOMAIN} ; } ;
};
EOF
cat /etc/named.conf
chown -v root:named /etc/named.conf
restorecon /etc/named.conf
firewall-cmd --add-service=dns
firewall-cmd --permanent --add-service=dns
firewall-cmd --list-all
/bin/systemctl enable named.service
/bin/systemctl start named.service
nsupdate -k ${KEYFILE}
> server 127.0.0.1
> update delete broker.example.com A
> update add **your broker full name** 180 A **your broker ip address**
  (example: update add broker.example.com 180 A 192.168.122.220)
> send
> quit
ping broker.example.com
dig @127.0.0.1 broker.example.com
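The interactive nsupdate session above can also be driven non-interactively by feeding the same commands on stdin. A sketch that builds the command file; the BROKERNAME/BROKERIP values here are the sample values used elsewhere on this page (in real use the variables exported earlier already hold them):

```shell
# Assumed sample values; on the broker these are already exported.
BROKERNAME="broker.example.com"
BROKERIP="192.168.122.220"

# Write the nsupdate command script (unquoted EOF so variables expand).
cmdfile="$(mktemp)"
cat > "$cmdfile" <<EOF
server 127.0.0.1
update delete ${BROKERNAME} A
update add ${BROKERNAME} 180 A ${BROKERIP}
send
quit
EOF
cat "$cmdfile"

# Against the running named you would then run:
#   nsupdate -k ${KEYFILE} < "$cmdfile"
```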
Broker: DHCP client and hostname
echo "prepend domain-name-servers **your broker ip address**;" >> /etc/dhcp/dhclient-eth0.conf
echo "supersede host-name \"broker\";" >> /etc/dhcp/dhclient-eth0.conf
echo "supersede domain-name \"example.com\";" >> /etc/dhcp/dhclient-eth0.conf
Broker: hostname
echo "broker.example.com" > /etc/hostname
Broker: MongoDB
yum -y install mongodb-server
vi /etc/mongodb.conf
# Uncomment auth = true
# Add smallfiles = true
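The two edits above can also be scripted. A sketch shown on a scratch copy so the effect is visible; on the broker the target would be /etc/mongodb.conf itself (the sample contents below assume the stock file ships with `auth` commented out, which matches the instruction to uncomment it):

```shell
# Scratch copy standing in for /etc/mongodb.conf (fabricated sample).
mongoconf="$(mktemp)"
cat > "$mongoconf" <<'EOF'
bind_ip = 127.0.0.1
#auth = true
EOF

# Uncomment "auth = true"; append "smallfiles = true" if not present.
sed -i -e 's/^#auth = true/auth = true/' "$mongoconf"
grep -q '^smallfiles = true' "$mongoconf" || echo 'smallfiles = true' >> "$mongoconf"
cat "$mongoconf"
```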
/usr/bin/systemctl enable mongod.service
/usr/bin/systemctl status mongod.service
/usr/bin/systemctl start mongod.service
/usr/bin/systemctl status mongod.service
# Testing
mongo
> show dbs
> exit
Broker: QPID
ActiveMQ on Fedora 19 isn't ready for OpenShift production use. When it is, we'll use that. For now, we'll use QPID with MCollective.
yum install mcollective-qpid-plugin qpid-cpp-server
firewall-cmd --add-port=5672/tcp
firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --list-all
/usr/bin/systemctl enable qpidd.service
/usr/bin/systemctl start qpidd.service
/usr/bin/systemctl status qpidd.service
Broker: MCollective client (using QPID)
yum -y install mcollective-client
mv /etc/mcollective/client.cfg /etc/mcollective/client.cfg.orig
cat <<EOF > /etc/mcollective/client.cfg
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
loglevel = debug
logfile = /var/log/mcollective-client.log

# Plugins
securityprovider = psk
plugin.psk = unset
connector = qpid
plugin.qpid.host=broker.example.com
plugin.qpid.secure=false
plugin.qpid.timeout=5

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
EOF
Broker: broker application
yum -y install openshift-origin-broker openshift-origin-broker-util rubygem-openshift-origin-auth-remote-user rubygem-openshift-origin-msg-broker-mcollective rubygem-openshift-origin-dns-bind
sed -i -e "s/ServerName .*$/ServerName broker.example.com/" /etc/httpd/conf.d/000002_openshift_origin_broker_servername.conf
cat /etc/httpd/conf.d/000002_openshift_origin_broker_servername.conf
/usr/bin/systemctl enable httpd.service
/usr/bin/systemctl enable ntpd.service
/usr/bin/systemctl enable sshd.service
firewall-cmd --add-service=ssh
firewall-cmd --add-service=http
firewall-cmd --add-service=https
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --list-all
openssl genrsa -out /etc/openshift/server_priv.pem 2048
openssl rsa -in /etc/openshift/server_priv.pem -pubout > /etc/openshift/server_pub.pem
ssh-keygen -t rsa -b 2048 -f ~/.ssh/rsync_id_rsa
cp -v ~/.ssh/rsync_id_rsa* /etc/openshift/
setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_run_stickshift=on named_write_master_zones=on
fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore
restorecon -rv /var/run
restorecon -rv /usr/share/gems/gems/passenger-*
vi /etc/openshift/broker.conf
# Might not have to do anything, but make sure you have the following lines
CLOUD_DOMAIN="example.com"
VALID_GEAR_SIZES="small,medium"
Broker: broker plugins and MongoDB user accounts
cp /usr/share/gems/gems/openshift-origin-auth-remote-user-*/conf/openshift-origin-auth-remote-user.conf.example /etc/openshift/plugins.d/openshift-origin-auth-remote-user.conf
cp /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf.example /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf
cd /var/named/
KEY="$(grep Key: K${DOMAIN}*.private | cut -d ' ' -f 2)"
cat $KEYFILE
echo $KEY
cat <<EOF > /etc/openshift/plugins.d/openshift-origin-dns-bind.conf
BIND_SERVER="127.0.0.1"
BIND_PORT=53
BIND_KEYNAME="${DOMAIN}"
BIND_KEYVALUE="${KEY}"
BIND_ZONE="${DOMAIN}"
EOF
cp -v /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user-basic.conf.sample /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf
htpasswd -c -b -s /etc/openshift/htpasswd demo demo
# Don't forget your password. <demo password>
cat /etc/openshift/htpasswd
grep MONGO /etc/openshift/broker.conf
mongo openshift_broker_dev --eval 'db.addUser("openshift", "mooo")'
# If you are going to change the username and/or password, change broker.conf
yum -y install rubygem-psych
cd /var/www/openshift/broker
gem install mongoid
bundle --local
/usr/bin/systemctl enable openshift-broker.service
/usr/bin/systemctl start httpd.service
/usr/bin/systemctl start openshift-broker.service
/usr/bin/systemctl status openshift-broker.service
curl -k -u demo:demo https://localhost/broker/rest/api
Setup and Configure Node
Node: Initial setup/configure
ON BROKER
KEYFILE=/var/named/${DOMAIN}.key
oo-register-dns -h node -d ${DOMAIN} -n ${NODEIP} -k ${KEYFILE}
scp /etc/openshift/rsync_id_rsa.pub root@${NODENAME}:/root/.ssh/
ON NODE
cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
rm -f /root/.ssh/rsync_id_rsa.pub
ON BROKER
ssh -i /root/.ssh/rsync_id_rsa root@${NODENAME}
exit
Node: DHCP client
echo "prepend domain-name-servers **your broker ip address**;" >> /etc/dhcp/dhclient-eth0.conf
echo "supersede host-name \"node\";" >> /etc/dhcp/dhclient-eth0.conf
echo "supersede domain-name \"example.com\";" >> /etc/dhcp/dhclient-eth0.conf
Node: hostname
echo "node.example.com" > /etc/hostname
Node: MCollective
ON NODE
yum -y install openshift-origin-msg-node-mcollective
mv /etc/mcollective/server.cfg /etc/mcollective/server.cfg.orig
cat <<EOF > /etc/mcollective/server.cfg
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = debug
daemonize = 1
direct_addressing = n

# Plugins
securityprovider = psk
plugin.psk = unset
connector = qpid
plugin.qpid.host=${BROKERNAME}
plugin.qpid.secure=false
plugin.qpid.timeout=5

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
EOF
/bin/systemctl enable mcollective.service
/bin/systemctl start mcollective.service
ON BROKER
mco ping
Node: node application
yum -y install rubygem-openshift-origin-node rubygem-passenger-native openshift-origin-port-proxy openshift-origin-node-util
yum -y install openshift-origin-cartridge-cron-1.4 openshift-origin-cartridge-diy-0.1
firewall-cmd --add-service=ssh
firewall-cmd --add-service=http
firewall-cmd --add-service=https
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --list-all
Node: PAM namespace module, cgroups, and user quotas
ON NODE
- PAM
sed -i -e 's|pam_selinux|pam_openshift|g' /etc/pam.d/sshd
for f in "runuser" "runuser-l" "sshd" "su" "system-auth-ac"
do
  t="/etc/pam.d/$f"
  if ! grep -q "pam_namespace.so" "$t"
  then
    echo -e "session\t\trequired\tpam_namespace.so no_unmount_on_close" >> "$t"
  fi
done
- CGROUPS
The cgroup instructions below still need to be fixed up; they are left commented out for now.
#echo "mount {" >> /etc/cgconfig.conf
#echo "  cpu = /cgroup/all;" >> /etc/cgconfig.conf
#echo "  cpuacct = /cgroup/all;" >> /etc/cgconfig.conf
#echo "  memory = /cgroup/all;" >> /etc/cgconfig.conf
#echo "  freezer = /cgroup/all;" >> /etc/cgconfig.conf
#echo "  net_cls = /cgroup/all;" >> /etc/cgconfig.conf
#echo "}" >> /etc/cgconfig.conf
#restorecon -v /etc/cgconfig.conf
#mkdir /cgroup
#restorecon -RFvv /cgroup
/bin/systemctl enable cgconfig.service
/bin/systemctl enable cgred.service
/usr/sbin/chkconfig openshift-cgroups on
/bin/systemctl restart cgconfig.service
/bin/systemctl restart cgred.service
/usr/sbin/service openshift-cgroups restart
- DISK QUOTA
# Edit fstab and add usrquota to whichever filesystem
# has /var/lib/openshift on it
UUID=b9e21eae-4b8c-4936-9f5d-d10631ff535e / ext4 defaults,usrquota 1 1
# reboot or remount
mount -o remount /
quotacheck -cmug /
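The fstab edit can be scripted instead of done by hand. A sketch on a scratch copy, assuming the root filesystem entry uses plain "defaults" options (on the node the target would be /etc/fstab, and you should verify the result before rebooting):

```shell
# Scratch copy of the fstab line from the instructions above.
fstab="$(mktemp)"
echo 'UUID=b9e21eae-4b8c-4936-9f5d-d10631ff535e / ext4 defaults 1 1' > "$fstab"

# On lines whose mount point is "/", append usrquota to the options.
# \@ / @ is a sed address using @ as the delimiter, matching " / ".
sed -i -e '\@ / @s/defaults/defaults,usrquota/' "$fstab"
cat "$fstab"
```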
Node: SELinux and System Control
ON NODE
- SELINUX
setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_read_user_content=on httpd_enable_homedirs=on httpd_run_stickshift=on allow_polyinstantiation=on
restorecon -rv /var/run
restorecon -rv /usr/sbin/mcollectived /var/log/mcollective.log /var/run/mcollectived.pid
restorecon -rv /var/lib/openshift /etc/openshift/node.conf /etc/httpd/conf.d/openshift
- SYSTEM CONTROL SETTINGS
echo "# Added for OpenShift" >> /etc/sysctl.d/openshift.conf
echo "kernel.sem = 250 32000 32 4096" >> /etc/sysctl.d/openshift.conf
echo "net.ipv4.ip_local_port_range = 15000 35530" >> /etc/sysctl.d/openshift.conf
echo "net.netfilter.nf_conntrack_max = 1048576" >> /etc/sysctl.d/openshift.conf
sysctl -p /etc/sysctl.d/openshift.conf
Node: SSH, Port Proxy, and Node application
ON NODE
- SSH
vi /etc/ssh/sshd_config
> AcceptEnv GIT_SSH
perl -p -i -e "s/^#MaxSessions .*$/MaxSessions 40/" /etc/ssh/sshd_config
perl -p -i -e "s/^#MaxStartups .*$/MaxStartups 40/" /etc/ssh/sshd_config
/bin/systemctl restart sshd.service
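The vi step above (adding `AcceptEnv GIT_SSH`) can also be scripted, and the whole edit made idempotent. A sketch on a scratch copy; the sample contents assume the stock sshd_config ships with MaxSessions and MaxStartups commented out, matching the perl substitutions above:

```shell
# Scratch copy standing in for /etc/ssh/sshd_config (fabricated sample).
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
AcceptEnv LANG LC_ALL
#MaxSessions 10
#MaxStartups 10:30:100
EOF

# Append AcceptEnv GIT_SSH only if it is not already there, then
# uncomment-and-set the two limits (sed here, equivalent to the perl -p -i).
grep -q '^AcceptEnv GIT_SSH' "$cfg" || echo 'AcceptEnv GIT_SSH' >> "$cfg"
sed -i -e 's/^#MaxSessions .*$/MaxSessions 40/' \
       -e 's/^#MaxStartups .*$/MaxStartups 40/' "$cfg"
cat "$cfg"
```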
- PORT PROXY
firewall-cmd --add-port=35531-65535/tcp
firewall-cmd --permanent --add-port=35531-65535/tcp
firewall-cmd --list-all
/bin/systemctl enable openshift-port-proxy.service
/bin/systemctl restart openshift-port-proxy.service
- NODE SETUP
/bin/systemctl enable openshift-gears.service
vi /etc/openshift/node.conf
> PUBLIC_HOSTNAME="node.example.com"
> PUBLIC_IP="192.168.122.161"      # Node IP Address
> BROKER_HOST="192.168.122.220"    # Broker IP Address
> CLOUD_DOMAIN="example.com"
/etc/cron.minutely/openshift-facts
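The node.conf edits can be scripted with sed instead of vi. A sketch on a scratch copy, assuming the stock /etc/openshift/node.conf already defines these keys so sed only has to replace the values (the placeholder values below are fabricated, and the IPs are the sample addresses used on this page):

```shell
# Scratch copy standing in for /etc/openshift/node.conf (sample contents).
nodeconf="$(mktemp)"
cat > "$nodeconf" <<'EOF'
PUBLIC_HOSTNAME="please.set.me"
PUBLIC_IP="127.0.0.1"
BROKER_HOST="localhost"
CLOUD_DOMAIN="example.com"
EOF

# Replace each value in place; one -e expression per key.
sed -i -e 's/^PUBLIC_HOSTNAME=.*/PUBLIC_HOSTNAME="node.example.com"/' \
       -e 's/^PUBLIC_IP=.*/PUBLIC_IP="192.168.122.161"/' \
       -e 's/^BROKER_HOST=.*/BROKER_HOST="192.168.122.220"/' "$nodeconf"
cat "$nodeconf"
```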
Node: Reboot
reboot
Test
ON BROKER (after node is back up)
mco ping
curl -k -u demo:demo https://localhost/broker/rest/api
yum -y install rubygem-rhc
LIBRA_SERVER=broker.example.com rhc setup