We have many TurboGears applications deployed in our infrastructure. This SOP and the HAProxy SOP explain how TurboGears apps are deployed.
Contact Information
Owner: Fedora Infrastructure Team
Contact: #fedora-admin
Persons: sysadmin-web, sysadmin-main
Location: Phoenix
Servers: bapp1, app1, app2, app3, app4, app5, puppet1
Purpose: Provide In-House Web Applications for our users
Deploying a new App
These instructions will help you set up a load-balanced TurboGears application that runs at a URL of the form:
https://admin.fedoraproject.org/myapp
Configuration of the new application is done on puppet1. If you need to drop rpms of the application into the Fedora Infrastructure repository (because they are not available in Fedora), that also presently occurs on puppet1.
Add RPMs to the Fedora Infrastructure Repo
This part may require assistance from somebody with access to the Infrastructure repo.
1. Copy the rpms to puppet1
2. Sign the rpms with the Fedora Infrastructure Key:
rpm --addsign foo-1.0-1.el5.*.rpm
3. Copy the rpms to the repo directory
mv foo-1.0-1.el5.src.rpm /mnt/fedora/app/fi-repo/el/5/SRPMS/
mv foo-1.0-1.el5.x86_64.rpm /mnt/fedora/app/fi-repo/el/5/x86_64/
4. Run createrepo to regenerate the repo metadata
cd /mnt/fedora/app/fi-repo/el/5/SRPMS/
sudo createrepo --update .
cd /mnt/fedora/app/fi-repo/el/5/x86_64/
sudo createrepo --update .
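If you want a quick sanity check before moving on, something like the following sketch confirms the signature and that the updated repo is visible from an app server. The repo id "infrastructure" is an assumption here; substitute whatever id the fi-repo is configured under on the app servers.

# verify the signature on the rpms (run on puppet1)
rpm --checksig /mnt/fedora/app/fi-repo/el/5/x86_64/foo-1.0-1.el5.x86_64.rpm

# on an app server, confirm the new package shows up in the repo;
# the repo id "infrastructure" is an assumption
sudo yum clean metadata
yum --disablerepo='*' --enablerepo=infrastructure list available foo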
Configure the application
First log into puppet1 and check out the repository our configs are stored in:
git clone /git/puppet
Create a module
1. cd ~/puppet/modules; mkdir -p myapp/{files,manifests,templates}
2. Create a file named manifests/init.pp with something similar to the following:
class myapp::app {
    include httpd::app

    package { "mypackage":
        ensure => installed,
    }

    file { "/etc/myapp/myapp.cfg":
        owner   => "root",
        group   => "root",
        mode    => 0600,
        content => template("myapp/myapp-prod.cfg.erb"),
        notify  => Service["httpd"],
        require => Package["mypackage"],
    }

    # ... and similar setup for all files needed

    file { "/etc/httpd/conf.d/myapp.conf":
        owner   => "root",
        group   => "root",
        mode    => 0644,
        source  => "puppet:///myapp/myapp-app.conf",
        notify  => Service["httpd"],
        require => Package["httpd"],
    }
}
This defines a server class that we'll add to the app servers. The package definition uses the name of your application's rpm package to install it from a yum repo and pull in the required dependencies. If you are developing and building the application yourself and have control over when new releases reach the yum repo, set ensure => latest to automatically get the latest version; otherwise set ensure => present so we can vet new releases before installing them on the servers.
3. Continue editing manifests/init.pp and add something like the following:
define myapp::proxy( $website, $path, $proxyurl ) {
    include httpd::proxy

    file { "/etc/httpd/conf.d/$website/myapp.conf":
        owner   => "root",
        group   => "root",
        mode    => 0644,
        content => template("myapp/myapp-proxy.conf.erb"),
        notify  => Service["httpd"],
        require => Httpd::Website[$website],
    }
}
This creates a define that we'll add to the proxy servers to send requests to the application running on the app servers.
Now that we've defined the files and packages our app uses, we need to define which machines they belong on.
1. cd ~/puppet/manifests/servergroups
2. Edit appRhel.pp to include your myapp::app class:
class appRhel {
    [...]
    include pkgdb::app
    include myapp::app
}
3. Next edit the manifest for the proxy servers, proxy.pp:
class proxy {
    [...]
    myapp::proxy { "admin.fedoraproject.org/myapp":
        website  => "admin.fedoraproject.org",
        path     => "/myapp",
        proxyurl => "http://localhost:10014",
    }
}
That's it for the manifests; now we need to create the config files referenced in them.
Create the app config
1. cd ~/puppet/modules/myapp/files
2. Create a myapp-app.conf that may look something like this:
WSGISocketPrefix run/wsgi

# TG implements its own signal handler.
WSGIRestrictSignal Off

# These are the real tunables
WSGIDaemonProcess myapp processes=8 threads=2 maximum-requests=50000 user=apache group=apache display-name=myapp inactivity-timeout=300 shutdown-timeout=10
WSGIPythonOptimize 1

WSGIScriptAlias /myapp /usr/lib/python2.4/site-packages/myapp/myapp.wsgi/myapp

<Directory /usr/lib/python2.4/site-packages/myapp>
    WSGIProcessGroup myapp
    Order deny,allow
    Allow from all
</Directory>
Create the proxy config
1. cd ~/puppet/modules/myapp/templates
2. Create myapp-proxy.conf.erb (the name referenced by the template() call above) and put the following into the file:
ProxyPass <%= path %> <%= proxyurl %>/myapp
ProxyPassReverse <%= path %> <%= proxyurl %>/myapp
3. Follow the HAProxy SOP to add your app there. The address HAProxy listens on is the address you give for proxyurl in proxy.pp.
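As a quick check once the HAProxy entry exists, you can make sure something answers on the port the proxy config points at. This is only a sketch; the port 10014 is the example value from proxy.pp above and will differ for your app.

# run on a proxy server; 10014 is the example port used above
curl -I http://localhost:10014/myapp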
Application config file
The final piece is to create a config file template for your app.
1. cd ~/puppet/modules/myapp/templates
2. Edit myapp-prod.cfg.erb
You should look at other applications' config files and the one you've been using for local testing. A few things to note:
- This file is a template. So using:
<%= myappDatabasePassword %>
will substitute the password from the config file into the template. This keeps passwords out of the puppet repository and thus keeps them from being logged to a publicly readable list. Contact somebody in sysadmin-main about getting the password into the private repo.
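Once puppet has run on an app server, the rendered config should contain the real password rather than the ERB tag. A minimal check, assuming the config path used in the manifest above:

# on an app server; should print nothing if the template rendered correctly
grep '<%=' /etc/myapp/myapp.cfg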
Upgrading an App
First put the new packages in the infrastructure repo as noted above.
Then on puppet1 run:
sudo func '*app[1-6].fedora*' call command run 'yum clean metadata'
sudo func '*app[1-6].fedora*' call yumcmd update APPPKGNAME
sudo func '*app[1-2].fedora*' call command run '/etc/init.d/httpd graceful'
sudo func '*app[3-6].fedora*' call command run '/etc/init.d/httpd graceful'
The first two commands upgrade the package on the app servers.
The last two commands restart Apache. We do it in two batches so that some app servers are always available to handle requests; this should avoid downtime. However, if the new package contains changes to the WSGI app or its config, you may need to do a hard httpd restart instead of a graceful one.
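To confirm the new version actually landed everywhere, the same func pattern used above can query the installed package (APPPKGNAME is the placeholder used throughout this SOP):

sudo func '*app[1-6].fedora*' call command run 'rpm -q APPPKGNAME'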
After restarting the servers it may be necessary to clear the cache of static files. JavaScript, CSS, and other static files are cached on the proxy servers; if they reference things that are not available in the new release, we will get errors. Clearing the cache is done by removing the cached files on the proxy servers:
ssh proxy[1-5]
sudo su -
rm -rf /srv/cache/mod_cache/*
Troubleshooting and Resolution
[COMMON ISSUES AND HOW TO FIX THEM]