We have many TurboGears applications deployed in our infrastructure. This SOP, the Supervisor SOP, and the HAProxy SOP together explain how TurboGears apps are deployed.
Contact Information
Owner: Fedora Infrastructure Team
Contact: #fedora-admin
Persons: sysadmin-web
Location: Phoenix
Servers: bapp1, app1, app2, app3, app4, puppet1
Purpose: Provide In-House Web Applications for our users
Deploying a new App
These instructions will help you set up a load-balanced TurboGears application that runs at a URL of the form:
https://admin.fedoraproject.org/myapp
Configuration of the new application is done on puppet1. If you need to drop rpms of the application into the fedora infrastructure repository (because they are not available in Fedora), that presently occurs on lockbox.
Add RPMs to the Fedora Infrastructure Repo
1. Copy the rpms to puppet1
2. Sign the rpms with the Fedora Infrastructure key
rpm --addsign foo-1.0-1.el5.*.rpm
3. Copy the rpms to the repo directory
mv foo-1.0-1.el5.src.rpm /mnt/fedora/app/fi-repo/el/5/SRPMS/
mv foo-1.0-1.el5.x86_64.rpm /mnt/fedora/app/fi-repo/el/5/x86_64/
4. Run createrepo to regenerate the repo metadata
cd /mnt/fedora/app/fi-repo/el/5/SRPMS/
sudo createrepo --update .
cd /mnt/fedora/app/fi-repo/el/5/x86_64/
sudo createrepo --update .
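To sanity-check the result, you can refresh the yum metadata on a machine that uses the infrastructure repo (for example one of the app servers) and confirm the new package shows up. This is only a quick sketch; foo is the placeholder package name used above:

sudo yum clean metadata
sudo yum list foo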
Configure the application
First log into puppet1 and check out the repositories our configs are stored in:
git clone /git/puppet
Create a module
1. cd modules; mkdir -p packagename/{files,manifests,templates}
2. Create a file named manifests/init.pp with something similar to the following:
class myapp::app {
    include httpd::app

    package { "mypackage":
        ensure => installed,
    }

    file { "/etc/myapp/myapp.cfg":
        owner   => "root",
        group   => "root",
        mode    => 0600,
        content => template("myapp/myapp-prod.cfg.erb"),
        notify  => Service["httpd"],
        require => Package["mypackage"],
    }

    # ... and similar setup for all files needed

    file { "/etc/httpd/conf.d/myapp.conf":
        owner   => "root",
        group   => "root",
        mode    => 0644,
        source  => "puppet:///myapp/myapp-app.conf",
        notify  => Service["httpd"],
        require => Package["httpd"],
    }
}
This defines a server class that we'll add to the app servers. The package definition uses the name of your application's rpm package, so it installs from a yum repo and pulls in the required dependencies. If you are developing and building the application yourself and have control over when new releases make it into the yum repo, set ensure => latest to automatically get the latest version; otherwise set ensure => present so we can vet new releases before installing them on the servers.
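As a minimal sketch, the only difference between the two options is the ensure value on the package resource:

# Track the newest build in the yum repo automatically:
package { "mypackage":
    ensure => latest,
}
# Or, to vet releases before they reach the servers, use:
#     ensure => present,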
3. Continue editing init.pp and add something like the following:
define myapp::proxy( $website, $path, $proxyurl ) {
    include httpd::proxy

    file { "/etc/httpd/conf.d/$website/myapp.conf":
        owner   => "root",
        group   => "root",
        mode    => 0644,
        content => template("myapp/myapp-proxy.conf.erb"),
        notify  => Service["httpd"],
        require => Httpd::Website[$website],
    }
}
This creates a defined type that we'll use on the proxy servers to send requests to the application running on the app servers.
Now that we've defined the files and packages our app uses, we need to define which machines they belong on.
1. cd ~/puppet/manifests/servergroups
2. Edit appRhel.pp to include your myapp::app class:
class appRhel {
    [...]
    include pkgdb::app
    include myapp::app
}
3. Next edit the manifest for the proxy servers, proxy.pp:
class proxy {
    [...]
    myapp::proxy { "admin.fedoraproject.org/myapp":
        website  => "admin.fedoraproject.org",
        path     => "/myapp",
        proxyurl => "http://localhost:10014",
    }
}
That's it for the manifests, now we need to create the config files we reference in the manifest file.
Create the app config
1. cd ~/puppet/modules/myapp/files
2. Create a myapp-app.conf that may look something like this:
WSGISocketPrefix run/wsgi

# TG implements its own signal handler.
WSGIRestrictSignal Off

# These are the real tunables
WSGIDaemonProcess myapp processes=8 threads=2 maximum-requests=50000 user=apache group=apache display-name=myapp inactivity-timeout=300 shutdown-timeout=10
WSGIPythonOptimize 1
WSGIScriptAlias /myapp /usr/lib/python2.4/site-packages/myapp/myapp.wsgi/myapp

<Directory /usr/lib/python2.4/site-packages/myapp>
    WSGIProcessGroup myapp
    Order deny,allow
    Allow from all
</Directory>
Create the proxy config
1. cd ~/puppet/modules/myapp/templates
2. Create myapp-proxy.conf.erb and put the following into the file:
ProxyPass <%= path %> <%= proxyurl %>/myapp
ProxyPassReverse <%= path %> <%= proxyurl %>/myapp
3. Follow the HAProxy SOP to add your app there. This is the address that you give for proxyurl in proxy.pp.
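For context only, here is a hedged sketch of what the matching HAProxy entry might look like. The port 10014 comes from the proxy.pp example above, while the bind address, backend app servers, backend port 80, and health-check options are assumptions; follow the HAProxy SOP rather than this snippet:

listen  myapp  0.0.0.0:10014
    balance roundrobin
    # Backend servers and check options below are illustrative assumptions.
    server  app1  app1:80  check inter 2s rise 2 fall 5
    server  app2  app2:80  check inter 2s rise 2 fall 5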
Application config file
The final piece is to create a config file template for your app.
1. cd ~/puppet/modules/myapp/templates
2. Edit myapp-prod.cfg.erb
You should look at other applications' config files and the one you've been using for testing locally. A few things to note:
- This file is a template. So using:
<%= myappDatabasePassword %>
will substitute the password into the rendered config file when puppet deploys it. This keeps passwords out of the configs repository and thus keeps them from being logged to a publicly readable list.
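For example, a database line in myapp-prod.cfg.erb might look like the sketch below; the sqlalchemy.dburi key assumes a TurboGears app using SQLAlchemy, and the user, host, and database name are made up for illustration:

sqlalchemy.dburi="postgres://myappuser:<%= myappDatabasePassword %>@db-host/myapp"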
Upgrading an App
First put the new packages in the infrastructure repo as noted above.
Then on puppet1 run:
sudo func '*app[1-6].fedora*' call command run 'yum clean metadata'
sudo func '*app[1-6].fedora*' call command run 'yum -y upgrade APPPKGNAME'
sudo func '*app[1-2].fedora*' call command run '/etc/init.d/httpd graceful'
sudo func '*app[3-6].fedora*' call command run '/etc/init.d/httpd graceful'
When running yum upgrade, make sure you specify the APPPKGNAME! We don't want to have yum upgrade every package on the box as, in many cases, we need to review the packages that will be updated instead of blindly applying them.
The first two commands upgrade the package on the app servers.
The last two commands restart Apache. We do it in two batches so that some app servers are always ready to handle requests, which should avoid downtime.
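To confirm the upgrade landed everywhere, one option is to query the installed version over func as well; this is a sketch reusing the same host pattern as above, with APPPKGNAME again standing in for your package name:

sudo func '*app[1-6].fedora*' call command run 'rpm -q APPPKGNAME'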
After restarting the servers it may be necessary to clear the cache of static files, because javascript, css, and other static files are cached. If those reference things that are not available in the new version, we will get errors. Clearing the cache is done by removing the cached files on the proxy servers:
ssh proxy[1-5]
sudo su -
rm -rf /srv/cache/mod_cache/*
Troubleshooting and Resolution
[COMMON ISSUES AND HOW TO FIX THEM]