Revision as of 19:34, 24 August 2010
Some speed optimization ideas for yum:
- use Cython to compile one or more of the .py files to .c code and compile them into DSOs
- use PyPy; would require building out a full PyPy stack: an alternative implementation of Python. Last time I looked at the generated .c code, I wasn't comfortable debugging the result (I didn't feel that debugging a crash in the result would be feasible at 3am)
- use Unladen Swallow for Python 2: would require porting the US 2.6 stack to 2.7, and a separate Python stack
- use Unladen Swallow for Python 3: wait until it gets merged (in Python 3.3); port yum to python 3
Using Cython seems to be the least invasive approach.
I've looked at the generated code and it seems debuggable; I'd be able to debug issues arising.
== Notes on Cython ==
From upstream, on one simple example: "Simply compiling this in Cython merely gives a 35% speedup. This is better than nothing, but adding some static types can make a much larger difference."
The approach I'm currently thinking of is to simply compile the .py files with Cython, without using any of the non-Python extensions that Cython adds.
Cython can't yet deal with all of yum's code, but the latest (unreleased) devel version of Cython can compile about half of yum's source files, including depsolve.py. Given that depsolve.py contains profiler hooks, I'm guessing that it's a known area for performance tuning, so perhaps even just optimizing that file might be a win.
This would ensure that yum can continue to be usable without Cython: no non-standard language features.
This would mean that yum upstream could continue to target Python, but in, say, Fedora 15 onwards we'd build our yum packages with this optimization.
It might perhaps be appropriate to add targeted optimizations to Cython, so that it does a better job when handling yum as input.
Going further, we could perhaps introduce pragma-style comments to yum sources, e.g.:
<pre>
# cython: isinstance(foo, dict)
</pre>
which Cython could take as an optimization hint to generate specialized code for the case where foo is a PyDictObject (though it might be more productive to support exact type matches here).
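To make the idea concrete, here is a sketch of how such a hint might sit in yum-like source. Both the hint syntax and the function are hypothetical (the hint is the proposal above, not existing Cython syntax), and the comment is inert under plain CPython:

```python
def count_providers(requires):
    # Hypothetical hint comment (not existing Cython syntax): a signal to
    # the compiler that "requires" is expected to be exactly a dict, so
    # it could emit direct PyDict_* calls rather than generic protocol
    # calls.
    # cython: isinstance(requires, dict)
    #
    # To plain CPython the line above is just a comment, so the module
    # stays runnable without Cython.
    total = 0
    for name, providers in requires.items():
        total += len(providers)
    return total
```

Since the hint is only a comment, uncompiled yum would be byte-for-byte unchanged in behavior.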
== Example of generated code ==
See depsolve.html. You can see the generated .c code by clicking on the yellow-colored .py code. This was generated using the "-a" option to Cython, with a development copy of Cython that has support for lambdas.
Note in particular that the generated C code has comments throughout indicating which line of .py code (in context) it corresponds to: this is important for my own comfort level, in feeling that this is supportable.
== Correctness bugs ==
Method:
<pre>
$ cp /usr/lib/python2.7/site-packages/yum/depsolve.py .
$ python cython-devel/cython.py -a depsolve.py
$ gcc -I/usr/include/python2.7 -lpython2.7 -g -O2 -fPIC -shared -o depsolve.so depsolve.c
# cp ~dmalcolm/depsolve.so /usr/lib/python2.7/site-packages/yum
# ls -al /usr/lib/python2.7/site-packages/yum/depsolve.*
-rw-r--r--. 1 root root   57757 Jul 22 16:51 /usr/lib/python2.7/site-packages/yum/depsolve.py
-rw-r--r--. 1 root root   35697 Jul 22 16:51 /usr/lib/python2.7/site-packages/yum/depsolve.pyc
-rw-r--r--. 1 root root   35641 Jul 22 16:51 /usr/lib/python2.7/site-packages/yum/depsolve.pyo
-rwxr-xr-x. 1 root root 1498275 Aug 24 15:21 /usr/lib/python2.7/site-packages/yum/depsolve.so
# python
Python 2.7 (r27:82500, Jul 28 2010, 18:07:57)
[GCC 4.5.0 20100716 (Red Hat 4.5.0-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import yum
>>> import yum.depsolve
>>> yum.depsolve.__file__
'/usr/lib/python2.7/site-packages/yum/depsolve.so'
>>> dir (yum.depsolve)
['BOOLEAN_STATES', 'Conflicts', 'DBVERSION', 'DepCheck', 'Depsolve', 'Errors', 'LETTERFLAGS', 'ListPackageSack', 'PATTERNS_INDEXED_MAX', 'PATTERNS_MAX', 'PLUG_OPT_BOOL', 'PLUG_OPT_FLOAT', 'PLUG_OPT_INT', 'PLUG_OPT_STRING', 'PLUG_OPT_WHERE_ALL', 'PLUG_OPT_WHERE_MAIN', 'PLUG_OPT_WHERE_REPO', 'PO_CONFIG', 'PO_DIR', 'PO_DOC', 'PO_FILE', 'PO_GHOST', 'PO_INSTALLEDPKG', 'PO_LOCALPKG', 'PO_REMOTEPKG', 'REPO_PROBLEM_COMPS', 'REPO_PROBLEM_METADATA', 'REPO_PROBLEM_OTHER', 'REPO_PROBLEM_PACKAGE', 'REPO_PROBLEM_REPOMD', 'RPM_CHECKSUM_TYPES', 'RPM_TO_SQLITE', 'Requires', 'SYMBOLFLAGS', 'TR_DEPENDS', 'TR_DEPENDSON', 'TR_OBSOLETEDBY', 'TR_OBSOLETES', 'TR_UPDATEDBY', 'TR_UPDATES', 'TS_AVAILABLE', 'TS_ERASE', 'TS_FAILED', 'TS_INSTALL', 'TS_INSTALL_STATES', 'TS_OBSOLETED', 'TS_OBSOLETING', 'TS_REMOVE_STATES', 'TS_TRUEINSTALL', 'TS_UPDATE', 'TS_UPDATED', 'TX_BLACK', 'TX_GREY', 'TX_WHITE', 'TransactionMember', 'YUM_PID_FILE', '_', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__pyx_scope_struct__compare_providers', '__pyx_scope_struct__requiringFromInstalled', '__test__', 'archDifference', 'canCoinstall', 'flags', 'logging', 'logginglevels', 'max', 'min', 'misc', 'os', 'packages', 'rpm', 'rpmUtils', 'types', 'unique', 'version_tuple_to_string', 'warnings']
</pre>
Up to this point, everything's looking good.
Unfortunately, we're running into a bug with the generated code:
<pre>
# yum update
Loaded plugins: auto-update-debuginfo, presto, refresh-packagekit
Found 9 installed debuginfo package(s)
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package GConf2.x86_64 0:2.31.7-1.fc14 set to be updated
Traceback (most recent call last):
  File "/usr/bin/yum", line 29, in <module>
    yummain.user_main(sys.argv[1:], exit_code=True)
  File "/usr/share/yum-cli/yummain.py", line 258, in user_main
    errcode = main(args)
  File "/usr/share/yum-cli/yummain.py", line 154, in main
    (result, resultmsgs) = base.buildTransaction()
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 910, in buildTransaction
    (rescode, restring) = self.resolveDeps()
  File "depsolve.py", line 714, in depsolve.Depsolve.resolveDeps (depsolve.c:13321)
  File "depsolve.py", line 814, in depsolve.Depsolve._resolveRequires (depsolve.c:15125)
  File "depsolve.py", line 882, in depsolve.Depsolve._checkInstall (depsolve.c:16046)
TypeError: unbound method _sort_req_key() must be called with Depsolve instance as first argument (got tuple instance instead)
</pre>
== Optimization potential ==
Why would it be faster to simply compile with Cython?
- it avoids the bytecode dispatch loop: the module's operations are "unrolled" directly into libpython API calls, which the C compiler can optimize individually. Error handling can be marked as unlikely, and the unrolled .c code should give us better CPU branch prediction. The result should also be more directly amenable to further optimization work: C-level profiling tools such as oprofile would indicate specifically where we're spending time in the .py code, rather than it all showing up in the bytecode dispatch loop.
- it avoids some stack manipulation: temporaries are stored directly as locals within the .c code, and twiddling of ob_refcnt fields can be reduced
Using Cython also "bakes in" some values for builtins: calls to the builtin "len" are turned directly into calls to PyObject_Length, rather than looking up the current value of __builtins__.len on each call and calling that. This is a semantic difference from regular Python, and rules out some monkey-patching, but I think it's a reasonable optimization.
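A minimal demonstration of the semantic difference (shown with Python 3's builtins module; yum-era code would patch __builtin__ instead): interpreted code sees a monkey-patched len on every call, whereas Cython-compiled code calling PyObject_Length directly would not:

```python
import builtins

def seq_len(seq):
    # Plain CPython resolves "len" through the builtins namespace at each
    # call, so the monkey-patch below is visible here. A Cython-compiled
    # version calling PyObject_Length() directly would keep returning 3.
    return len(seq)

_orig_len = builtins.len
builtins.len = lambda obj: _orig_len(obj) + 100   # monkey-patch len
patched = seq_len([1, 2, 3])                      # 103 under plain CPython
builtins.len = _orig_len                          # restore
restored = seq_len([1, 2, 3])                     # back to 3
```

Under the proposed Cython build, both calls would return 3; code relying on this kind of patching would break.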
== Correctness risks ==
Adding compile-to-C and compile-to-DSO stages to the build process introduces various points of failure, and possible semantic changes (see e.g. the notes on builtins above).
We need a set of reproducible test cases to ensure that yum continues to behave as expected, to minimize risk here, and to quickly locate failures.
What about all of the plugins? In theory, nothing should change: a Cython-generated module has the same dictionary and its names are importable in the same way. But "the proof of the pudding is in the eating". Would need some unit tests, and some time to soak in rawhide.
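One form such a unit test could take is a namespace-parity check between the pure-Python module and the compiled DSO; the helper names here are mine, and the sanity check runs a module against itself since no Cython-built depsolve.so is assumed:

```python
def public_names(mod):
    """Names a plugin could reasonably import from the module."""
    return {name for name in dir(mod) if not name.startswith('_')}

def namespace_diff(pure_mod, compiled_mod):
    # Returns (missing_from_compiled, extra_in_compiled); both should be
    # empty sets if the Cython build preserves the module's namespace.
    pure, compiled = public_names(pure_mod), public_names(compiled_mod)
    return pure - compiled, compiled - pure

# Sanity check against the same module twice, standing in for the real
# comparison of yum.depsolve (.py) against yum.depsolve (.so):
import json
missing, extra = namespace_diff(json, json)
```

In a real test we'd import depsolve.py under one name and depsolve.so under another and assert both difference sets are empty (modulo the __pyx_* names Cython adds, visible in the dir() output above).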
== Pessimization risks ==
- older versions of Cython didn't do as good a job of constant folding/propagation as Python's peephole optimizer. This largely seems to be fixed in the devel branch, but is worth keeping an eye on. For example, yum's transactioninfo.py has:
<pre>
return txmember.ts_state in ('u', 'i')
   and not isinstance(txmember.po, (YumInstalledPackage, YumAvailablePackageSqlite))
</pre>
and the C generated by Cython 0.12.1 constructed the tuples ('u', 'i') and (YumInstalledPackage, YumAvailablePackageSqlite) on every call. The former can be a constant, and the latest devel version of Cython does treat it as one, reducing the test to a pair of equality comparisons against the constants "u" and "i"; see Cython/Compiler/Optimize.py. (We could profile yum and find further cases like this.)
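For comparison, CPython's own compiler already folds the tuple in this kind of membership test into a constant on the code object, so it is not rebuilt per call. A small check (function name hypothetical, mirroring the transactioninfo.py expression):

```python
def is_install_state(ts_state):
    # CPython folds the constant tuple at compile time (in some versions
    # it is stored as a frozenset constant for "in" tests), so it is not
    # rebuilt on each call -- the behavior older Cython releases missed.
    return ts_state in ('u', 'i')

# The folded constant is visible on the compiled code object:
folded = any(c == ('u', 'i') or c == frozenset(('u', 'i'))
             for c in is_install_state.__code__.co_consts)
```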
- looking at _sort_reqs(), the line:
<pre>
mapper = {'EQ' : 1, 'LT' : 2, 'LE' : 3, 'GT' : 4, 'GE' : 5,
</pre>
uses a dictionary which is never modified. The Cython-generated C code is rebuilding this dictionary every time the function is called. As it turns out, Python 2.6's peephole optimizer doesn't optimize this either:
<pre>
>>> dis(depsolve.Depsolve._sort_reqs)
811           0 BUILD_MAP                6
              3 LOAD_CONST               1 (1)
              6 LOAD_CONST               2 ('EQ')
              9 STORE_MAP
             10 LOAD_CONST               3 (2)
             13 LOAD_CONST               4 ('LT')
             16 STORE_MAP
             17 LOAD_CONST               5 (3)
             20 LOAD_CONST               6 ('LE')
             23 STORE_MAP
             24 LOAD_CONST               7 (4)
             27 LOAD_CONST               8 ('GT')
             30 STORE_MAP
             31 LOAD_CONST               9 (5)
             34 LOAD_CONST              10 ('GE')
             37 STORE_MAP
</pre>
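A manual version of the fix, independent of any compiler, is to hoist the never-modified mapping to module level so it is built once at import time. The names here are hypothetical, and only the five entries visible in the disassembly above are included (the original snippet is truncated):

```python
# Built once at import time rather than on every call to the sort key.
# Only the five flag entries visible in the dis() output are shown here.
_FLAG_ORDER = {'EQ': 1, 'LT': 2, 'LE': 3, 'GT': 4, 'GE': 5}

def flag_rank(flag):
    # Unknown flags sort after the known ones rather than raising KeyError.
    return _FLAG_ORDER.get(flag, len(_FLAG_ORDER) + 1)
```

This is the kind of transformation a smarter Cython could perform automatically for dict literals of constants that are provably never mutated.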
== Profiling notes ==
Method: x86_64 F14 box with yum-3.2.27-19.fc14.noarch
Running "time yum update", with a warm local cache of metadata, and with /usr/share/yum-cli/output.py:userconfirm modified to always return False, so that "Is this ok [y/N]: " immediately returns and does nothing: this lets us time the work of downloading metadata and depsolving.
"Transaction Summary" gives:
<pre>
Install       8 Package(s)
Upgrade     120 Package(s)
Remove        2 Package(s)

Total download size: 202 M
Exiting on user Command
Complete!

real	0m3.318s
user	0m2.977s
sys	0m0.337s
</pre>
<pre>
# time yum install "*"

real	1m45.649s
user	1m42.596s
sys	0m2.391s
</pre>
This sends a lot of debug messages to stdout, over ssh in this case; however, the bulk of the time is spent in "user" above, so the output is probably not a significant part of the cost.
Piping to /dev/null confirms this:
<pre>
# time (yum install "*" > /dev/null)

real	1m44.228s
user	1m41.885s
sys	0m2.239s
</pre>
== Various optimization notes ==
Profile! Profile! Profile!
- lots of ideas here; approach: profile yum with a Cython build to find the slow spots, and add optimizations to Cython to generate better .c code
- foo.has_key() generates a function call each time (with tuple construction for the args); we could perhaps test for dictionaries and have a specialized implementation that goes directly to the PyDict_ API (though arguably this is a semantic change)
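A sketch of the difference (has_key() is Python 2-only, so the method spelling is shown via __contains__, which is what has_key amounts to): the operator form compiles to a single COMPARE_OP that can dispatch straight to the dict's containment slot, while the method form pays for an attribute lookup and an argument tuple:

```python
d = {'glibc': '2.12'}

# method spelling: attribute lookup on d, argument tuple construction,
# then a generic call -- analogous to what Cython emits for d.has_key(k)
via_method = d.__contains__('glibc')

# operator spelling: one COMPARE_OP, dispatched to the dict's containment
# slot with no intermediate tuple
via_operator = 'glibc' in d
```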
- for a const string literal "" % (args), Cython is doing a generic PyNumber_Remainder; it could use type checking and convert this in place to a PyString_Format call, avoiding various indirections (much harder: compile it down to specialized C code, based on the exact format string). This is a special case of specializing binops where the types are known: would this be a new optimization for Cython? See BinopNode. Profile first!
- the min() builtin is being invoked on a pair of ints, when it could be inlined directly (in _common_prefix_len(): num = min(len(x), len(y))); profile, profile, profile!
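What the inlining would amount to, as a hypothetical sketch: a two-argument min over ints reduces to a single comparison, with no builtins lookup and no argument tuple:

```python
def min2(a, b):
    # inline equivalent of min(a, b) for exactly two arguments: one
    # comparison, no builtins lookup, no argument-tuple construction
    return a if a <= b else b

# shape of the _common_prefix_len() use, with illustrative operands:
num = min2(len("abcde"), len("abc"))
```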
- there's quite a lot of complexity in parsing function arguments as well. Perhaps we could inline functions directly, with a fast path that detects when a call site is calling a known function, avoiding the thunking at both ends? Profile, profile, profile!
== TODO ==
- try adding Cython to the build of yum
- measure the impact of using a Cython .c build of depsolve.py