Wrangler Review 2010-11-04
Please complete the How To Test section--we need some idea how a person would go about testing this feature.
Thank you. poelcat 16:41, 4 November 2010 (UTC)
How To Test looks good. Somehow we are missing the User Experience and Release Notes sections. Please complete them too. This needs to be completed before FESCo reviews. Thanks poelcat 17:50, 16 November 2010 (UTC)
Other question
--Dmalcolm 20:44, 15 November 2010 (UTC) This approach seems to have some drawbacks:
- the user has to send the coredump across the internet to a Fedora site, and the coredump might contain sensitive information. The user has no way of telling whether the backtrace will contain sensitive information until the analysis is received back from the remote server;
- the coredump may be rather large (many megabytes); some people may object to uploading many megabytes to a remote site; many people have asymmetric connections to the internet, where upload rates are considerably slower than download rates.
Alternate approach: make the debuginfo sources downloadable from an internet-visible fileserver, and do the analysis on the user's machine. The rpms could be unpacked on demand. The user's computer would merely be downloading small amounts of public information, rather than sending large amounts of private information. I believe Will Woods was working on something like this.
- --Mtoman 18:30, 16 November 2010 (UTC):
- It is one of the problems. The user has to trust the Retrace Server's administrator; that is why only HTTPS communication will be allowed.
- At the moment the Retrace Server uses xz compression. It is able to compress all the information (including the coredump) to a suitable size. For my test crashes it was always < 7 MB, even for 100 MB coredumps from OOo or a web browser. Other compression algorithms and no compression (compression is not really needed if the Retrace Server is running locally) will be available in the future. (A small compression sketch follows this reply.)
- I guess the alternate project you are talking about is DebuginfoFS and I agree it would be better in many cases. The main advantage of the Retrace Server is that it archives all versions of all packages, so you are able to process crashes even from a system that is not fully updated. That's why we would like to implement both.
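A minimal sketch, in Python, of the kind of client-side xz compression described above. The crash directory layout, file names and compression preset are illustrative assumptions, not the actual ABRT/Retrace Server upload format.

    # Pack a crash directory (coredump plus metadata files) into a .tar.xz
    # archive before uploading it. Layout and names are hypothetical.
    import os
    import tarfile

    def pack_crash_dir(crash_dir, out_path="crash.tar.xz"):
        """Create an xz-compressed tar archive of a crash directory."""
        with tarfile.open(out_path, mode="w:xz", preset=6) as tar:
            for name in sorted(os.listdir(crash_dir)):
                tar.add(os.path.join(crash_dir, name), arcname=name)
        return out_path, os.path.getsize(out_path)

    if __name__ == "__main__":
        # Hypothetical crash directory containing 'coredump', 'executable', ...
        archive, size = pack_crash_dir("/tmp/example-crash")
        print("compressed archive: %s (%d bytes)" % (archive, size))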
- --Dmalcolm 22:50, 2 December 2010 (UTC): why would DebuginfoFS not have access to all versions of all packages? Can't it simply be wired up to Koji's NFS server, and access them on-demand?
- Do you mean http://kojipkgs.fedoraproject.org/packages/? Is it available via NFS?
- We also need yum metadata (currently file paths only) for all packages, to find debuginfo packages with a certain build id, and to find which sub-package contains an executable/library. --Kklic 15:35, 24 January 2011 (UTC)
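A rough sketch of the metadata lookup described above, in Python: Fedora debuginfo packages place files under /usr/lib/debug/.build-id/<xx>/<rest>.debug, so a build id can be turned into a file path and the owning package can then be looked up with yum-utils' repoquery. The repo id below is a placeholder assumption.

    # Map a build id to its debuginfo file path, then ask repoquery which
    # package provides that path. Assumes yum-utils' repoquery is installed
    # and that filelists metadata for the chosen repository is available.
    import subprocess

    def debuginfo_path_for_build_id(build_id):
        """Return the conventional .build-id debug file path for a build id."""
        return "/usr/lib/debug/.build-id/%s/%s.debug" % (build_id[:2], build_id[2:])

    def packages_owning_file(path, repoid="fedora-debuginfo"):
        """List packages that provide the given file path (placeholder repo id)."""
        cmd = ["repoquery", "--repoid=%s" % repoid, "--whatprovides", path]
        return subprocess.check_output(cmd).decode().split()

    if __name__ == "__main__":
        # Hypothetical build id read from an executable's NT_GNU_BUILD_ID note.
        build_id = "b0b1b2b3b4b5b6b7b8b9babbbcbdbebfc0c1c2c3"
        path = debuginfo_path_for_build_id(build_id)
        print(path)
        print(packages_owning_file(path))

Finding which sub-package contains a given executable or library would be the same kind of repoquery call, with the binary's installed path instead of the .build-id path.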
--Jcm 05:50, 13 December 2010 (UTC): I fully agree with David. Doing this on the user's system with a simple export of debuginfofs available over the network sounds like a much easier solution, with less security risk too :)
- Yes, exposing all packages, unpacked, to the internet is more secure.
- It does not seem to me that doing the server part (exposing all packages from 2 releases + Rawhide, unpacked, plus some metadata) is simpler than implementing the Retrace Server, but it would handle many more clients at once. On the client side, GDB would need to be extended to support reading binaries/libraries/debuginfo from various subdirectories. A modification would probably also be needed to ensure that the correct GDB pretty-printers are used. Yum (or similar) metadata for all packages would need to be downloaded to determine which packages to access. --Kklic 15:35, 24 January 2011 (UTC)
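A minimal sketch of the client-side GDB run discussed in this thread, assuming the needed packages (binaries, libraries and debuginfo) have already been unpacked under a local or network-mounted root such as a DebuginfoFS export; the mount point and file names are placeholders. It relies only on GDB's existing "set sysroot" and "set debug-file-directory" settings, not on the further GDB extensions mentioned above.

    # Run GDB against a coredump, resolving binaries, shared libraries and
    # separate debuginfo from an unpacked package tree instead of the live
    # system. Paths are placeholders for illustration.
    import subprocess

    def retrace_locally(executable, coredump, pkg_root="/mnt/unpacked-packages"):
        """Produce a full backtrace from a coredump using unpacked packages."""
        cmd = [
            "gdb", "--batch",
            # Look up shared libraries inside the unpacked tree.
            "-ex", "set sysroot %s" % pkg_root,
            # Look up separate debuginfo files inside the unpacked tree.
            "-ex", "set debug-file-directory %s/usr/lib/debug" % pkg_root,
            "-ex", "thread apply all bt full",
            pkg_root + executable,  # executable taken from the unpacked tree
            coredump,
        ]
        return subprocess.check_output(cmd).decode()

    if __name__ == "__main__":
        print(retrace_locally("/usr/bin/some-crashed-program", "/tmp/coredump"))

Selecting the correct GDB pretty-printers and mixing several package versions in one tree are exactly the parts this simple approach does not cover, which is the extension work described above.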