= The Shogun Machine Learning Toolbox =

== Summary ==
SHOGUN is a large-scale machine learning toolbox, implemented in C++ and offering interfaces to C#, Java, Lua, Octave, Perl, Python, R and Ruby.

== Owner ==
* Name: Björn Esser
* Email: [mailto:besser82@fedoraproject.org besser82@fedoraproject.org]
* Release notes owner: <!--- To be assigned by docs team [[User:FASAccountName| Release notes owner name]] <email address> -->
== Current status ==
* Targeted release: [[Releases/21 | Fedora 21]]
* Last updated: 2013-12-15
* Tracker bug: [https://bugzilla.redhat.com/show_bug.cgi?id=1098143 #1098143]
== Detailed Description ==
* Homepage: [http://shogun-toolbox.org/ The SHOGUN Machine Learning Toolbox]
* SCM-repo: [https://github.com/shogun-toolbox/shogun on GitHub]
* Documentation: [http://shogun-toolbox.org/doc/en/current/ is available here]
* Further information: [http://en.wikipedia.org/wiki/Shogun_%28toolbox%29 on Wikipedia]
The machine learning toolbox's focus is on large-scale kernel methods and especially on [http://en.wikipedia.org/wiki/Support_vector_machine Support Vector Machines (SVM)]. It provides a generic SVM object interfacing to several different SVM implementations, among them the state-of-the-art [http://en.wikipedia.org/wiki/LIBSVM LibSVM]. Each of the SVMs can be combined with a variety of kernels. The toolbox not only provides efficient implementations of the most common kernels, such as the Linear, Polynomial, Gaussian and Sigmoid kernel, but also comes with a number of recent string kernels, e.g. the Locality Improved, Fisher, TOP, Spectrum and Weighted Degree kernel (with shifts). For the latter the efficient LINADD optimizations are implemented. SHOGUN also offers the freedom of working with custom pre-computed kernels. One of its key features is the "combined kernel", which can be constructed as a weighted linear combination of a number of sub-kernels, each of which need not work on the same domain. An optimal sub-kernel weighting can be learned using Multiple Kernel Learning.

Currently, SVM two-class classification and regression problems can be dealt with. However, SHOGUN also implements a number of linear methods like Linear Discriminant Analysis (LDA), the Linear Programming Machine (LPM) and (Kernel) Perceptrons, and features algorithms to train hidden Markov models. The input feature objects can be dense, sparse or strings, of type int/short/double/char, and can be converted into different feature types. Chains of "pre-processors" (e.g. subtracting the mean) can be attached to each feature object, allowing for on-the-fly pre-processing.
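To illustrate the kind of interface this package would make available, here is a minimal sketch of training a two-class SVM with a Gaussian kernel through the modular Python bindings (the <code>modshogun</code> module of upstream's 3.x series); the class names and constructor signatures follow upstream's examples and may differ in other versions or interface flavours:

<pre>
import numpy as np
from modshogun import RealFeatures, BinaryLabels, GaussianKernel, LibSVM

# Toy data: two Gaussian blobs, one per class.  RealFeatures expects one
# example per column, so X has shape (num_dimensions, num_examples).
X = np.hstack((np.random.randn(2, 20) - 1.0, np.random.randn(2, 20) + 1.0))
y = np.concatenate((-np.ones(20), np.ones(20)))

feats  = RealFeatures(X)            # dense, real-valued feature object
labels = BinaryLabels(y)            # +1 / -1 class labels

kernel = GaussianKernel(feats, feats, 2.0)   # Gaussian (RBF) kernel, width 2.0
svm    = LibSVM(1.0, kernel, labels)         # C = 1.0, backed by LibSVM
svm.train()

out = svm.apply(feats)              # classify the training points
print(out.get_labels())
</pre>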
== Benefit to Fedora ==
This will bring a machine learning toolkit of unique breadth and versatility to Fedora.
== Scope ==
* Proposal owners: Create the RPM spec file and file a package review request; build the package once the review has been approved.
* Other developers: N/A (not a System Wide Change)
* Release engineering: N/A (not a System Wide Change)
* Policies and guidelines: N/A (not a System Wide Change)
== Upgrade/compatibility impact ==
N/A (not a System Wide Change)
== How To Test ==
N/A (not a System Wide Change)
== User Experience ==
N/A (not a System Wide Change)
== Dependencies ==
N/A (not a System Wide Change)
== Contingency Plan ==
* Contingency mechanism: N/A (not a System Wide Change) <!-- REQUIRED FOR SYSTEM WIDE CHANGES -->
* Contingency deadline: N/A (not a System Wide Change) <!-- REQUIRED FOR SYSTEM WIDE CHANGES -->
* Blocks release? N/A (not a System Wide Change) <!-- REQUIRED FOR SYSTEM WIDE CHANGES -->
== Documentation ==
N/A (not a System Wide Change)
== Release Notes ==
See the Detailed Description section above.
[[Category:ChangeAcceptedF21]]
[[Category:SelfContainedChange]]