ThinProvisioning
Summary
Provide the thin provisioning Device Mapper (DM) target and supporting userspace utilities. This DM target allows a single pool of storage to be the backing store of multiple thinly provisioned volumes. Numerous snapshots (and snapshots of snapshots) may be taken of the thinly provisioned volumes.
Owner
- Name: Joe Thornber and Mike Snitzer
- Email:
- thornber AT redhat DOT com
- snitzer AT redhat DOT com
Current status
- Targeted release: Fedora 17
- Last updated: 2012-03-23
- Percentage of completion:
- kernel: 100%
- device-mapper-persistent-data tools: 100%
- lvm2 thinp support: 100%
Detailed Description
The main highlight of this implementation, compared to the previous implementation of snapshots, is that it allows many virtual devices to be stored on the same data volume. This simplifies administration and allows the sharing of data between volumes, thus reducing disk usage.
Another significant feature is support for an arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots ...). The previous implementation of snapshots did this by chaining together lookup tables, and so performance was O(depth). This new implementation uses a single data structure to avoid this degradation with depth. Fragmentation may still be an issue, however, in some scenarios.
Metadata is stored on a separate device from data, giving the administrator some freedom, for example to:
- Improve metadata resilience by storing metadata on a mirrored volume but data on a non-mirrored one.
- Improve performance by storing the metadata on SSD.
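For illustration, here is a minimal sketch of driving the target directly with dmsetup, following the kernel's thin-provisioning.txt document (linked under Documentation below); the device paths, sector counts and block size are illustrative values only:

    # Create the pool: metadata on /dev/sdb1, data on /dev/sdb2,
    # 128-sector (64 KiB) allocation blocks, low-water mark of 16384 blocks.
    dmsetup create pool \
        --table "0 20971520 thin-pool /dev/sdb1 /dev/sdb2 128 16384"

    # Provision a thin device with internal id 0 inside the pool, then
    # activate it as a 2 GiB (4194304-sector) virtual volume.
    dmsetup message /dev/mapper/pool 0 "create_thin 0"
    dmsetup create thin --table "0 4194304 thin /dev/mapper/pool 0"

    # Take a snapshot: quiesce the origin, create snapshot id 1 of
    # device id 0, resume, then activate the snapshot independently.
    dmsetup suspend /dev/mapper/thin
    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup resume /dev/mapper/thin
    dmsetup create snap --table "0 4194304 thin /dev/mapper/pool 1"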
Benefit to Fedora
Scalable snapshots of thinly provisioned volumes may be used as the foundation of compelling virtualization and/or cloud services. Fedora would be positioned to be the first distribution to provide this unique advance in Linux block storage.
Scope
The bulk of the change is in the kernel (localized to the DM layer), but userspace tools for dumping, restoring and repairing the metadata are also under development. These tools will be provided in a new 'device-mapper-persistent-data' package. In addition, the lvm2 package will be updated to ease configuration and management of thin provisioned volumes and their associated snapshots.
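As a sketch of the intended lvm2 workflow (the exact option syntax is subject to the final lvm2 release; the volume group and volume names here are hypothetical):

    # Create a 10 GiB thin pool in volume group vg00.
    lvcreate -L 10G -T vg00/pool

    # Create a 50 GiB thin volume backed by that pool; blocks are
    # allocated from the pool only as data is actually written.
    lvcreate -V 50G -T vg00/pool -n thinvol

    # Snapshot the thin volume; it shares the pool's storage with
    # its origin rather than needing its own preallocated space.
    lvcreate -s -n thinsnap vg00/thinvol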
How To Test
A comprehensive test suite (https://github.com/jthornber/thinp-test-suite) has been developed to verify that the kernel code works as expected (it depends on ruby, dt and dmsetup).
Any additional IO workloads (or benchmarks that model real workloads) that the community has an interest in would be welcome as tests. Data integrity is of the utmost importance, so all tests that increase confidence in the feature are encouraged.
See the Documentation section for pointers to "how to" style usage and test guidance.
User Experience
Users will create a shared pool of storage that hosts all thin provisioned volumes and their associated snapshots. In contrast to the old dm-snapshot implementation, the user does not need to manage or monitor the free space of N separate snapshot volumes -- storage for thin and snapshot volumes is allocated on demand from the backing shared pool.
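The free space of the pool itself is the one thing worth watching. As a minimal sketch (the pool device name is illustrative), the thin-pool target's status line reports used/total metadata blocks and used/total data blocks, as described in the kernel's thin-provisioning.txt:

    # Output includes: <transaction id>
    # <used metadata blocks>/<total metadata blocks>
    # <used data blocks>/<total data blocks> ...
    dmsetup status /dev/mapper/pool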
Dependencies
No other packages depend on this feature (and vice versa). If the feature is not ready, the associated lvm2 thinp code, if included in lvm2, will error out accordingly.
Contingency Plan
None necessary, no other packages or capabilities will depend on this feature.
Documentation
See the documentation in the kernel tree:
- thin-provisioning (https://github.com/jthornber/linux-2.6/blob/thin-stable/Documentation/device-mapper/thin-provisioning.txt) -- overview and usage "how-to".
- persistent-data (https://github.com/jthornber/linux-2.6/blob/thin-stable/Documentation/device-mapper/persistent-data.txt) -- some details on the kernel library that enables storing metadata for DM targets (block and transaction manager, data structures, etc.).
Man pages will cover the LVM2 extensions.
Release Notes
STORAGE: Scalable snapshots of thinly provisioned volumes.
The main highlight of this implementation, compared to the previous implementation of DM snapshots, is that it allows many virtual devices to be stored on the same data volume. This simplifies administration and allows the sharing of data between volumes, thus reducing disk usage.
Another significant feature is support for an arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots ...). The previous implementation of snapshots did this by chaining together lookup tables, and so performance was O(depth). This new implementation uses a single data structure to avoid this degradation with depth. Fragmentation may still be an issue, however, in some scenarios.
Metadata is stored on a separate device from data, giving the administrator some freedom, for example to:
- Improve metadata resilience by storing metadata on a mirrored volume but data on a non-mirrored one.
- Improve performance by storing the metadata on SSD.
The 'device-mapper-persistent-data' package provides tools to dump and restore the metadata for thinly provisioned volumes. LVM2 has been enhanced to allow the creation and management of thinly-provisioned volumes.
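For instance, a pool's metadata can be checked, dumped to XML and restored with the new tools. This is a sketch: the metadata device path and file name are illustrative, and the pool should be inactive or otherwise quiesced while its metadata is read:

    # Verify that the metadata device is consistent.
    thin_check /dev/mapper/vg00-pool_tmeta

    # Back the metadata up as XML, then restore it.
    thin_dump /dev/mapper/vg00-pool_tmeta > pool-metadata.xml
    thin_restore -i pool-metadata.xml -o /dev/mapper/vg00-pool_tmeta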