System Storage Manager
Summary
System Storage Manager provides an easy-to-use command line interface to manage your storage using various technologies like lvm, btrfs, encrypted volumes and more.
In more sophisticated enterprise storage environments, management with Device Mapper (dm), Logical Volume Manager (LVM), or Multiple Devices (md) is becoming increasingly difficult. With file systems added to the mix, the number of tools needed to configure and manage storage has grown so large that it is simply not user friendly. With so many options for a system administrator to consider, the opportunity for errors and problems is large.
The btrfs administration tools have shown us that storage management can be simplified, and we are working to bring that ease of use to Linux filesystems in general.
Owner
- Name: Lukáš Czerner
- Email: lczerner@redhat.com
Current status
- Targeted release: Fedora 18
- Last updated: 2012-10-16
- Percentage of completion: 100%
Detailed Description
System Storage Manager is a CLI tool aimed at simplifying storage management in Fedora. Currently, every Linux storage technology has its own set of tools and its own unique way of doing things. Moreover, there is no single place in the system where users can view information about the system storage setup.
Often, even with the specific tools, the simplest operations, such as creating a linear lvm volume with a file system on top, are divided into multiple steps, which can be confusing, error prone and time consuming for the user to follow.
SSM (System Storage Manager) simplifies the user interface by providing a unified abstraction and interface for multiple storage technologies, such as lvm, btrfs and md raid, while implementing a set of commands with the same syntax regardless of the technology used.
The storage abstraction is divided into four main domains:
- Devices - Provides information about the devices which can be used as building blocks for more advanced storage setups. In this domain you can find, for example, your regular SATA drive.
- Pools - Represents a set of grouped devices. In this domain you will usually find logical volume groups (with the lvm backend) or the btrfs root volume.
- Volumes - Represents the final volume constructed with any of the backends. It can also be used as a building block for even more sophisticated storage setups. Note that even a regular partition or SATA drive can be found in this domain if it contains a usable file system.
- Snapshots - Represents snapshots in the system. This domain will usually contain your btrfs or lvm snapshot volumes/subvolumes.
None of these domains is mandatory for a backend to implement. If any of the domains is not implemented, the backend simply will not be able to perform certain actions involving that domain.
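The split into optional domains can be sketched in Python (the language SSM itself is written in). The class and method names below are purely illustrative, not SSM's actual internals:

```python
# Hypothetical sketch of the device/pool/volume/snapshot abstraction.
# A backend implements only the domains it supports; the rest stay
# unimplemented and the corresponding actions are simply unavailable.

class Backend:
    """A storage backend exposing some subset of the four domains."""

    name = "base"

    def devices(self):
        raise NotImplementedError

    def pools(self):
        raise NotImplementedError

    def volumes(self):
        raise NotImplementedError

    def snapshots(self):
        raise NotImplementedError


class LvmBackend(Backend):
    """Toy lvm backend: implements devices, pools and volumes only."""

    name = "lvm"

    def devices(self):
        return ["/dev/loop0", "/dev/loop1"]   # physical volumes

    def pools(self):
        return ["lvm_pool"]                   # volume groups

    def volumes(self):
        return ["/dev/lvm_pool/lvol001"]      # logical volumes


def supported_domains(backend):
    """Return the list of domains a backend actually implements."""
    found = []
    for domain in ("devices", "pools", "volumes", "snapshots"):
        try:
            getattr(backend, domain)()
        except NotImplementedError:
            continue
        found.append(domain)
    return found
```

With this sketch, `supported_domains(LvmBackend())` would report everything except snapshots, and SSM-style code could refuse snapshot operations for such a backend.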
SSM supports a set of commands for managing storage with the same syntax regardless of which backend is used.
Create command
This command creates a new volume with the defined parameters. If a device is provided, it will be used to create the volume, so it will be added into the pool prior to volume creation.
Example:
Creating a volume of a defined size with a defined file system. The default backend is set to lvm, and the default lvm pool name is lvm_pool:
# ssm create --fs ext4 -s 15G /dev/loop0 /dev/loop1
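The -s argument accepts a human-readable size such as 15G. A minimal sketch of such parsing, assuming binary (1024-based) units; parse_size is a hypothetical helper for illustration, not SSM's actual code:

```python
import re

# Illustrative parser for human-readable size strings like "15G".
# Assumes binary units (1 G = 1024 ** 3 bytes); SSM's real parsing
# may differ in detail.

_UNITS = {"K": 1, "M": 2, "G": 3, "T": 4}

def parse_size(text):
    """Convert e.g. '15G' or '512' to a size in bytes."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)([KMGT]?)", text.upper())
    if not m:
        raise ValueError("invalid size: %r" % text)
    number, unit = m.groups()
    return int(float(number) * 1024 ** _UNITS.get(unit, 0))
```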
List command
List information about all detected devices, pools, volumes and snapshots found in the system. The list command can be used either alone to list all the information, or you can request a specific section only.
Example:
# ssm list
-------------------------------------------------------------------
Device          Free      Used      Total  Pool        Mount point
-------------------------------------------------------------------
/dev/loop0   0.00 KB  10.00 GB   10.00 GB  lvm_pool
/dev/loop1   0.00 KB  10.00 GB   10.00 GB  lvm_pool
/dev/loop2   0.00 KB  10.00 GB   10.00 GB  lvm_pool
/dev/loop3   8.05 GB   1.95 GB   10.00 GB  btrfs_pool
/dev/loop4   6.54 GB   1.93 GB    8.47 GB  btrfs_pool
/dev/sda                        149.05 GB              PARTITIONED
/dev/sda1                        19.53 GB              /
/dev/sda2                        78.12 GB
/dev/sda3                         1.95 GB              SWAP
/dev/sda4                         1.00 KB
/dev/sda5                        49.44 GB              /mnt/test
-------------------------------------------------------------------
-------------------------------------------------------
Pool        Type   Devices     Free      Used     Total
-------------------------------------------------------
lvm_pool    lvm          3  0.00 KB  29.99 GB  29.99 GB
btrfs_pool  btrfs        2  3.84 MB  18.47 GB  18.47 GB
-------------------------------------------------------
-----------------------------------------------------------------------------------------------
Volume                  Pool        Volume size  FS       FS size      Free  Type    Mount point
-----------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001   lvm_pool       25.00 GB  ext4    25.00 GB  23.19 GB  linear
/dev/lvm_pool/myvolume  lvm_pool        4.99 GB  xfs      4.98 GB   4.98 GB  linear  /mnt/test1
/dev/dm-0               dm-crypt       78.12 GB  ext4    78.12 GB  45.33 GB  crypt   /home
btrfs_pool              btrfs_pool     18.47 GB  btrfs   18.47 GB  18.47 GB  btrfs
/dev/sda1                              19.53 GB  ext4    19.53 GB  12.67 GB  part    /
/dev/sda5                              49.44 GB  ext4    49.44 GB  29.77 GB  part    /mnt/test
-----------------------------------------------------------------------------------------------
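The sizes in the listing above (e.g. 10.00 GB) are byte counts rendered in human-readable form. A minimal sketch of such formatting; human_size is a hypothetical helper for illustration, not SSM's actual code:

```python
# Illustrative formatter producing strings like "10.00 GB" from a byte
# count, assuming binary (1024-based) units as in the listing above.

def human_size(nbytes):
    """Render a byte count with two decimals and a unit label."""
    value = float(nbytes)
    for unit in ("KB", "MB", "GB", "TB"):
        value /= 1024.0
        if value < 1024:
            return "%.2f %s" % (value, unit)
    return "%.2f PB" % (value / 1024.0)
```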
Remove command
This command removes an item from the system. Multiple items can be specified. If an item cannot be removed for some reason, it will be skipped.
Example:
Remove the whole lvm pool, one unused device from the btrfs pool, and one btrfs subvolume. Note that with btrfs, the pool has the same name as the volume:
# ssm remove lvm_pool /dev/loop2 /mnt/test1/new_subvolume/
Resize command
Change the size of the volume and file system. If there is no file system, only the volume itself will be resized. You can specify a device to add into the volume pool prior to the resize. Note that the device will only be added into the pool if the volume size is going to grow.
Example:
Shrink the volume '/dev/lvm_pool/lvol001' by 5GB, including the file system:
# ssm resize -s-5G /dev/lvm_pool/lvol001
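The -s-5G argument above is a relative size specification: a leading - or + adjusts the volume relative to its current size, while a bare value is taken as an absolute target. A sketch of that interpretation with illustrative names, not SSM's actual logic:

```python
# Illustrative interpretation of the resize size specification.
# relative_sign is "+" to grow, "-" to shrink, "" for an absolute size.

GIB = 1024 ** 3

def new_volume_size(current, spec_bytes, relative_sign):
    """Compute the target volume size from the current size and a spec."""
    if relative_sign == "+":
        return current + spec_bytes
    if relative_sign == "-":
        return current - spec_bytes
    return spec_bytes  # absolute target size

# Shrinking a 25 GiB volume by 5 GiB, as in the example above:
target = new_volume_size(25 * GIB, 5 * GIB, "-")
```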
Check command
Check the file system consistency on the volume. You can specify multiple volumes to check. If there is no file system on a volume, that volume will be skipped.
Example:
Check the file system consistency on the devices /dev/lvm_pool/lvol001 and /dev/sda5:
# ssm check /dev/lvm_pool/lvol001 /dev/sda5
Snapshot command
Take a snapshot of an existing volume. This operation will fail if the backend to which the volume belongs does not support snapshotting. Note that you cannot specify both NAME and DESC, since those options are mutually exclusive.
Example:
Take a snapshot of the lvm volume /dev/lvm_pool/lvol001:
# ssm snapshot /dev/lvm_pool/lvol001
Add command
This command adds a device into the pool. The device will not be added if it is already part of a different pool. When multiple devices are provided, all of them are added into the pool. If one of the devices cannot be added into the pool for some reason, it will be skipped. If no pool is specified, the default pool will be chosen. If the pool does not exist, it will be created using the provided devices.
Example:
Add a device to the btrfs volume btrfs_pool:
# ssm add /dev/loop2 -p btrfs_pool
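The add rules described above (skip devices that already belong to a different pool, create the pool if it does not exist) can be sketched as follows; add_devices and the pool mapping are illustrative, not SSM internals:

```python
# Illustrative model of the add command's rules. pools maps pool names
# to lists of member devices.

def add_devices(pools, pool_name, devices):
    """Add devices to pools[pool_name], skipping ones used elsewhere."""
    owner = {dev: name for name, devs in pools.items() for dev in devs}
    pool = pools.setdefault(pool_name, [])   # create the pool if missing
    for dev in devices:
        if owner.get(dev, pool_name) != pool_name:
            continue                         # part of a different pool: skip
        if dev not in pool:
            pool.append(dev)
    return pools
```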
SSM is written in Python and, to do its job, it uses tools native to the respective technology (lvm, btrfs, mdadm, cryptsetup, etc.). It is also modular, so adding more backends (currently lvm, btrfs and crypt) is possible.
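Since SSM drives the native tools rather than reimplementing them, each backend essentially wraps invocations of external commands. A minimal sketch of such a wrapper; run_tool is a hypothetical helper, not SSM's actual code:

```python
import subprocess

# Illustrative wrapper around a native storage tool invocation
# (e.g. ["lvm", "lvcreate", ...]). Raises CalledProcessError if the
# tool exits with a non-zero status.

def run_tool(args):
    """Run a native tool and return its standard output as a string."""
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout
```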
System Storage Manager is, however, not meant to replace the native tools. We will never implement all the features of all the storage technologies available in Fedora. We focus on providing functionality for the most commonly used actions and, most importantly, on providing a unified interface for that functionality.
More information about the System Storage Manager can be found on the project page http://storagemanager.sf.net .
Benefit to Fedora
- Fedora will have a better, more user-friendly storage management utility.
- System Storage Manager also represents a single point of information about system storage, without forcing the user to use multiple tools.
Scope
System Storage Manager should have lvm, btrfs, crypt and md raid support, while reliably providing functionality such as create, remove, add, resize, check, snapshot and list with each backend (where possible).
How To Test
System Storage Manager has its own test suite which consists of:
- doctests
- unittests - test the basic ssm functionality and proper behaviour without executing backend-specific code.
- bash tests - a set of scripts which test and validate the ssm functionality on a "real" system. In order not to disturb the system storage configuration, each test creates its own set of "drives" using the loop driver and dmsetup, and then uses those drives to exercise the ssm commands.
For the test suite to work correctly, the system should contain all the tools needed by the backends:
- lvm2
- btrfs-progs
- mdadm
- device-mapper
- cryptsetup
- e2fsprogs
- xfsprogs
- util-linux
In the case of manual testing, each command should be tested with each supported backend to verify that it produces the required storage configuration.
User Experience
Administrators and users will be able to manage their storage more easily and quickly with the System Storage Manager tool. It will also provide a single source of information about the system storage, without the need to gather the information manually using various different tools.
Dependencies
Currently System Storage Manager depends on:
- python >= 2.6 and <= 3.0
- python-libs >= 2.6 and <= 3.0
- util-linux
- which
- xfsprogs
- e2fsprogs
If any of the tools used by a backend are not present on the system, that backend will not be used.
Contingency Plan
System Storage Manager is already in a usable state, and the Fedora inclusion process has already started (https://bugzilla.redhat.com/show_bug.cgi?id=828879).
If we fail to finish all the features (md raid support), ssm can live without them.
Documentation
- http://storagemanager.sf.net
- Manual page in the project sources
Release Notes
- Fedora 18 will include a tool to ease common storage management tasks with a unified command line interface.