Feature Name
Resource Management
Summary
Resource Management is an upstream feature that allows system resources to be partitioned among different processes or groups of processes.
Owner
- Name: lwang
- email: lwang@redhat.com
Current status
- Targeted release: Fedora 42
- Last updated: (DATE)
- Percentage of completion: XX%
Detailed Description
Resource Management/Control Groups
Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour.
Definitions:
A *cgroup* associates a set of tasks with a set of parameters for one or more subsystems.
A *subsystem* is a module that makes use of the task grouping facilities provided by cgroups to treat groups of tasks in particular ways. A subsystem is typically a "resource controller" that schedules a resource or applies per-cgroup limits, but it may be anything that wants to act on a group of processes, e.g. a virtualization subsystem.
A *hierarchy* is a set of cgroups arranged in a tree, such that every task in the system is in exactly one of the cgroups in the hierarchy, and a set of subsystems; each subsystem has system-specific state attached to each cgroup in the hierarchy. Each hierarchy has an instance of the cgroup virtual filesystem associated with it.
At any one time there may be multiple active hierarchies of task cgroups. Each hierarchy is a partition of all tasks in the system.
User level code may create and destroy cgroups by name in an instance of the cgroup virtual file system, specify and query to which cgroup a task is assigned, and list the task pids assigned to a cgroup. Those creations and assignments only affect the hierarchy associated with that instance of the cgroup file system.
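The user-level operations described above can be sketched as follows. This is a minimal, hedged example, assuming a cgroup hierarchy is already mounted at /dev/cgroup; the mount point and the group name "mygroup" are illustrative, and the commands require root.

```shell
# Create a cgroup by name in the mounted virtual file system.
mkdir /dev/cgroup/mygroup
# Assign the current shell to the new cgroup by writing its pid.
/bin/echo $$ > /dev/cgroup/mygroup/tasks
# List the task pids assigned to the cgroup.
cat /dev/cgroup/mygroup/tasks
# Query which cgroup this task is in.
cat /proc/self/cgroup
# Move the shell back to the root cgroup, then destroy the
# (now empty) cgroup; rmdir fails while tasks remain in it.
/bin/echo $$ > /dev/cgroup/tasks
rmdir /dev/cgroup/mygroup
```

These creations and assignments only affect the hierarchy mounted at /dev/cgroup, as noted above.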
On their own, the only use for cgroups is for simple job tracking. The intention is that other subsystems hook into the generic cgroup support to provide new attributes for cgroups, such as accounting/limiting the resources which processes in a cgroup can access. For example, cpusets (see Documentation/cpusets.txt) allows you to associate a set of CPUs and a set of memory nodes with the tasks in each cgroup.
Benefit to Fedora
Enabling the cgroup sub-features exposes Fedora to various resource-partitioning schemes, and allows Fedora users to experience a new feature set that helps them partition their resources any way they want.
Scope
There are several sub-features under control group:
- CGROUPS (grouping mechanism)
CGROUPS=y
- CPUSET (cpuset controller)
CPUSET=y
- CPUACCT (cpu account controller)
CGROUP_CPUACCT=y
- SCHED (schedule controller)
CGROUP_SCHED=y
- MEMCTL (memory controller)
CGROUP_MEM_CONT=y
- DEVICE
CGROUP_DEVICE=y
- NETCTL (network controller)
NET_CLS_CGROUP=y
- IOCTL (I/O controller)
  still under development
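One way to check which of the sub-features above are enabled on a running Fedora kernel is to inspect the kernel config and the runtime controller list. This is a hedged sketch: the /boot/config path varies by installation, and the grep pattern is illustrative.

```shell
# Look for the cgroup-related options in the installed kernel config
# (the file may not exist on all installations).
grep -E 'CONFIG_(CGROUPS|CPUSETS|CGROUP_)' /boot/config-$(uname -r)
# Controllers compiled into the running kernel can also be listed
# at runtime, together with their hierarchy and enabled status.
cat /proc/cgroups
```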
How To Test
There are multiple ways to test and use the control group features in Fedora, depending on the feature set that you are interested in.
For CPUSET:
0. Targeted mostly for x86 and x86_64.
1. From Documentation/cgroups/cpusets.txt, section 2, Usage Examples and Syntax. To start a new job that is to be contained within a cpuset, the steps are:
1) mkdir /dev/cpuset
2) mount -t cgroup -ocpuset cpuset /dev/cpuset
3) Create the new cpuset by doing mkdir's and write's (or echo's) in the /dev/cpuset virtual file system.
4) Start a task that will be the "founding father" of the new job.
5) Attach that task to the new cpuset by writing its pid to the /dev/cpuset tasks file for that cpuset.
6) fork, exec or clone the job tasks from this founding father task.
For example, the following sequence of commands will setup a cpuset named "Charlie", containing just CPUs 2 and 3, and Memory Node 1, and then start a subshell 'sh' in that cpuset:
mount -t cgroup -ocpuset cpuset /dev/cpuset
cd /dev/cpuset
mkdir Charlie
cd Charlie
/bin/echo 2-3 > cpus
/bin/echo 1 > mems
/bin/echo $$ > tasks
sh
# The subshell 'sh' is now running in cpuset Charlie
# The next line should display '/Charlie'
cat /proc/self/cpuset
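For CPUACCT, a similar sketch can be used: mount the cpu accounting controller and read the accumulated CPU usage of a group. The mount point /dev/cpuacct is illustrative, and the commands require root.

```shell
# Mount the cpuacct controller on its own hierarchy.
mkdir -p /dev/cpuacct
mount -t cgroup -o cpuacct cpuacct /dev/cpuacct
# cpuacct.usage reports the accumulated CPU time, in nanoseconds,
# consumed by all tasks in the group (here, the root group).
cat /dev/cpuacct/cpuacct.usage
```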
User Experience
Dependencies
Contingency Plan
Documentation
Release Notes
Comments and Discussion