cgroup subsys "blkio" implements the block IO controller. There seems to be
a need for various kinds of IO control policies (like proportional BW, max BW)
both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
The plan is to use the same cgroup-based management interface for the blkio
controller and, based on user options, switch IO policies in the background.

In the first phase, this patchset implements a proportional-weight, time-based
division of disk time policy. It is implemented in CFQ; hence this policy
takes effect only on leaf nodes when CFQ is being used.

HOWTO
=====
You can do very simple testing by running two dd threads in two different
cgroups. Here is what you can do.

- Enable group scheduling in CFQ
	CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into the kernel and mount the IO controller (blkio).

	mount -t cgroup -o blkio none /cgroup

- Create two cgroups
	mkdir -p /cgroup/test1/ /cgroup/test2

- Set weights of group test1 and test2
	echo 1000 > /cgroup/test1/blkio.weight
	echo 500 > /cgroup/test2/blkio.weight

- Create two files of the same size (say 512MB each) on the same disk (file1,
  file2) and launch two dd threads in different cgroups to read those files.

	sync
	echo 3 > /proc/sys/vm/drop_caches

	dd if=/mnt/sdb/zerofile1 of=/dev/null &
	echo $! > /cgroup/test1/tasks
	cat /cgroup/test1/tasks

	dd if=/mnt/sdb/zerofile2 of=/dev/null &
	echo $! > /cgroup/test2/tasks
	cat /cgroup/test2/tasks

- At the macro level, the first dd should finish first. To get more precise
  data, keep looking (with the help of a script) at the blkio.time and
  blkio.sectors files of both the test1 and test2 groups. This will tell how
  much disk time (in milliseconds) each group got and how many sectors each
  group dispatched to the disk. We provide fairness in terms of disk time, so
  ideally blkio.time of the cgroups should be in proportion to the weight.
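
As an illustration, here is a minimal watcher sketch (it assumes the cgroup
paths set up above; adjust them to your setup):

	# Print each group's accumulated disk time and sectors once a second.
	while true; do
		for g in test1 test2; do
			echo "== $g =="
			cat /cgroup/$g/blkio.time /cgroup/$g/blkio.sectors
		done
		sleep 1
	done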

Various user visible config options
===================================
CONFIG_CFQ_GROUP_IOSCHED
	- Enables group scheduling in CFQ. Currently only 1 level of group
	  creation is allowed.

CONFIG_DEBUG_CFQ_IOSCHED
	- Enables some debugging messages in blktrace. Also creates an extra
	  cgroup file, blkio.dequeue.

Config options selected automatically
=====================================
These config options are not user visible and are selected/deselected
automatically based on IO scheduler configuration.

CONFIG_BLK_CGROUP
	- Block IO controller. Selected by CONFIG_CFQ_GROUP_IOSCHED.

CONFIG_DEBUG_BLK_CGROUP
	- Debug help. Selected by CONFIG_DEBUG_CFQ_IOSCHED.
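
Putting the two sections together, a resulting .config fragment might look
like this (illustrative; the last two options are selected automatically
rather than set by hand):

	CONFIG_CFQ_GROUP_IOSCHED=y
	CONFIG_DEBUG_CFQ_IOSCHED=y
	CONFIG_BLK_CGROUP=y
	CONFIG_DEBUG_BLK_CGROUP=y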

Details of cgroup files
=======================
- blkio.weight
	- Specifies per cgroup weight.

	  Currently allowed range of weights is from 100 to 1000.
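
	  For example (a hypothetical session; out-of-range values are
	  expected to be rejected):

		# cat /cgroup/test1/blkio.weight
		1000
		# echo 50 > /cgroup/test1/blkio.weight
		-bash: echo: write error: Invalid argument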

- blkio.time
	- Disk time allocated to the cgroup per device in milliseconds. First
	  two fields specify the major and minor number of the device and the
	  third field specifies the disk time allocated to the group in
	  milliseconds.

- blkio.sectors
	- Number of sectors transferred to/from disk by the group. First
	  two fields specify the major and minor number of the device and the
	  third field specifies the number of sectors transferred by the
	  group to/from the device.
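
	  For example, output along the following lines (hypothetical values)
	  would mean the group got 2778ms of disk time and transferred 8192
	  sectors on the device with major:minor 8:16:

		# cat /cgroup/test1/blkio.time
		8:16 2778
		# cat /cgroup/test1/blkio.sectors
		8:16 8192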

- blkio.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.

- blkio.io_serviced
	- Number of IOs completed to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of IOs.
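
	  A hypothetical sample of this four-field format, which is shared by
	  blkio.io_service_bytes and blkio.io_serviced:

		# cat /cgroup/test1/blkio.io_service_bytes
		8:16 Read 102912
		8:16 Write 0
		8:16 Sync 102912
		8:16 Async 0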

- blkio.io_service_time
	- Total amount of time between request dispatch and request completion
	  for the IOs done by this cgroup. This is in nanoseconds to make it
	  meaningful for flash devices too. For devices with a queue depth of 1,
	  this time represents the actual service time. When queue_depth > 1,
	  that is no longer true, as requests may be served out of order. This
	  may cause the service time for a given IO to include the service time
	  of multiple IOs when served out of order, which may result in total
	  io_service_time > actual time elapsed. This time is further divided by
	  the type of operation - read or write, sync or async. First two fields
	  specify the major and minor number of the device, third field
	  specifies the operation type and the fourth field specifies the
	  io_service_time in ns.
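
	  For example (hypothetical numbers): if two requests are dispatched
	  together at t=0 on a device with queue_depth=2 and both complete at
	  t=10ms, each is charged 10ms, so io_service_time grows by 20ms even
	  though only 10ms of wall-clock time elapsed.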

- blkio.io_wait_time
	- Total amount of time the IOs for this cgroup spent waiting in the
	  scheduler queues for service. This can be greater than the total time
	  elapsed since it is the cumulative io_wait_time for all IOs. It is
	  not a measure of total time the cgroup spent waiting, but rather a
	  measure of the wait_time of its individual IOs. For devices with
	  queue_depth > 1, this metric does not include the time an IO spends
	  waiting for service after being dispatched to the device until it
	  actually gets serviced (there might be a time lag here due to
	  re-ordering of requests by the device). This is in nanoseconds to
	  make it meaningful for flash devices too. This time is further
	  divided by the type of operation - read or write, sync or async.
	  First two fields specify the major and minor number of the device,
	  third field specifies the operation type and the fourth field
	  specifies the io_wait_time in ns.
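
	  For example (hypothetical numbers): if two IOs are queued at t=0 and
	  dispatched to the device at t=5ms and t=8ms respectively, io_wait_time
	  grows by 5ms + 8ms = 13ms (reported in ns) even though only 8ms of
	  wall-clock time elapsed.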

- blkio.io_merged
	- Total number of bios/requests merged into requests belonging to this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.

- blkio.dequeue
	- Debugging aid, only enabled if CONFIG_DEBUG_CFQ_IOSCHED=y. This
	  gives statistics about how many times a group was dequeued
	  from the service tree of the device. First two fields specify the
	  major and minor number of the device and the third field specifies
	  the number of times the group was dequeued from that particular
	  device.

- blkio.reset_stats
	- Writing an int to this file will result in resetting all the stats
	  for that cgroup.
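
	  For example, to clear the counters of group test1:

		echo 1 > /cgroup/test1/blkio.reset_stats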

CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/group_isolation

If group_isolation=1, it provides stronger isolation between groups at the
expense of throughput. By default group_isolation is 0. In general this means
that with group_isolation=0, you should expect fairness for sequential
workloads only. Set group_isolation=1 to see fairness for random IO workloads
also.

Generally CFQ will put a random seeky workload in the sync-noidle category.
CFQ will disable idling on these individual queues and instead do collective
idling on the group of such queues. Generally these are slow moving queues,
and if there is a sync-noidle service tree in each group, each group gets
exclusive access to the disk for a certain period. That will bring throughput
down if the group does not have enough IO to drive deeper queue depths and
utilize disk capacity to the fullest in the slice allocated to it. But the
flip side is that even a random reader should get better latencies and overall
throughput if there are lots of sequential readers/sync-idle workloads running
in the system.

If group_isolation=0, then CFQ automatically moves all the random seeky
queues to the root group. That means there will be no service differentiation
for that kind of workload. This leads to better throughput as we do collective
idling on the root sync-noidle tree.

By default one should run with group_isolation=0. If that is not sufficient
and one wants stronger isolation between groups, then set group_isolation=1,
but this will come at the cost of reduced throughput.
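
For example, assuming the disk used in the HOWTO above is sdb and CFQ is its
IO scheduler:

	# cat /sys/block/sdb/queue/iosched/group_isolation
	0
	# echo 1 > /sys/block/sdb/queue/iosched/group_isolation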

What works
==========
- Currently only sync IO queues are supported. All the buffered writes are
  still system wide and not per group. Hence we will not see service
  differentiation between buffered writes of different groups.
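
One way to still exercise service differentiation for writes (a sketch; it
relies on direct IO writes being treated as sync and reuses the paths from
the HOWTO above) is to bypass the page cache with O_DIRECT so that writes are
issued from the tasks' own context:

	# dd with oflag=direct issues synchronous direct IO writes.
	dd if=/dev/zero of=/mnt/sdb/outfile1 bs=1M count=512 oflag=direct &
	echo $! > /cgroup/test1/tasks
	dd if=/dev/zero of=/mnt/sdb/outfile2 bs=1M count=512 oflag=direct &
	echo $! > /cgroup/test2/tasks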