Documentation for /proc/sys/fs/* kernel version 2.2.10
(c) 1998, 1999, Rik van Riel <riel@nl.linux.org>
(c) 2009, Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in README.

==============================================================
This file contains documentation for the sysctl files in
/proc/sys/fs/ and is valid for Linux kernel version 2.2.

The files in this directory can be used to tune and monitor
miscellaneous and general things in the operation of the Linux
kernel. Since some of the files _can_ be used to screw up your
system, it is advisable to read both documentation and source
before actually making adjustments.
1. /proc/sys/fs
----------------------------------------------------------

Currently, these files are in /proc/sys/fs:
- aio-max-nr
- aio-nr
- dentry-state
- dquot-max
- dquot-nr
- file-max
- file-nr
- inode-max
- inode-nr
- inode-state
- nr_open
- overflowgid
- overflowuid
- pipe-user-pages-hard
- pipe-user-pages-soft
- suid_dumpable
- super-max
- super-nr
==============================================================

aio-nr & aio-max-nr:

aio-nr is the running total of the number of events specified on the
io_setup system call for all currently active aio contexts. If aio-nr
reaches aio-max-nr then io_setup will fail with EAGAIN. Note that
raising aio-max-nr does not result in the pre-allocation or re-sizing
of any kernel data structures.
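The accounting described above can be sketched as follows. This is an
illustrative model, not kernel code; the function name is made up:

```python
# A sketch (not kernel code) of the accounting described above: io_setup()
# succeeds only while the running total of requested events, aio-nr, stays
# within aio-max-nr; otherwise it fails with EAGAIN.

def io_setup_would_succeed(aio_nr, aio_max_nr, nr_events):
    """Return True if a new context requesting nr_events would be allowed."""
    return aio_nr + nr_events <= aio_max_nr

print(io_setup_would_succeed(0, 65536, 128))       # room to spare
print(io_setup_would_succeed(65500, 65536, 128))   # would push aio-nr past the limit
```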
==============================================================

dentry-state:

From linux/fs/dentry.c:
--------------------------------------------------------------
struct {
        int nr_dentry;
        int nr_unused;
        int age_limit;         /* age in seconds */
        int want_pages;        /* pages requested by system */
        int dummy[2];
} dentry_stat = {0, 0, 45, 0,};
--------------------------------------------------------------
Dentries are dynamically allocated and deallocated, and
nr_dentry seems to be 0 all the time. Hence it's safe to
assume that only nr_unused, age_limit and want_pages are
used. Nr_unused seems to be exactly what its name says.
Age_limit is the age in seconds after which dcache entries
can be reclaimed when memory is short and want_pages is
nonzero when shrink_dcache_pages() has been called and the
dcache isn't pruned yet.
==============================================================

dquot-max & dquot-nr:

The file dquot-max shows the maximum number of cached disk
quota entries.

The file dquot-nr shows the number of allocated disk quota
entries and the number of free disk quota entries.

If the number of free cached disk quotas is very low and
you have some awesome number of simultaneous system users,
you might want to raise the limit.
==============================================================

file-max & file-nr:

The value in file-max denotes the maximum number of file-
handles that the Linux kernel will allocate. When you get lots
of error messages about running out of file handles, you might
want to increase this limit.

Historically, the kernel was able to allocate file handles
dynamically, but not to free them again. The three values in
file-nr denote the number of allocated file handles, the number
of allocated but unused file handles, and the maximum number of
file handles. Linux 2.6 always reports 0 as the number of free
file handles -- this is not an error, it just means that the
number of allocated file handles exactly matches the number of
used file handles.

Attempts to allocate more file descriptors than file-max are
reported with printk, look for "VFS: file-max limit <number>
reached".
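Reading the three file-nr counters can be sketched like this. The parsing
helper and the sample contents are illustrative, not taken from a live
system:

```python
def parse_file_nr(text):
    """Split the three counters found in /proc/sys/fs/file-nr:
    allocated handles, allocated-but-unused handles, and the maximum."""
    allocated, unused, maximum = (int(field) for field in text.split())
    return {"allocated": allocated, "unused": unused, "max": maximum}

# Illustrative contents, not read from a live system:
sample = "1952\t0\t807103\n"
print(parse_file_nr(sample))
```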
==============================================================

nr_open:

This denotes the maximum number of file-handles a process can
allocate. Default value is 1024*1024 (1048576) which should be
enough for most machines. Actual limit depends on RLIMIT_NOFILE
resource limit.
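A process can inspect its own RLIMIT_NOFILE limits with the standard
`resource` module (Unix only); this only reads the current limits, it does
not touch nr_open:

```python
import resource

# The per-process cap in effect is the RLIMIT_NOFILE resource limit;
# nr_open is the ceiling up to which the hard limit may be raised.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft, "hard limit:", hard)
```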
==============================================================

inode-max, inode-nr & inode-state:

As with file handles, the kernel allocates the inode structures
dynamically, but can't free them yet.

The value in inode-max denotes the maximum number of inode
handlers. This value should be 3-4 times larger than the value
in file-max, since stdin, stdout and network sockets also
need an inode struct to handle them. When you regularly run
out of inodes, you need to increase this value.

The file inode-nr contains the first two items from
inode-state, so we'll skip to that file...

Inode-state contains three actual numbers and four dummies.
The actual numbers are, in order of appearance, nr_inodes,
nr_free_inodes and preshrink.

Nr_inodes stands for the number of inodes the system has
allocated, this can be slightly more than inode-max because
Linux allocates them one pageful at a time.
Nr_free_inodes represents the number of free inodes (?) and
preshrink is nonzero when the nr_inodes > inode-max and the
system needs to prune the inode list instead of allocating
more.
==============================================================

overflowgid & overflowuid:

Some filesystems only support 16-bit UIDs and GIDs, although in Linux
UIDs and GIDs are 32 bits. When one of these filesystems is mounted
with writes enabled, any UID or GID that would exceed 65535 is translated
to a fixed value before being written to disk.

These sysctls allow you to change the value of the fixed UID and GID.
The default is 65534.
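The translation rule can be sketched as a small helper; the function name
is made up for illustration:

```python
OVERFLOW_ID = 65534  # the default of overflowuid/overflowgid

def id_as_16bit(id32, overflow=OVERFLOW_ID):
    """Translate a 32-bit UID/GID for a filesystem that only stores 16 bits."""
    return id32 if id32 <= 65535 else overflow

print(id_as_16bit(1000))     # fits in 16 bits, stored unchanged
print(id_as_16bit(100000))   # too large, stored as the overflow value
```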
==============================================================

pipe-user-pages-hard:

Maximum total number of pages a non-privileged user may allocate for pipes.
Once this limit is reached, no new pipes may be allocated until usage goes
below the limit again. When set to 0, no limit is applied, which is the default
setting.
==============================================================

pipe-user-pages-soft:

Maximum total number of pages a non-privileged user may allocate for pipes
before the pipe size gets limited to a single page. Once this limit is reached,
new pipes will be limited to a single page in size for this user in order to
limit total memory usage, and trying to increase them using fcntl() will be
denied until usage goes below the limit again. The default value allows
allocation of up to 1024 pipes at their default size. When set to 0, no limit
is applied.
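Assuming 4 KiB pages and the usual default pipe capacity of 64 KiB (16
pages), "1024 pipes at their default size" works out to a soft limit of
16384 pages; both constants here are assumptions for illustration:

```python
PAGE_SIZE = 4096                       # assumed page size (bytes)
DEFAULT_PIPE_BYTES = 16 * PAGE_SIZE    # assumed default pipe capacity: 64 KiB

# Pages consumed by 1024 pipes at their default size:
soft_limit_pages = 1024 * DEFAULT_PIPE_BYTES // PAGE_SIZE
print(soft_limit_pages)
```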
==============================================================

suid_dumpable:

This value can be used to query and set the core dump mode for setuid
or otherwise protected/tainted binaries. The modes are

0 - (default) - traditional behaviour. Any process which has changed
	privilege levels or is execute only will not be dumped.
1 - (debug) - all processes dump core when possible. The core dump is
	owned by the current user and no security is applied. This is
	intended for system debugging situations only. Ptrace is unchecked.
	This is insecure as it allows regular users to examine the memory
	contents of privileged processes.
2 - (suidsafe) - any binary which normally would not be dumped is dumped
	anyway, but only if the "core_pattern" kernel sysctl is set to
	either a pipe handler or a fully qualified path. (For more details
	on this limitation, see CVE-2006-2451.) This mode is appropriate
	when administrators are attempting to debug problems in a normal
	environment, and either have a core dump pipe handler that knows
	to treat privileged core dumps with care, or a specific directory
	defined for catching core dumps. If a core dump happens without
	a pipe handler or fully qualified path, a message will be emitted
	to syslog warning about the lack of a correct setting.
==============================================================

super-max & super-nr:

These numbers control the maximum number of superblocks, and
thus the maximum number of mounted filesystems the kernel
can have. You only need to increase super-max if you need to
mount more filesystems than the current value in super-max
allows you to.
==============================================================

aio-nr & aio-max-nr:

aio-nr shows the current system-wide number of asynchronous io
requests. aio-max-nr allows you to change the maximum value
aio-nr can grow to.
==============================================================

2. /proc/sys/fs/binfmt_misc
----------------------------------------------------------

Documentation for the files in /proc/sys/fs/binfmt_misc is
in Documentation/binfmt_misc.txt.
3. /proc/sys/fs/mqueue - POSIX message queues filesystem
----------------------------------------------------------

The "mqueue" filesystem provides the necessary kernel features to enable the
creation of a user space library that implements the POSIX message queues
API (as noted by the MSG tag in the POSIX 1003.1-2001 version of the System
Interfaces specification.)

The "mqueue" filesystem contains values for determining/setting the amount of
resources used by the file system.
/proc/sys/fs/mqueue/queues_max is a read/write file for setting/getting the
maximum number of message queues allowed on the system.

/proc/sys/fs/mqueue/msg_max is a read/write file for setting/getting the
maximum number of messages in a queue value. In fact it is the limiting value
for another (user) limit which is set in the mq_open invocation. This attribute
of a queue must be less than or equal to msg_max.
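The constraint between the per-queue attribute and the system-wide sysctl
can be sketched as a simple check; the helper name is made up for
illustration:

```python
def mq_maxmsg_allowed(requested_maxmsg, msg_max):
    """The mq_maxmsg attribute passed to mq_open() must not exceed the
    msg_max sysctl, or queue creation fails."""
    return requested_maxmsg <= msg_max

print(mq_maxmsg_allowed(10, 10))   # at the limit: accepted
print(mq_maxmsg_allowed(11, 10))   # above msg_max: mq_open would fail
```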
/proc/sys/fs/mqueue/msgsize_max is a read/write file for setting/getting the
maximum message size value (it is an attribute of every message queue, set
during its creation).
4. /proc/sys/fs/epoll - Configuration options for the epoll interface
--------------------------------------------------------

This directory contains configuration options for the epoll(7) interface.
max_user_watches
----------------

Every epoll file descriptor can store a number of files to be monitored
for event readiness. Each one of these monitored files constitutes a "watch".
This configuration option sets the maximum number of "watches" that are
allowed for each user.
Each "watch" costs roughly 90 bytes on a 32bit kernel, and roughly 160 bytes
on a 64bit one.
The current default value for max_user_watches is 1/32 of the available
low memory, divided by the "watch" cost in bytes.
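As a worked example of that default, assuming (purely for illustration)
1 GiB of low memory on a 32-bit kernel:

```python
LOW_MEMORY_BYTES = 2 ** 30   # assumed: 1 GiB of low memory, illustrative only
WATCH_COST_32BIT = 90        # approximate per-watch cost on a 32-bit kernel

# 1/32 of low memory, divided by the per-watch cost in bytes:
default_max_user_watches = (LOW_MEMORY_BYTES // 32) // WATCH_COST_32BIT
print(default_max_user_watches)
```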