4 * Provide support for fcntl()'s F_GETLK, F_SETLK, and F_SETLKW calls.
5 * Doug Evans (dje@spiff.uucp), August 07, 1992
7 * Deadlock detection added.
8 * FIXME: one thing isn't handled yet:
9 * - mandatory locks (requires lots of changes elsewhere)
10 * Kelly Carmichael (kelly@[142.24.8.65]), September 17, 1994.
12 * Miscellaneous edits, and a total rewrite of posix_lock_file() code.
13 * Kai Petzke (wpp@marie.physik.tu-berlin.de), 1994
15 * Converted file_lock_table to a linked list from an array, which eliminates
16 * the limits on how many active file locks are open.
17 * Chad Page (pageone@netcom.com), November 27, 1994
19 * Removed dependency on file descriptors. dup()'ed file descriptors now
20 * get the same locks as the original file descriptors, and a close() on
21 * any file descriptor removes ALL the locks on the file for the current
22 * process. Since locks still depend on the process id, locks are inherited
23 * after an exec() but not after a fork(). This agrees with POSIX, and both
24 * BSD and SVR4 practice.
25 * Andy Walker (andy@lysaker.kvaerner.no), February 14, 1995
27 * Scrapped free list which is redundant now that we allocate locks
28 * dynamically with kmalloc()/kfree().
29 * Andy Walker (andy@lysaker.kvaerner.no), February 21, 1995
31 * Implemented two lock personalities - FL_FLOCK and FL_POSIX.
33 * FL_POSIX locks are created with calls to fcntl() and lockf() through the
34 * fcntl() system call. They have the semantics described above.
36 * FL_FLOCK locks are created with calls to flock(), through the flock()
37 * system call, which is new. Old C libraries implement flock() via fcntl()
38 * and will continue to use the old, broken implementation.
40 * FL_FLOCK locks follow the 4.4 BSD flock() semantics. They are associated
41 * with a file pointer (filp). As a result they can be shared by a parent
42 * process and its children after a fork(). They are removed when the last
43 * file descriptor referring to the file pointer is closed (unless explicitly
46 * FL_FLOCK locks never deadlock, an existing lock is always removed before
47 * upgrading from shared to exclusive (or vice versa). When this happens
48 * any processes blocked by the current lock are woken up and allowed to
49 * run before the new lock is applied.
50 * Andy Walker (andy@lysaker.kvaerner.no), June 09, 1995
52 * Removed some race conditions in flock_lock_file(), marked other possible
53 * races. Just grep for FIXME to see them.
54 * Dmitry Gorodchanin (pgmdsg@ibi.com), February 09, 1996.
56 * Addressed Dmitry's concerns. Deadlock checking no longer recursive.
57 * Lock allocation changed to GFP_ATOMIC as we can't afford to sleep
58 * once we've checked for blocking and deadlocking.
59 * Andy Walker (andy@lysaker.kvaerner.no), April 03, 1996.
61 * Initial implementation of mandatory locks. SunOS turned out to be
62 * a rotten model, so I implemented the "obvious" semantics.
63 * See 'Documentation/mandatory.txt' for details.
64 * Andy Walker (andy@lysaker.kvaerner.no), April 06, 1996.
66 * Don't allow mandatory locks on mmap()'ed files. Added simple functions to
67 * check if a file has mandatory locks, used by mmap(), open() and creat() to
68 * see if system call should be rejected. Ref. HP-UX/SunOS/Solaris Reference
70 * Andy Walker (andy@lysaker.kvaerner.no), April 09, 1996.
72 * Tidied up block list handling. Added '/proc/locks' interface.
73 * Andy Walker (andy@lysaker.kvaerner.no), April 24, 1996.
75 * Fixed deadlock condition for pathological code that mixes calls to
76 * flock() and fcntl().
77 * Andy Walker (andy@lysaker.kvaerner.no), April 29, 1996.
79 * Allow only one type of locking scheme (FL_POSIX or FL_FLOCK) to be in use
80 * for a given file at a time. Changed the CONFIG_LOCK_MANDATORY scheme to
81 * guarantee sensible behaviour in the case where file system modules might
82 * be compiled with different options than the kernel itself.
83 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
85 * Added a couple of missing wake_up() calls. Thanks to Thomas Meckel
86 * (Thomas.Meckel@mni.fh-giessen.de) for spotting this.
87 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
89 * Changed FL_POSIX locks to use the block list in the same way as FL_FLOCK
90 * locks. Changed process synchronisation to avoid dereferencing locks that
91 * have already been freed.
92 * Andy Walker (andy@lysaker.kvaerner.no), Sep 21, 1996.
94 * Made the block list a circular list to minimise searching in the list.
95 * Andy Walker (andy@lysaker.kvaerner.no), Sep 25, 1996.
97 * Made mandatory locking a mount option. Default is not to allow mandatory
99 * Andy Walker (andy@lysaker.kvaerner.no), Oct 04, 1996.
101 * Some adaptations for NFS support.
102 * Olaf Kirch (okir@monad.swb.de), Dec 1996,
104 * Fixed /proc/locks interface so that we can't overrun the buffer we are handed.
105 * Andy Walker (andy@lysaker.kvaerner.no), May 12, 1997.
107 * Use slab allocator instead of kmalloc/kfree.
108 * Use generic list implementation from <linux/list.h>.
109 * Sped up posix_locks_deadlock by only considering blocked locks.
110 * Matthew Wilcox <willy@debian.org>, March, 2000.
112 * Leases and LOCK_MAND
113 * Matthew Wilcox <willy@debian.org>, June, 2000.
114 * Stephen Rothwell <sfr@canb.auug.org.au>, June, 2000.
117 #include <linux/capability.h>
118 #include <linux/file.h>
119 #include <linux/fs.h>
120 #include <linux/init.h>
121 #include <linux/module.h>
122 #include <linux/security.h>
123 #include <linux/slab.h>
124 #include <linux/smp_lock.h>
125 #include <linux/syscalls.h>
126 #include <linux/time.h>
127 #include <linux/rcupdate.h>
129 #include <asm/semaphore.h>
130 #include <asm/uaccess.h>
132 #define IS_POSIX(fl) (fl->fl_flags & FL_POSIX)
133 #define IS_FLOCK(fl) (fl->fl_flags & FL_FLOCK)
134 #define IS_LEASE(fl) (fl->fl_flags & FL_LEASE)
136 int leases_enable = 1;
137 int lease_break_time = 45;
139 #define for_each_lock(inode, lockp) \
140 for (lockp = &inode->i_flock; *lockp != NULL; lockp = &(*lockp)->fl_next)
142 static LIST_HEAD(file_lock_list);
143 static LIST_HEAD(blocked_list);
145 static struct kmem_cache *filelock_cache __read_mostly;
147 /* Allocate an empty lock structure. */
148 static struct file_lock *locks_alloc_lock(void)
150 return kmem_cache_alloc(filelock_cache, GFP_KERNEL);
153 static void locks_release_private(struct file_lock *fl)
156 if (fl->fl_ops->fl_release_private)
157 fl->fl_ops->fl_release_private(fl);
161 if (fl->fl_lmops->fl_release_private)
162 fl->fl_lmops->fl_release_private(fl);
168 /* Free a lock which is not in use. */
169 static void locks_free_lock(struct file_lock *fl)
171 BUG_ON(waitqueue_active(&fl->fl_wait));
172 BUG_ON(!list_empty(&fl->fl_block));
173 BUG_ON(!list_empty(&fl->fl_link));
175 locks_release_private(fl);
176 kmem_cache_free(filelock_cache, fl);
179 void locks_init_lock(struct file_lock *fl)
181 INIT_LIST_HEAD(&fl->fl_link);
182 INIT_LIST_HEAD(&fl->fl_block);
183 init_waitqueue_head(&fl->fl_wait);
185 fl->fl_fasync = NULL;
191 fl->fl_start = fl->fl_end = 0;
196 EXPORT_SYMBOL(locks_init_lock);
199 * Initialises the fields of the file lock which are invariant for
202 static void init_once(struct kmem_cache *cache, void *foo)
204 struct file_lock *lock = (struct file_lock *) foo;
206 locks_init_lock(lock);
209 static void locks_copy_private(struct file_lock *new, struct file_lock *fl)
212 if (fl->fl_ops->fl_copy_lock)
213 fl->fl_ops->fl_copy_lock(new, fl);
214 new->fl_ops = fl->fl_ops;
217 if (fl->fl_lmops->fl_copy_lock)
218 fl->fl_lmops->fl_copy_lock(new, fl);
219 new->fl_lmops = fl->fl_lmops;
224 * Initialize a new lock from an existing file_lock structure.
226 static void __locks_copy_lock(struct file_lock *new, const struct file_lock *fl)
228 new->fl_owner = fl->fl_owner;
229 new->fl_pid = fl->fl_pid;
231 new->fl_flags = fl->fl_flags;
232 new->fl_type = fl->fl_type;
233 new->fl_start = fl->fl_start;
234 new->fl_end = fl->fl_end;
236 new->fl_lmops = NULL;
239 void locks_copy_lock(struct file_lock *new, struct file_lock *fl)
241 locks_release_private(new);
243 __locks_copy_lock(new, fl);
244 new->fl_file = fl->fl_file;
245 new->fl_ops = fl->fl_ops;
246 new->fl_lmops = fl->fl_lmops;
248 locks_copy_private(new, fl);
251 EXPORT_SYMBOL(locks_copy_lock);
253 static inline int flock_translate_cmd(int cmd) {
255 return cmd & (LOCK_MAND | LOCK_RW);
267 /* Fill in a file_lock structure with an appropriate FLOCK lock. */
268 static int flock_make_lock(struct file *filp, struct file_lock **lock,
271 struct file_lock *fl;
272 int type = flock_translate_cmd(cmd);
276 fl = locks_alloc_lock();
281 fl->fl_pid = current->tgid;
282 fl->fl_flags = FL_FLOCK;
284 fl->fl_end = OFFSET_MAX;
290 static int assign_type(struct file_lock *fl, int type)
304 /* Verify a "struct flock" and copy it to a "struct file_lock" as a POSIX
307 static int flock_to_posix_lock(struct file *filp, struct file_lock *fl,
312 switch (l->l_whence) {
320 start = i_size_read(filp->f_path.dentry->d_inode);
326 /* POSIX-1996 leaves the case l->l_len < 0 undefined;
327 POSIX-2001 defines it. */
331 fl->fl_end = OFFSET_MAX;
333 end = start + l->l_len - 1;
335 } else if (l->l_len < 0) {
342 fl->fl_start = start; /* we record the absolute position */
343 if (fl->fl_end < fl->fl_start)
346 fl->fl_owner = current->files;
347 fl->fl_pid = current->tgid;
349 fl->fl_flags = FL_POSIX;
353 return assign_type(fl, l->l_type);
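/*
 * Illustrative sketch (not part of the original source) of how the
 * userspace struct flock fields map onto fl_start/fl_end under the rules
 * above, assuming l_whence == SEEK_SET and a hypothetical 100-byte offset:
 *
 *	struct flock l = { .l_whence = SEEK_SET, .l_start = 100, .l_len = 50 };
 *		=> fl_start = 100, fl_end = 100 + 50 - 1 = 149
 *
 *	l.l_len = 0;
 *		=> fl_end = OFFSET_MAX (lock runs to end of file)
 *
 *	l.l_len = -50;		(the POSIX-2001 case noted above)
 *		=> the range ends just before l_start:
 *		   fl_start = 50, fl_end = 99
 */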
356 #if BITS_PER_LONG == 32
357 static int flock64_to_posix_lock(struct file *filp, struct file_lock *fl,
362 switch (l->l_whence) {
370 start = i_size_read(filp->f_path.dentry->d_inode);
379 fl->fl_end = OFFSET_MAX;
381 fl->fl_end = start + l->l_len - 1;
382 } else if (l->l_len < 0) {
383 fl->fl_end = start - 1;
388 fl->fl_start = start; /* we record the absolute position */
389 if (fl->fl_end < fl->fl_start)
392 fl->fl_owner = current->files;
393 fl->fl_pid = current->tgid;
395 fl->fl_flags = FL_POSIX;
403 fl->fl_type = l->l_type;
413 /* default lease lock manager operations */
414 static void lease_break_callback(struct file_lock *fl)
416 kill_fasync(&fl->fl_fasync, SIGIO, POLL_MSG);
419 static void lease_release_private_callback(struct file_lock *fl)
424 f_delown(fl->fl_file);
425 fl->fl_file->f_owner.signum = 0;
428 static int lease_mylease_callback(struct file_lock *fl, struct file_lock *try)
430 return fl->fl_file == try->fl_file;
433 static struct lock_manager_operations lease_manager_ops = {
434 .fl_break = lease_break_callback,
435 .fl_release_private = lease_release_private_callback,
436 .fl_mylease = lease_mylease_callback,
437 .fl_change = lease_modify,
441 * Initialize a lease, use the default lock manager operations
443 static int lease_init(struct file *filp, int type, struct file_lock *fl)
445 if (assign_type(fl, type) != 0)
448 fl->fl_owner = current->files;
449 fl->fl_pid = current->tgid;
452 fl->fl_flags = FL_LEASE;
454 fl->fl_end = OFFSET_MAX;
456 fl->fl_lmops = &lease_manager_ops;
460 /* Allocate a file_lock initialised to this type of lease */
461 static struct file_lock *lease_alloc(struct file *filp, int type)
463 struct file_lock *fl = locks_alloc_lock();
467 return ERR_PTR(error);
469 error = lease_init(filp, type, fl);
472 return ERR_PTR(error);
477 /* Check if two locks overlap each other.
479 static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
481 return ((fl1->fl_end >= fl2->fl_start) &&
482 (fl2->fl_end >= fl1->fl_start));
486 * Check whether two locks have the same owner.
488 static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
490 if (fl1->fl_lmops && fl1->fl_lmops->fl_compare_owner)
491 return fl2->fl_lmops == fl1->fl_lmops &&
492 fl1->fl_lmops->fl_compare_owner(fl1, fl2);
493 return fl1->fl_owner == fl2->fl_owner;
496 /* Remove waiter from blocker's block list.
497 * When blocker ends up pointing to itself then the list is empty.
499 static void __locks_delete_block(struct file_lock *waiter)
501 list_del_init(&waiter->fl_block);
502 list_del_init(&waiter->fl_link);
503 waiter->fl_next = NULL;
508 static void locks_delete_block(struct file_lock *waiter)
511 __locks_delete_block(waiter);
515 /* Insert waiter into blocker's block list.
516 * We use a circular list so that processes can be easily woken up in
517 * the order they blocked. The documentation doesn't require this but
518 * it seems like the reasonable thing to do.
520 static void locks_insert_block(struct file_lock *blocker,
521 struct file_lock *waiter)
523 BUG_ON(!list_empty(&waiter->fl_block));
524 list_add_tail(&waiter->fl_block, &blocker->fl_block);
525 waiter->fl_next = blocker;
526 if (IS_POSIX(blocker))
527 list_add(&waiter->fl_link, &blocked_list);
530 /* Wake up processes blocked waiting for blocker.
531 * If told to wait then schedule the processes until the block list
532 * is empty, otherwise empty the block list ourselves.
534 static void locks_wake_up_blocks(struct file_lock *blocker)
536 while (!list_empty(&blocker->fl_block)) {
537 struct file_lock *waiter;
539 waiter = list_first_entry(&blocker->fl_block,
540 struct file_lock, fl_block);
541 __locks_delete_block(waiter);
542 if (waiter->fl_lmops && waiter->fl_lmops->fl_notify)
543 waiter->fl_lmops->fl_notify(waiter);
545 wake_up(&waiter->fl_wait);
549 /* Insert file lock fl into an inode's lock list at the position indicated
550 * by pos. At the same time add the lock to the global file lock list.
552 static void locks_insert_lock(struct file_lock **pos, struct file_lock *fl)
554 list_add(&fl->fl_link, &file_lock_list);
556 /* insert into file's list */
560 if (fl->fl_ops && fl->fl_ops->fl_insert)
561 fl->fl_ops->fl_insert(fl);
565 * Delete a lock and then free it.
566 * Wake up processes that are blocked waiting for this lock,
567 * notify the FS that the lock has been cleared and
568 * finally free the lock.
570 static void locks_delete_lock(struct file_lock **thisfl_p)
572 struct file_lock *fl = *thisfl_p;
574 *thisfl_p = fl->fl_next;
576 list_del_init(&fl->fl_link);
578 fasync_helper(0, fl->fl_file, 0, &fl->fl_fasync);
579 if (fl->fl_fasync != NULL) {
580 printk(KERN_ERR "locks_delete_lock: fasync == %p\n", fl->fl_fasync);
581 fl->fl_fasync = NULL;
584 if (fl->fl_ops && fl->fl_ops->fl_remove)
585 fl->fl_ops->fl_remove(fl);
587 locks_wake_up_blocks(fl);
591 /* Determine if lock sys_fl blocks lock caller_fl. Common functionality
592 * checks for shared/exclusive status of overlapping locks.
594 static int locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
596 if (sys_fl->fl_type == F_WRLCK)
598 if (caller_fl->fl_type == F_WRLCK)
603 /* Determine if lock sys_fl blocks lock caller_fl. POSIX specific
604 * checking before calling the locks_conflict().
606 static int posix_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
608 /* POSIX locks owned by the same process do not conflict with
611 if (!IS_POSIX(sys_fl) || posix_same_owner(caller_fl, sys_fl))
614 /* Check whether they overlap */
615 if (!locks_overlap(caller_fl, sys_fl))
618 return (locks_conflict(caller_fl, sys_fl));
621 /* Determine if lock sys_fl blocks lock caller_fl. FLOCK specific
622 * checking before calling the locks_conflict().
624 static int flock_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
626 /* FLOCK locks referring to the same filp do not conflict with
629 if (!IS_FLOCK(sys_fl) || (caller_fl->fl_file == sys_fl->fl_file))
631 if ((caller_fl->fl_type & LOCK_MAND) || (sys_fl->fl_type & LOCK_MAND))
634 return (locks_conflict(caller_fl, sys_fl));
637 static int interruptible_sleep_on_locked(wait_queue_head_t *fl_wait, int timeout)
640 DECLARE_WAITQUEUE(wait, current);
642 __set_current_state(TASK_INTERRUPTIBLE);
643 add_wait_queue(fl_wait, &wait);
647 result = schedule_timeout(timeout);
648 if (signal_pending(current))
649 result = -ERESTARTSYS;
650 remove_wait_queue(fl_wait, &wait);
651 __set_current_state(TASK_RUNNING);
655 static int locks_block_on_timeout(struct file_lock *blocker, struct file_lock *waiter, int time)
658 locks_insert_block(blocker, waiter);
659 result = interruptible_sleep_on_locked(&waiter->fl_wait, time);
660 __locks_delete_block(waiter);
665 posix_test_lock(struct file *filp, struct file_lock *fl)
667 struct file_lock *cfl;
670 for (cfl = filp->f_path.dentry->d_inode->i_flock; cfl; cfl = cfl->fl_next) {
673 if (posix_locks_conflict(fl, cfl))
677 __locks_copy_lock(fl, cfl);
679 fl->fl_type = F_UNLCK;
684 EXPORT_SYMBOL(posix_test_lock);
687 * Deadlock detection:
689 * We attempt to detect deadlocks that are due purely to posix file
692 * We assume that a task can be waiting for at most one lock at a time.
693 * So for any acquired lock, the process holding that lock may be
694 * waiting on at most one other lock. That lock in turns may be held by
695 * someone waiting for at most one other lock. Given a requested lock
696 * caller_fl which is about to wait for a conflicting lock block_fl, we
697 * follow this chain of waiters to ensure we are not about to create a
700 * Since we do this before we ever put a process to sleep on a lock, we
701 * are ensured that there is never a cycle; that is what guarantees that
702 * the while() loop in posix_locks_deadlock() eventually completes.
704 * Note: the above assumption may not be true when handling lock
705 * requests from a broken NFS client. It may also fail in the presence
706 * of tasks (such as posix threads) sharing the same open file table.
708 * To handle those cases, we just bail out after a few iterations.
711 #define MAX_DEADLK_ITERATIONS 10
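/*
 * Example scenario (illustrative, not from the original source) of the
 * cycle this check catches:
 *
 *	task A holds a write lock on bytes 0-99 and blocks requesting 100-199;
 *	task B holds a write lock on bytes 100-199 and now requests 0-99.
 *
 * Starting from B's conflicting lock (A's lock on 0-99), the chain walk
 * finds that A is itself blocked on a lock owned by B, so B's request is
 * refused with -EDEADLK instead of being put to sleep.
 */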
713 /* Find a lock that the owner of the given block_fl is blocking on. */
714 static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
716 struct file_lock *fl;
718 list_for_each_entry(fl, &blocked_list, fl_link) {
719 if (posix_same_owner(fl, block_fl))
725 static int posix_locks_deadlock(struct file_lock *caller_fl,
726 struct file_lock *block_fl)
730 while ((block_fl = what_owner_is_waiting_for(block_fl))) {
731 if (i++ > MAX_DEADLK_ITERATIONS)
733 if (posix_same_owner(caller_fl, block_fl))
739 /* Try to create a FLOCK lock on filp. We always insert new FLOCK locks
740 * after any leases, but before any posix locks.
742 * Note that if called with an FL_EXISTS argument, the caller may determine
743 * whether or not a lock was successfully freed by testing the return
746 static int flock_lock_file(struct file *filp, struct file_lock *request)
748 struct file_lock *new_fl = NULL;
749 struct file_lock **before;
750 struct inode * inode = filp->f_path.dentry->d_inode;
755 if (request->fl_flags & FL_ACCESS)
758 if (request->fl_type != F_UNLCK) {
760 new_fl = locks_alloc_lock();
766 for_each_lock(inode, before) {
767 struct file_lock *fl = *before;
772 if (filp != fl->fl_file)
774 if (request->fl_type == fl->fl_type)
777 locks_delete_lock(before);
781 if (request->fl_type == F_UNLCK) {
782 if ((request->fl_flags & FL_EXISTS) && !found)
788 * If a higher-priority process was blocked on the old file lock,
789 * give it the opportunity to lock the file.
795 for_each_lock(inode, before) {
796 struct file_lock *fl = *before;
801 if (!flock_locks_conflict(request, fl))
804 if (request->fl_flags & FL_SLEEP)
805 locks_insert_block(fl, request);
808 if (request->fl_flags & FL_ACCESS)
810 locks_copy_lock(new_fl, request);
811 locks_insert_lock(before, new_fl);
818 locks_free_lock(new_fl);
822 static int __posix_lock_file(struct inode *inode, struct file_lock *request, struct file_lock *conflock)
824 struct file_lock *fl;
825 struct file_lock *new_fl = NULL;
826 struct file_lock *new_fl2 = NULL;
827 struct file_lock *left = NULL;
828 struct file_lock *right = NULL;
829 struct file_lock **before;
830 int error, added = 0;
833 * We may need two file_lock structures for this operation,
834 * so we get them in advance to avoid races.
836 * In some cases we can be sure, that no new locks will be needed
838 if (!(request->fl_flags & FL_ACCESS) &&
839 (request->fl_type != F_UNLCK ||
840 request->fl_start != 0 || request->fl_end != OFFSET_MAX)) {
841 new_fl = locks_alloc_lock();
842 new_fl2 = locks_alloc_lock();
846 if (request->fl_type != F_UNLCK) {
847 for_each_lock(inode, before) {
851 if (!posix_locks_conflict(request, fl))
854 locks_copy_lock(conflock, fl);
856 if (!(request->fl_flags & FL_SLEEP))
859 if (posix_locks_deadlock(request, fl))
862 locks_insert_block(fl, request);
867 /* If we're just looking for a conflict, we're done. */
869 if (request->fl_flags & FL_ACCESS)
873 * Find the first old lock with the same owner as the new lock.
876 before = &inode->i_flock;
878 /* First skip locks owned by other processes. */
879 while ((fl = *before) && (!IS_POSIX(fl) ||
880 !posix_same_owner(request, fl))) {
881 before = &fl->fl_next;
884 /* Process locks with this owner. */
885 while ((fl = *before) && posix_same_owner(request, fl)) {
886 /* Detect adjacent or overlapping regions (if same lock type)
888 if (request->fl_type == fl->fl_type) {
889 /* In all comparisons of start vs end, use
890 * "start - 1" rather than "end + 1". If end
891 * is OFFSET_MAX, end + 1 will become negative.
893 if (fl->fl_end < request->fl_start - 1)
895 /* If the next lock in the list has entirely bigger
896 * addresses than the new one, insert the lock here.
898 if (fl->fl_start - 1 > request->fl_end)
901 /* If we come here, the new and old lock are of the
902 * same type and adjacent or overlapping. Make one
903 * lock yielding from the lower start address of both
904 * locks to the higher end address.
906 if (fl->fl_start > request->fl_start)
907 fl->fl_start = request->fl_start;
909 request->fl_start = fl->fl_start;
910 if (fl->fl_end < request->fl_end)
911 fl->fl_end = request->fl_end;
913 request->fl_end = fl->fl_end;
915 locks_delete_lock(before);
922 /* Processing for different lock types is a bit
925 if (fl->fl_end < request->fl_start)
927 if (fl->fl_start > request->fl_end)
929 if (request->fl_type == F_UNLCK)
931 if (fl->fl_start < request->fl_start)
933 /* If the next lock in the list has a higher end
934 * address than the new one, insert the new one here.
936 if (fl->fl_end > request->fl_end) {
940 if (fl->fl_start >= request->fl_start) {
941 /* The new lock completely replaces an old
942 * one (This may happen several times).
945 locks_delete_lock(before);
948 /* Replace the old lock with the new one.
949 * Wake up anybody waiting for the old one,
950 * as the change in lock type might satisfy
953 locks_wake_up_blocks(fl);
954 fl->fl_start = request->fl_start;
955 fl->fl_end = request->fl_end;
956 fl->fl_type = request->fl_type;
957 locks_release_private(fl);
958 locks_copy_private(fl, request);
963 /* Go on to next lock.
966 before = &fl->fl_next;
970 * The above code only modifies existing locks in case of
971 * merging or replacing. If new lock(s) need to be inserted
972 * all modifications are done below this, so it's safe yet to
975 error = -ENOLCK; /* "no luck" */
976 if (right && left == right && !new_fl2)
981 if (request->fl_type == F_UNLCK) {
982 if (request->fl_flags & FL_EXISTS)
991 locks_copy_lock(new_fl, request);
992 locks_insert_lock(before, new_fl);
997 /* The new lock breaks the old one in two pieces,
998 * so we have to use the second new lock.
1002 locks_copy_lock(left, right);
1003 locks_insert_lock(before, left);
1005 right->fl_start = request->fl_end + 1;
1006 locks_wake_up_blocks(right);
1009 left->fl_end = request->fl_start - 1;
1010 locks_wake_up_blocks(left);
1015 * Free any unused locks.
1018 locks_free_lock(new_fl);
1020 locks_free_lock(new_fl2);
1025 * posix_lock_file - Apply a POSIX-style lock to a file
1026 * @filp: The file to apply the lock to
1027 * @fl: The lock to be applied
1028 * @conflock: Place to return a copy of the conflicting lock, if found.
1030 * Add a POSIX style lock to a file.
1031 * We merge adjacent & overlapping locks whenever possible.
1032 * POSIX locks are sorted by owner task, then by starting address
1034 * Note that if called with an FL_EXISTS argument, the caller may determine
1035 * whether or not a lock was successfully freed by testing the return
1036 * value for -ENOENT.
1038 int posix_lock_file(struct file *filp, struct file_lock *fl,
1039 struct file_lock *conflock)
1041 return __posix_lock_file(filp->f_path.dentry->d_inode, fl, conflock);
1043 EXPORT_SYMBOL(posix_lock_file);
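/*
 * Illustrative userspace view of the merging described above (a sketch,
 * not from the original source): two adjacent write locks taken by the
 * same owner collapse into one record.
 *
 *	struct flock l = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
 *			   .l_start = 0, .l_len = 100 };
 *	fcntl(fd, F_SETLK, &l);			locks bytes 0-99
 *	l.l_start = 100;
 *	fcntl(fd, F_SETLK, &l);			locks bytes 100-199
 *
 * After the second call the owner holds a single lock covering 0-199,
 * which is what another process's F_GETLK (or /proc/locks) reports.
 */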
1046 * posix_lock_file_wait - Apply a POSIX-style lock to a file
1047 * @filp: The file to apply the lock to
1048 * @fl: The lock to be applied
1050 * Add a POSIX style lock to a file.
1051 * We merge adjacent & overlapping locks whenever possible.
1052 * POSIX locks are sorted by owner task, then by starting address
1054 int posix_lock_file_wait(struct file *filp, struct file_lock *fl)
1059 error = posix_lock_file(filp, fl, NULL);
1060 if ((error != -EAGAIN) || !(fl->fl_flags & FL_SLEEP))
1062 error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
1066 locks_delete_block(fl);
1071 EXPORT_SYMBOL(posix_lock_file_wait);
1074 * locks_mandatory_locked - Check for an active lock
1075 * @inode: the file to check
1077 * Searches the inode's list of locks to find any POSIX locks which conflict.
1078 * This function is called from locks_verify_locked() only.
1080 int locks_mandatory_locked(struct inode *inode)
1082 fl_owner_t owner = current->files;
1083 struct file_lock *fl;
1086 * Search the lock list for this inode for any POSIX locks.
1089 for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
1092 if (fl->fl_owner != owner)
1096 return fl ? -EAGAIN : 0;
1100 * locks_mandatory_area - Check for a conflicting lock
1101 * @read_write: %FLOCK_VERIFY_WRITE for exclusive access, %FLOCK_VERIFY_READ
1103 * @inode: the file to check
1104 * @filp: how the file was opened (if it was)
1105 * @offset: start of area to check
1106 * @count: length of area to check
1108 * Searches the inode's list of locks to find any POSIX locks which conflict.
1109 * This function is called from rw_verify_area() and
1110 * locks_verify_truncate().
1112 int locks_mandatory_area(int read_write, struct inode *inode,
1113 struct file *filp, loff_t offset,
1116 struct file_lock fl;
1119 locks_init_lock(&fl);
1120 fl.fl_owner = current->files;
1121 fl.fl_pid = current->tgid;
1123 fl.fl_flags = FL_POSIX | FL_ACCESS;
1124 if (filp && !(filp->f_flags & O_NONBLOCK))
1125 fl.fl_flags |= FL_SLEEP;
1126 fl.fl_type = (read_write == FLOCK_VERIFY_WRITE) ? F_WRLCK : F_RDLCK;
1127 fl.fl_start = offset;
1128 fl.fl_end = offset + count - 1;
1131 error = __posix_lock_file(inode, &fl, NULL);
1132 if (error != -EAGAIN)
1134 if (!(fl.fl_flags & FL_SLEEP))
1136 error = wait_event_interruptible(fl.fl_wait, !fl.fl_next);
1139 * If we've been sleeping someone might have
1140 * changed the permissions behind our back.
1142 if (__mandatory_lock(inode))
1146 locks_delete_block(&fl);
1153 EXPORT_SYMBOL(locks_mandatory_area);
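/*
 * How mandatory locking is enabled from userspace (a sketch using the
 * usual Linux conventions, with hypothetical device and paths, not taken
 * from this file): the filesystem must be mounted with mandatory locking
 * allowed, and the file must have the setgid bit set with group execute
 * cleared.
 *
 *	#include <sys/mount.h>
 *	#include <sys/stat.h>
 *
 *	mount("/dev/sda1", "/mnt", "ext3", MS_MANDLOCK, NULL);
 *	chmod("/mnt/data", S_ISGID | 0644);	setgid, no group execute
 *
 * With that in place, conflicting read()/write() calls end up being
 * checked here via locks_mandatory_area().
 */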
1155 /* We already had a lease on this file; just change its type */
1156 int lease_modify(struct file_lock **before, int arg)
1158 struct file_lock *fl = *before;
1159 int error = assign_type(fl, arg);
1163 locks_wake_up_blocks(fl);
1165 locks_delete_lock(before);
1169 EXPORT_SYMBOL(lease_modify);
1171 static void time_out_leases(struct inode *inode)
1173 struct file_lock **before;
1174 struct file_lock *fl;
1176 before = &inode->i_flock;
1177 while ((fl = *before) && IS_LEASE(fl) && (fl->fl_type & F_INPROGRESS)) {
1178 if ((fl->fl_break_time == 0)
1179 || time_before(jiffies, fl->fl_break_time)) {
1180 before = &fl->fl_next;
1183 lease_modify(before, fl->fl_type & ~F_INPROGRESS);
1184 if (fl == *before) /* lease_modify may have freed fl */
1185 before = &fl->fl_next;
1190 * __break_lease - revoke all outstanding leases on file
1191 * @inode: the inode of the file to return
1192 * @mode: the open mode (read or write)
1194 * break_lease (inlined for speed) has checked there already is at least
1195 * some kind of lock (maybe a lease) on this file. Leases are broken on
1196 * a call to open() or truncate(). This function can sleep unless you
1197 * specified %O_NONBLOCK to your open().
1199 int __break_lease(struct inode *inode, unsigned int mode)
1201 int error = 0, future;
1202 struct file_lock *new_fl, *flock;
1203 struct file_lock *fl;
1204 unsigned long break_time;
1205 int i_have_this_lease = 0;
1207 new_fl = lease_alloc(NULL, mode & FMODE_WRITE ? F_WRLCK : F_RDLCK);
1211 time_out_leases(inode);
1213 flock = inode->i_flock;
1214 if ((flock == NULL) || !IS_LEASE(flock))
1217 for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next)
1218 if (fl->fl_owner == current->files)
1219 i_have_this_lease = 1;
1221 if (mode & FMODE_WRITE) {
1222 /* If we want write access, we have to revoke any lease. */
1223 future = F_UNLCK | F_INPROGRESS;
1224 } else if (flock->fl_type & F_INPROGRESS) {
1225 /* If the lease is already being broken, we just leave it */
1226 future = flock->fl_type;
1227 } else if (flock->fl_type & F_WRLCK) {
1228 /* Downgrade the exclusive lease to a read-only lease. */
1229 future = F_RDLCK | F_INPROGRESS;
1231 /* the existing lease was read-only, so we can read too. */
1235 if (IS_ERR(new_fl) && !i_have_this_lease
1236 && ((mode & O_NONBLOCK) == 0)) {
1237 error = PTR_ERR(new_fl);
1242 if (lease_break_time > 0) {
1243 break_time = jiffies + lease_break_time * HZ;
1244 if (break_time == 0)
1245 break_time++; /* so that 0 means no break time */
1248 for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next) {
1249 if (fl->fl_type != future) {
1250 fl->fl_type = future;
1251 fl->fl_break_time = break_time;
1252 /* lease must have lmops break callback */
1253 fl->fl_lmops->fl_break(fl);
1257 if (i_have_this_lease || (mode & O_NONBLOCK)) {
1258 error = -EWOULDBLOCK;
1263 break_time = flock->fl_break_time;
1264 if (break_time != 0) {
1265 break_time -= jiffies;
1266 if (break_time == 0)
1269 error = locks_block_on_timeout(flock, new_fl, break_time);
1272 time_out_leases(inode);
1273 /* Wait for the next lease that has not been broken yet */
1274 for (flock = inode->i_flock; flock && IS_LEASE(flock);
1275 flock = flock->fl_next) {
1276 if (flock->fl_type & F_INPROGRESS)
1284 if (!IS_ERR(new_fl))
1285 locks_free_lock(new_fl);
1289 EXPORT_SYMBOL(__break_lease);
1294 * @time: pointer to a timespec which will contain the last modified time
1296 * This is to force NFS clients to flush their caches for files with
1297 * exclusive leases. The justification is that if someone has an
1298 * exclusive lease, then they could be modifying it.
1300 void lease_get_mtime(struct inode *inode, struct timespec *time)
1302 struct file_lock *flock = inode->i_flock;
1303 if (flock && IS_LEASE(flock) && (flock->fl_type & F_WRLCK))
1304 *time = current_fs_time(inode->i_sb);
1306 *time = inode->i_mtime;
1309 EXPORT_SYMBOL(lease_get_mtime);
1312 * fcntl_getlease - Enquire what lease is currently active
1315 * The value returned by this function will be one of
1316 * (if no lease break is pending):
1318 * %F_RDLCK to indicate a shared lease is held.
1320 * %F_WRLCK to indicate an exclusive lease is held.
1322 * %F_UNLCK to indicate no lease is held.
1324 * (if a lease break is pending):
1326 * %F_RDLCK to indicate an exclusive lease needs to be
1327 * changed to a shared lease (or removed).
1329 * %F_UNLCK to indicate the lease needs to be removed.
1331 * XXX: sfr & willy disagree over whether F_INPROGRESS
1332 * should be returned to userspace.
1334 int fcntl_getlease(struct file *filp)
1336 struct file_lock *fl;
1340 time_out_leases(filp->f_path.dentry->d_inode);
1341 for (fl = filp->f_path.dentry->d_inode->i_flock; fl && IS_LEASE(fl);
1343 if (fl->fl_file == filp) {
1344 type = fl->fl_type & ~F_INPROGRESS;
1353 * generic_setlease - sets a lease on an open file
1354 * @filp: file pointer
1355 * @arg: type of lease to obtain
1356 * @flp: input - file_lock to use, output - file_lock inserted
1358 * The (input) flp->fl_lmops->fl_break function is required
1361 * Called with kernel lock held.
1363 int generic_setlease(struct file *filp, long arg, struct file_lock **flp)
1365 struct file_lock *fl, **before, **my_before = NULL, *lease;
1366 struct file_lock *new_fl = NULL;
1367 struct dentry *dentry = filp->f_path.dentry;
1368 struct inode *inode = dentry->d_inode;
1369 int error, rdlease_count = 0, wrlease_count = 0;
1371 if ((current->fsuid != inode->i_uid) && !capable(CAP_LEASE))
1373 if (!S_ISREG(inode->i_mode))
1375 error = security_file_lock(filp, arg);
1379 time_out_leases(inode);
1381 BUG_ON(!(*flp)->fl_lmops->fl_break);
1386 if ((arg == F_RDLCK) && (atomic_read(&inode->i_writecount) > 0))
1388 if ((arg == F_WRLCK)
1389 && ((atomic_read(&dentry->d_count) > 1)
1390 || (atomic_read(&inode->i_count) > 1)))
1394 new_fl = locks_alloc_lock();
1399 * At this point, we know that if there is an exclusive
1400 * lease on this file, then we hold it on this filp
1401 * (otherwise our open of this file would have blocked).
1402 * And if we are trying to acquire an exclusive lease,
1403 * then the file is not open by anyone (including us)
1404 * except for this filp.
1406 for (before = &inode->i_flock;
1407 ((fl = *before) != NULL) && IS_LEASE(fl);
1408 before = &fl->fl_next) {
1409 if (lease->fl_lmops->fl_mylease(fl, lease))
1411 else if (fl->fl_type == (F_INPROGRESS | F_UNLCK))
1413 * Someone is in the process of opening this
1414 * file for writing so we may not take an
1415 * exclusive lease on it.
1422 if ((arg == F_RDLCK && (wrlease_count > 0)) ||
1423 (arg == F_WRLCK && ((rdlease_count + wrlease_count) > 0)))
1426 if (my_before != NULL) {
1428 error = lease->fl_lmops->fl_change(my_before, arg);
1440 locks_copy_lock(new_fl, lease);
1441 locks_insert_lock(before, new_fl);
1448 locks_free_lock(new_fl);
1451 EXPORT_SYMBOL(generic_setlease);
1454 * vfs_setlease - sets a lease on an open file
1455 * @filp: file pointer
1456 * @arg: type of lease to obtain
1457 * @lease: file_lock to use
1459 * Call this to establish a lease on the file.
1460 * The (*lease)->fl_lmops->fl_break operation must be set; if not,
1461 * break_lease will oops!
1463 * This will call the filesystem's setlease file method, if
1464 * defined. Note that there is no getlease method; instead, the
1465 * filesystem setlease method should call back to setlease() to
1466 * add a lease to the inode's lease list, where fcntl_getlease() can
1467 * find it. Since fcntl_getlease() only reports whether the current
1468 * task holds a lease, a cluster filesystem need only do this for
1469 * leases held by processes on this node.
1471 * There is also no break_lease method; filesystems that
1472 * handle their own leases should break leases themselves from the
1473 * filesystem's open, create, and (on truncate) setattr methods.
1475 * Warning: the only current setlease methods exist only to disable
1476 * leases in certain cases. More vfs changes may be required to
1477 * allow a full filesystem lease implementation.
1480 int vfs_setlease(struct file *filp, long arg, struct file_lock **lease)
1485 if (filp->f_op && filp->f_op->setlease)
1486 error = filp->f_op->setlease(filp, arg, lease);
1488 error = generic_setlease(filp, arg, lease);
1493 EXPORT_SYMBOL_GPL(vfs_setlease);
1496 * fcntl_setlease - sets a lease on an open file
1497 * @fd: open file descriptor
1498 * @filp: file pointer
1499 * @arg: type of lease to obtain
1501 * Call this fcntl to establish a lease on the file.
1502 * Note that you also need to call %F_SETSIG to
1503 * receive a signal when the lease is broken.
1505 int fcntl_setlease(unsigned int fd, struct file *filp, long arg)
1507 struct file_lock fl, *flp = &fl;
1508 struct dentry *dentry = filp->f_path.dentry;
1509 struct inode *inode = dentry->d_inode;
1512 locks_init_lock(&fl);
1513 error = lease_init(filp, arg, &fl);
1519 error = vfs_setlease(filp, arg, &flp);
1520 if (error || arg == F_UNLCK)
1523 error = fasync_helper(fd, filp, 1, &flp->fl_fasync);
1525 /* remove lease just inserted by setlease */
1526 flp->fl_type = F_UNLCK | F_INPROGRESS;
1527 flp->fl_break_time = jiffies - 10;
1528 time_out_leases(inode);
1532 error = __f_setown(filp, task_pid(current), PIDTYPE_PID, 0);
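/*
 * Minimal userspace sketch of the lease API driven by this function
 * (illustrative only; error handling omitted):
 *
 *	#define _GNU_SOURCE
 *	#include <fcntl.h>
 *	#include <signal.h>
 *
 *	fcntl(fd, F_SETSIG, SIGRTMIN);		signal to deliver on a break
 *	fcntl(fd, F_SETLEASE, F_RDLCK);		take a read lease
 *
 * When the signal arrives, another process is opening (or truncating) the
 * file in a conflicting way; the holder should release or downgrade the
 * lease within /proc/sys/fs/lease-break-time seconds:
 *
 *	fcntl(fd, F_SETLEASE, F_UNLCK);
 */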
1539 * flock_lock_file_wait - Apply a FLOCK-style lock to a file
1540 * @filp: The file to apply the lock to
1541 * @fl: The lock to be applied
1543 * Add a FLOCK style lock to a file.
1545 int flock_lock_file_wait(struct file *filp, struct file_lock *fl)
1550 error = flock_lock_file(filp, fl);
1551 if ((error != -EAGAIN) || !(fl->fl_flags & FL_SLEEP))
1553 error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
1557 locks_delete_block(fl);
1563 EXPORT_SYMBOL(flock_lock_file_wait);
1566 * sys_flock - flock() system call.
1567 * @fd: the file descriptor to lock.
1568 * @cmd: the type of lock to apply.
1570 * Apply a %FL_FLOCK style lock to an open file descriptor.
1571 * The @cmd can be one of
1573 * %LOCK_SH -- a shared lock.
1575 * %LOCK_EX -- an exclusive lock.
1577 * %LOCK_UN -- remove an existing lock.
1579 * %LOCK_MAND -- a `mandatory' flock. This exists to emulate Windows Share Modes.
1581 * %LOCK_MAND can be combined with %LOCK_READ or %LOCK_WRITE to allow other
1582 * processes read and write access respectively.
1584 asmlinkage long sys_flock(unsigned int fd, unsigned int cmd)
1587 struct file_lock *lock;
1588 int can_sleep, unlock;
1596 can_sleep = !(cmd & LOCK_NB);
1598 unlock = (cmd == LOCK_UN);
1600 if (!unlock && !(cmd & LOCK_MAND) && !(filp->f_mode & 3))
1603 error = flock_make_lock(filp, &lock, cmd);
1607 lock->fl_flags |= FL_SLEEP;
1609 error = security_file_lock(filp, cmd);
1613 if (filp->f_op && filp->f_op->flock)
1614 error = filp->f_op->flock(filp,
1615 (can_sleep) ? F_SETLKW : F_SETLK,
1618 error = flock_lock_file_wait(filp, lock);
1621 locks_free_lock(lock);
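/*
 * Typical userspace counterpart of this system call (an illustrative
 * sketch, not from the original source):
 *
 *	#include <sys/file.h>
 *	#include <errno.h>
 *
 *	if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
 *		if (errno == EWOULDBLOCK)
 *			... someone else holds the lock ...
 *	}
 *	... critical section ...
 *	flock(fd, LOCK_UN);
 */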
1630 * vfs_test_lock - test file byte range lock
1631 * @filp: The file to test lock for
1632 * @fl: The lock to test; also used to hold result
1634 * Returns -ERRNO on failure. Indicates presence of conflicting lock by
1635 * setting conf->fl_type to something other than F_UNLCK.
1637 int vfs_test_lock(struct file *filp, struct file_lock *fl)
1639 if (filp->f_op && filp->f_op->lock)
1640 return filp->f_op->lock(filp, F_GETLK, fl);
1641 posix_test_lock(filp, fl);
1644 EXPORT_SYMBOL_GPL(vfs_test_lock);
1646 static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
1648 flock->l_pid = fl->fl_pid;
1649 #if BITS_PER_LONG == 32
1651 * Make sure we can represent the posix lock via
1652 * legacy 32bit flock.
1654 if (fl->fl_start > OFFT_OFFSET_MAX)
1656 if (fl->fl_end != OFFSET_MAX && fl->fl_end > OFFT_OFFSET_MAX)
1659 flock->l_start = fl->fl_start;
1660 flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
1661 fl->fl_end - fl->fl_start + 1;
1662 flock->l_whence = 0;
1663 flock->l_type = fl->fl_type;
1667 #if BITS_PER_LONG == 32
1668 static void posix_lock_to_flock64(struct flock64 *flock, struct file_lock *fl)
1670 flock->l_pid = fl->fl_pid;
1671 flock->l_start = fl->fl_start;
1672 flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
1673 fl->fl_end - fl->fl_start + 1;
1674 flock->l_whence = 0;
1675 flock->l_type = fl->fl_type;
1679 /* Report the first existing lock that would conflict with l.
1680 * This implements the F_GETLK command of fcntl().
1682 int fcntl_getlk(struct file *filp, struct flock __user *l)
1684 struct file_lock file_lock;
1689 if (copy_from_user(&flock, l, sizeof(flock)))
1692 if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
1695 error = flock_to_posix_lock(filp, &file_lock, &flock);
1699 error = vfs_test_lock(filp, &file_lock);
1703 flock.l_type = file_lock.fl_type;
1704 if (file_lock.fl_type != F_UNLCK) {
1705 error = posix_lock_to_flock(&flock, &file_lock);
1710 if (!copy_to_user(l, &flock, sizeof(flock)))
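/*
 * Userspace view of F_GETLK (an illustrative sketch): ask whether anything
 * would block a write lock on the whole file.
 *
 *	struct flock l = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
 *			   .l_start = 0, .l_len = 0 };
 *	fcntl(fd, F_GETLK, &l);
 *	if (l.l_type == F_UNLCK)
 *		... no conflicting lock ...
 *	else
 *		... l now describes the first conflicting lock,
 *		    including its owner's PID in l.l_pid ...
 */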
1717 * vfs_lock_file - file byte range lock
1718 * @filp: The file to apply the lock to
1719 * @cmd: type of locking operation (F_SETLK, F_GETLK, etc.)
1720 * @fl: The lock to be applied
1721 * @conf: Place to return a copy of the conflicting lock, if found.
1723 * A caller that doesn't care about the conflicting lock may pass NULL
1724 * as the final argument.
1726 * If the filesystem defines a private ->lock() method, then @conf will
1727 * be left unchanged; so a caller that cares should initialize it to
1728 * some acceptable default.
1730 * To avoid blocking kernel daemons, such as lockd, that need to acquire POSIX
1731 * locks, the ->lock() interface may return asynchronously, before the lock has
1732 * been granted or denied by the underlying filesystem, if (and only if)
1733 * fl_grant is set. Callers expecting ->lock() to return asynchronously
1734 * will only use F_SETLK, not F_SETLKW; they will set FL_SLEEP if (and only if)
1735 * the request is for a blocking lock. When ->lock() does return asynchronously,
1736 * it must return -EINPROGRESS, and call ->fl_grant() when the lock
1737 * request completes.
1738 * If the request is for a non-blocking lock, the file system should return
1739 * -EINPROGRESS, then try to get the lock and call the callback routine with
1740 * the result. If the request timed out, the callback routine will return a
1741 * nonzero return code and the file system should release the lock. The file
1742 * system is also responsible for keeping a corresponding posix lock when it
1743 * grants a lock, so the VFS can find out which locks are locally held and do
1744 * the correct lock cleanup when required.
1745 * The underlying filesystem must not drop the kernel lock or call
1746 * ->fl_grant() before returning to the caller with a -EINPROGRESS
1749 int vfs_lock_file(struct file *filp, unsigned int cmd, struct file_lock *fl, struct file_lock *conf)
1751 if (filp->f_op && filp->f_op->lock)
1752 return filp->f_op->lock(filp, cmd, fl);
1754 return posix_lock_file(filp, fl, conf);
1756 EXPORT_SYMBOL_GPL(vfs_lock_file);
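/*
 * A minimal sketch (hypothetical "myfs", not from this file) of a ->lock
 * method that simply delegates to the generic code. A filesystem with no
 * remote locking needs would normally leave ->lock unset and get this
 * behaviour from vfs_lock_file()/vfs_test_lock() anyway; the sketch only
 * shows where a real implementation would hook in.
 *
 *	static int myfs_lock(struct file *filp, int cmd, struct file_lock *fl)
 *	{
 *		if (cmd == F_GETLK) {
 *			posix_test_lock(filp, fl);
 *			return 0;
 *		}
 *		return posix_lock_file_wait(filp, fl);
 *	}
 */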
1758 /* Apply the lock described by l to an open file descriptor.
1759 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
1761 int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
1762 struct flock __user *l)
1764 struct file_lock *file_lock = locks_alloc_lock();
1766 struct inode *inode;
1769 if (file_lock == NULL)
1773 * This might block, so we do it before checking the inode.
1776 if (copy_from_user(&flock, l, sizeof(flock)))
1779 inode = filp->f_path.dentry->d_inode;
1781 /* Don't allow mandatory locks on files that may be memory mapped
1784 if (mandatory_lock(inode) && mapping_writably_mapped(filp->f_mapping)) {
1790 error = flock_to_posix_lock(filp, file_lock, &flock);
1793 if (cmd == F_SETLKW) {
1794 file_lock->fl_flags |= FL_SLEEP;
1798 switch (flock.l_type) {
1800 if (!(filp->f_mode & FMODE_READ))
1804 if (!(filp->f_mode & FMODE_WRITE))
1814 error = security_file_lock(filp, file_lock->fl_type);
1819 error = vfs_lock_file(filp, cmd, file_lock, NULL);
1820 if (error != -EAGAIN || cmd == F_SETLK)
1822 error = wait_event_interruptible(file_lock->fl_wait,
1823 !file_lock->fl_next);
1827 locks_delete_block(file_lock);
1832 * Attempt to detect a close/fcntl race and recover by
1833 * releasing the lock that was just acquired.
1835 if (!error && fcheck(fd) != filp && flock.l_type != F_UNLCK) {
1836 flock.l_type = F_UNLCK;
1841 locks_free_lock(file_lock);
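/*
 * Userspace counterpart of F_SETLKW (an illustrative sketch): block until
 * a write lock on bytes 0-9 can be granted, then drop it.
 *
 *	struct flock l = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
 *			   .l_start = 0, .l_len = 10 };
 *	if (fcntl(fd, F_SETLKW, &l) == -1)
 *		... errno is EINTR if a signal interrupted the wait,
 *		    or EDEADLK if granting the lock would deadlock ...
 *	l.l_type = F_UNLCK;
 *	fcntl(fd, F_SETLK, &l);
 */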
1845 #if BITS_PER_LONG == 32
1846 /* Report the first existing lock that would conflict with l.
1847 * This implements the F_GETLK command of fcntl().
1849 int fcntl_getlk64(struct file *filp, struct flock64 __user *l)
1851 struct file_lock file_lock;
1852 struct flock64 flock;
1856 if (copy_from_user(&flock, l, sizeof(flock)))
1859 if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
1862 error = flock64_to_posix_lock(filp, &file_lock, &flock);
1866 error = vfs_test_lock(filp, &file_lock);
1870 flock.l_type = file_lock.fl_type;
1871 if (file_lock.fl_type != F_UNLCK)
1872 posix_lock_to_flock64(&flock, &file_lock);
1875 if (!copy_to_user(l, &flock, sizeof(flock)))
1882 /* Apply the lock described by l to an open file descriptor.
1883 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
1885 int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
1886 struct flock64 __user *l)
1888 struct file_lock *file_lock = locks_alloc_lock();
1889 struct flock64 flock;
1890 struct inode *inode;
1893 if (file_lock == NULL)
1897 * This might block, so we do it before checking the inode.
1900 if (copy_from_user(&flock, l, sizeof(flock)))
1903 inode = filp->f_path.dentry->d_inode;
1905 /* Don't allow mandatory locks on files that may be memory mapped
1908 if (mandatory_lock(inode) && mapping_writably_mapped(filp->f_mapping)) {
1914 error = flock64_to_posix_lock(filp, file_lock, &flock);
1917 if (cmd == F_SETLKW64) {
1918 file_lock->fl_flags |= FL_SLEEP;
1922 switch (flock.l_type) {
1924 if (!(filp->f_mode & FMODE_READ))
1928 if (!(filp->f_mode & FMODE_WRITE))
1938 error = security_file_lock(filp, file_lock->fl_type);
1943 error = vfs_lock_file(filp, cmd, file_lock, NULL);
1944 if (error != -EAGAIN || cmd == F_SETLK64)
1946 error = wait_event_interruptible(file_lock->fl_wait,
1947 !file_lock->fl_next);
1951 locks_delete_block(file_lock);
1956 * Attempt to detect a close/fcntl race and recover by
1957 * releasing the lock that was just acquired.
1959 if (!error && fcheck(fd) != filp && flock.l_type != F_UNLCK) {
1960 flock.l_type = F_UNLCK;
1965 locks_free_lock(file_lock);
1968 #endif /* BITS_PER_LONG == 32 */
1971 * This function is called when the file is being removed
1972 * from the task's fd array. POSIX locks belonging to this task
1973 * are deleted at this time.
1975 void locks_remove_posix(struct file *filp, fl_owner_t owner)
1977 struct file_lock lock;
1980 * If there are no locks held on this file, we don't need to call
1981 * posix_lock_file(). Another process could be setting a lock on this
1982 * file at the same time, but we wouldn't remove that lock anyway.
1984 if (!filp->f_path.dentry->d_inode->i_flock)
1987 lock.fl_type = F_UNLCK;
1988 lock.fl_flags = FL_POSIX | FL_CLOSE;
1990 lock.fl_end = OFFSET_MAX;
1991 lock.fl_owner = owner;
1992 lock.fl_pid = current->tgid;
1993 lock.fl_file = filp;
1995 lock.fl_lmops = NULL;
1997 vfs_lock_file(filp, F_SETLK, &lock, NULL);
1999 if (lock.fl_ops && lock.fl_ops->fl_release_private)
2000 lock.fl_ops->fl_release_private(&lock);
2003 EXPORT_SYMBOL(locks_remove_posix);
2006 * This function is called on the last close of an open file.
2008 void locks_remove_flock(struct file *filp)
2010 struct inode * inode = filp->f_path.dentry->d_inode;
2011 struct file_lock *fl;
2012 struct file_lock **before;
2014 if (!inode->i_flock)
2017 if (filp->f_op && filp->f_op->flock) {
2018 struct file_lock fl = {
2019 .fl_pid = current->tgid,
2021 .fl_flags = FL_FLOCK,
2023 .fl_end = OFFSET_MAX,
2025 filp->f_op->flock(filp, F_SETLKW, &fl);
2026 if (fl.fl_ops && fl.fl_ops->fl_release_private)
2027 fl.fl_ops->fl_release_private(&fl);
2031 before = &inode->i_flock;
2033 while ((fl = *before) != NULL) {
2034 if (fl->fl_file == filp) {
2036 locks_delete_lock(before);
2040 lease_modify(before, F_UNLCK);
2046 before = &fl->fl_next;
2052 * posix_unblock_lock - stop waiting for a file lock
2053 * @filp: how the file was opened
2054 * @waiter: the lock which was waiting
2056 * lockd needs to block waiting for locks.
2059 posix_unblock_lock(struct file *filp, struct file_lock *waiter)
2064 if (waiter->fl_next)
2065 __locks_delete_block(waiter);
2072 EXPORT_SYMBOL(posix_unblock_lock);
2075 * vfs_cancel_lock - file byte range unblock lock
2076 * @filp: The file to apply the unblock to
2077 * @fl: The lock to be unblocked
2079 * Used by lock managers to cancel blocked requests
2081 int vfs_cancel_lock(struct file *filp, struct file_lock *fl)
2083 if (filp->f_op && filp->f_op->lock)
2084 return filp->f_op->lock(filp, F_CANCELLK, fl);
2088 EXPORT_SYMBOL_GPL(vfs_cancel_lock);
2090 #ifdef CONFIG_PROC_FS
2091 #include <linux/seq_file.h>
2093 static void lock_get_status(struct seq_file *f, struct file_lock *fl,
2096 struct inode *inode = NULL;
2098 if (fl->fl_file != NULL)
2099 inode = fl->fl_file->f_path.dentry->d_inode;
2101 seq_printf(f, "%d:%s ", id, pfx);
2103 seq_printf(f, "%6s %s ",
2104 (fl->fl_flags & FL_ACCESS) ? "ACCESS" : "POSIX ",
2105 (inode == NULL) ? "*NOINODE*" :
2106 mandatory_lock(inode) ? "MANDATORY" : "ADVISORY ");
2107 } else if (IS_FLOCK(fl)) {
2108 if (fl->fl_type & LOCK_MAND) {
2109 seq_printf(f, "FLOCK MSNFS ");
2111 seq_printf(f, "FLOCK ADVISORY ");
2113 } else if (IS_LEASE(fl)) {
2114 seq_printf(f, "LEASE ");
2115 if (fl->fl_type & F_INPROGRESS)
2116 seq_printf(f, "BREAKING ");
2117 else if (fl->fl_file)
2118 seq_printf(f, "ACTIVE ");
2120 seq_printf(f, "BREAKER ");
2122 seq_printf(f, "UNKNOWN UNKNOWN ");
2124 if (fl->fl_type & LOCK_MAND) {
2125 seq_printf(f, "%s ",
2126 (fl->fl_type & LOCK_READ)
2127 ? (fl->fl_type & LOCK_WRITE) ? "RW " : "READ "
2128 : (fl->fl_type & LOCK_WRITE) ? "WRITE" : "NONE ");
2130 seq_printf(f, "%s ",
2131 (fl->fl_type & F_INPROGRESS)
2132 ? (fl->fl_type & F_UNLCK) ? "UNLCK" : "READ "
2133 : (fl->fl_type & F_WRLCK) ? "WRITE" : "READ ");
2136 #ifdef WE_CAN_BREAK_LSLK_NOW
2137 seq_printf(f, "%d %s:%ld ", fl->fl_pid,
2138 inode->i_sb->s_id, inode->i_ino);
2140 /* userspace relies on this representation of dev_t ;-( */
2141 seq_printf(f, "%d %02x:%02x:%ld ", fl->fl_pid,
2142 MAJOR(inode->i_sb->s_dev),
2143 MINOR(inode->i_sb->s_dev), inode->i_ino);
2146 seq_printf(f, "%d <none>:0 ", fl->fl_pid);
2149 if (fl->fl_end == OFFSET_MAX)
2150 seq_printf(f, "%Ld EOF\n", fl->fl_start);
2152 seq_printf(f, "%Ld %Ld\n", fl->fl_start, fl->fl_end);
2154 seq_printf(f, "0 EOF\n");
2158 static int locks_show(struct seq_file *f, void *v)
2160 struct file_lock *fl, *bfl;
2162 fl = list_entry(v, struct file_lock, fl_link);
2164 lock_get_status(f, fl, (long)f->private, "");
2166 list_for_each_entry(bfl, &fl->fl_block, fl_block)
2167 lock_get_status(f, bfl, (long)f->private, " ->");
2173 static void *locks_start(struct seq_file *f, loff_t *pos)
2176 f->private = (void *)1;
2177 return seq_list_start(&file_lock_list, *pos);
2180 static void *locks_next(struct seq_file *f, void *v, loff_t *pos)
2182 return seq_list_next(v, &file_lock_list, pos);
2185 static void locks_stop(struct seq_file *f, void *v)
2190 struct seq_operations locks_seq_operations = {
2191 .start = locks_start,
2199 * lock_may_read - checks that the region is free of locks
2200 * @inode: the inode that is being read
2201 * @start: the first byte to read
2202 * @len: the number of bytes to read
2204 * Emulates Windows locking requirements. Whole-file
2205 * mandatory locks (share modes) can prohibit a read and
2206 * byte-range POSIX locks can prohibit a read if they overlap.
2208 * N.B. this function is only ever called
2209 * from knfsd and ownership of locks is never checked.
2211 int lock_may_read(struct inode *inode, loff_t start, unsigned long len)
2213 struct file_lock *fl;
2216 for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
2218 if (fl->fl_type == F_RDLCK)
2220 if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
2222 } else if (IS_FLOCK(fl)) {
2223 if (!(fl->fl_type & LOCK_MAND))
2225 if (fl->fl_type & LOCK_READ)
2236 EXPORT_SYMBOL(lock_may_read);
2239 * lock_may_write - checks that the region is free of locks
2240 * @inode: the inode that is being written
2241 * @start: the first byte to write
2242 * @len: the number of bytes to write
2244 * Emulates Windows locking requirements. Whole-file
2245 * mandatory locks (share modes) can prohibit a write and
2246 * byte-range POSIX locks can prohibit a write if they overlap.
2248 * N.B. this function is only ever called
2249 * from knfsd and ownership of locks is never checked.
2251 int lock_may_write(struct inode *inode, loff_t start, unsigned long len)
2253 struct file_lock *fl;
2256 for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
2258 if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
2260 } else if (IS_FLOCK(fl)) {
2261 if (!(fl->fl_type & LOCK_MAND))
2263 if (fl->fl_type & LOCK_WRITE)
2274 EXPORT_SYMBOL(lock_may_write);
2276 static int __init filelock_init(void)
2278 filelock_cache = kmem_cache_create("file_lock_cache",
2279 sizeof(struct file_lock), 0, SLAB_PANIC,
2284 core_initcall(filelock_init);