dm thin: set minimum_io_size to pool's data block size
author	Mike Snitzer <snitzer@redhat.com>
	Fri, 18 Jul 2014 21:59:43 +0000 (17:59 -0400)
committer	Mike Snitzer <snitzer@redhat.com>
	Fri, 1 Aug 2014 16:30:35 +0000 (12:30 -0400)
commit	fdfb4c8c1a9fc8dd8cf8eeb4e3ed83573b375285
tree	1d74a9957919f623797358268a4cdadfc53dbb19
parent	298a9fa08a1577211d42a75e8fc073baef61e0d9

Before, if the block layer's limit stacking didn't establish an
optimal_io_size that was compatible with the thin-pool's data block
size, we'd set optimal_io_size to the data block size and
minimum_io_size to 0 (which the block layer adjusts to be
physical_block_size).

Update pool_io_hints() to set both minimum_io_size and optimal_io_size
to the thin-pool's data block size.  This fixes a reported issue where
mkfs.xfs would create more XFS Allocation Groups on thinp volumes than
on a normal linear LV of comparable size; see:
https://bugzilla.redhat.com/show_bug.cgi?id=1003227

Reported-by: Chris Murphy <lists@colorremedies.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
drivers/md/dm-thin.c