x86: mtrr_cleanup safe to get more spare regs now
authorYinghai Lu <yhlu.kernel@gmail.com>
Mon, 29 Sep 2008 20:39:17 +0000 (13:39 -0700)
committerH. Peter Anvin <hpa@zytor.com>
Tue, 30 Sep 2008 01:34:08 +0000 (18:34 -0700)
Delay the loop exit so the downward search keeps the smallest zero-loss
register count instead of stopping at the first hit, giving us the
optimal result (more spare registers) in as many cases as possible.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
arch/x86/kernel/cpu/mtrr/main.c

index bccf57f..ae0ca97 100644 (file)
@@ -1353,10 +1353,8 @@ static int __init mtrr_cleanup(unsigned address_bits)
                nr_mtrr_spare_reg = num_var_ranges - 1;
        num_reg_good = -1;
        for (i = num_var_ranges - nr_mtrr_spare_reg; i > 0; i--) {
-               if (!min_loss_pfn[i]) {
+               if (!min_loss_pfn[i])
                        num_reg_good = i;
-                       break;
-               }
        }
 
        index_good = -1;