tcp: eliminate negative reordering in tcp_clean_rtx_queue
author Soheil Hassas Yeganeh <soheil@google.com>
Mon, 15 May 2017 21:05:47 +0000 (17:05 -0400)
committer Ben Hutchings <ben@decadent.org.uk>
Fri, 15 Sep 2017 17:30:41 +0000 (18:30 +0100)
commit 5a0fe5df7de562f40d800c6797a8370cf139ec2c
tree 08dc08bbbd2ce7d51a132f7299e1430de596ee57
parent 5503a42c94dfc823dd005818444deb8d5efd9a98
tcp: eliminate negative reordering in tcp_clean_rtx_queue

commit bafbb9c73241760023d8981191ddd30bb1c6dbac upstream.

tcp_ack() can call tcp_fragment(), which may decrement
tp->fackets_out when the MSS changes. When prior_fackets is
larger than tp->fackets_out, tcp_clean_rtx_queue() can invoke
tcp_update_reordering() with a negative metric. This results
in absurd tp->reordering values higher than
sysctl_tcp_max_reordering.
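
For illustration, a simplified sketch of the affected path in
tcp_clean_rtx_queue() (not the exact kernel source; the variable
names follow the surrounding code):

    /* Non-retransmitted hole got filled? That's reordering */
    if (reord < prior_fackets)
            /* If tcp_fragment() shrank tp->fackets_out below
             * reord, this metric goes negative.
             */
            tcp_update_reordering(sk, tp->fackets_out - reord, 0);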

Note that tcp_update_reordering() indeed sets tp->reordering
to min(sysctl_tcp_max_reordering, metric), but because the
comparison is signed, a negative metric always wins.
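
As a standalone illustration of that point (ordinary C, not kernel
code; it only assumes that tp->reordering is an unsigned field, as
in struct tcp_sock):

    #include <stdio.h>

    #define min(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
            int sysctl_tcp_max_reordering = 300;
            int metric = -5;         /* negative reordering metric */
            unsigned int reordering; /* stands in for tp->reordering */

            /* Signed comparison: the negative metric wins min() ... */
            reordering = min(sysctl_tcp_max_reordering, metric);

            /* ... and wraps to a huge value when stored unsigned. */
            printf("%u\n", reordering); /* prints 4294967291 */
            return 0;
    }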

Fixes: c7caf8d3ed7a ("[TCP]: Fix reord detection due to snd_una covered holes")
Reported-by: Rebecca Isaacs <risaacs@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
net/ipv4/tcp_input.c