From: Eric Dumazet <edumazet@google.com>
Date: Wed, 31 Aug 2016 10:42:29 -0700
Subject: [PATCH] softirq: let ksoftirqd do its job

A while back, Paolo and Hannes sent an RFC patch adding support for a
threadable NAPI poll loop: https://patchwork.ozlabs.org/patch/620657/

The problem is that softirq processing is very aggressive and often runs
in the context of the current process, even under stress when ksoftirqd
has already been scheduled precisely so that innocent threads get a
chance to make progress.

This patch makes sure that if ksoftirqd is running, we let it perform
the softirq work.

Jonathan Corbet summarized the issue in https://lwn.net/Articles/687617/

Tested:

- NIC receiving traffic handled by CPU 0
- UDP receiver running on CPU 0, using a single UDP socket (a sketch of
  such a receiver follows below).
- Incoming flood of UDP packets targeting the UDP socket.

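The receiver program itself is not part of this patch; a minimal sketch
of such a single-socket UDP receiver pinned to CPU 0 (hypothetical, with
an arbitrary port number) could look like this:

  /* Hypothetical minimal UDP receiver approximating the setup above;
   * not part of the patch. Pins itself to CPU 0, like the NIC IRQ. */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <arpa/inet.h>
  #include <sys/socket.h>

  int main(void)
  {
  	cpu_set_t set;

  	CPU_ZERO(&set);
  	CPU_SET(0, &set);
  	sched_setaffinity(0, sizeof(set), &set); /* stay on CPU 0 */

  	int fd = socket(AF_INET, SOCK_DGRAM, 0);
  	struct sockaddr_in addr = {
  		.sin_family = AF_INET,
  		.sin_addr.s_addr = htonl(INADDR_ANY),
  		.sin_port = htons(9999), /* arbitrary port for illustration */
  	};

  	bind(fd, (struct sockaddr *)&addr, sizeof(addr));

  	char buf[2048];
  	for (;;)
  		recv(fd, buf, sizeof(buf), 0); /* just drain the socket */
  }
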
Before the patch, the UDP receiver could almost never get CPU cycles and
could only receive ~2,000 packets per second.

After the patch, CPU cycles are split 50/50 between the user application
and ksoftirqd/0, and we can effectively read ~900,000 packets per second,
a huge improvement in this DoS situation. (Note that more packets are now
dropped by the NIC itself, since the BH handlers get fewer CPU cycles to
drain the RX ring buffer.)

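One way to observe that shift toward NIC-level drops (a hypothetical
helper, not part of the patch; "eth0" is an assumed interface name) is to
watch the interface's rx_dropped counter in sysfs:

  /* Hypothetical watcher for NIC-level drops; "eth0" is assumed. */
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
  	for (;;) {
  		unsigned long long drops = 0;
  		FILE *f = fopen("/sys/class/net/eth0/statistics/rx_dropped", "r");

  		if (f) {
  			fscanf(f, "%llu", &drops);
  			fclose(f);
  		}
  		printf("rx_dropped: %llu\n", drops);
  		sleep(1); /* sample once per second */
  	}
  }
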
Since the load now runs in a well-identified thread context, an admin can
more easily tune process scheduling parameters if needed.

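For instance (a hypothetical sketch, not something the patch provides),
an admin could demote ksoftirqd/0 to SCHED_BATCH once its PID is known,
e.g. looked up beforehand with pgrep:

  /* Hypothetical: demote a ksoftirqd thread to SCHED_BATCH.
   * Usage: ./tune <pid-of-ksoftirqd> */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
  	struct sched_param sp = { .sched_priority = 0 }; /* SCHED_BATCH requires 0 */
  	pid_t pid = atoi(argv[1]);

  	if (sched_setscheduler(pid, SCHED_BATCH, &sp))
  		perror("sched_setscheduler");
  	return 0;
  }
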
Reported-by: Paolo Abeni <pabeni@redhat.com>
Reported-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
---

--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -78,6 +78,17 @@ static void wakeup_softirqd(void)
 }
 
 /*
+ * If ksoftirqd is scheduled, we do not want to process pending softirqs
+ * right now. Let ksoftirqd handle this at its own rate, to get fairness.
+ */
+static bool ksoftirqd_running(void)
+{
+	struct task_struct *tsk = __this_cpu_read(ksoftirqd);
+
+	return tsk && (tsk->state == TASK_RUNNING);
+}
+
+/*
  * preempt_count and SOFTIRQ_OFFSET usage:
  * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
  *   softirq processing.
@@ -313,7 +324,7 @@ asmlinkage __visible void do_softirq(void)
 
 	pending = local_softirq_pending();
 
-	if (pending)
+	if (pending && !ksoftirqd_running())
 		do_softirq_own_stack();
 
 	local_irq_restore(flags);
@@ -340,6 +351,9 @@ void irq_enter(void)
 
 static inline void invoke_softirq(void)
 {
+	if (ksoftirqd_running())
+		return;
+
 	if (!force_irqthreads) {
 #ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
 	/*