From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
Date: Mon, 9 Dec 2019 00:27:34 +0100
Subject: [PATCH] net: WireGuard secure network tunnel

commit e7096c131e5161fa3b8e52a650d7719d2857adfd upstream.

WireGuard is a layer 3 secure networking tunnel made specifically for
the kernel, that aims to be much simpler and easier to audit than IPsec.
Extensive documentation and description of the protocol and
considerations, along with formal proofs of the cryptography, are
available at:

  * https://www.wireguard.com/
  * https://www.wireguard.com/papers/wireguard.pdf

This commit implements WireGuard as a simple network device driver,
accessible in the usual RTNL way used by virtual network drivers. It
makes use of the udp_tunnel APIs, GRO, GSO, NAPI, and the usual set of
networking subsystem APIs. It has a somewhat novel multicore queueing
system designed for maximum throughput and minimal latency of encryption
operations, but it is implemented modestly using workqueues and NAPI.
Configuration is done via generic Netlink, and following a review from
the Netlink maintainer a year ago, several high-profile userspace tools
have already implemented the API.

This commit also comes with several different tests, both in-kernel
tests and out-of-kernel tests based on network namespaces, taking
advantage of the fact that sockets used by WireGuard intentionally stay
in the namespace the WireGuard interface was originally created in,
exactly like the semantics of userspace tun devices. See
wireguard.com/netns/ for pictures and examples.

The source code is fairly short, but rather than combining everything
into a single file, WireGuard is developed as cleanly separable files,
making auditing and comprehension easier. Things are laid out as
follows:

  * noise.[ch], cookie.[ch], messages.h: These implement the bulk of the
    cryptographic aspects of the protocol, and are mostly data-only in
    nature, taking in buffers of bytes and spitting out buffers of
    bytes. They also handle reference counting for their various shared
    pieces of data, like keys and key lists.

  * ratelimiter.[ch]: Used as an integral part of cookie.[ch] for
    ratelimiting certain types of cryptographic operations in accordance
    with particular WireGuard semantics.

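The general shape of such a ratelimiter can be sketched as a token bucket, where tokens are measured in nanoseconds of elapsed time. This is only an illustrative userspace sketch, not the kernel code in this patch: the real ratelimiter.[ch] keys buckets by source IP in a garbage-collected hashtable and uses kernel timekeeping; the names here (struct bucket, bucket_allow) are invented for the example.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative token bucket in the spirit of ratelimiter.[ch]; names
 * and constants are hypothetical, not taken from the kernel sources. */
struct bucket {
	uint64_t last_ns; /* timestamp of the previous refill */
	uint64_t tokens;  /* remaining allowance, in nanoseconds */
};

#define NSEC_PER_SEC 1000000000ULL
#define PACKETS_PER_SECOND 20ULL
#define PACKETS_BURSTABLE 5ULL

static bool bucket_allow(struct bucket *b, uint64_t now_ns)
{
	uint64_t cost = NSEC_PER_SEC / PACKETS_PER_SECOND;
	uint64_t max = PACKETS_BURSTABLE * cost;

	/* Refill proportionally to elapsed time, capped at the burst size. */
	b->tokens += now_ns - b->last_ns;
	if (b->tokens > max)
		b->tokens = max;
	b->last_ns = now_ns;

	/* Spend one packet's worth of tokens, if available. */
	if (b->tokens >= cost) {
		b->tokens -= cost;
		return true;
	}
	return false;
}
```

With these constants, a fresh bucket admits a burst of five packets, then one packet per 50 ms of elapsed time.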
  * allowedips.[ch], peerlookup.[ch]: The main lookup structures of
    WireGuard, the former being trie-like with particular semantics, an
    integral part of the design of the protocol, and the latter just
    being nice helper functions around the various hashtables we use.

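At each internal node of that trie, the branch taken is decided by a single bit of the (byte-swapped) address. A minimal standalone sketch of that bit selection, mirroring the byte-index/shift arithmetic that copy_and_assign_cidr() precomputes (shown here for big-endian key bytes, without the little-endian XOR trick the kernel applies):

```c
#include <assert.h>
#include <stdint.h>

/* Return bit number `cidr` of a big-endian key, where bit 0 is the
 * most significant bit of key[0]. bit_at_a is the byte index and
 * bit_at_b the shift, as in the allowedips trie nodes. */
static int bit_at(const uint8_t *key, uint8_t cidr)
{
	uint8_t bit_at_a = cidr / 8U;        /* which byte holds the bit */
	uint8_t bit_at_b = 7U - (cidr % 8U); /* shift within that byte */

	return (key[bit_at_a] >> bit_at_b) & 1;
}
```

A lookup walks down the trie choosing bit[0] or bit[1] by this value, remembering the last node that carried a peer, which yields longest-prefix matching.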
  * device.[ch]: Implementation of functions for the netdevice and for
    rtnl, responsible for maintaining the life of a given interface and
    wiring it up to the rest of WireGuard.

  * peer.[ch]: Each interface has a list of peers, with helper functions
    available here for creation, destruction, and reference counting.

  * socket.[ch]: Implementation of functions related to udp_socket and
    the general set of kernel socket APIs, for sending and receiving
    ciphertext UDP packets, and taking care of WireGuard-specific sticky
    socket routing semantics for the automatic roaming.

  * netlink.[ch]: Userspace API entry point for configuring WireGuard
    peers and devices. The API has been implemented by several userspace
    tools and network management utilities, and the WireGuard project
    distributes the basic wg(8) tool.

  * queueing.[ch]: Shared functions on the rx and tx path for handling
    the various queues used in the multicore algorithms.

  * send.c: Handles encrypting outgoing packets in parallel on
    multiple cores, before sending them in order on a single core, via
    workqueues and ring buffers. Also handles sending handshake and
    cookie messages as part of the protocol, in parallel.

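The "encrypt in parallel, transmit in order" idea above can be modeled with a ring of completion flags: workers mark slots done in whatever order their encryption finishes, and a single serial stage only advances past contiguous completed slots. This is a deliberately simplified single-threaded illustration; the names (struct tx_ring, complete, flush_in_order) are invented and the kernel's actual multicore queueing in queueing.[ch]/send.c is more involved.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 8

struct tx_ring {
	bool done[RING_SIZE]; /* per-slot "encryption finished" flags */
	size_t head;          /* next slot to transmit, in order */
};

/* A worker finished encrypting the packet occupying slot `slot`. */
static void complete(struct tx_ring *r, size_t slot)
{
	r->done[slot % RING_SIZE] = true;
}

/* Serial stage: transmit contiguous finished packets; returns count. */
static size_t flush_in_order(struct tx_ring *r)
{
	size_t sent = 0;

	while (r->done[r->head % RING_SIZE]) {
		r->done[r->head % RING_SIZE] = false;
		r->head++;
		sent++;
	}
	return sent;
}
```

If packet 1 finishes before packet 0, nothing is sent until packet 0 completes, at which point both go out back-to-back, preserving wire order.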
  * receive.c: Handles decrypting incoming packets in parallel on
    multiple cores, before passing them off in order to be ingested via
    the rest of the networking subsystem with GRO via the typical NAPI
    poll function. Also handles receiving handshake and cookie messages
    as part of the protocol, in parallel.

  * timers.[ch]: Uses the timer wheel to implement protocol-specific
    event timeouts, and gives a set of very simple event-driven entry
    point functions for callers.

  * main.c, version.h: Initialization and deinitialization of the
    module.

  * selftest/*.h: Runtime unit tests for some of the most security
    sensitive functions.

  * tools/testing/selftests/wireguard/netns.sh: Aforementioned testing
    script using network namespaces.

This commit aims to be as self-contained as possible, implementing
WireGuard as a standalone module not needing much special handling or
coordination from the network subsystem. I expect future optimizations
to the network stack to positively improve WireGuard, and vice-versa,
but for the time being, this exists as intentionally self-contained
code.

We introduce a menu option for CONFIG_WIREGUARD, as well as providing a
verbose debug log and self-tests via CONFIG_WIREGUARD_DEBUG.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: David Miller <davem@davemloft.net>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
[Jason: ported to 5.4 by doing the following:
 - wg_get_device_start uses genl_family_attrbuf
 - trivial skb_redirect_reset change from 2c64605b590e is folded in
 - skb_list_walk_safe was already backported prior]
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 MAINTAINERS | 8 +
 drivers/net/Kconfig | 41 +
 drivers/net/Makefile | 1 +
 drivers/net/wireguard/Makefile | 18 +
 drivers/net/wireguard/allowedips.c | 381 +++++++++
 drivers/net/wireguard/allowedips.h | 59 ++
 drivers/net/wireguard/cookie.c | 236 ++++++
 drivers/net/wireguard/cookie.h | 59 ++
 drivers/net/wireguard/device.c | 458 ++++++++++
 drivers/net/wireguard/device.h | 65 ++
 drivers/net/wireguard/main.c | 64 ++
 drivers/net/wireguard/messages.h | 128 +++
 drivers/net/wireguard/netlink.c | 648 +++++++++++++++
 drivers/net/wireguard/netlink.h | 12 +
 drivers/net/wireguard/noise.c | 828 +++++++++++++++++++
 drivers/net/wireguard/noise.h | 137 +++
 drivers/net/wireguard/peer.c | 240 ++++++
 drivers/net/wireguard/peer.h | 83 ++
 drivers/net/wireguard/peerlookup.c | 221 +++++
 drivers/net/wireguard/peerlookup.h | 64 ++
 drivers/net/wireguard/queueing.c | 53 ++
 drivers/net/wireguard/queueing.h | 197 +++++
 drivers/net/wireguard/ratelimiter.c | 223 +++++
 drivers/net/wireguard/ratelimiter.h | 19 +
 drivers/net/wireguard/receive.c | 595 +++++++++++++
 drivers/net/wireguard/selftest/allowedips.c | 683 +++++++++++++++
 drivers/net/wireguard/selftest/counter.c | 104 +++
 drivers/net/wireguard/selftest/ratelimiter.c | 226 +++++
 drivers/net/wireguard/send.c | 413 +++++++++
 drivers/net/wireguard/socket.c | 437 ++++++++++
 drivers/net/wireguard/socket.h | 44 +
 drivers/net/wireguard/timers.c | 243 ++++++
 drivers/net/wireguard/timers.h | 31 +
 drivers/net/wireguard/version.h | 1 +
 include/uapi/linux/wireguard.h | 196 +++++
 tools/testing/selftests/wireguard/netns.sh | 537 ++++++++++++
 36 files changed, 7753 insertions(+)
 create mode 100644 drivers/net/wireguard/Makefile
 create mode 100644 drivers/net/wireguard/allowedips.c
 create mode 100644 drivers/net/wireguard/allowedips.h
 create mode 100644 drivers/net/wireguard/cookie.c
 create mode 100644 drivers/net/wireguard/cookie.h
 create mode 100644 drivers/net/wireguard/device.c
 create mode 100644 drivers/net/wireguard/device.h
 create mode 100644 drivers/net/wireguard/main.c
 create mode 100644 drivers/net/wireguard/messages.h
 create mode 100644 drivers/net/wireguard/netlink.c
 create mode 100644 drivers/net/wireguard/netlink.h
 create mode 100644 drivers/net/wireguard/noise.c
 create mode 100644 drivers/net/wireguard/noise.h
 create mode 100644 drivers/net/wireguard/peer.c
 create mode 100644 drivers/net/wireguard/peer.h
 create mode 100644 drivers/net/wireguard/peerlookup.c
 create mode 100644 drivers/net/wireguard/peerlookup.h
 create mode 100644 drivers/net/wireguard/queueing.c
 create mode 100644 drivers/net/wireguard/queueing.h
 create mode 100644 drivers/net/wireguard/ratelimiter.c
 create mode 100644 drivers/net/wireguard/ratelimiter.h
 create mode 100644 drivers/net/wireguard/receive.c
 create mode 100644 drivers/net/wireguard/selftest/allowedips.c
 create mode 100644 drivers/net/wireguard/selftest/counter.c
 create mode 100644 drivers/net/wireguard/selftest/ratelimiter.c
 create mode 100644 drivers/net/wireguard/send.c
 create mode 100644 drivers/net/wireguard/socket.c
 create mode 100644 drivers/net/wireguard/socket.h
 create mode 100644 drivers/net/wireguard/timers.c
 create mode 100644 drivers/net/wireguard/timers.h
 create mode 100644 drivers/net/wireguard/version.h
 create mode 100644 include/uapi/linux/wireguard.h
 create mode 100755 tools/testing/selftests/wireguard/netns.sh

--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17585,6 +17585,14 @@ L:	linux-gpio@vger.kernel.org
 F:	drivers/gpio/gpio-ws16c48.c
 
+WIREGUARD SECURE NETWORK TUNNEL
+M:	Jason A. Donenfeld <Jason@zx2c4.com>
+F:	drivers/net/wireguard/
+F:	tools/testing/selftests/wireguard/
+L:	wireguard@lists.zx2c4.com
+L:	netdev@vger.kernel.org
+
 WISTRON LAPTOP BUTTON DRIVER
 M:	Miloslav Trmac <mitr@volny.cz>
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -71,6 +71,47 @@ config DUMMY
 	  To compile this driver as a module, choose M here: the module
 	  will be called dummy.
 
+config WIREGUARD
+	tristate "WireGuard secure network tunnel"
+	depends on NET && INET
+	depends on IPV6 || !IPV6
+	select NET_UDP_TUNNEL
+	select DST_CACHE
+	select CRYPTO
+	select CRYPTO_LIB_CURVE25519
+	select CRYPTO_LIB_CHACHA20POLY1305
+	select CRYPTO_LIB_BLAKE2S
+	select CRYPTO_CHACHA20_X86_64 if X86 && 64BIT
+	select CRYPTO_POLY1305_X86_64 if X86 && 64BIT
+	select CRYPTO_BLAKE2S_X86 if X86 && 64BIT
+	select CRYPTO_CURVE25519_X86 if X86 && 64BIT
+	select CRYPTO_CHACHA20_NEON if (ARM || ARM64) && KERNEL_MODE_NEON
+	select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON
+	select CRYPTO_POLY1305_ARM if ARM
+	select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
+	select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
+	select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
+	help
+	  WireGuard is a secure, fast, and easy to use replacement for IPSec
+	  that uses modern cryptography and clever networking tricks. It's
+	  designed to be fairly general purpose and abstract enough to fit most
+	  use cases, while at the same time remaining extremely simple to
+	  configure. See www.wireguard.com for more info.
+
+	  It's safe to say Y or M here, as the driver is very lightweight and
+	  is only in use when an administrator chooses to add an interface.
+
+config WIREGUARD_DEBUG
+	bool "Debugging checks and verbose messages"
+	depends on WIREGUARD
+	help
+	  This will write log messages for handshake and other events
+	  that occur for a WireGuard interface. It will also perform some
+	  extra validation checks and unit tests at various points. This is
+	  only useful for debugging.
+
+	  Say N here unless you know what you're doing.
+
 config EQUALIZER
 	tristate "EQL (serial line load balancing) support"
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_BONDING) += bonding/
 obj-$(CONFIG_IPVLAN) += ipvlan/
 obj-$(CONFIG_IPVTAP) += ipvlan/
 obj-$(CONFIG_DUMMY) += dummy.o
+obj-$(CONFIG_WIREGUARD) += wireguard/
 obj-$(CONFIG_EQUALIZER) += eql.o
 obj-$(CONFIG_IFB) += ifb.o
 obj-$(CONFIG_MACSEC) += macsec.o
--- /dev/null
+++ b/drivers/net/wireguard/Makefile
@@ -0,0 +1,18 @@
+ccflags-y := -O3
+ccflags-y += -D'pr_fmt(fmt)=KBUILD_MODNAME ": " fmt'
+ccflags-$(CONFIG_WIREGUARD_DEBUG) += -DDEBUG
+wireguard-y := main.o
+wireguard-y += noise.o
+wireguard-y += device.o
+wireguard-y += peer.o
+wireguard-y += timers.o
+wireguard-y += queueing.o
+wireguard-y += send.o
+wireguard-y += receive.o
+wireguard-y += socket.o
+wireguard-y += peerlookup.o
+wireguard-y += allowedips.o
+wireguard-y += ratelimiter.o
+wireguard-y += cookie.o
+wireguard-y += netlink.o
+obj-$(CONFIG_WIREGUARD) := wireguard.o
--- /dev/null
+++ b/drivers/net/wireguard/allowedips.c
@@ -0,0 +1,381 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include "allowedips.h"
+#include "peer.h"
+
+static void swap_endian(u8 *dst, const u8 *src, u8 bits)
+{
+	if (bits == 32) {
+		*(u32 *)dst = be32_to_cpu(*(const __be32 *)src);
+	} else if (bits == 128) {
+		((u64 *)dst)[0] = be64_to_cpu(((const __be64 *)src)[0]);
+		((u64 *)dst)[1] = be64_to_cpu(((const __be64 *)src)[1]);
+	}
+}
+
+static void copy_and_assign_cidr(struct allowedips_node *node, const u8 *src,
+				 u8 cidr, u8 bits)
+{
+	node->cidr = cidr;
+	node->bit_at_a = cidr / 8U;
+#ifdef __LITTLE_ENDIAN
+	node->bit_at_a ^= (bits / 8U - 1U) % 8U;
+#endif
+	node->bit_at_b = 7U - (cidr % 8U);
+	node->bitlen = bits;
+	memcpy(node->bits, src, bits / 8U);
+}
+
+#define CHOOSE_NODE(parent, key) \
+	parent->bit[(key[parent->bit_at_a] >> parent->bit_at_b) & 1]
+
+static void node_free_rcu(struct rcu_head *rcu)
+{
+	kfree(container_of(rcu, struct allowedips_node, rcu));
+}
+
+static void push_rcu(struct allowedips_node **stack,
+		     struct allowedips_node __rcu *p, unsigned int *len)
+{
+	if (rcu_access_pointer(p)) {
+		WARN_ON(IS_ENABLED(DEBUG) && *len >= 128);
+		stack[(*len)++] = rcu_dereference_raw(p);
+	}
+}
+
+static void root_free_rcu(struct rcu_head *rcu)
+{
+	struct allowedips_node *node, *stack[128] = {
+		container_of(rcu, struct allowedips_node, rcu) };
+	unsigned int len = 1;
+
+	while (len > 0 && (node = stack[--len])) {
+		push_rcu(stack, node->bit[0], &len);
+		push_rcu(stack, node->bit[1], &len);
+		kfree(node);
+	}
+}
+
+static void root_remove_peer_lists(struct allowedips_node *root)
+{
+	struct allowedips_node *node, *stack[128] = { root };
+	unsigned int len = 1;
+
+	while (len > 0 && (node = stack[--len])) {
+		push_rcu(stack, node->bit[0], &len);
+		push_rcu(stack, node->bit[1], &len);
+		if (rcu_access_pointer(node->peer))
+			list_del(&node->peer_list);
+	}
+}
+
+static void walk_remove_by_peer(struct allowedips_node __rcu **top,
+				struct wg_peer *peer, struct mutex *lock)
+{
+#define REF(p) rcu_access_pointer(p)
+#define DEREF(p) rcu_dereference_protected(*(p), lockdep_is_held(lock))
+#define PUSH(p) ({ \
+		WARN_ON(IS_ENABLED(DEBUG) && len >= 128); \
+		stack[len++] = p; \
+	})
+
+	struct allowedips_node __rcu **stack[128], **nptr;
+	struct allowedips_node *node, *prev;
+	unsigned int len;
+
+	if (unlikely(!peer || !REF(*top)))
+		return;
+
+	for (prev = NULL, len = 0, PUSH(top); len > 0; prev = node) {
+		nptr = stack[len - 1];
+		node = DEREF(nptr);
+		if (!node) {
+			--len;
+			continue;
+		}
+		if (!prev || REF(prev->bit[0]) == node ||
+		    REF(prev->bit[1]) == node) {
+			if (REF(node->bit[0]))
+				PUSH(&node->bit[0]);
+			else if (REF(node->bit[1]))
+				PUSH(&node->bit[1]);
+		} else if (REF(node->bit[0]) == prev) {
+			if (REF(node->bit[1]))
+				PUSH(&node->bit[1]);
+		} else {
+			if (rcu_dereference_protected(node->peer,
+				lockdep_is_held(lock)) == peer) {
+				RCU_INIT_POINTER(node->peer, NULL);
+				list_del_init(&node->peer_list);
+				if (!node->bit[0] || !node->bit[1]) {
+					rcu_assign_pointer(*nptr, DEREF(
+					       &node->bit[!REF(node->bit[0])]));
+					call_rcu(&node->rcu, node_free_rcu);
+					node = DEREF(nptr);
+				}
+			}
+			--len;
+		}
+	}
+
+#undef REF
+#undef DEREF
+#undef PUSH
+}
+
+static unsigned int fls128(u64 a, u64 b)
+{
+	return a ? fls64(a) + 64U : fls64(b);
+}
+
+static u8 common_bits(const struct allowedips_node *node, const u8 *key,
+		      u8 bits)
+{
+	if (bits == 32)
+		return 32U - fls(*(const u32 *)node->bits ^ *(const u32 *)key);
+	else if (bits == 128)
+		return 128U - fls128(
+			*(const u64 *)&node->bits[0] ^ *(const u64 *)&key[0],
+			*(const u64 *)&node->bits[8] ^ *(const u64 *)&key[8]);
+	return 0;
+}
+
+static bool prefix_matches(const struct allowedips_node *node, const u8 *key,
+			   u8 bits)
+{
+	/* This could be much faster if it actually just compared the common
+	 * bits properly, by precomputing a mask bswap(~0 << (32 - cidr)), and
+	 * the rest, but it turns out that common_bits is already super fast on
+	 * modern processors, even taking into account the unfortunate bswap.
+	 * So, we just inline it like this instead.
+	 */
+	return common_bits(node, key, bits) >= node->cidr;
+}
+
+static struct allowedips_node *find_node(struct allowedips_node *trie, u8 bits,
+					 const u8 *key)
+{
+	struct allowedips_node *node = trie, *found = NULL;
+
+	while (node && prefix_matches(node, key, bits)) {
+		if (rcu_access_pointer(node->peer))
+			found = node;
+		if (node->cidr == bits)
+			break;
+		node = rcu_dereference_bh(CHOOSE_NODE(node, key));
+	}
+	return found;
+}
+
+/* Returns a strong reference to a peer */
+static struct wg_peer *lookup(struct allowedips_node __rcu *root, u8 bits,
+			      const void *be_ip)
+{
+	/* Aligned so it can be passed to fls/fls64 */
+	u8 ip[16] __aligned(__alignof(u64));
+	struct allowedips_node *node;
+	struct wg_peer *peer = NULL;
+
+	swap_endian(ip, be_ip, bits);
+
+	rcu_read_lock_bh();
+retry:
+	node = find_node(rcu_dereference_bh(root), bits, ip);
+	if (node) {
+		peer = wg_peer_get_maybe_zero(rcu_dereference_bh(node->peer));
+		if (!peer)
+			goto retry;
+	}
+	rcu_read_unlock_bh();
+	return peer;
+}
+
+static bool node_placement(struct allowedips_node __rcu *trie, const u8 *key,
+			   u8 cidr, u8 bits, struct allowedips_node **rnode,
+			   struct mutex *lock)
+{
+	struct allowedips_node *node = rcu_dereference_protected(trie,
+						lockdep_is_held(lock));
+	struct allowedips_node *parent = NULL;
+	bool exact = false;
+
+	while (node && node->cidr <= cidr && prefix_matches(node, key, bits)) {
+		parent = node;
+		if (parent->cidr == cidr) {
+			exact = true;
+			break;
+		}
+		node = rcu_dereference_protected(CHOOSE_NODE(parent, key),
+						 lockdep_is_held(lock));
+	}
+	*rnode = parent;
+	return exact;
+}
+
+static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
+	       u8 cidr, struct wg_peer *peer, struct mutex *lock)
+{
+	struct allowedips_node *node, *parent, *down, *newnode;
+
+	if (unlikely(cidr > bits || !peer))
+		return -EINVAL;
+
+	if (!rcu_access_pointer(*trie)) {
+		node = kzalloc(sizeof(*node), GFP_KERNEL);
+		if (unlikely(!node))
+			return -ENOMEM;
+		RCU_INIT_POINTER(node->peer, peer);
+		list_add_tail(&node->peer_list, &peer->allowedips_list);
+		copy_and_assign_cidr(node, key, cidr, bits);
+		rcu_assign_pointer(*trie, node);
+		return 0;
+	}
+	if (node_placement(*trie, key, cidr, bits, &node, lock)) {
+		rcu_assign_pointer(node->peer, peer);
+		list_move_tail(&node->peer_list, &peer->allowedips_list);
+		return 0;
+	}
+
+	newnode = kzalloc(sizeof(*newnode), GFP_KERNEL);
+	if (unlikely(!newnode))
+		return -ENOMEM;
+	RCU_INIT_POINTER(newnode->peer, peer);
+	list_add_tail(&newnode->peer_list, &peer->allowedips_list);
+	copy_and_assign_cidr(newnode, key, cidr, bits);
+
+	if (!node) {
+		down = rcu_dereference_protected(*trie, lockdep_is_held(lock));
+	} else {
+		down = rcu_dereference_protected(CHOOSE_NODE(node, key),
+						 lockdep_is_held(lock));
+		if (!down) {
+			rcu_assign_pointer(CHOOSE_NODE(node, key), newnode);
+			return 0;
+		}
+	}
+	cidr = min(cidr, common_bits(down, key, bits));
+	parent = node;
+
+	if (newnode->cidr == cidr) {
+		rcu_assign_pointer(CHOOSE_NODE(newnode, down->bits), down);
+		if (!parent)
+			rcu_assign_pointer(*trie, newnode);
+		else
+			rcu_assign_pointer(CHOOSE_NODE(parent, newnode->bits),
+					   newnode);
+	} else {
+		node = kzalloc(sizeof(*node), GFP_KERNEL);
+		if (unlikely(!node)) {
+			kfree(newnode);
+			return -ENOMEM;
+		}
+		INIT_LIST_HEAD(&node->peer_list);
+		copy_and_assign_cidr(node, newnode->bits, cidr, bits);
+
+		rcu_assign_pointer(CHOOSE_NODE(node, down->bits), down);
+		rcu_assign_pointer(CHOOSE_NODE(node, newnode->bits), newnode);
+		if (!parent)
+			rcu_assign_pointer(*trie, node);
+		else
+			rcu_assign_pointer(CHOOSE_NODE(parent, node->bits),
+					   node);
+	}
+	return 0;
+}
+
+void wg_allowedips_init(struct allowedips *table)
+{
+	table->root4 = table->root6 = NULL;
+	table->seq = 1;
+}
+
+void wg_allowedips_free(struct allowedips *table, struct mutex *lock)
+{
+	struct allowedips_node __rcu *old4 = table->root4, *old6 = table->root6;
+
+	++table->seq;
+	RCU_INIT_POINTER(table->root4, NULL);
+	RCU_INIT_POINTER(table->root6, NULL);
+	if (rcu_access_pointer(old4)) {
+		struct allowedips_node *node = rcu_dereference_protected(old4,
+							lockdep_is_held(lock));
+
+		root_remove_peer_lists(node);
+		call_rcu(&node->rcu, root_free_rcu);
+	}
+	if (rcu_access_pointer(old6)) {
+		struct allowedips_node *node = rcu_dereference_protected(old6,
+							lockdep_is_held(lock));
+
+		root_remove_peer_lists(node);
+		call_rcu(&node->rcu, root_free_rcu);
+	}
+}
+
+int wg_allowedips_insert_v4(struct allowedips *table, const struct in_addr *ip,
+			    u8 cidr, struct wg_peer *peer, struct mutex *lock)
+{
+	/* Aligned so it can be passed to fls */
+	u8 key[4] __aligned(__alignof(u32));
+
+	++table->seq;
+	swap_endian(key, (const u8 *)ip, 32);
+	return add(&table->root4, 32, key, cidr, peer, lock);
+}
+
+int wg_allowedips_insert_v6(struct allowedips *table, const struct in6_addr *ip,
+			    u8 cidr, struct wg_peer *peer, struct mutex *lock)
+{
+	/* Aligned so it can be passed to fls64 */
+	u8 key[16] __aligned(__alignof(u64));
+
+	++table->seq;
+	swap_endian(key, (const u8 *)ip, 128);
+	return add(&table->root6, 128, key, cidr, peer, lock);
+}
+
+void wg_allowedips_remove_by_peer(struct allowedips *table,
+				  struct wg_peer *peer, struct mutex *lock)
+{
+	++table->seq;
+	walk_remove_by_peer(&table->root4, peer, lock);
+	walk_remove_by_peer(&table->root6, peer, lock);
+}
+
+int wg_allowedips_read_node(struct allowedips_node *node, u8 ip[16], u8 *cidr)
+{
+	const unsigned int cidr_bytes = DIV_ROUND_UP(node->cidr, 8U);
+	swap_endian(ip, node->bits, node->bitlen);
+	memset(ip + cidr_bytes, 0, node->bitlen / 8U - cidr_bytes);
+	if (node->cidr)
+		ip[cidr_bytes - 1U] &= ~0U << (-node->cidr % 8U);
+
+	*cidr = node->cidr;
+	return node->bitlen == 32 ? AF_INET : AF_INET6;
+}
+
+/* Returns a strong reference to a peer */
+struct wg_peer *wg_allowedips_lookup_dst(struct allowedips *table,
+					 struct sk_buff *skb)
+{
+	if (skb->protocol == htons(ETH_P_IP))
+		return lookup(table->root4, 32, &ip_hdr(skb)->daddr);
+	else if (skb->protocol == htons(ETH_P_IPV6))
+		return lookup(table->root6, 128, &ipv6_hdr(skb)->daddr);
+	return NULL;
+}
+
+/* Returns a strong reference to a peer */
+struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
+					 struct sk_buff *skb)
+{
+	if (skb->protocol == htons(ETH_P_IP))
+		return lookup(table->root4, 32, &ip_hdr(skb)->saddr);
+	else if (skb->protocol == htons(ETH_P_IPV6))
+		return lookup(table->root6, 128, &ipv6_hdr(skb)->saddr);
+	return NULL;
+}
+
+#include "selftest/allowedips.c"
--- /dev/null
+++ b/drivers/net/wireguard/allowedips.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#ifndef _WG_ALLOWEDIPS_H
+#define _WG_ALLOWEDIPS_H
+
+#include <linux/mutex.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+
+struct wg_peer;
+
+struct allowedips_node {
+	struct wg_peer __rcu *peer;
+	struct allowedips_node __rcu *bit[2];
+	/* While it may seem scandalous that we waste space for v4,
+	 * we're alloc'ing to the nearest power of 2 anyway, so this
+	 * doesn't actually make a difference.
+	 */
+	u8 bits[16] __aligned(__alignof(u64));
+	u8 cidr, bit_at_a, bit_at_b, bitlen;
+
+	/* Keep rarely used list at bottom to be beyond cache line. */
+	struct list_head peer_list;
+	struct rcu_head rcu;
+};
+
+struct allowedips {
+	struct allowedips_node __rcu *root4;
+	struct allowedips_node __rcu *root6;
+	u64 seq;
+};
+
+void wg_allowedips_init(struct allowedips *table);
+void wg_allowedips_free(struct allowedips *table, struct mutex *mutex);
+int wg_allowedips_insert_v4(struct allowedips *table, const struct in_addr *ip,
+			    u8 cidr, struct wg_peer *peer, struct mutex *lock);
+int wg_allowedips_insert_v6(struct allowedips *table, const struct in6_addr *ip,
+			    u8 cidr, struct wg_peer *peer, struct mutex *lock);
+void wg_allowedips_remove_by_peer(struct allowedips *table,
+				  struct wg_peer *peer, struct mutex *lock);
+/* The ip input pointer should be __aligned(__alignof(u64))) */
+int wg_allowedips_read_node(struct allowedips_node *node, u8 ip[16], u8 *cidr);
+
+/* These return a strong reference to a peer: */
+struct wg_peer *wg_allowedips_lookup_dst(struct allowedips *table,
+					 struct sk_buff *skb);
+struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
+					 struct sk_buff *skb);
+
+#ifdef DEBUG
+bool wg_allowedips_selftest(void);
+#endif
+
+#endif /* _WG_ALLOWEDIPS_H */
--- /dev/null
+++ b/drivers/net/wireguard/cookie.c
@@ -0,0 +1,236 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include "cookie.h"
+#include "peer.h"
+#include "device.h"
+#include "messages.h"
+#include "ratelimiter.h"
+#include "timers.h"
+
+#include <crypto/blake2s.h>
+#include <crypto/chacha20poly1305.h>
+
+#include <net/ipv6.h>
+#include <crypto/algapi.h>
+
+void wg_cookie_checker_init(struct cookie_checker *checker,
+			    struct wg_device *wg)
+{
+	init_rwsem(&checker->secret_lock);
+	checker->secret_birthdate = ktime_get_coarse_boottime_ns();
+	get_random_bytes(checker->secret, NOISE_HASH_LEN);
+	checker->device = wg;
+}
+
+enum { COOKIE_KEY_LABEL_LEN = 8 };
+static const u8 mac1_key_label[COOKIE_KEY_LABEL_LEN] = "mac1----";
+static const u8 cookie_key_label[COOKIE_KEY_LABEL_LEN] = "cookie--";
+
+static void precompute_key(u8 key[NOISE_SYMMETRIC_KEY_LEN],
+			   const u8 pubkey[NOISE_PUBLIC_KEY_LEN],
+			   const u8 label[COOKIE_KEY_LABEL_LEN])
+{
+	struct blake2s_state blake;
+
+	blake2s_init(&blake, NOISE_SYMMETRIC_KEY_LEN);
+	blake2s_update(&blake, label, COOKIE_KEY_LABEL_LEN);
+	blake2s_update(&blake, pubkey, NOISE_PUBLIC_KEY_LEN);
+	blake2s_final(&blake, key);
+}
+
+/* Must hold peer->handshake.static_identity->lock */
+void wg_cookie_checker_precompute_device_keys(struct cookie_checker *checker)
+{
+	if (likely(checker->device->static_identity.has_identity)) {
+		precompute_key(checker->cookie_encryption_key,
+			       checker->device->static_identity.static_public,
+			       cookie_key_label);
+		precompute_key(checker->message_mac1_key,
+			       checker->device->static_identity.static_public,
+			       mac1_key_label);
+	} else {
+		memset(checker->cookie_encryption_key, 0,
+		       NOISE_SYMMETRIC_KEY_LEN);
+		memset(checker->message_mac1_key, 0, NOISE_SYMMETRIC_KEY_LEN);
+	}
+}
+
+void wg_cookie_checker_precompute_peer_keys(struct wg_peer *peer)
+{
+	precompute_key(peer->latest_cookie.cookie_decryption_key,
+		       peer->handshake.remote_static, cookie_key_label);
+	precompute_key(peer->latest_cookie.message_mac1_key,
+		       peer->handshake.remote_static, mac1_key_label);
+}
+
+void wg_cookie_init(struct cookie *cookie)
+{
+	memset(cookie, 0, sizeof(*cookie));
+	init_rwsem(&cookie->lock);
+}
+
+static void compute_mac1(u8 mac1[COOKIE_LEN], const void *message, size_t len,
+			 const u8 key[NOISE_SYMMETRIC_KEY_LEN])
+{
+	len = len - sizeof(struct message_macs) +
+	      offsetof(struct message_macs, mac1);
+	blake2s(mac1, message, key, COOKIE_LEN, len, NOISE_SYMMETRIC_KEY_LEN);
+}
+
+static void compute_mac2(u8 mac2[COOKIE_LEN], const void *message, size_t len,
+			 const u8 cookie[COOKIE_LEN])
+{
+	len = len - sizeof(struct message_macs) +
+	      offsetof(struct message_macs, mac2);
+	blake2s(mac2, message, cookie, COOKIE_LEN, len, COOKIE_LEN);
+}
+
+static void make_cookie(u8 cookie[COOKIE_LEN], struct sk_buff *skb,
+			struct cookie_checker *checker)
+{
+	struct blake2s_state state;
+
+	if (wg_birthdate_has_expired(checker->secret_birthdate,
+				     COOKIE_SECRET_MAX_AGE)) {
+		down_write(&checker->secret_lock);
+		checker->secret_birthdate = ktime_get_coarse_boottime_ns();
+		get_random_bytes(checker->secret, NOISE_HASH_LEN);
+		up_write(&checker->secret_lock);
+	}
+
+	down_read(&checker->secret_lock);
+
+	blake2s_init_key(&state, COOKIE_LEN, checker->secret, NOISE_HASH_LEN);
+	if (skb->protocol == htons(ETH_P_IP))
+		blake2s_update(&state, (u8 *)&ip_hdr(skb)->saddr,
+			       sizeof(struct in_addr));
+	else if (skb->protocol == htons(ETH_P_IPV6))
+		blake2s_update(&state, (u8 *)&ipv6_hdr(skb)->saddr,
+			       sizeof(struct in6_addr));
+	blake2s_update(&state, (u8 *)&udp_hdr(skb)->source, sizeof(__be16));
+	blake2s_final(&state, cookie);
+
+	up_read(&checker->secret_lock);
+}
+
+enum cookie_mac_state wg_cookie_validate_packet(struct cookie_checker *checker,
+						struct sk_buff *skb,
+						bool check_cookie)
+{
+	struct message_macs *macs = (struct message_macs *)
+		(skb->data + skb->len - sizeof(*macs));
+	enum cookie_mac_state ret;
+	u8 computed_mac[COOKIE_LEN];
+	u8 cookie[COOKIE_LEN];
+
+	ret = INVALID_MAC;
+	compute_mac1(computed_mac, skb->data, skb->len,
+		     checker->message_mac1_key);
+	if (crypto_memneq(computed_mac, macs->mac1, COOKIE_LEN))
+		goto out;
+
+	ret = VALID_MAC_BUT_NO_COOKIE;
+	if (!check_cookie)
+		goto out;
+
+	make_cookie(cookie, skb, checker);
+
+	compute_mac2(computed_mac, skb->data, skb->len, cookie);
+	if (crypto_memneq(computed_mac, macs->mac2, COOKIE_LEN))
+		goto out;
+
+	ret = VALID_MAC_WITH_COOKIE_BUT_RATELIMITED;
+	if (!wg_ratelimiter_allow(skb, dev_net(checker->device->dev)))
+		goto out;
+
+	ret = VALID_MAC_WITH_COOKIE;
+
+out:
+	return ret;
+}
+
+void wg_cookie_add_mac_to_packet(void *message, size_t len,
+				 struct wg_peer *peer)
+{
+	struct message_macs *macs = (struct message_macs *)
+		((u8 *)message + len - sizeof(*macs));
+
+	down_write(&peer->latest_cookie.lock);
+	compute_mac1(macs->mac1, message, len,
+		     peer->latest_cookie.message_mac1_key);
+	memcpy(peer->latest_cookie.last_mac1_sent, macs->mac1, COOKIE_LEN);
+	peer->latest_cookie.have_sent_mac1 = true;
+	up_write(&peer->latest_cookie.lock);
+
+	down_read(&peer->latest_cookie.lock);
+	if (peer->latest_cookie.is_valid &&
+	    !wg_birthdate_has_expired(peer->latest_cookie.birthdate,
+				COOKIE_SECRET_MAX_AGE - COOKIE_SECRET_LATENCY))
+		compute_mac2(macs->mac2, message, len,
+			     peer->latest_cookie.cookie);
+	else
+		memset(macs->mac2, 0, COOKIE_LEN);
+	up_read(&peer->latest_cookie.lock);
+}
+
+void wg_cookie_message_create(struct message_handshake_cookie *dst,
+			      struct sk_buff *skb, __le32 index,
+			      struct cookie_checker *checker)
+{
+	struct message_macs *macs = (struct message_macs *)
+		((u8 *)skb->data + skb->len - sizeof(*macs));
+	u8 cookie[COOKIE_LEN];
+
+	dst->header.type = cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE);
+	dst->receiver_index = index;
+	get_random_bytes_wait(dst->nonce, COOKIE_NONCE_LEN);
+
+	make_cookie(cookie, skb, checker);
+	xchacha20poly1305_encrypt(dst->encrypted_cookie, cookie, COOKIE_LEN,
+				  macs->mac1, COOKIE_LEN, dst->nonce,
+				  checker->cookie_encryption_key);
+}
+
+void wg_cookie_message_consume(struct message_handshake_cookie *src,
+			       struct wg_device *wg)
+{
+	struct wg_peer *peer = NULL;
+	u8 cookie[COOKIE_LEN];
+	int ret;
+
+	if (unlikely(!wg_index_hashtable_lookup(wg->index_hashtable,
+						INDEX_HASHTABLE_HANDSHAKE |
+						INDEX_HASHTABLE_KEYPAIR,
+						src->receiver_index, &peer)))
+		return;
+
+	down_read(&peer->latest_cookie.lock);
+	if (unlikely(!peer->latest_cookie.have_sent_mac1)) {
+		up_read(&peer->latest_cookie.lock);
+		goto out;
+	}
+	ret = xchacha20poly1305_decrypt(
+		cookie, src->encrypted_cookie, sizeof(src->encrypted_cookie),
+		peer->latest_cookie.last_mac1_sent, COOKIE_LEN, src->nonce,
+		peer->latest_cookie.cookie_decryption_key);
+	up_read(&peer->latest_cookie.lock);
+
+	if (ret) {
+		down_write(&peer->latest_cookie.lock);
+		memcpy(peer->latest_cookie.cookie, cookie, COOKIE_LEN);
+		peer->latest_cookie.birthdate = ktime_get_coarse_boottime_ns();
+		peer->latest_cookie.is_valid = true;
+		peer->latest_cookie.have_sent_mac1 = false;
+		up_write(&peer->latest_cookie.lock);
+	} else {
+		net_dbg_ratelimited("%s: Could not decrypt invalid cookie response\n",
+				    wg->dev->name);
+	}
+
+out:
+	wg_peer_put(peer);
+}
979 +/* SPDX-License-Identifier: GPL-2.0 */
981 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
984 +#ifndef _WG_COOKIE_H
985 +#define _WG_COOKIE_H
987 +#include "messages.h"
988 +#include <linux/rwsem.h>
992 +struct cookie_checker {
993 + u8 secret[NOISE_HASH_LEN];
994 + u8 cookie_encryption_key[NOISE_SYMMETRIC_KEY_LEN];
995 + u8 message_mac1_key[NOISE_SYMMETRIC_KEY_LEN];
996 + u64 secret_birthdate;
997 + struct rw_semaphore secret_lock;
998 + struct wg_device *device;
1004 + u8 cookie[COOKIE_LEN];
1005 + bool have_sent_mac1;
1006 + u8 last_mac1_sent[COOKIE_LEN];
1007 + u8 cookie_decryption_key[NOISE_SYMMETRIC_KEY_LEN];
1008 + u8 message_mac1_key[NOISE_SYMMETRIC_KEY_LEN];
1009 + struct rw_semaphore lock;
1012 +enum cookie_mac_state {
1014 + VALID_MAC_BUT_NO_COOKIE,
1015 + VALID_MAC_WITH_COOKIE_BUT_RATELIMITED,
1016 + VALID_MAC_WITH_COOKIE
1019 +void wg_cookie_checker_init(struct cookie_checker *checker,
1020 + struct wg_device *wg);
1021 +void wg_cookie_checker_precompute_device_keys(struct cookie_checker *checker);
1022 +void wg_cookie_checker_precompute_peer_keys(struct wg_peer *peer);
1023 +void wg_cookie_init(struct cookie *cookie);
1025 +enum cookie_mac_state wg_cookie_validate_packet(struct cookie_checker *checker,
1026 + struct sk_buff *skb,
1027 + bool check_cookie);
1028 +void wg_cookie_add_mac_to_packet(void *message, size_t len,
1029 + struct wg_peer *peer);
1031 +void wg_cookie_message_create(struct message_handshake_cookie *dst,
1032 + struct sk_buff *skb, __le32 index,
1033 + struct cookie_checker *checker);
1034 +void wg_cookie_message_consume(struct message_handshake_cookie *src,
1035 + struct wg_device *wg);
1037 +#endif /* _WG_COOKIE_H */
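[Editor's note: the `cookie_mac_state` enum above encodes the outcome of `wg_cookie_validate_packet()`: an invalid mac1 rejects the message; a valid mac1 alone suffices when the receiver is not under load; under load a valid mac2 is also required, and even then the rate limiter may defer the handshake. A sketch of that decision tree, with all names and the boolean-input shape assumed for illustration (the real function computes the MACs with BLAKE2s over the message):]

```c
#include <assert.h>

enum mac_state {
	INVALID_MAC,
	VALID_MAC_BUT_NO_COOKIE,
	VALID_MAC_WITH_COOKIE_BUT_RATELIMITED,
	VALID_MAC_WITH_COOKIE
};

/* Hypothetical classifier mirroring the result states of
 * wg_cookie_validate_packet(); check_cookie corresponds to the
 * under-load case where mac2 must also be verified. */
static enum mac_state classify(int mac1_ok, int check_cookie, int mac2_ok,
			       int ratelimited)
{
	if (!mac1_ok)
		return INVALID_MAC;
	if (!check_cookie || !mac2_ok)
		return VALID_MAC_BUT_NO_COOKIE;
	return ratelimited ? VALID_MAC_WITH_COOKIE_BUT_RATELIMITED :
			     VALID_MAC_WITH_COOKIE;
}
```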
1039 +++ b/drivers/net/wireguard/device.c
1041 +// SPDX-License-Identifier: GPL-2.0
1043 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
1046 +#include "queueing.h"
1047 +#include "socket.h"
1048 +#include "timers.h"
1049 +#include "device.h"
1050 +#include "ratelimiter.h"
1052 +#include "messages.h"
1054 +#include <linux/module.h>
1055 +#include <linux/rtnetlink.h>
1056 +#include <linux/inet.h>
1057 +#include <linux/netdevice.h>
1058 +#include <linux/inetdevice.h>
1059 +#include <linux/if_arp.h>
1060 +#include <linux/icmp.h>
1061 +#include <linux/suspend.h>
1062 +#include <net/icmp.h>
1063 +#include <net/rtnetlink.h>
1064 +#include <net/ip_tunnels.h>
1065 +#include <net/addrconf.h>
1067 +static LIST_HEAD(device_list);
1069 +static int wg_open(struct net_device *dev)
1071 + struct in_device *dev_v4 = __in_dev_get_rtnl(dev);
1072 + struct inet6_dev *dev_v6 = __in6_dev_get(dev);
1073 + struct wg_device *wg = netdev_priv(dev);
1074 + struct wg_peer *peer;
1078 + /* At some point we might put this check near the ip_rt_send_
1079 + * redirect call of ip_forward in net/ipv4/ip_forward.c, similar
1080 + * to the current secpath check.
1082 + IN_DEV_CONF_SET(dev_v4, SEND_REDIRECTS, false);
1083 + IPV4_DEVCONF_ALL(dev_net(dev), SEND_REDIRECTS) = false;
1086 + dev_v6->cnf.addr_gen_mode = IN6_ADDR_GEN_MODE_NONE;
1088 + ret = wg_socket_init(wg, wg->incoming_port);
1091 + mutex_lock(&wg->device_update_lock);
1092 + list_for_each_entry(peer, &wg->peer_list, peer_list) {
1093 + wg_packet_send_staged_packets(peer);
1094 + if (peer->persistent_keepalive_interval)
1095 + wg_packet_send_keepalive(peer);
1097 + mutex_unlock(&wg->device_update_lock);
1101 +#ifdef CONFIG_PM_SLEEP
1102 +static int wg_pm_notification(struct notifier_block *nb, unsigned long action,
1105 + struct wg_device *wg;
1106 + struct wg_peer *peer;
1108 + /* If the machine is constantly suspending and resuming, as part of
1109 + * its normal operation rather than as a somewhat rare event, then we
1110 + * don't actually want to clear keys.
1112 + if (IS_ENABLED(CONFIG_PM_AUTOSLEEP) || IS_ENABLED(CONFIG_ANDROID))
1115 + if (action != PM_HIBERNATION_PREPARE && action != PM_SUSPEND_PREPARE)
1119 + list_for_each_entry(wg, &device_list, device_list) {
1120 + mutex_lock(&wg->device_update_lock);
1121 + list_for_each_entry(peer, &wg->peer_list, peer_list) {
1122 + del_timer(&peer->timer_zero_key_material);
1123 + wg_noise_handshake_clear(&peer->handshake);
1124 + wg_noise_keypairs_clear(&peer->keypairs);
1126 + mutex_unlock(&wg->device_update_lock);
1133 +static struct notifier_block pm_notifier = { .notifier_call = wg_pm_notification };
1136 +static int wg_stop(struct net_device *dev)
1138 + struct wg_device *wg = netdev_priv(dev);
1139 + struct wg_peer *peer;
1141 + mutex_lock(&wg->device_update_lock);
1142 + list_for_each_entry(peer, &wg->peer_list, peer_list) {
1143 + wg_packet_purge_staged_packets(peer);
1144 + wg_timers_stop(peer);
1145 + wg_noise_handshake_clear(&peer->handshake);
1146 + wg_noise_keypairs_clear(&peer->keypairs);
1147 + wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
1149 + mutex_unlock(&wg->device_update_lock);
1150 + skb_queue_purge(&wg->incoming_handshakes);
1151 + wg_socket_reinit(wg, NULL, NULL);
1155 +static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
1157 + struct wg_device *wg = netdev_priv(dev);
1158 + struct sk_buff_head packets;
1159 + struct wg_peer *peer;
1160 + struct sk_buff *next;
1161 + sa_family_t family;
1165 + if (unlikely(wg_skb_examine_untrusted_ip_hdr(skb) != skb->protocol)) {
1166 + ret = -EPROTONOSUPPORT;
1167 + net_dbg_ratelimited("%s: Invalid IP packet\n", dev->name);
1171 + peer = wg_allowedips_lookup_dst(&wg->peer_allowedips, skb);
1172 + if (unlikely(!peer)) {
1174 + if (skb->protocol == htons(ETH_P_IP))
1175 + net_dbg_ratelimited("%s: No peer has allowed IPs matching %pI4\n",
1176 + dev->name, &ip_hdr(skb)->daddr);
1177 + else if (skb->protocol == htons(ETH_P_IPV6))
1178 + net_dbg_ratelimited("%s: No peer has allowed IPs matching %pI6\n",
1179 + dev->name, &ipv6_hdr(skb)->daddr);
1183 + family = READ_ONCE(peer->endpoint.addr.sa_family);
1184 + if (unlikely(family != AF_INET && family != AF_INET6)) {
1185 + ret = -EDESTADDRREQ;
1186 + net_dbg_ratelimited("%s: No valid endpoint has been configured or discovered for peer %llu\n",
1187 + dev->name, peer->internal_id);
1191 + mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
1193 + __skb_queue_head_init(&packets);
1194 + if (!skb_is_gso(skb)) {
1195 + skb_mark_not_on_list(skb);
1197 + struct sk_buff *segs = skb_gso_segment(skb, 0);
1199 + if (unlikely(IS_ERR(segs))) {
1200 + ret = PTR_ERR(segs);
1203 + dev_kfree_skb(skb);
1207 + skb_list_walk_safe(skb, skb, next) {
1208 + skb_mark_not_on_list(skb);
1210 + skb = skb_share_check(skb, GFP_ATOMIC);
1211 + if (unlikely(!skb))
1214 + /* We only need to keep the original dst around for icmp,
1215 + * so at this point we're in a position to drop it.
1217 + skb_dst_drop(skb);
1219 + PACKET_CB(skb)->mtu = mtu;
1221 + __skb_queue_tail(&packets, skb);
1224 + spin_lock_bh(&peer->staged_packet_queue.lock);
1225 + /* If the queue is getting too big, we start removing the oldest packets
1226 + * until it's small again. We do this before adding the new packet, so
1227 + * we don't remove GSO segments that are in excess.
1229 + while (skb_queue_len(&peer->staged_packet_queue) > MAX_STAGED_PACKETS) {
1230 + dev_kfree_skb(__skb_dequeue(&peer->staged_packet_queue));
1231 + ++dev->stats.tx_dropped;
1233 + skb_queue_splice_tail(&packets, &peer->staged_packet_queue);
1234 + spin_unlock_bh(&peer->staged_packet_queue.lock);
1236 + wg_packet_send_staged_packets(peer);
1238 + wg_peer_put(peer);
1239 + return NETDEV_TX_OK;
1242 + wg_peer_put(peer);
1244 + ++dev->stats.tx_errors;
1245 + if (skb->protocol == htons(ETH_P_IP))
1246 + icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
1247 + else if (skb->protocol == htons(ETH_P_IPV6))
1248 + icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0);
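[Editor's note: the staging logic in wg_xmit above trims the oldest packets *before* splicing in the new batch, so freshly produced GSO segments are never the ones discarded. A simplified userspace model of that policy, with `MAX_STAGED`, the array-backed queue, and `stage_batch` all invented for the sketch (the kernel uses `sk_buff_head` and `__skb_dequeue`):]

```c
#include <assert.h>
#include <string.h>

#define MAX_STAGED 4	/* stands in for MAX_STAGED_PACKETS */

struct queue {
	int items[64];
	int len;
};

/* Evict oldest entries until the queue is back under the cap, then append
 * the whole new batch; returns how many packets were dropped. Dropping
 * happens before the append, as in wg_xmit. */
static int stage_batch(struct queue *q, const int *batch, int n)
{
	int dropped = 0;

	while (q->len > MAX_STAGED) {
		memmove(q->items, q->items + 1, --q->len * sizeof(int));
		++dropped;
	}
	for (int i = 0; i < n; ++i)
		q->items[q->len++] = batch[i];
	return dropped;
}
```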
1253 +static const struct net_device_ops netdev_ops = {
1254 + .ndo_open = wg_open,
1255 + .ndo_stop = wg_stop,
1256 + .ndo_start_xmit = wg_xmit,
1257 + .ndo_get_stats64 = ip_tunnel_get_stats64
1260 +static void wg_destruct(struct net_device *dev)
1262 + struct wg_device *wg = netdev_priv(dev);
1265 + list_del(&wg->device_list);
1267 + mutex_lock(&wg->device_update_lock);
1268 + wg->incoming_port = 0;
1269 + wg_socket_reinit(wg, NULL, NULL);
1270 + /* The final references are cleared in the below calls to destroy_workqueue. */
1271 + wg_peer_remove_all(wg);
1272 + destroy_workqueue(wg->handshake_receive_wq);
1273 + destroy_workqueue(wg->handshake_send_wq);
1274 + destroy_workqueue(wg->packet_crypt_wq);
1275 + wg_packet_queue_free(&wg->decrypt_queue, true);
1276 + wg_packet_queue_free(&wg->encrypt_queue, true);
1277 + rcu_barrier(); /* Wait for all the peers to be actually freed. */
1278 + wg_ratelimiter_uninit();
1279 + memzero_explicit(&wg->static_identity, sizeof(wg->static_identity));
1280 + skb_queue_purge(&wg->incoming_handshakes);
1281 + free_percpu(dev->tstats);
1282 + free_percpu(wg->incoming_handshakes_worker);
1283 + if (wg->have_creating_net_ref)
1284 + put_net(wg->creating_net);
1285 + kvfree(wg->index_hashtable);
1286 + kvfree(wg->peer_hashtable);
1287 + mutex_unlock(&wg->device_update_lock);
1289 + pr_debug("%s: Interface deleted\n", dev->name);
1293 +static const struct device_type device_type = { .name = KBUILD_MODNAME };
1295 +static void wg_setup(struct net_device *dev)
1297 + struct wg_device *wg = netdev_priv(dev);
1298 + enum { WG_NETDEV_FEATURES = NETIF_F_HW_CSUM | NETIF_F_RXCSUM |
1299 + NETIF_F_SG | NETIF_F_GSO |
1300 + NETIF_F_GSO_SOFTWARE | NETIF_F_HIGHDMA };
1302 + dev->netdev_ops = &netdev_ops;
1303 + dev->hard_header_len = 0;
1304 + dev->addr_len = 0;
1305 + dev->needed_headroom = DATA_PACKET_HEAD_ROOM;
1306 + dev->needed_tailroom = noise_encrypted_len(MESSAGE_PADDING_MULTIPLE);
1307 + dev->type = ARPHRD_NONE;
1308 + dev->flags = IFF_POINTOPOINT | IFF_NOARP;
1309 + dev->priv_flags |= IFF_NO_QUEUE;
1310 + dev->features |= NETIF_F_LLTX;
1311 + dev->features |= WG_NETDEV_FEATURES;
1312 + dev->hw_features |= WG_NETDEV_FEATURES;
1313 + dev->hw_enc_features |= WG_NETDEV_FEATURES;
1314 + dev->mtu = ETH_DATA_LEN - MESSAGE_MINIMUM_LENGTH -
1315 + sizeof(struct udphdr) -
1316 + max(sizeof(struct ipv6hdr), sizeof(struct iphdr));
1318 + SET_NETDEV_DEVTYPE(dev, &device_type);
1320 + /* We need to keep the dst around in case of icmp replies. */
1321 + netif_keep_dst(dev);
1323 + memset(wg, 0, sizeof(*wg));
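[Editor's note: the MTU expression in wg_setup above yields WireGuard's well-known default of 1420: Ethernet payload minus the data-message overhead, the UDP header, and the larger of the two IP headers. A sketch of the arithmetic, with the struct sizes taken from messages.h in this patch:]

```c
#include <assert.h>

#define ETH_DATA_LEN_ 1500
#define NOISE_AUTHTAG_LEN_ 16
/* struct message_data: 4-byte header + 4-byte key index + 8-byte counter */
#define MESSAGE_DATA_HDR 16
#define MESSAGE_MINIMUM_LENGTH_ (MESSAGE_DATA_HDR + NOISE_AUTHTAG_LEN_)
#define UDP_HDR 8
#define IPV6_HDR 40	/* larger than the 20-byte IPv4 header, so it bounds */

static int wg_default_mtu(void)
{
	return ETH_DATA_LEN_ - MESSAGE_MINIMUM_LENGTH_ - UDP_HDR - IPV6_HDR;
}
```

So every 1420-byte tunneled packet fits in a single 1500-byte Ethernet frame once encapsulated.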
1327 +static int wg_newlink(struct net *src_net, struct net_device *dev,
1328 + struct nlattr *tb[], struct nlattr *data[],
1329 + struct netlink_ext_ack *extack)
1331 + struct wg_device *wg = netdev_priv(dev);
1332 + int ret = -ENOMEM;
1334 + wg->creating_net = src_net;
1335 + init_rwsem(&wg->static_identity.lock);
1336 + mutex_init(&wg->socket_update_lock);
1337 + mutex_init(&wg->device_update_lock);
1338 + skb_queue_head_init(&wg->incoming_handshakes);
1339 + wg_allowedips_init(&wg->peer_allowedips);
1340 + wg_cookie_checker_init(&wg->cookie_checker, wg);
1341 + INIT_LIST_HEAD(&wg->peer_list);
1342 + wg->device_update_gen = 1;
1344 + wg->peer_hashtable = wg_pubkey_hashtable_alloc();
1345 + if (!wg->peer_hashtable)
1348 + wg->index_hashtable = wg_index_hashtable_alloc();
1349 + if (!wg->index_hashtable)
1350 + goto err_free_peer_hashtable;
1352 + dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
1354 + goto err_free_index_hashtable;
1356 + wg->incoming_handshakes_worker =
1357 + wg_packet_percpu_multicore_worker_alloc(
1358 + wg_packet_handshake_receive_worker, wg);
1359 + if (!wg->incoming_handshakes_worker)
1360 + goto err_free_tstats;
1362 + wg->handshake_receive_wq = alloc_workqueue("wg-kex-%s",
1363 + WQ_CPU_INTENSIVE | WQ_FREEZABLE, 0, dev->name);
1364 + if (!wg->handshake_receive_wq)
1365 + goto err_free_incoming_handshakes;
1367 + wg->handshake_send_wq = alloc_workqueue("wg-kex-%s",
1368 + WQ_UNBOUND | WQ_FREEZABLE, 0, dev->name);
1369 + if (!wg->handshake_send_wq)
1370 + goto err_destroy_handshake_receive;
1372 + wg->packet_crypt_wq = alloc_workqueue("wg-crypt-%s",
1373 + WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 0, dev->name);
1374 + if (!wg->packet_crypt_wq)
1375 + goto err_destroy_handshake_send;
1377 + ret = wg_packet_queue_init(&wg->encrypt_queue, wg_packet_encrypt_worker,
1378 + true, MAX_QUEUED_PACKETS);
1380 + goto err_destroy_packet_crypt;
1382 + ret = wg_packet_queue_init(&wg->decrypt_queue, wg_packet_decrypt_worker,
1383 + true, MAX_QUEUED_PACKETS);
1385 + goto err_free_encrypt_queue;
1387 + ret = wg_ratelimiter_init();
1389 + goto err_free_decrypt_queue;
1391 + ret = register_netdevice(dev);
1393 + goto err_uninit_ratelimiter;
1395 + list_add(&wg->device_list, &device_list);
1397 + /* We wait until the end to assign priv_destructor, so that
1398 + * register_netdevice doesn't call it for us if it fails.
1400 + dev->priv_destructor = wg_destruct;
1402 + pr_debug("%s: Interface created\n", dev->name);
1405 +err_uninit_ratelimiter:
1406 + wg_ratelimiter_uninit();
1407 +err_free_decrypt_queue:
1408 + wg_packet_queue_free(&wg->decrypt_queue, true);
1409 +err_free_encrypt_queue:
1410 + wg_packet_queue_free(&wg->encrypt_queue, true);
1411 +err_destroy_packet_crypt:
1412 + destroy_workqueue(wg->packet_crypt_wq);
1413 +err_destroy_handshake_send:
1414 + destroy_workqueue(wg->handshake_send_wq);
1415 +err_destroy_handshake_receive:
1416 + destroy_workqueue(wg->handshake_receive_wq);
1417 +err_free_incoming_handshakes:
1418 + free_percpu(wg->incoming_handshakes_worker);
1420 + free_percpu(dev->tstats);
1421 +err_free_index_hashtable:
1422 + kvfree(wg->index_hashtable);
1423 +err_free_peer_hashtable:
1424 + kvfree(wg->peer_hashtable);
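[Editor's note: the error ladder above is the standard kernel unwind idiom: each successful allocation gains a label that undoes it, and later failures jump to the label matching how far setup got, so resources are released in reverse order of acquisition. A minimal userspace sketch of the pattern with invented names:]

```c
#include <assert.h>
#include <stdlib.h>

/* Acquire two resources; on failure of the second, unwind the first via
 * the matching label, mirroring wg_newlink's goto chain. */
static int setup(void **a, void **b)
{
	*a = malloc(16);
	if (!*a)
		goto err;
	*b = malloc(16);
	if (!*b)
		goto err_free_a;
	return 0;

err_free_a:
	free(*a);
	*a = NULL;
err:
	return -1;
}
```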
1428 +static struct rtnl_link_ops link_ops __read_mostly = {
1429 + .kind = KBUILD_MODNAME,
1430 + .priv_size = sizeof(struct wg_device),
1431 + .setup = wg_setup,
1432 + .newlink = wg_newlink,
1435 +static int wg_netdevice_notification(struct notifier_block *nb,
1436 + unsigned long action, void *data)
1438 + struct net_device *dev = ((struct netdev_notifier_info *)data)->dev;
1439 + struct wg_device *wg = netdev_priv(dev);
1443 + if (action != NETDEV_REGISTER || dev->netdev_ops != &netdev_ops)
1446 + if (dev_net(dev) == wg->creating_net && wg->have_creating_net_ref) {
1447 + put_net(wg->creating_net);
1448 + wg->have_creating_net_ref = false;
1449 + } else if (dev_net(dev) != wg->creating_net &&
1450 + !wg->have_creating_net_ref) {
1451 + wg->have_creating_net_ref = true;
1452 + get_net(wg->creating_net);
1457 +static struct notifier_block netdevice_notifier = {
1458 + .notifier_call = wg_netdevice_notification
1461 +int __init wg_device_init(void)
1465 +#ifdef CONFIG_PM_SLEEP
1466 + ret = register_pm_notifier(&pm_notifier);
1471 + ret = register_netdevice_notifier(&netdevice_notifier);
1475 + ret = rtnl_link_register(&link_ops);
1477 + goto error_netdevice;
1482 + unregister_netdevice_notifier(&netdevice_notifier);
1484 +#ifdef CONFIG_PM_SLEEP
1485 + unregister_pm_notifier(&pm_notifier);
1490 +void wg_device_uninit(void)
1492 + rtnl_link_unregister(&link_ops);
1493 + unregister_netdevice_notifier(&netdevice_notifier);
1494 +#ifdef CONFIG_PM_SLEEP
1495 + unregister_pm_notifier(&pm_notifier);
1500 +++ b/drivers/net/wireguard/device.h
1502 +/* SPDX-License-Identifier: GPL-2.0 */
1504 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
1507 +#ifndef _WG_DEVICE_H
1508 +#define _WG_DEVICE_H
1511 +#include "allowedips.h"
1512 +#include "peerlookup.h"
1513 +#include "cookie.h"
1515 +#include <linux/types.h>
1516 +#include <linux/netdevice.h>
1517 +#include <linux/workqueue.h>
1518 +#include <linux/mutex.h>
1519 +#include <linux/net.h>
1520 +#include <linux/ptr_ring.h>
1524 +struct multicore_worker {
1526 + struct work_struct work;
1529 +struct crypt_queue {
1530 + struct ptr_ring ring;
1533 + struct multicore_worker __percpu *worker;
1536 + struct work_struct work;
1541 + struct net_device *dev;
1542 + struct crypt_queue encrypt_queue, decrypt_queue;
1543 + struct sock __rcu *sock4, *sock6;
1544 + struct net *creating_net;
1545 + struct noise_static_identity static_identity;
1546 + struct workqueue_struct *handshake_receive_wq, *handshake_send_wq;
1547 + struct workqueue_struct *packet_crypt_wq;
1548 + struct sk_buff_head incoming_handshakes;
1549 + int incoming_handshake_cpu;
1550 + struct multicore_worker __percpu *incoming_handshakes_worker;
1551 + struct cookie_checker cookie_checker;
1552 + struct pubkey_hashtable *peer_hashtable;
1553 + struct index_hashtable *index_hashtable;
1554 + struct allowedips peer_allowedips;
1555 + struct mutex device_update_lock, socket_update_lock;
1556 + struct list_head device_list, peer_list;
1557 + unsigned int num_peers, device_update_gen;
1559 + u16 incoming_port;
1560 + bool have_creating_net_ref;
1563 +int wg_device_init(void);
1564 +void wg_device_uninit(void);
1566 +#endif /* _WG_DEVICE_H */
1568 +++ b/drivers/net/wireguard/main.c
1570 +// SPDX-License-Identifier: GPL-2.0
1572 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
1575 +#include "version.h"
1576 +#include "device.h"
1578 +#include "queueing.h"
1579 +#include "ratelimiter.h"
1580 +#include "netlink.h"
1582 +#include <uapi/linux/wireguard.h>
1584 +#include <linux/version.h>
1585 +#include <linux/init.h>
1586 +#include <linux/module.h>
1587 +#include <linux/genetlink.h>
1588 +#include <net/rtnetlink.h>
1590 +static int __init mod_init(void)
1595 + if (!wg_allowedips_selftest() || !wg_packet_counter_selftest() ||
1596 + !wg_ratelimiter_selftest())
1597 + return -ENOTRECOVERABLE;
1601 + ret = wg_device_init();
1605 + ret = wg_genetlink_init();
1609 + pr_info("WireGuard " WIREGUARD_VERSION " loaded. See www.wireguard.com for information.\n");
1610 + pr_info("Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.\n");
1615 + wg_device_uninit();
1620 +static void __exit mod_exit(void)
1622 + wg_genetlink_uninit();
1623 + wg_device_uninit();
1626 +module_init(mod_init);
1627 +module_exit(mod_exit);
1628 +MODULE_LICENSE("GPL v2");
1629 +MODULE_DESCRIPTION("WireGuard secure network tunnel");
1630 +MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
1631 +MODULE_VERSION(WIREGUARD_VERSION);
1632 +MODULE_ALIAS_RTNL_LINK(KBUILD_MODNAME);
1633 +MODULE_ALIAS_GENL_FAMILY(WG_GENL_NAME);
1635 +++ b/drivers/net/wireguard/messages.h
1637 +/* SPDX-License-Identifier: GPL-2.0 */
1639 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
1642 +#ifndef _WG_MESSAGES_H
1643 +#define _WG_MESSAGES_H
1645 +#include <crypto/curve25519.h>
1646 +#include <crypto/chacha20poly1305.h>
1647 +#include <crypto/blake2s.h>
1649 +#include <linux/kernel.h>
1650 +#include <linux/param.h>
1651 +#include <linux/skbuff.h>
1653 +enum noise_lengths {
1654 + NOISE_PUBLIC_KEY_LEN = CURVE25519_KEY_SIZE,
1655 + NOISE_SYMMETRIC_KEY_LEN = CHACHA20POLY1305_KEY_SIZE,
1656 + NOISE_TIMESTAMP_LEN = sizeof(u64) + sizeof(u32),
1657 + NOISE_AUTHTAG_LEN = CHACHA20POLY1305_AUTHTAG_SIZE,
1658 + NOISE_HASH_LEN = BLAKE2S_HASH_SIZE
1661 +#define noise_encrypted_len(plain_len) ((plain_len) + NOISE_AUTHTAG_LEN)
1663 +enum cookie_values {
1664 + COOKIE_SECRET_MAX_AGE = 2 * 60,
1665 + COOKIE_SECRET_LATENCY = 5,
1666 + COOKIE_NONCE_LEN = XCHACHA20POLY1305_NONCE_SIZE,
1670 +enum counter_values {
1671 + COUNTER_BITS_TOTAL = 2048,
1672 + COUNTER_REDUNDANT_BITS = BITS_PER_LONG,
1673 + COUNTER_WINDOW_SIZE = COUNTER_BITS_TOTAL - COUNTER_REDUNDANT_BITS
1677 + REKEY_AFTER_MESSAGES = 1ULL << 60,
1678 + REJECT_AFTER_MESSAGES = U64_MAX - COUNTER_WINDOW_SIZE - 1,
1679 + REKEY_TIMEOUT = 5,
1680 + REKEY_TIMEOUT_JITTER_MAX_JIFFIES = HZ / 3,
1681 + REKEY_AFTER_TIME = 120,
1682 + REJECT_AFTER_TIME = 180,
1683 + INITIATIONS_PER_SECOND = 50,
1684 + MAX_PEERS_PER_DEVICE = 1U << 20,
1685 + KEEPALIVE_TIMEOUT = 10,
1686 + MAX_TIMER_HANDSHAKES = 90 / REKEY_TIMEOUT,
1687 + MAX_QUEUED_INCOMING_HANDSHAKES = 4096, /* TODO: replace this with DQL */
1688 + MAX_STAGED_PACKETS = 128,
1689 + MAX_QUEUED_PACKETS = 1024 /* TODO: replace this with DQL */
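[Editor's note: `COUNTER_BITS_TOTAL` and `COUNTER_WINDOW_SIZE` above parameterize the anti-replay sliding window used on the receive path: a bitmap tracks recently seen nonce counters, newer counters slide the window forward, and anything that falls behind the window is rejected. A simplified single-threaded userspace model of that check; this is an illustration of the technique, not the kernel's `counter_validate`, and it omits the redundant-bits refinement:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BITS_TOTAL 2048			/* COUNTER_BITS_TOTAL */
#define WORDS (BITS_TOTAL / 64)

struct replay {
	uint64_t counter;		/* greatest counter seen so far */
	uint64_t bitmap[WORDS];		/* ring of seen-counter bits */
};

/* Accept a counter once, within the window; returns 1 if fresh. */
static int replay_check(struct replay *r, uint64_t their)
{
	uint64_t idx = their / 64 % WORDS;

	if (their > r->counter) {
		/* Slide forward, zeroing every word we skip over. */
		uint64_t top = r->counter / 64;
		uint64_t diff = their / 64 - top;

		if (diff > WORDS)
			diff = WORDS;
		for (uint64_t i = 1; i <= diff; ++i)
			r->bitmap[(top + i) % WORDS] = 0;
		r->counter = their;
	} else if (r->counter - their > BITS_TOTAL) {
		return 0;	/* fell out of the window: too old */
	}
	if (r->bitmap[idx] & (1ULL << (their % 64)))
		return 0;	/* already seen: replay */
	r->bitmap[idx] |= 1ULL << (their % 64);
	return 1;
}
```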
1692 +enum message_type {
1693 + MESSAGE_INVALID = 0,
1694 + MESSAGE_HANDSHAKE_INITIATION = 1,
1695 + MESSAGE_HANDSHAKE_RESPONSE = 2,
1696 + MESSAGE_HANDSHAKE_COOKIE = 3,
1700 +struct message_header {
1701 + /* The actual layout of this that we want is:
1703 + * u8 reserved_zero[3]
1705 + * But it turns out that by encoding this as little endian,
1706 + * we achieve the same thing, and it makes checking faster.
1711 +struct message_macs {
1712 + u8 mac1[COOKIE_LEN];
1713 + u8 mac2[COOKIE_LEN];
1716 +struct message_handshake_initiation {
1717 + struct message_header header;
1718 + __le32 sender_index;
1719 + u8 unencrypted_ephemeral[NOISE_PUBLIC_KEY_LEN];
1720 + u8 encrypted_static[noise_encrypted_len(NOISE_PUBLIC_KEY_LEN)];
1721 + u8 encrypted_timestamp[noise_encrypted_len(NOISE_TIMESTAMP_LEN)];
1722 + struct message_macs macs;
1725 +struct message_handshake_response {
1726 + struct message_header header;
1727 + __le32 sender_index;
1728 + __le32 receiver_index;
1729 + u8 unencrypted_ephemeral[NOISE_PUBLIC_KEY_LEN];
1730 + u8 encrypted_nothing[noise_encrypted_len(0)];
1731 + struct message_macs macs;
1734 +struct message_handshake_cookie {
1735 + struct message_header header;
1736 + __le32 receiver_index;
1737 + u8 nonce[COOKIE_NONCE_LEN];
1738 + u8 encrypted_cookie[noise_encrypted_len(COOKIE_LEN)];
1741 +struct message_data {
1742 + struct message_header header;
1745 + u8 encrypted_data[];
1748 +#define message_data_len(plain_len) \
1749 + (noise_encrypted_len(plain_len) + sizeof(struct message_data))
1751 +enum message_alignments {
1752 + MESSAGE_PADDING_MULTIPLE = 16,
1753 + MESSAGE_MINIMUM_LENGTH = message_data_len(0)
1756 +#define SKB_HEADER_LEN \
1757 + (max(sizeof(struct iphdr), sizeof(struct ipv6hdr)) + \
1758 + sizeof(struct udphdr) + NET_SKB_PAD)
1759 +#define DATA_PACKET_HEAD_ROOM \
1760 + ALIGN(sizeof(struct message_data) + SKB_HEADER_LEN, 4)
1762 +enum { HANDSHAKE_DSCP = 0x88 /* AF41, plus 00 ECN */ };
1764 +#endif /* _WG_MESSAGES_H */
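[Editor's note: `MESSAGE_PADDING_MULTIPLE` above is applied on the send path, where plaintext is rounded up to a multiple of 16 before encryption (capped at the MTU) so ciphertext lengths leak less about payload sizes. A sketch of the rounding, with the helper name assumed; the kernel's version lives in send.c:]

```c
#include <assert.h>

#define MESSAGE_PADDING_MULTIPLE_ 16

/* Round len up to the next multiple of MESSAGE_PADDING_MULTIPLE, but never
 * beyond the MTU, which bounds the final packet of a burst. */
static unsigned int padded_len(unsigned int len, unsigned int mtu)
{
	unsigned int padded = (len + MESSAGE_PADDING_MULTIPLE_ - 1) &
			      ~(MESSAGE_PADDING_MULTIPLE_ - 1);

	return padded > mtu ? mtu : padded;
}
```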
1766 +++ b/drivers/net/wireguard/netlink.c
1768 +// SPDX-License-Identifier: GPL-2.0
1770 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
1773 +#include "netlink.h"
1774 +#include "device.h"
1776 +#include "socket.h"
1777 +#include "queueing.h"
1778 +#include "messages.h"
1780 +#include <uapi/linux/wireguard.h>
1782 +#include <linux/if.h>
1783 +#include <net/genetlink.h>
1784 +#include <net/sock.h>
1785 +#include <crypto/algapi.h>
1787 +static struct genl_family genl_family;
1789 +static const struct nla_policy device_policy[WGDEVICE_A_MAX + 1] = {
1790 + [WGDEVICE_A_IFINDEX] = { .type = NLA_U32 },
1791 + [WGDEVICE_A_IFNAME] = { .type = NLA_NUL_STRING, .len = IFNAMSIZ - 1 },
1792 + [WGDEVICE_A_PRIVATE_KEY] = { .type = NLA_EXACT_LEN, .len = NOISE_PUBLIC_KEY_LEN },
1793 + [WGDEVICE_A_PUBLIC_KEY] = { .type = NLA_EXACT_LEN, .len = NOISE_PUBLIC_KEY_LEN },
1794 + [WGDEVICE_A_FLAGS] = { .type = NLA_U32 },
1795 + [WGDEVICE_A_LISTEN_PORT] = { .type = NLA_U16 },
1796 + [WGDEVICE_A_FWMARK] = { .type = NLA_U32 },
1797 + [WGDEVICE_A_PEERS] = { .type = NLA_NESTED }
1800 +static const struct nla_policy peer_policy[WGPEER_A_MAX + 1] = {
1801 + [WGPEER_A_PUBLIC_KEY] = { .type = NLA_EXACT_LEN, .len = NOISE_PUBLIC_KEY_LEN },
1802 + [WGPEER_A_PRESHARED_KEY] = { .type = NLA_EXACT_LEN, .len = NOISE_SYMMETRIC_KEY_LEN },
1803 + [WGPEER_A_FLAGS] = { .type = NLA_U32 },
1804 + [WGPEER_A_ENDPOINT] = { .type = NLA_MIN_LEN, .len = sizeof(struct sockaddr) },
1805 + [WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL] = { .type = NLA_U16 },
1806 + [WGPEER_A_LAST_HANDSHAKE_TIME] = { .type = NLA_EXACT_LEN, .len = sizeof(struct __kernel_timespec) },
1807 + [WGPEER_A_RX_BYTES] = { .type = NLA_U64 },
1808 + [WGPEER_A_TX_BYTES] = { .type = NLA_U64 },
1809 + [WGPEER_A_ALLOWEDIPS] = { .type = NLA_NESTED },
1810 + [WGPEER_A_PROTOCOL_VERSION] = { .type = NLA_U32 }
1813 +static const struct nla_policy allowedip_policy[WGALLOWEDIP_A_MAX + 1] = {
1814 + [WGALLOWEDIP_A_FAMILY] = { .type = NLA_U16 },
1815 + [WGALLOWEDIP_A_IPADDR] = { .type = NLA_MIN_LEN, .len = sizeof(struct in_addr) },
1816 + [WGALLOWEDIP_A_CIDR_MASK] = { .type = NLA_U8 }
1819 +static struct wg_device *lookup_interface(struct nlattr **attrs,
1820 + struct sk_buff *skb)
1822 + struct net_device *dev = NULL;
1824 + if (!attrs[WGDEVICE_A_IFINDEX] == !attrs[WGDEVICE_A_IFNAME])
1825 + return ERR_PTR(-EBADR);
1826 + if (attrs[WGDEVICE_A_IFINDEX])
1827 + dev = dev_get_by_index(sock_net(skb->sk),
1828 + nla_get_u32(attrs[WGDEVICE_A_IFINDEX]));
1829 + else if (attrs[WGDEVICE_A_IFNAME])
1830 + dev = dev_get_by_name(sock_net(skb->sk),
1831 + nla_data(attrs[WGDEVICE_A_IFNAME]));
1833 + return ERR_PTR(-ENODEV);
1834 + if (!dev->rtnl_link_ops || !dev->rtnl_link_ops->kind ||
1835 + strcmp(dev->rtnl_link_ops->kind, KBUILD_MODNAME)) {
1837 + return ERR_PTR(-EOPNOTSUPP);
1839 + return netdev_priv(dev);
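[Editor's note: the `!attrs[WGDEVICE_A_IFINDEX] == !attrs[WGDEVICE_A_IFNAME]` test in lookup_interface above is a compact exactly-one-of check: normalizing both pointers with `!` makes the comparison true precisely when both attributes are present or both are absent, either of which is an invalid request (`-EBADR`). The same trick in isolation, with the helper name invented:]

```c
#include <assert.h>

/* True iff exactly one of the two pointers is non-NULL. */
static int exactly_one(const void *a, const void *b)
{
	return !a != !b;
}
```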
1842 +static int get_allowedips(struct sk_buff *skb, const u8 *ip, u8 cidr,
1845 + struct nlattr *allowedip_nest;
1847 + allowedip_nest = nla_nest_start(skb, 0);
1848 + if (!allowedip_nest)
1851 + if (nla_put_u8(skb, WGALLOWEDIP_A_CIDR_MASK, cidr) ||
1852 + nla_put_u16(skb, WGALLOWEDIP_A_FAMILY, family) ||
1853 + nla_put(skb, WGALLOWEDIP_A_IPADDR, family == AF_INET6 ?
1854 + sizeof(struct in6_addr) : sizeof(struct in_addr), ip)) {
1855 + nla_nest_cancel(skb, allowedip_nest);
1859 + nla_nest_end(skb, allowedip_nest);
1864 + struct wg_device *wg;
1865 + struct wg_peer *next_peer;
1866 + u64 allowedips_seq;
1867 + struct allowedips_node *next_allowedip;
1870 +#define DUMP_CTX(cb) ((struct dump_ctx *)(cb)->args)
1873 +get_peer(struct wg_peer *peer, struct sk_buff *skb, struct dump_ctx *ctx)
1876 + struct nlattr *allowedips_nest, *peer_nest = nla_nest_start(skb, 0);
1877 + struct allowedips_node *allowedips_node = ctx->next_allowedip;
1883 + down_read(&peer->handshake.lock);
1884 + fail = nla_put(skb, WGPEER_A_PUBLIC_KEY, NOISE_PUBLIC_KEY_LEN,
1885 + peer->handshake.remote_static);
1886 + up_read(&peer->handshake.lock);
1890 + if (!allowedips_node) {
1891 + const struct __kernel_timespec last_handshake = {
1892 + .tv_sec = peer->walltime_last_handshake.tv_sec,
1893 + .tv_nsec = peer->walltime_last_handshake.tv_nsec
1896 + down_read(&peer->handshake.lock);
1897 + fail = nla_put(skb, WGPEER_A_PRESHARED_KEY,
1898 + NOISE_SYMMETRIC_KEY_LEN,
1899 + peer->handshake.preshared_key);
1900 + up_read(&peer->handshake.lock);
1904 + if (nla_put(skb, WGPEER_A_LAST_HANDSHAKE_TIME,
1905 + sizeof(last_handshake), &last_handshake) ||
1906 + nla_put_u16(skb, WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL,
1907 + peer->persistent_keepalive_interval) ||
1908 + nla_put_u64_64bit(skb, WGPEER_A_TX_BYTES, peer->tx_bytes,
1909 + WGPEER_A_UNSPEC) ||
1910 + nla_put_u64_64bit(skb, WGPEER_A_RX_BYTES, peer->rx_bytes,
1911 + WGPEER_A_UNSPEC) ||
1912 + nla_put_u32(skb, WGPEER_A_PROTOCOL_VERSION, 1))
1915 + read_lock_bh(&peer->endpoint_lock);
1916 + if (peer->endpoint.addr.sa_family == AF_INET)
1917 + fail = nla_put(skb, WGPEER_A_ENDPOINT,
1918 + sizeof(peer->endpoint.addr4),
1919 + &peer->endpoint.addr4);
1920 + else if (peer->endpoint.addr.sa_family == AF_INET6)
1921 + fail = nla_put(skb, WGPEER_A_ENDPOINT,
1922 + sizeof(peer->endpoint.addr6),
1923 + &peer->endpoint.addr6);
1924 + read_unlock_bh(&peer->endpoint_lock);
1928 + list_first_entry_or_null(&peer->allowedips_list,
1929 + struct allowedips_node, peer_list);
1931 + if (!allowedips_node)
1932 + goto no_allowedips;
1933 + if (!ctx->allowedips_seq)
1934 + ctx->allowedips_seq = peer->device->peer_allowedips.seq;
1935 + else if (ctx->allowedips_seq != peer->device->peer_allowedips.seq)
1936 + goto no_allowedips;
1938 + allowedips_nest = nla_nest_start(skb, WGPEER_A_ALLOWEDIPS);
1939 + if (!allowedips_nest)
1942 + list_for_each_entry_from(allowedips_node, &peer->allowedips_list,
1944 + u8 cidr, ip[16] __aligned(__alignof(u64));
1947 + family = wg_allowedips_read_node(allowedips_node, ip, &cidr);
1948 + if (get_allowedips(skb, ip, cidr, family)) {
1949 + nla_nest_end(skb, allowedips_nest);
1950 + nla_nest_end(skb, peer_nest);
1951 + ctx->next_allowedip = allowedips_node;
1955 + nla_nest_end(skb, allowedips_nest);
1957 + nla_nest_end(skb, peer_nest);
1958 + ctx->next_allowedip = NULL;
1959 + ctx->allowedips_seq = 0;
1962 + nla_nest_cancel(skb, peer_nest);
1966 +static int wg_get_device_start(struct netlink_callback *cb)
1968 + struct nlattr **attrs = genl_family_attrbuf(&genl_family);
1969 + struct wg_device *wg;
1972 + ret = nlmsg_parse(cb->nlh, GENL_HDRLEN + genl_family.hdrsize, attrs,
1973 + genl_family.maxattr, device_policy, NULL);
1976 + wg = lookup_interface(attrs, cb->skb);
1978 + return PTR_ERR(wg);
1979 + DUMP_CTX(cb)->wg = wg;
1983 +static int wg_get_device_dump(struct sk_buff *skb, struct netlink_callback *cb)
1985 + struct wg_peer *peer, *next_peer_cursor;
1986 + struct dump_ctx *ctx = DUMP_CTX(cb);
1987 + struct wg_device *wg = ctx->wg;
1988 + struct nlattr *peers_nest;
1989 + int ret = -EMSGSIZE;
1994 + mutex_lock(&wg->device_update_lock);
1995 + cb->seq = wg->device_update_gen;
1996 + next_peer_cursor = ctx->next_peer;
1998 + hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
1999 + &genl_family, NLM_F_MULTI, WG_CMD_GET_DEVICE);
2002 + genl_dump_check_consistent(cb, hdr);
2004 + if (!ctx->next_peer) {
2005 + if (nla_put_u16(skb, WGDEVICE_A_LISTEN_PORT,
2006 + wg->incoming_port) ||
2007 + nla_put_u32(skb, WGDEVICE_A_FWMARK, wg->fwmark) ||
2008 + nla_put_u32(skb, WGDEVICE_A_IFINDEX, wg->dev->ifindex) ||
2009 + nla_put_string(skb, WGDEVICE_A_IFNAME, wg->dev->name))
2012 + down_read(&wg->static_identity.lock);
2013 + if (wg->static_identity.has_identity) {
2014 + if (nla_put(skb, WGDEVICE_A_PRIVATE_KEY,
2015 + NOISE_PUBLIC_KEY_LEN,
2016 + wg->static_identity.static_private) ||
2017 + nla_put(skb, WGDEVICE_A_PUBLIC_KEY,
2018 + NOISE_PUBLIC_KEY_LEN,
2019 + wg->static_identity.static_public)) {
2020 + up_read(&wg->static_identity.lock);
2024 + up_read(&wg->static_identity.lock);
2027 + peers_nest = nla_nest_start(skb, WGDEVICE_A_PEERS);
2031 + /* If the last cursor was removed via list_del_init in peer_remove, then
2032 + * we just treat this the same as there being no more peers left. The
2033 + * reason is that seq_nr should indicate to userspace that this isn't a
2034 + * coherent dump anyway, so they'll try again.
2036 + if (list_empty(&wg->peer_list) ||
2037 + (ctx->next_peer && list_empty(&ctx->next_peer->peer_list))) {
2038 + nla_nest_cancel(skb, peers_nest);
2041 + lockdep_assert_held(&wg->device_update_lock);
2042 + peer = list_prepare_entry(ctx->next_peer, &wg->peer_list, peer_list);
2043 + list_for_each_entry_continue(peer, &wg->peer_list, peer_list) {
2044 + if (get_peer(peer, skb, ctx)) {
2048 + next_peer_cursor = peer;
2050 + nla_nest_end(skb, peers_nest);
2053 + if (!ret && !done && next_peer_cursor)
2054 + wg_peer_get(next_peer_cursor);
2055 + wg_peer_put(ctx->next_peer);
2056 + mutex_unlock(&wg->device_update_lock);
2060 + genlmsg_cancel(skb, hdr);
2063 + genlmsg_end(skb, hdr);
2065 + ctx->next_peer = NULL;
2068 + ctx->next_peer = next_peer_cursor;
2071 + /* At this point, we can't really deal ourselves with safely zeroing out
2072 + * the private key material after usage. This will need an additional API
2073 + * in the kernel for marking skbs as zero_on_free.
2077 +static int wg_get_device_done(struct netlink_callback *cb)
2079 + struct dump_ctx *ctx = DUMP_CTX(cb);
2082 + dev_put(ctx->wg->dev);
2083 + wg_peer_put(ctx->next_peer);
2087 +static int set_port(struct wg_device *wg, u16 port)
2089 + struct wg_peer *peer;
2091 + if (wg->incoming_port == port)
2093 + list_for_each_entry(peer, &wg->peer_list, peer_list)
2094 + wg_socket_clear_peer_endpoint_src(peer);
2095 + if (!netif_running(wg->dev)) {
2096 + wg->incoming_port = port;
2099 + return wg_socket_init(wg, port);
2102 +static int set_allowedip(struct wg_peer *peer, struct nlattr **attrs)
2104 + int ret = -EINVAL;
2108 + if (!attrs[WGALLOWEDIP_A_FAMILY] || !attrs[WGALLOWEDIP_A_IPADDR] ||
2109 + !attrs[WGALLOWEDIP_A_CIDR_MASK])
2111 + family = nla_get_u16(attrs[WGALLOWEDIP_A_FAMILY]);
2112 + cidr = nla_get_u8(attrs[WGALLOWEDIP_A_CIDR_MASK]);
2114 + if (family == AF_INET && cidr <= 32 &&
2115 + nla_len(attrs[WGALLOWEDIP_A_IPADDR]) == sizeof(struct in_addr))
2116 + ret = wg_allowedips_insert_v4(
2117 + &peer->device->peer_allowedips,
2118 + nla_data(attrs[WGALLOWEDIP_A_IPADDR]), cidr, peer,
2119 + &peer->device->device_update_lock);
2120 + else if (family == AF_INET6 && cidr <= 128 &&
2121 + nla_len(attrs[WGALLOWEDIP_A_IPADDR]) == sizeof(struct in6_addr))
2122 + ret = wg_allowedips_insert_v6(
2123 + &peer->device->peer_allowedips,
2124 + nla_data(attrs[WGALLOWEDIP_A_IPADDR]), cidr, peer,
2125 + &peer->device->device_update_lock);
2130 +static int set_peer(struct wg_device *wg, struct nlattr **attrs)
2132 + u8 *public_key = NULL, *preshared_key = NULL;
2133 + struct wg_peer *peer = NULL;
2138 + if (attrs[WGPEER_A_PUBLIC_KEY] &&
2139 + nla_len(attrs[WGPEER_A_PUBLIC_KEY]) == NOISE_PUBLIC_KEY_LEN)
2140 + public_key = nla_data(attrs[WGPEER_A_PUBLIC_KEY]);
2143 + if (attrs[WGPEER_A_PRESHARED_KEY] &&
2144 + nla_len(attrs[WGPEER_A_PRESHARED_KEY]) == NOISE_SYMMETRIC_KEY_LEN)
2145 + preshared_key = nla_data(attrs[WGPEER_A_PRESHARED_KEY]);
2147 + if (attrs[WGPEER_A_FLAGS])
2148 + flags = nla_get_u32(attrs[WGPEER_A_FLAGS]);
2149 + ret = -EOPNOTSUPP;
2150 + if (flags & ~__WGPEER_F_ALL)
2153 + ret = -EPFNOSUPPORT;
2154 + if (attrs[WGPEER_A_PROTOCOL_VERSION]) {
2155 + if (nla_get_u32(attrs[WGPEER_A_PROTOCOL_VERSION]) != 1)
2159 + peer = wg_pubkey_hashtable_lookup(wg->peer_hashtable,
2160 + nla_data(attrs[WGPEER_A_PUBLIC_KEY]));
2162 + if (!peer) { /* Peer doesn't exist yet. Add a new one. */
2163 + if (flags & (WGPEER_F_REMOVE_ME | WGPEER_F_UPDATE_ONLY))
2166 + /* The peer is new, so there aren't allowed IPs to remove. */
2167 + flags &= ~WGPEER_F_REPLACE_ALLOWEDIPS;
2169 + down_read(&wg->static_identity.lock);
2170 + if (wg->static_identity.has_identity &&
2171 + !memcmp(nla_data(attrs[WGPEER_A_PUBLIC_KEY]),
2172 + wg->static_identity.static_public,
2173 + NOISE_PUBLIC_KEY_LEN)) {
2174 + /* We silently ignore peers that have the same public
2175 + * key as the device. The reason we do it silently is
2176 + * that we'd like for people to be able to reuse the
2177 + * same set of API calls across peers.
2179 + up_read(&wg->static_identity.lock);
2183 + up_read(&wg->static_identity.lock);
2185 + peer = wg_peer_create(wg, public_key, preshared_key);
2186 + if (IS_ERR(peer)) {
2187 + /* Similar to the above, if the key is invalid, we skip
2188 + * it without fanfare, so that services don't need to
2189 + * worry about doing key validation themselves.
2191 + ret = PTR_ERR(peer) == -EKEYREJECTED ? 0 : PTR_ERR(peer);
2195 + /* Take additional reference, as though we've just been looked up. */
2198 + wg_peer_get(peer);
2201 + if (flags & WGPEER_F_REMOVE_ME) {
2202 + wg_peer_remove(peer);
2206 + if (preshared_key) {
2207 + down_write(&peer->handshake.lock);
2208 + memcpy(&peer->handshake.preshared_key, preshared_key,
2209 + NOISE_SYMMETRIC_KEY_LEN);
2210 + up_write(&peer->handshake.lock);
2213 + if (attrs[WGPEER_A_ENDPOINT]) {
2214 + struct sockaddr *addr = nla_data(attrs[WGPEER_A_ENDPOINT]);
2215 + size_t len = nla_len(attrs[WGPEER_A_ENDPOINT]);
2217 + if ((len == sizeof(struct sockaddr_in) &&
2218 + addr->sa_family == AF_INET) ||
2219 + (len == sizeof(struct sockaddr_in6) &&
2220 + addr->sa_family == AF_INET6)) {
2221 + struct endpoint endpoint = { { { 0 } } };
2223 + memcpy(&endpoint.addr, addr, len);
2224 + wg_socket_set_peer_endpoint(peer, &endpoint);
2228 + if (flags & WGPEER_F_REPLACE_ALLOWEDIPS)
2229 + wg_allowedips_remove_by_peer(&wg->peer_allowedips, peer,
2230 + &wg->device_update_lock);
2232 + if (attrs[WGPEER_A_ALLOWEDIPS]) {
2233 + struct nlattr *attr, *allowedip[WGALLOWEDIP_A_MAX + 1];
2236 + nla_for_each_nested(attr, attrs[WGPEER_A_ALLOWEDIPS], rem) {
2237 + ret = nla_parse_nested(allowedip, WGALLOWEDIP_A_MAX,
2238 + attr, allowedip_policy, NULL);
2241 + ret = set_allowedip(peer, allowedip);
2247 + if (attrs[WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL]) {
2248 + const u16 persistent_keepalive_interval = nla_get_u16(
2249 + attrs[WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL]);
2250 + const bool send_keepalive =
2251 + !peer->persistent_keepalive_interval &&
2252 + persistent_keepalive_interval &&
2253 + netif_running(wg->dev);
2255 + peer->persistent_keepalive_interval = persistent_keepalive_interval;
2256 + if (send_keepalive)
2257 + wg_packet_send_keepalive(peer);
2260 + if (netif_running(wg->dev))
2261 + wg_packet_send_staged_packets(peer);
2264 + wg_peer_put(peer);
2265 + if (attrs[WGPEER_A_PRESHARED_KEY])
2266 + memzero_explicit(nla_data(attrs[WGPEER_A_PRESHARED_KEY]),
2267 + nla_len(attrs[WGPEER_A_PRESHARED_KEY]));
2271 +static int wg_set_device(struct sk_buff *skb, struct genl_info *info)
2273 + struct wg_device *wg = lookup_interface(info->attrs, skb);
2278 + ret = PTR_ERR(wg);
2283 + mutex_lock(&wg->device_update_lock);
2285 + if (info->attrs[WGDEVICE_A_FLAGS])
2286 + flags = nla_get_u32(info->attrs[WGDEVICE_A_FLAGS]);
2287 + ret = -EOPNOTSUPP;
2288 + if (flags & ~__WGDEVICE_F_ALL)
2292 + if ((info->attrs[WGDEVICE_A_LISTEN_PORT] ||
2293 + info->attrs[WGDEVICE_A_FWMARK]) &&
2294 + !ns_capable(wg->creating_net->user_ns, CAP_NET_ADMIN))
2297 + ++wg->device_update_gen;
2299 + if (info->attrs[WGDEVICE_A_FWMARK]) {
2300 + struct wg_peer *peer;
2302 + wg->fwmark = nla_get_u32(info->attrs[WGDEVICE_A_FWMARK]);
2303 + list_for_each_entry(peer, &wg->peer_list, peer_list)
2304 + wg_socket_clear_peer_endpoint_src(peer);
2307 + if (info->attrs[WGDEVICE_A_LISTEN_PORT]) {
2308 + ret = set_port(wg,
2309 + nla_get_u16(info->attrs[WGDEVICE_A_LISTEN_PORT]));
2314 + if (flags & WGDEVICE_F_REPLACE_PEERS)
2315 + wg_peer_remove_all(wg);
2317 + if (info->attrs[WGDEVICE_A_PRIVATE_KEY] &&
2318 + nla_len(info->attrs[WGDEVICE_A_PRIVATE_KEY]) ==
2319 + NOISE_PUBLIC_KEY_LEN) {
2320 + u8 *private_key = nla_data(info->attrs[WGDEVICE_A_PRIVATE_KEY]);
2321 + u8 public_key[NOISE_PUBLIC_KEY_LEN];
2322 + struct wg_peer *peer, *temp;
2324 + if (!crypto_memneq(wg->static_identity.static_private,
2325 + private_key, NOISE_PUBLIC_KEY_LEN))
2326 + goto skip_set_private_key;
2328 + /* We remove before setting, to prevent race, which means doing
2329 + * two 25519-genpub ops.
2331 + if (curve25519_generate_public(public_key, private_key)) {
2332 + peer = wg_pubkey_hashtable_lookup(wg->peer_hashtable,
2335 + wg_peer_put(peer);
2336 + wg_peer_remove(peer);
2340 + down_write(&wg->static_identity.lock);
2341 + wg_noise_set_static_identity_private_key(&wg->static_identity,
2343 + list_for_each_entry_safe(peer, temp, &wg->peer_list,
2345 + if (wg_noise_precompute_static_static(peer))
2346 + wg_noise_expire_current_peer_keypairs(peer);
2348 + wg_peer_remove(peer);
2350 + wg_cookie_checker_precompute_device_keys(&wg->cookie_checker);
2351 + up_write(&wg->static_identity.lock);
2353 +skip_set_private_key:
2355 + if (info->attrs[WGDEVICE_A_PEERS]) {
2356 + struct nlattr *attr, *peer[WGPEER_A_MAX + 1];
2359 + nla_for_each_nested(attr, info->attrs[WGDEVICE_A_PEERS], rem) {
2360 + ret = nla_parse_nested(peer, WGPEER_A_MAX, attr,
2361 + peer_policy, NULL);
2364 + ret = set_peer(wg, peer);
2372 + mutex_unlock(&wg->device_update_lock);
2376 + if (info->attrs[WGDEVICE_A_PRIVATE_KEY])
2377 + memzero_explicit(nla_data(info->attrs[WGDEVICE_A_PRIVATE_KEY]),
2378 + nla_len(info->attrs[WGDEVICE_A_PRIVATE_KEY]));
2382 +static const struct genl_ops genl_ops[] = {
2384 + .cmd = WG_CMD_GET_DEVICE,
2385 + .start = wg_get_device_start,
2386 + .dumpit = wg_get_device_dump,
2387 + .done = wg_get_device_done,
2388 + .flags = GENL_UNS_ADMIN_PERM
2390 + .cmd = WG_CMD_SET_DEVICE,
2391 + .doit = wg_set_device,
2392 + .flags = GENL_UNS_ADMIN_PERM
2396 +static struct genl_family genl_family __ro_after_init = {
2398 + .n_ops = ARRAY_SIZE(genl_ops),
2399 + .name = WG_GENL_NAME,
2400 + .version = WG_GENL_VERSION,
2401 + .maxattr = WGDEVICE_A_MAX,
2402 + .module = THIS_MODULE,
2403 + .policy = device_policy,
2407 +int __init wg_genetlink_init(void)
2409 + return genl_register_family(&genl_family);
2412 +void __exit wg_genetlink_uninit(void)
2414 + genl_unregister_family(&genl_family);
2417 +++ b/drivers/net/wireguard/netlink.h
2419 +/* SPDX-License-Identifier: GPL-2.0 */
2421 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
2424 +#ifndef _WG_NETLINK_H
2425 +#define _WG_NETLINK_H
2427 +int wg_genetlink_init(void);
2428 +void wg_genetlink_uninit(void);
2430 +#endif /* _WG_NETLINK_H */
2432 +++ b/drivers/net/wireguard/noise.c
2434 +// SPDX-License-Identifier: GPL-2.0
2436 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
2440 +#include "device.h"
2442 +#include "messages.h"
2443 +#include "queueing.h"
2444 +#include "peerlookup.h"
2446 +#include <linux/rcupdate.h>
2447 +#include <linux/slab.h>
2448 +#include <linux/bitmap.h>
2449 +#include <linux/scatterlist.h>
2450 +#include <linux/highmem.h>
2451 +#include <crypto/algapi.h>
2453 +/* This implements Noise_IKpsk2:
2457 + * -> e, es, s, ss, {t}
2458 + * <- e, ee, se, psk, {}
2461 +static const u8 handshake_name[37] = "Noise_IKpsk2_25519_ChaChaPoly_BLAKE2s";
2462 +static const u8 identifier_name[34] = "WireGuard v1 zx2c4 Jason@zx2c4.com";
2463 +static u8 handshake_init_hash[NOISE_HASH_LEN] __ro_after_init;
2464 +static u8 handshake_init_chaining_key[NOISE_HASH_LEN] __ro_after_init;
2465 +static atomic64_t keypair_counter = ATOMIC64_INIT(0);
2467 +void __init wg_noise_init(void)
2469 + struct blake2s_state blake;
2471 + blake2s(handshake_init_chaining_key, handshake_name, NULL,
2472 + NOISE_HASH_LEN, sizeof(handshake_name), 0);
2473 + blake2s_init(&blake, NOISE_HASH_LEN);
2474 + blake2s_update(&blake, handshake_init_chaining_key, NOISE_HASH_LEN);
2475 + blake2s_update(&blake, identifier_name, sizeof(identifier_name));
2476 + blake2s_final(&blake, handshake_init_hash);
2479 +/* Must hold peer->handshake.static_identity->lock */
2480 +bool wg_noise_precompute_static_static(struct wg_peer *peer)
2484 + down_write(&peer->handshake.lock);
2485 + if (peer->handshake.static_identity->has_identity)
2487 + peer->handshake.precomputed_static_static,
2488 + peer->handshake.static_identity->static_private,
2489 + peer->handshake.remote_static);
2491 + memset(peer->handshake.precomputed_static_static, 0,
2492 + NOISE_PUBLIC_KEY_LEN);
2493 + up_write(&peer->handshake.lock);
2497 +bool wg_noise_handshake_init(struct noise_handshake *handshake,
2498 + struct noise_static_identity *static_identity,
2499 + const u8 peer_public_key[NOISE_PUBLIC_KEY_LEN],
2500 + const u8 peer_preshared_key[NOISE_SYMMETRIC_KEY_LEN],
2501 + struct wg_peer *peer)
2503 + memset(handshake, 0, sizeof(*handshake));
2504 + init_rwsem(&handshake->lock);
2505 + handshake->entry.type = INDEX_HASHTABLE_HANDSHAKE;
2506 + handshake->entry.peer = peer;
2507 + memcpy(handshake->remote_static, peer_public_key, NOISE_PUBLIC_KEY_LEN);
2508 + if (peer_preshared_key)
2509 + memcpy(handshake->preshared_key, peer_preshared_key,
2510 + NOISE_SYMMETRIC_KEY_LEN);
2511 + handshake->static_identity = static_identity;
2512 + handshake->state = HANDSHAKE_ZEROED;
2513 + return wg_noise_precompute_static_static(peer);
2516 +static void handshake_zero(struct noise_handshake *handshake)
2518 + memset(&handshake->ephemeral_private, 0, NOISE_PUBLIC_KEY_LEN);
2519 + memset(&handshake->remote_ephemeral, 0, NOISE_PUBLIC_KEY_LEN);
2520 + memset(&handshake->hash, 0, NOISE_HASH_LEN);
2521 + memset(&handshake->chaining_key, 0, NOISE_HASH_LEN);
2522 + handshake->remote_index = 0;
2523 + handshake->state = HANDSHAKE_ZEROED;
2526 +void wg_noise_handshake_clear(struct noise_handshake *handshake)
2528 + wg_index_hashtable_remove(
2529 + handshake->entry.peer->device->index_hashtable,
2530 + &handshake->entry);
2531 + down_write(&handshake->lock);
2532 + handshake_zero(handshake);
2533 + up_write(&handshake->lock);
2534 + wg_index_hashtable_remove(
2535 + handshake->entry.peer->device->index_hashtable,
2536 + &handshake->entry);
2539 +static struct noise_keypair *keypair_create(struct wg_peer *peer)
2541 + struct noise_keypair *keypair = kzalloc(sizeof(*keypair), GFP_KERNEL);
2543 + if (unlikely(!keypair))
2545 + keypair->internal_id = atomic64_inc_return(&keypair_counter);
2546 + keypair->entry.type = INDEX_HASHTABLE_KEYPAIR;
2547 + keypair->entry.peer = peer;
2548 + kref_init(&keypair->refcount);
2552 +static void keypair_free_rcu(struct rcu_head *rcu)
2554 + kzfree(container_of(rcu, struct noise_keypair, rcu));
2557 +static void keypair_free_kref(struct kref *kref)
2559 + struct noise_keypair *keypair =
2560 + container_of(kref, struct noise_keypair, refcount);
2562 + net_dbg_ratelimited("%s: Keypair %llu destroyed for peer %llu\n",
2563 + keypair->entry.peer->device->dev->name,
2564 + keypair->internal_id,
2565 + keypair->entry.peer->internal_id);
2566 + wg_index_hashtable_remove(keypair->entry.peer->device->index_hashtable,
2568 + call_rcu(&keypair->rcu, keypair_free_rcu);
2571 +void wg_noise_keypair_put(struct noise_keypair *keypair, bool unreference_now)
2573 + if (unlikely(!keypair))
2575 + if (unlikely(unreference_now))
2576 + wg_index_hashtable_remove(
2577 + keypair->entry.peer->device->index_hashtable,
2579 + kref_put(&keypair->refcount, keypair_free_kref);
2582 +struct noise_keypair *wg_noise_keypair_get(struct noise_keypair *keypair)
2584 + RCU_LOCKDEP_WARN(!rcu_read_lock_bh_held(),
2585 + "Taking noise keypair reference without holding the RCU BH read lock");
2586 + if (unlikely(!keypair || !kref_get_unless_zero(&keypair->refcount)))
2591 +void wg_noise_keypairs_clear(struct noise_keypairs *keypairs)
2593 + struct noise_keypair *old;
2595 + spin_lock_bh(&keypairs->keypair_update_lock);
2597 + /* We zero the next_keypair before zeroing the others, so that
2598 + * wg_noise_received_with_keypair returns early before subsequent ones are zeroed. */
2601 + old = rcu_dereference_protected(keypairs->next_keypair,
2602 + lockdep_is_held(&keypairs->keypair_update_lock));
2603 + RCU_INIT_POINTER(keypairs->next_keypair, NULL);
2604 + wg_noise_keypair_put(old, true);
2606 + old = rcu_dereference_protected(keypairs->previous_keypair,
2607 + lockdep_is_held(&keypairs->keypair_update_lock));
2608 + RCU_INIT_POINTER(keypairs->previous_keypair, NULL);
2609 + wg_noise_keypair_put(old, true);
2611 + old = rcu_dereference_protected(keypairs->current_keypair,
2612 + lockdep_is_held(&keypairs->keypair_update_lock));
2613 + RCU_INIT_POINTER(keypairs->current_keypair, NULL);
2614 + wg_noise_keypair_put(old, true);
2616 + spin_unlock_bh(&keypairs->keypair_update_lock);
2619 +void wg_noise_expire_current_peer_keypairs(struct wg_peer *peer)
2621 + struct noise_keypair *keypair;
2623 + wg_noise_handshake_clear(&peer->handshake);
2624 + wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
2626 + spin_lock_bh(&peer->keypairs.keypair_update_lock);
2627 + keypair = rcu_dereference_protected(peer->keypairs.next_keypair,
2628 + lockdep_is_held(&peer->keypairs.keypair_update_lock));
2630 + keypair->sending.is_valid = false;
2631 + keypair = rcu_dereference_protected(peer->keypairs.current_keypair,
2632 + lockdep_is_held(&peer->keypairs.keypair_update_lock));
2634 + keypair->sending.is_valid = false;
2635 + spin_unlock_bh(&peer->keypairs.keypair_update_lock);
2638 +static void add_new_keypair(struct noise_keypairs *keypairs,
2639 + struct noise_keypair *new_keypair)
2641 + struct noise_keypair *previous_keypair, *next_keypair, *current_keypair;
2643 + spin_lock_bh(&keypairs->keypair_update_lock);
2644 + previous_keypair = rcu_dereference_protected(keypairs->previous_keypair,
2645 + lockdep_is_held(&keypairs->keypair_update_lock));
2646 + next_keypair = rcu_dereference_protected(keypairs->next_keypair,
2647 + lockdep_is_held(&keypairs->keypair_update_lock));
2648 + current_keypair = rcu_dereference_protected(keypairs->current_keypair,
2649 + lockdep_is_held(&keypairs->keypair_update_lock));
2650 + if (new_keypair->i_am_the_initiator) {
2651 + /* If we're the initiator, it means we've sent a handshake, and
2652 + * received a confirmation response, which means this new
2653 + * keypair can now be used.
2655 + if (next_keypair) {
2656 + /* If there already was a next keypair pending, we
2657 + * demote it to be the previous keypair, and free the
2658 + * existing current. Note that this means KCI can result
2659 + * in this transition. It would perhaps be more sound to
2660 + * always just get rid of the unused next keypair
2661 + * instead of putting it in the previous slot, but this
2662 + * might be a bit less robust. Something to think about for the future. */
2665 + RCU_INIT_POINTER(keypairs->next_keypair, NULL);
2666 + rcu_assign_pointer(keypairs->previous_keypair,
2668 + wg_noise_keypair_put(current_keypair, true);
2669 + } else /* If there wasn't an existing next keypair, we replace
2670 + * the previous with the current one.
2672 + rcu_assign_pointer(keypairs->previous_keypair,
2674 + /* At this point we can get rid of the old previous keypair, and
2675 + * set up the new keypair.
2677 + wg_noise_keypair_put(previous_keypair, true);
2678 + rcu_assign_pointer(keypairs->current_keypair, new_keypair);
2680 + /* If we're the responder, it means we can't use the new keypair
2681 + * until we receive confirmation via the first data packet, so
2682 + * we get rid of the existing previous one, the possibly
2683 + * existing next one, and slide in the new next one.
2685 + rcu_assign_pointer(keypairs->next_keypair, new_keypair);
2686 + wg_noise_keypair_put(next_keypair, true);
2687 + RCU_INIT_POINTER(keypairs->previous_keypair, NULL);
2688 + wg_noise_keypair_put(previous_keypair, true);
2690 + spin_unlock_bh(&keypairs->keypair_update_lock);
2693 +bool wg_noise_received_with_keypair(struct noise_keypairs *keypairs,
2694 + struct noise_keypair *received_keypair)
2696 + struct noise_keypair *old_keypair;
2699 + /* We first check without taking the spinlock. */
2700 + key_is_new = received_keypair ==
2701 + rcu_access_pointer(keypairs->next_keypair);
2702 + if (likely(!key_is_new))
2705 + spin_lock_bh(&keypairs->keypair_update_lock);
2706 + /* After locking, we double check that things didn't change from beneath us. */
2709 + if (unlikely(received_keypair !=
2710 + rcu_dereference_protected(keypairs->next_keypair,
2711 + lockdep_is_held(&keypairs->keypair_update_lock)))) {
2712 + spin_unlock_bh(&keypairs->keypair_update_lock);
2716 + /* When we've finally received the confirmation, we slide the next
2717 + * into the current, the current into the previous, and get rid of
2718 + * the old previous.
2720 + old_keypair = rcu_dereference_protected(keypairs->previous_keypair,
2721 + lockdep_is_held(&keypairs->keypair_update_lock));
2722 + rcu_assign_pointer(keypairs->previous_keypair,
2723 + rcu_dereference_protected(keypairs->current_keypair,
2724 + lockdep_is_held(&keypairs->keypair_update_lock)));
2725 + wg_noise_keypair_put(old_keypair, true);
2726 + rcu_assign_pointer(keypairs->current_keypair, received_keypair);
2727 + RCU_INIT_POINTER(keypairs->next_keypair, NULL);
2729 + spin_unlock_bh(&keypairs->keypair_update_lock);
2733 +/* Must hold static_identity->lock */
2734 +void wg_noise_set_static_identity_private_key(
2735 + struct noise_static_identity *static_identity,
2736 + const u8 private_key[NOISE_PUBLIC_KEY_LEN])
2738 + memcpy(static_identity->static_private, private_key,
2739 + NOISE_PUBLIC_KEY_LEN);
2740 + curve25519_clamp_secret(static_identity->static_private);
2741 + static_identity->has_identity = curve25519_generate_public(
2742 + static_identity->static_public, private_key);
2745 +/* This is Hugo Krawczyk's HKDF:
2746 + * - https://eprint.iacr.org/2010/264.pdf
2747 + * - https://tools.ietf.org/html/rfc5869
2749 +static void kdf(u8 *first_dst, u8 *second_dst, u8 *third_dst, const u8 *data,
2750 + size_t first_len, size_t second_len, size_t third_len,
2751 + size_t data_len, const u8 chaining_key[NOISE_HASH_LEN])
2753 + u8 output[BLAKE2S_HASH_SIZE + 1];
2754 + u8 secret[BLAKE2S_HASH_SIZE];
2756 + WARN_ON(IS_ENABLED(DEBUG) &&
2757 + (first_len > BLAKE2S_HASH_SIZE ||
2758 + second_len > BLAKE2S_HASH_SIZE ||
2759 + third_len > BLAKE2S_HASH_SIZE ||
2760 + ((second_len || second_dst || third_len || third_dst) &&
2761 + (!first_len || !first_dst)) ||
2762 + ((third_len || third_dst) && (!second_len || !second_dst))));
2764 + /* Extract entropy from data into secret */
2765 + blake2s256_hmac(secret, data, chaining_key, data_len, NOISE_HASH_LEN);
2767 + if (!first_dst || !first_len)
2770 + /* Expand first key: key = secret, data = 0x1 */
2772 + blake2s256_hmac(output, output, secret, 1, BLAKE2S_HASH_SIZE);
2773 + memcpy(first_dst, output, first_len);
2775 + if (!second_dst || !second_len)
2778 + /* Expand second key: key = secret, data = first-key || 0x2 */
2779 + output[BLAKE2S_HASH_SIZE] = 2;
2780 + blake2s256_hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1,
2781 + BLAKE2S_HASH_SIZE);
2782 + memcpy(second_dst, output, second_len);
2784 + if (!third_dst || !third_len)
2787 + /* Expand third key: key = secret, data = second-key || 0x3 */
2788 + output[BLAKE2S_HASH_SIZE] = 3;
2789 + blake2s256_hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1,
2790 + BLAKE2S_HASH_SIZE);
2791 + memcpy(third_dst, output, third_len);
2794 + /* Clear sensitive data from stack */
2795 + memzero_explicit(secret, BLAKE2S_HASH_SIZE);
2796 + memzero_explicit(output, BLAKE2S_HASH_SIZE + 1);
2799 +static void symmetric_key_init(struct noise_symmetric_key *key)
2801 + spin_lock_init(&key->counter.receive.lock);
2802 + atomic64_set(&key->counter.counter, 0);
2803 + memset(key->counter.receive.backtrack, 0,
2804 + sizeof(key->counter.receive.backtrack));
2805 + key->birthdate = ktime_get_coarse_boottime_ns();
2806 + key->is_valid = true;
2809 +static void derive_keys(struct noise_symmetric_key *first_dst,
2810 + struct noise_symmetric_key *second_dst,
2811 + const u8 chaining_key[NOISE_HASH_LEN])
2813 + kdf(first_dst->key, second_dst->key, NULL, NULL,
2814 + NOISE_SYMMETRIC_KEY_LEN, NOISE_SYMMETRIC_KEY_LEN, 0, 0,
2816 + symmetric_key_init(first_dst);
2817 + symmetric_key_init(second_dst);
2820 +static bool __must_check mix_dh(u8 chaining_key[NOISE_HASH_LEN],
2821 + u8 key[NOISE_SYMMETRIC_KEY_LEN],
2822 + const u8 private[NOISE_PUBLIC_KEY_LEN],
2823 + const u8 public[NOISE_PUBLIC_KEY_LEN])
2825 + u8 dh_calculation[NOISE_PUBLIC_KEY_LEN];
2827 + if (unlikely(!curve25519(dh_calculation, private, public)))
2829 + kdf(chaining_key, key, NULL, dh_calculation, NOISE_HASH_LEN,
2830 + NOISE_SYMMETRIC_KEY_LEN, 0, NOISE_PUBLIC_KEY_LEN, chaining_key);
2831 + memzero_explicit(dh_calculation, NOISE_PUBLIC_KEY_LEN);
2835 +static void mix_hash(u8 hash[NOISE_HASH_LEN], const u8 *src, size_t src_len)
2837 + struct blake2s_state blake;
2839 + blake2s_init(&blake, NOISE_HASH_LEN);
2840 + blake2s_update(&blake, hash, NOISE_HASH_LEN);
2841 + blake2s_update(&blake, src, src_len);
2842 + blake2s_final(&blake, hash);
2845 +static void mix_psk(u8 chaining_key[NOISE_HASH_LEN], u8 hash[NOISE_HASH_LEN],
2846 + u8 key[NOISE_SYMMETRIC_KEY_LEN],
2847 + const u8 psk[NOISE_SYMMETRIC_KEY_LEN])
2849 + u8 temp_hash[NOISE_HASH_LEN];
2851 + kdf(chaining_key, temp_hash, key, psk, NOISE_HASH_LEN, NOISE_HASH_LEN,
2852 + NOISE_SYMMETRIC_KEY_LEN, NOISE_SYMMETRIC_KEY_LEN, chaining_key);
2853 + mix_hash(hash, temp_hash, NOISE_HASH_LEN);
2854 + memzero_explicit(temp_hash, NOISE_HASH_LEN);
2857 +static void handshake_init(u8 chaining_key[NOISE_HASH_LEN],
2858 + u8 hash[NOISE_HASH_LEN],
2859 + const u8 remote_static[NOISE_PUBLIC_KEY_LEN])
2861 + memcpy(hash, handshake_init_hash, NOISE_HASH_LEN);
2862 + memcpy(chaining_key, handshake_init_chaining_key, NOISE_HASH_LEN);
2863 + mix_hash(hash, remote_static, NOISE_PUBLIC_KEY_LEN);
2866 +static void message_encrypt(u8 *dst_ciphertext, const u8 *src_plaintext,
2867 + size_t src_len, u8 key[NOISE_SYMMETRIC_KEY_LEN],
2868 + u8 hash[NOISE_HASH_LEN])
2870 + chacha20poly1305_encrypt(dst_ciphertext, src_plaintext, src_len, hash,
2872 + 0 /* Always zero for Noise_IK */, key);
2873 + mix_hash(hash, dst_ciphertext, noise_encrypted_len(src_len));
2876 +static bool message_decrypt(u8 *dst_plaintext, const u8 *src_ciphertext,
2877 + size_t src_len, u8 key[NOISE_SYMMETRIC_KEY_LEN],
2878 + u8 hash[NOISE_HASH_LEN])
2880 + if (!chacha20poly1305_decrypt(dst_plaintext, src_ciphertext, src_len,
2881 + hash, NOISE_HASH_LEN,
2882 + 0 /* Always zero for Noise_IK */, key))
2884 + mix_hash(hash, src_ciphertext, src_len);
2888 +static void message_ephemeral(u8 ephemeral_dst[NOISE_PUBLIC_KEY_LEN],
2889 + const u8 ephemeral_src[NOISE_PUBLIC_KEY_LEN],
2890 + u8 chaining_key[NOISE_HASH_LEN],
2891 + u8 hash[NOISE_HASH_LEN])
2893 + if (ephemeral_dst != ephemeral_src)
2894 + memcpy(ephemeral_dst, ephemeral_src, NOISE_PUBLIC_KEY_LEN);
2895 + mix_hash(hash, ephemeral_src, NOISE_PUBLIC_KEY_LEN);
2896 + kdf(chaining_key, NULL, NULL, ephemeral_src, NOISE_HASH_LEN, 0, 0,
2897 + NOISE_PUBLIC_KEY_LEN, chaining_key);
2900 +static void tai64n_now(u8 output[NOISE_TIMESTAMP_LEN])
2902 + struct timespec64 now;
2904 + ktime_get_real_ts64(&now);
2906 + /* In order to prevent some sort of infoleak from precise timers, we
2907 + * round down the nanoseconds part to the closest rounded-down power of
2908 + * two to the maximum initiations per second allowed anyway by the protocol. */
2911 + now.tv_nsec = ALIGN_DOWN(now.tv_nsec,
2912 + rounddown_pow_of_two(NSEC_PER_SEC / INITIATIONS_PER_SECOND));
2914 + /* https://cr.yp.to/libtai/tai64.html */
2915 + *(__be64 *)output = cpu_to_be64(0x400000000000000aULL + now.tv_sec);
2916 + *(__be32 *)(output + sizeof(__be64)) = cpu_to_be32(now.tv_nsec);
2920 +wg_noise_handshake_create_initiation(struct message_handshake_initiation *dst,
2921 + struct noise_handshake *handshake)
2923 + u8 timestamp[NOISE_TIMESTAMP_LEN];
2924 + u8 key[NOISE_SYMMETRIC_KEY_LEN];
2927 + /* We need to wait for crng _before_ taking any locks, since
2928 + * curve25519_generate_secret uses get_random_bytes_wait.
2930 + wait_for_random_bytes();
2932 + down_read(&handshake->static_identity->lock);
2933 + down_write(&handshake->lock);
2935 + if (unlikely(!handshake->static_identity->has_identity))
2938 + dst->header.type = cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION);
2940 + handshake_init(handshake->chaining_key, handshake->hash,
2941 + handshake->remote_static);
2944 + curve25519_generate_secret(handshake->ephemeral_private);
2945 + if (!curve25519_generate_public(dst->unencrypted_ephemeral,
2946 + handshake->ephemeral_private))
2948 + message_ephemeral(dst->unencrypted_ephemeral,
2949 + dst->unencrypted_ephemeral, handshake->chaining_key,
2953 + if (!mix_dh(handshake->chaining_key, key, handshake->ephemeral_private,
2954 + handshake->remote_static))
2958 + message_encrypt(dst->encrypted_static,
2959 + handshake->static_identity->static_public,
2960 + NOISE_PUBLIC_KEY_LEN, key, handshake->hash);
2963 + kdf(handshake->chaining_key, key, NULL,
2964 + handshake->precomputed_static_static, NOISE_HASH_LEN,
2965 + NOISE_SYMMETRIC_KEY_LEN, 0, NOISE_PUBLIC_KEY_LEN,
2966 + handshake->chaining_key);
2969 + tai64n_now(timestamp);
2970 + message_encrypt(dst->encrypted_timestamp, timestamp,
2971 + NOISE_TIMESTAMP_LEN, key, handshake->hash);
2973 + dst->sender_index = wg_index_hashtable_insert(
2974 + handshake->entry.peer->device->index_hashtable,
2975 + &handshake->entry);
2977 + handshake->state = HANDSHAKE_CREATED_INITIATION;
2981 + up_write(&handshake->lock);
2982 + up_read(&handshake->static_identity->lock);
2983 + memzero_explicit(key, NOISE_SYMMETRIC_KEY_LEN);
2988 +wg_noise_handshake_consume_initiation(struct message_handshake_initiation *src,
2989 + struct wg_device *wg)
2991 + struct wg_peer *peer = NULL, *ret_peer = NULL;
2992 + struct noise_handshake *handshake;
2993 + bool replay_attack, flood_attack;
2994 + u8 key[NOISE_SYMMETRIC_KEY_LEN];
2995 + u8 chaining_key[NOISE_HASH_LEN];
2996 + u8 hash[NOISE_HASH_LEN];
2997 + u8 s[NOISE_PUBLIC_KEY_LEN];
2998 + u8 e[NOISE_PUBLIC_KEY_LEN];
2999 + u8 t[NOISE_TIMESTAMP_LEN];
3000 + u64 initiation_consumption;
3002 + down_read(&wg->static_identity.lock);
3003 + if (unlikely(!wg->static_identity.has_identity))
3006 + handshake_init(chaining_key, hash, wg->static_identity.static_public);
3009 + message_ephemeral(e, src->unencrypted_ephemeral, chaining_key, hash);
3012 + if (!mix_dh(chaining_key, key, wg->static_identity.static_private, e))
3016 + if (!message_decrypt(s, src->encrypted_static,
3017 + sizeof(src->encrypted_static), key, hash))
3020 + /* Lookup which peer we're actually talking to */
3021 + peer = wg_pubkey_hashtable_lookup(wg->peer_hashtable, s);
3024 + handshake = &peer->handshake;
3027 + kdf(chaining_key, key, NULL, handshake->precomputed_static_static,
3028 + NOISE_HASH_LEN, NOISE_SYMMETRIC_KEY_LEN, 0, NOISE_PUBLIC_KEY_LEN,
3032 + if (!message_decrypt(t, src->encrypted_timestamp,
3033 + sizeof(src->encrypted_timestamp), key, hash))
3036 + down_read(&handshake->lock);
3037 + replay_attack = memcmp(t, handshake->latest_timestamp,
3038 + NOISE_TIMESTAMP_LEN) <= 0;
3039 + flood_attack = (s64)handshake->last_initiation_consumption +
3040 + NSEC_PER_SEC / INITIATIONS_PER_SECOND >
3041 + (s64)ktime_get_coarse_boottime_ns();
3042 + up_read(&handshake->lock);
3043 + if (replay_attack || flood_attack)
3046 + /* Success! Copy everything to peer */
3047 + down_write(&handshake->lock);
3048 + memcpy(handshake->remote_ephemeral, e, NOISE_PUBLIC_KEY_LEN);
3049 + if (memcmp(t, handshake->latest_timestamp, NOISE_TIMESTAMP_LEN) > 0)
3050 + memcpy(handshake->latest_timestamp, t, NOISE_TIMESTAMP_LEN);
3051 + memcpy(handshake->hash, hash, NOISE_HASH_LEN);
3052 + memcpy(handshake->chaining_key, chaining_key, NOISE_HASH_LEN);
3053 + handshake->remote_index = src->sender_index;
3054 + if ((s64)(handshake->last_initiation_consumption -
3055 + (initiation_consumption = ktime_get_coarse_boottime_ns())) < 0)
3056 + handshake->last_initiation_consumption = initiation_consumption;
3057 + handshake->state = HANDSHAKE_CONSUMED_INITIATION;
3058 + up_write(&handshake->lock);
3062 + memzero_explicit(key, NOISE_SYMMETRIC_KEY_LEN);
3063 + memzero_explicit(hash, NOISE_HASH_LEN);
3064 + memzero_explicit(chaining_key, NOISE_HASH_LEN);
3065 + up_read(&wg->static_identity.lock);
3067 + wg_peer_put(peer);
3071 +bool wg_noise_handshake_create_response(struct message_handshake_response *dst,
3072 + struct noise_handshake *handshake)
3074 + u8 key[NOISE_SYMMETRIC_KEY_LEN];
3077 + /* We need to wait for crng _before_ taking any locks, since
3078 + * curve25519_generate_secret uses get_random_bytes_wait.
3080 + wait_for_random_bytes();
3082 + down_read(&handshake->static_identity->lock);
3083 + down_write(&handshake->lock);
3085 + if (handshake->state != HANDSHAKE_CONSUMED_INITIATION)
3088 + dst->header.type = cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE);
3089 + dst->receiver_index = handshake->remote_index;
3092 + curve25519_generate_secret(handshake->ephemeral_private);
3093 + if (!curve25519_generate_public(dst->unencrypted_ephemeral,
3094 + handshake->ephemeral_private))
3096 + message_ephemeral(dst->unencrypted_ephemeral,
3097 + dst->unencrypted_ephemeral, handshake->chaining_key,
3101 + if (!mix_dh(handshake->chaining_key, NULL, handshake->ephemeral_private,
3102 + handshake->remote_ephemeral))
3106 + if (!mix_dh(handshake->chaining_key, NULL, handshake->ephemeral_private,
3107 + handshake->remote_static))
3111 + mix_psk(handshake->chaining_key, handshake->hash, key,
3112 + handshake->preshared_key);
3115 + message_encrypt(dst->encrypted_nothing, NULL, 0, key, handshake->hash);
3117 + dst->sender_index = wg_index_hashtable_insert(
3118 + handshake->entry.peer->device->index_hashtable,
3119 + &handshake->entry);
3121 + handshake->state = HANDSHAKE_CREATED_RESPONSE;
3125 + up_write(&handshake->lock);
3126 + up_read(&handshake->static_identity->lock);
3127 + memzero_explicit(key, NOISE_SYMMETRIC_KEY_LEN);
3132 +wg_noise_handshake_consume_response(struct message_handshake_response *src,
3133 + struct wg_device *wg)
3135 + enum noise_handshake_state state = HANDSHAKE_ZEROED;
3136 + struct wg_peer *peer = NULL, *ret_peer = NULL;
3137 + struct noise_handshake *handshake;
3138 + u8 key[NOISE_SYMMETRIC_KEY_LEN];
3139 + u8 hash[NOISE_HASH_LEN];
3140 + u8 chaining_key[NOISE_HASH_LEN];
3141 + u8 e[NOISE_PUBLIC_KEY_LEN];
3142 + u8 ephemeral_private[NOISE_PUBLIC_KEY_LEN];
3143 + u8 static_private[NOISE_PUBLIC_KEY_LEN];
3145 + down_read(&wg->static_identity.lock);
3147 + if (unlikely(!wg->static_identity.has_identity))
3150 + handshake = (struct noise_handshake *)wg_index_hashtable_lookup(
3151 + wg->index_hashtable, INDEX_HASHTABLE_HANDSHAKE,
3152 + src->receiver_index, &peer);
3153 + if (unlikely(!handshake))
3156 + down_read(&handshake->lock);
3157 + state = handshake->state;
3158 + memcpy(hash, handshake->hash, NOISE_HASH_LEN);
3159 + memcpy(chaining_key, handshake->chaining_key, NOISE_HASH_LEN);
3160 + memcpy(ephemeral_private, handshake->ephemeral_private,
3161 + NOISE_PUBLIC_KEY_LEN);
3162 + up_read(&handshake->lock);
3164 + if (state != HANDSHAKE_CREATED_INITIATION)
3168 + message_ephemeral(e, src->unencrypted_ephemeral, chaining_key, hash);
3171 + if (!mix_dh(chaining_key, NULL, ephemeral_private, e))
3175 + if (!mix_dh(chaining_key, NULL, wg->static_identity.static_private, e))
3179 + mix_psk(chaining_key, hash, key, handshake->preshared_key);
3182 + if (!message_decrypt(NULL, src->encrypted_nothing,
3183 + sizeof(src->encrypted_nothing), key, hash))
3186 + /* Success! Copy everything to peer */
3187 + down_write(&handshake->lock);
3188 + /* It's important to check that the state is still the same, while we
3189 + * have an exclusive lock.
3191 + if (handshake->state != state) {
3192 + up_write(&handshake->lock);
3195 + memcpy(handshake->remote_ephemeral, e, NOISE_PUBLIC_KEY_LEN);
3196 + memcpy(handshake->hash, hash, NOISE_HASH_LEN);
3197 + memcpy(handshake->chaining_key, chaining_key, NOISE_HASH_LEN);
3198 + handshake->remote_index = src->sender_index;
3199 + handshake->state = HANDSHAKE_CONSUMED_RESPONSE;
3200 + up_write(&handshake->lock);
3205 + wg_peer_put(peer);
3207 + memzero_explicit(key, NOISE_SYMMETRIC_KEY_LEN);
3208 + memzero_explicit(hash, NOISE_HASH_LEN);
3209 + memzero_explicit(chaining_key, NOISE_HASH_LEN);
3210 + memzero_explicit(ephemeral_private, NOISE_PUBLIC_KEY_LEN);
3211 + memzero_explicit(static_private, NOISE_PUBLIC_KEY_LEN);
3212 + up_read(&wg->static_identity.lock);
3216 +bool wg_noise_handshake_begin_session(struct noise_handshake *handshake,
3217 + struct noise_keypairs *keypairs)
3219 + struct noise_keypair *new_keypair;
3222 + down_write(&handshake->lock);
3223 + if (handshake->state != HANDSHAKE_CREATED_RESPONSE &&
3224 + handshake->state != HANDSHAKE_CONSUMED_RESPONSE)
3227 + new_keypair = keypair_create(handshake->entry.peer);
3230 + new_keypair->i_am_the_initiator = handshake->state ==
3231 + HANDSHAKE_CONSUMED_RESPONSE;
3232 + new_keypair->remote_index = handshake->remote_index;
3234 + if (new_keypair->i_am_the_initiator)
3235 + derive_keys(&new_keypair->sending, &new_keypair->receiving,
3236 + handshake->chaining_key);
3238 + derive_keys(&new_keypair->receiving, &new_keypair->sending,
3239 + handshake->chaining_key);
3241 + handshake_zero(handshake);
3242 + rcu_read_lock_bh();
3243 + if (likely(!READ_ONCE(container_of(handshake, struct wg_peer,
3244 + handshake)->is_dead))) {
3245 + add_new_keypair(keypairs, new_keypair);
3246 + net_dbg_ratelimited("%s: Keypair %llu created for peer %llu\n",
3247 + handshake->entry.peer->device->dev->name,
3248 + new_keypair->internal_id,
3249 + handshake->entry.peer->internal_id);
3250 + ret = wg_index_hashtable_replace(
3251 + handshake->entry.peer->device->index_hashtable,
3252 + &handshake->entry, &new_keypair->entry);
3254 + kzfree(new_keypair);
3256 + rcu_read_unlock_bh();
3259 + up_write(&handshake->lock);
3263 +++ b/drivers/net/wireguard/noise.h
3265 +/* SPDX-License-Identifier: GPL-2.0 */
3267 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
3269 +#ifndef _WG_NOISE_H
3270 +#define _WG_NOISE_H
3272 +#include "messages.h"
3273 +#include "peerlookup.h"
3275 +#include <linux/types.h>
3276 +#include <linux/spinlock.h>
3277 +#include <linux/atomic.h>
3278 +#include <linux/rwsem.h>
3279 +#include <linux/mutex.h>
3280 +#include <linux/kref.h>
3282 +union noise_counter {
3285 + unsigned long backtrack[COUNTER_BITS_TOTAL / BITS_PER_LONG];
3288 + atomic64_t counter;
3291 +struct noise_symmetric_key {
3292 + u8 key[NOISE_SYMMETRIC_KEY_LEN];
3293 + union noise_counter counter;
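The `union noise_counter` above pairs a 64-bit receive counter with a `backtrack` bitmap, which is what implements sliding-window anti-replay protection for received packets. A rough userspace Python sketch of the idea (a toy model only; the kernel's actual bit layout, window constant, and locking differ, and `WINDOW` here merely stands in for `COUNTER_BITS_TOTAL`):

```python
# Sliding-window replay check, sketching the idea behind
# union noise_counter's backtrack bitmap (toy model, not the
# kernel's implementation).
WINDOW = 8192  # stand-in for the kernel's window size

class ReplayCounter:
    def __init__(self):
        self.max_seen = 0
        self.window = 0  # bitmap of recently seen counters

    def check(self, counter):
        """Return True if counter is fresh, recording it; False on replay."""
        if counter > self.max_seen:
            # Counter ahead of everything seen: slide the window forward.
            shift = counter - self.max_seen
            self.window = ((self.window << shift) | 1) & ((1 << WINDOW) - 1)
            self.max_seen = counter
            return True
        if self.max_seen - counter >= WINDOW:
            return False  # too old to track; drop outright
        bit = 1 << (self.max_seen - counter)
        if self.window & bit:
            return False  # already seen: replay
        self.window |= bit
        return True
```

Counters ahead of the window shift it forward; counters behind it are accepted at most once, and anything older than the window is dropped outright.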
3298 +struct noise_keypair {
3299 + struct index_hashtable_entry entry;
3300 + struct noise_symmetric_key sending;
3301 + struct noise_symmetric_key receiving;
3302 + __le32 remote_index;
3303 + bool i_am_the_initiator;
3304 + struct kref refcount;
3305 + struct rcu_head rcu;
3309 +struct noise_keypairs {
3310 + struct noise_keypair __rcu *current_keypair;
3311 + struct noise_keypair __rcu *previous_keypair;
3312 + struct noise_keypair __rcu *next_keypair;
3313 + spinlock_t keypair_update_lock;
3316 +struct noise_static_identity {
3317 + u8 static_public[NOISE_PUBLIC_KEY_LEN];
3318 + u8 static_private[NOISE_PUBLIC_KEY_LEN];
3319 + struct rw_semaphore lock;
3320 + bool has_identity;
3323 +enum noise_handshake_state {
3325 + HANDSHAKE_CREATED_INITIATION,
3326 + HANDSHAKE_CONSUMED_INITIATION,
3327 + HANDSHAKE_CREATED_RESPONSE,
3328 + HANDSHAKE_CONSUMED_RESPONSE
3331 +struct noise_handshake {
3332 + struct index_hashtable_entry entry;
3334 + enum noise_handshake_state state;
3335 + u64 last_initiation_consumption;
3337 + struct noise_static_identity *static_identity;
3339 + u8 ephemeral_private[NOISE_PUBLIC_KEY_LEN];
3340 + u8 remote_static[NOISE_PUBLIC_KEY_LEN];
3341 + u8 remote_ephemeral[NOISE_PUBLIC_KEY_LEN];
3342 + u8 precomputed_static_static[NOISE_PUBLIC_KEY_LEN];
3344 + u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN];
3346 + u8 hash[NOISE_HASH_LEN];
3347 + u8 chaining_key[NOISE_HASH_LEN];
3349 + u8 latest_timestamp[NOISE_TIMESTAMP_LEN];
3350 + __le32 remote_index;
3352 + /* Protects all members except the immutable (after noise_handshake_
3353 + * init): remote_static, precomputed_static_static, static_identity.
3355 + struct rw_semaphore lock;
3360 +void wg_noise_init(void);
3361 +bool wg_noise_handshake_init(struct noise_handshake *handshake,
3362 + struct noise_static_identity *static_identity,
3363 + const u8 peer_public_key[NOISE_PUBLIC_KEY_LEN],
3364 + const u8 peer_preshared_key[NOISE_SYMMETRIC_KEY_LEN],
3365 + struct wg_peer *peer);
3366 +void wg_noise_handshake_clear(struct noise_handshake *handshake);
3367 +static inline void wg_noise_reset_last_sent_handshake(atomic64_t *handshake_ns)
3369 + atomic64_set(handshake_ns, ktime_get_coarse_boottime_ns() -
3370 + (u64)(REKEY_TIMEOUT + 1) * NSEC_PER_SEC);
3373 +void wg_noise_keypair_put(struct noise_keypair *keypair, bool unreference_now);
3374 +struct noise_keypair *wg_noise_keypair_get(struct noise_keypair *keypair);
3375 +void wg_noise_keypairs_clear(struct noise_keypairs *keypairs);
3376 +bool wg_noise_received_with_keypair(struct noise_keypairs *keypairs,
3377 + struct noise_keypair *received_keypair);
3378 +void wg_noise_expire_current_peer_keypairs(struct wg_peer *peer);
3380 +void wg_noise_set_static_identity_private_key(
3381 + struct noise_static_identity *static_identity,
3382 + const u8 private_key[NOISE_PUBLIC_KEY_LEN]);
3383 +bool wg_noise_precompute_static_static(struct wg_peer *peer);
3386 +wg_noise_handshake_create_initiation(struct message_handshake_initiation *dst,
3387 + struct noise_handshake *handshake);
3389 +wg_noise_handshake_consume_initiation(struct message_handshake_initiation *src,
3390 + struct wg_device *wg);
3392 +bool wg_noise_handshake_create_response(struct message_handshake_response *dst,
3393 + struct noise_handshake *handshake);
3395 +wg_noise_handshake_consume_response(struct message_handshake_response *src,
3396 + struct wg_device *wg);
3398 +bool wg_noise_handshake_begin_session(struct noise_handshake *handshake,
3399 + struct noise_keypairs *keypairs);
3401 +#endif /* _WG_NOISE_H */
3403 +++ b/drivers/net/wireguard/peer.c
3405 +// SPDX-License-Identifier: GPL-2.0
3407 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
3411 +#include "device.h"
3412 +#include "queueing.h"
3413 +#include "timers.h"
3414 +#include "peerlookup.h"
3417 +#include <linux/kref.h>
3418 +#include <linux/lockdep.h>
3419 +#include <linux/rcupdate.h>
3420 +#include <linux/list.h>
3422 +static atomic64_t peer_counter = ATOMIC64_INIT(0);
3424 +struct wg_peer *wg_peer_create(struct wg_device *wg,
3425 + const u8 public_key[NOISE_PUBLIC_KEY_LEN],
3426 + const u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN])
3428 + struct wg_peer *peer;
3429 + int ret = -ENOMEM;
3431 + lockdep_assert_held(&wg->device_update_lock);
3433 + if (wg->num_peers >= MAX_PEERS_PER_DEVICE)
3434 + return ERR_PTR(ret);
3436 + peer = kzalloc(sizeof(*peer), GFP_KERNEL);
3437 + if (unlikely(!peer))
3438 + return ERR_PTR(ret);
3439 + peer->device = wg;
3441 + if (!wg_noise_handshake_init(&peer->handshake, &wg->static_identity,
3442 + public_key, preshared_key, peer)) {
3443 + ret = -EKEYREJECTED;
3446 + if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
3448 + if (wg_packet_queue_init(&peer->tx_queue, wg_packet_tx_worker, false,
3449 + MAX_QUEUED_PACKETS))
3451 + if (wg_packet_queue_init(&peer->rx_queue, NULL, false,
3452 + MAX_QUEUED_PACKETS))
3455 + peer->internal_id = atomic64_inc_return(&peer_counter);
3456 + peer->serial_work_cpu = nr_cpumask_bits;
3457 + wg_cookie_init(&peer->latest_cookie);
3458 + wg_timers_init(peer);
3459 + wg_cookie_checker_precompute_peer_keys(peer);
3460 + spin_lock_init(&peer->keypairs.keypair_update_lock);
3461 + INIT_WORK(&peer->transmit_handshake_work,
3462 + wg_packet_handshake_send_worker);
3463 + rwlock_init(&peer->endpoint_lock);
3464 + kref_init(&peer->refcount);
3465 + skb_queue_head_init(&peer->staged_packet_queue);
3466 + wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
3467 + set_bit(NAPI_STATE_NO_BUSY_POLL, &peer->napi.state);
3468 + netif_napi_add(wg->dev, &peer->napi, wg_packet_rx_poll,
3469 + NAPI_POLL_WEIGHT);
3470 + napi_enable(&peer->napi);
3471 + list_add_tail(&peer->peer_list, &wg->peer_list);
3472 + INIT_LIST_HEAD(&peer->allowedips_list);
3473 + wg_pubkey_hashtable_add(wg->peer_hashtable, peer);
3475 + pr_debug("%s: Peer %llu created\n", wg->dev->name, peer->internal_id);
3479 + wg_packet_queue_free(&peer->tx_queue, false);
3481 + dst_cache_destroy(&peer->endpoint_cache);
3484 + return ERR_PTR(ret);
3487 +struct wg_peer *wg_peer_get_maybe_zero(struct wg_peer *peer)
3489 + RCU_LOCKDEP_WARN(!rcu_read_lock_bh_held(),
3490 + "Taking peer reference without holding the RCU read lock");
3491 + if (unlikely(!peer || !kref_get_unless_zero(&peer->refcount)))
3496 +static void peer_make_dead(struct wg_peer *peer)
3498 + /* Remove from configuration-time lookup structures. */
3499 + list_del_init(&peer->peer_list);
3500 + wg_allowedips_remove_by_peer(&peer->device->peer_allowedips, peer,
3501 + &peer->device->device_update_lock);
3502 + wg_pubkey_hashtable_remove(peer->device->peer_hashtable, peer);
3504 + /* Mark as dead, so that we don't allow jumping contexts after. */
3505 + WRITE_ONCE(peer->is_dead, true);
3507 + /* The caller must now synchronize_rcu() for this to take effect. */
3510 +static void peer_remove_after_dead(struct wg_peer *peer)
3512 + WARN_ON(!peer->is_dead);
3514 + /* No more keypairs can be created for this peer, since is_dead protects
3515 + * add_new_keypair, so we can now destroy existing ones.
3517 + wg_noise_keypairs_clear(&peer->keypairs);
3519 + /* Destroy all ongoing timers that were in-flight at the beginning of
3522 + wg_timers_stop(peer);
3524 + /* The transition between packet encryption/decryption queues isn't
3525 + * guarded by is_dead, but each reference's life is strictly bounded by
3526 + * two generations: once for parallel crypto and once for serial
3527 + * ingestion, so we can simply flush twice, and be sure that we no
3528 + * longer have references inside these queues.
3531 + /* a) For encrypt/decrypt. */
3532 + flush_workqueue(peer->device->packet_crypt_wq);
3533 + /* b.1) For send (but not receive, since that's napi). */
3534 + flush_workqueue(peer->device->packet_crypt_wq);
3535 + /* b.2.1) For receive (but not send, since that's wq). */
3536 + napi_disable(&peer->napi);
3537 + /* b.2.2) It's now safe to remove the napi struct, which must be done
3538 + * here from process context.
3540 + netif_napi_del(&peer->napi);
3542 + /* Ensure any workstructs we own (like transmit_handshake_work or
3543 + * clear_peer_work) are no longer in use.
3545 + flush_workqueue(peer->device->handshake_send_wq);
3547 + /* After the above flushes, a peer might still be active in a few
3548 + * different contexts: 1) from xmit(), before hitting is_dead and
3549 + * returning, 2) from wg_packet_consume_data(), before hitting is_dead
3550 + * and returning, 3) from wg_receive_handshake_packet() after a point
3551 + * where it has processed an incoming handshake packet, but where
3552 + * all calls to pass it off to timers fail because of is_dead. We won't
3553 + * have new references in (1) eventually, because we're removed from
3554 + * allowedips; we won't have new references in (2) eventually, because
3555 + * wg_index_hashtable_lookup will always return NULL, since we removed
3556 + * all existing keypairs and no more can be created; we won't have new
3557 + * references in (3) eventually, because we're removed from the pubkey
3558 + * hash table, which allows for a maximum of one handshake response,
3559 + * via the still-uncleared index hashtable entry, but not more than one,
3560 + * and in wg_cookie_message_consume, the lookup eventually gets a peer
3561 + * with a refcount of zero, so no new reference is taken.
3564 + --peer->device->num_peers;
3565 + wg_peer_put(peer);
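The "flush twice" comment above relies on each in-flight reference living through at most two generations of work (parallel crypto, then serial ingestion), and on a flush only waiting for work queued before it began. A toy Python model of that property (hypothetical names, single-threaded simulation):

```python
from collections import deque

# A "flush" only processes items queued *before* it started, like
# flush_workqueue(): stage-1 jobs that requeue themselves as stage-2
# jobs survive exactly one flush, so flushing twice drains everything.
def flush(wq):
    for _ in range(len(wq)):  # snapshot of the queue length at entry
        wq.popleft()()

def make_stage1(wq, done):
    def stage1():
        # Parallel-crypto stage requeues the packet for serial ingestion.
        wq.append(lambda: done.append("pkt"))
    return stage1
```

After one flush the stage-2 job is still queued; only the second flush empties the queue, mirroring why `peer_remove_after_dead()` calls `flush_workqueue()` twice on `packet_crypt_wq`.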
3568 +/* We have a separate "remove" function to make sure that all active places where
3569 + * a peer is currently operating will eventually come to an end and not pass
3570 + * their reference onto another context.
3572 +void wg_peer_remove(struct wg_peer *peer)
3574 + if (unlikely(!peer))
3576 + lockdep_assert_held(&peer->device->device_update_lock);
3578 + peer_make_dead(peer);
3579 + synchronize_rcu();
3580 + peer_remove_after_dead(peer);
3583 +void wg_peer_remove_all(struct wg_device *wg)
3585 + struct wg_peer *peer, *temp;
3586 + LIST_HEAD(dead_peers);
3588 + lockdep_assert_held(&wg->device_update_lock);
3590 + /* Avoid having to traverse individually for each one. */
3591 + wg_allowedips_free(&wg->peer_allowedips, &wg->device_update_lock);
3593 + list_for_each_entry_safe(peer, temp, &wg->peer_list, peer_list) {
3594 + peer_make_dead(peer);
3595 + list_add_tail(&peer->peer_list, &dead_peers);
3597 + synchronize_rcu();
3598 + list_for_each_entry_safe(peer, temp, &dead_peers, peer_list)
3599 + peer_remove_after_dead(peer);
3602 +static void rcu_release(struct rcu_head *rcu)
3604 + struct wg_peer *peer = container_of(rcu, struct wg_peer, rcu);
3606 + dst_cache_destroy(&peer->endpoint_cache);
3607 + wg_packet_queue_free(&peer->rx_queue, false);
3608 + wg_packet_queue_free(&peer->tx_queue, false);
3610 + /* The final zeroing takes care of clearing any remaining handshake key
3611 + * material and other potentially sensitive information.
3616 +static void kref_release(struct kref *refcount)
3618 + struct wg_peer *peer = container_of(refcount, struct wg_peer, refcount);
3620 + pr_debug("%s: Peer %llu (%pISpfsc) destroyed\n",
3621 + peer->device->dev->name, peer->internal_id,
3622 + &peer->endpoint.addr);
3624 + /* Remove ourselves from dynamic runtime lookup structures, now that the
3625 + * last reference is gone.
3627 + wg_index_hashtable_remove(peer->device->index_hashtable,
3628 + &peer->handshake.entry);
3630 + /* Remove any lingering packets that didn't have a chance to be
3633 + wg_packet_purge_staged_packets(peer);
3635 + /* Free the memory used. */
3636 + call_rcu(&peer->rcu, rcu_release);
3639 +void wg_peer_put(struct wg_peer *peer)
3641 + if (unlikely(!peer))
3643 + kref_put(&peer->refcount, kref_release);
3646 +++ b/drivers/net/wireguard/peer.h
3648 +/* SPDX-License-Identifier: GPL-2.0 */
3650 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
3656 +#include "device.h"
3658 +#include "cookie.h"
3660 +#include <linux/types.h>
3661 +#include <linux/netfilter.h>
3662 +#include <linux/spinlock.h>
3663 +#include <linux/kref.h>
3664 +#include <net/dst_cache.h>
3670 + struct sockaddr addr;
3671 + struct sockaddr_in addr4;
3672 + struct sockaddr_in6 addr6;
3676 + struct in_addr src4;
3677 + /* Essentially the same as addr6->scope_id */
3680 + struct in6_addr src6;
3685 + struct wg_device *device;
3686 + struct crypt_queue tx_queue, rx_queue;
3687 + struct sk_buff_head staged_packet_queue;
3688 + int serial_work_cpu;
3689 + struct noise_keypairs keypairs;
3690 + struct endpoint endpoint;
3691 + struct dst_cache endpoint_cache;
3692 + rwlock_t endpoint_lock;
3693 + struct noise_handshake handshake;
3694 + atomic64_t last_sent_handshake;
3695 + struct work_struct transmit_handshake_work, clear_peer_work;
3696 + struct cookie latest_cookie;
3697 + struct hlist_node pubkey_hash;
3698 + u64 rx_bytes, tx_bytes;
3699 + struct timer_list timer_retransmit_handshake, timer_send_keepalive;
3700 + struct timer_list timer_new_handshake, timer_zero_key_material;
3701 + struct timer_list timer_persistent_keepalive;
3702 + unsigned int timer_handshake_attempts;
3703 + u16 persistent_keepalive_interval;
3704 + bool timer_need_another_keepalive;
3705 + bool sent_lastminute_handshake;
3706 + struct timespec64 walltime_last_handshake;
3707 + struct kref refcount;
3708 + struct rcu_head rcu;
3709 + struct list_head peer_list;
3710 + struct list_head allowedips_list;
3712 + struct napi_struct napi;
3716 +struct wg_peer *wg_peer_create(struct wg_device *wg,
3717 + const u8 public_key[NOISE_PUBLIC_KEY_LEN],
3718 + const u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN]);
3720 +struct wg_peer *__must_check wg_peer_get_maybe_zero(struct wg_peer *peer);
3721 +static inline struct wg_peer *wg_peer_get(struct wg_peer *peer)
3723 + kref_get(&peer->refcount);
3726 +void wg_peer_put(struct wg_peer *peer);
3727 +void wg_peer_remove(struct wg_peer *peer);
3728 +void wg_peer_remove_all(struct wg_device *wg);
3730 +#endif /* _WG_PEER_H */
3732 +++ b/drivers/net/wireguard/peerlookup.c
3734 +// SPDX-License-Identifier: GPL-2.0
3736 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
3739 +#include "peerlookup.h"
3743 +static struct hlist_head *pubkey_bucket(struct pubkey_hashtable *table,
3744 + const u8 pubkey[NOISE_PUBLIC_KEY_LEN])
3746 + /* siphash gives us a secure 64bit number based on a random key. Since
3747 + * the bits are uniformly distributed, we can then mask off to get the
3750 + const u64 hash = siphash(pubkey, NOISE_PUBLIC_KEY_LEN, &table->key);
3752 + return &table->hashtable[hash & (HASH_SIZE(table->hashtable) - 1)];
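The bucket selection above is keyed-hash-then-mask: a secret key makes bucket placement unpredictable to remote peers, and because the hash output is uniform, masking the low bits suffices. A Python sketch of the same shape (Python's stdlib has no siphash, so keyed BLAKE2b stands in here; the table size matches the `DECLARE_HASHTABLE(hashtable, 11)` below):

```python
import hashlib

HASHTABLE_BITS = 11            # DECLARE_HASHTABLE(hashtable, 11)
NUM_BUCKETS = 1 << HASHTABLE_BITS

def pubkey_bucket(secret_key: bytes, pubkey: bytes) -> int:
    # Keyed hash of the pubkey; masking the low bits picks a bucket,
    # exactly as the kernel does with siphash & (HASH_SIZE - 1).
    h = hashlib.blake2b(pubkey, key=secret_key, digest_size=8).digest()
    return int.from_bytes(h, "little") & (NUM_BUCKETS - 1)
```

The same key always maps the same pubkey to the same bucket, but an attacker who doesn't know the key can't force collisions into one bucket.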
3755 +struct pubkey_hashtable *wg_pubkey_hashtable_alloc(void)
3757 + struct pubkey_hashtable *table = kvmalloc(sizeof(*table), GFP_KERNEL);
3762 + get_random_bytes(&table->key, sizeof(table->key));
3763 + hash_init(table->hashtable);
3764 + mutex_init(&table->lock);
3768 +void wg_pubkey_hashtable_add(struct pubkey_hashtable *table,
3769 + struct wg_peer *peer)
3771 + mutex_lock(&table->lock);
3772 + hlist_add_head_rcu(&peer->pubkey_hash,
3773 + pubkey_bucket(table, peer->handshake.remote_static));
3774 + mutex_unlock(&table->lock);
3777 +void wg_pubkey_hashtable_remove(struct pubkey_hashtable *table,
3778 + struct wg_peer *peer)
3780 + mutex_lock(&table->lock);
3781 + hlist_del_init_rcu(&peer->pubkey_hash);
3782 + mutex_unlock(&table->lock);
3785 +/* Returns a strong reference to a peer */
3787 +wg_pubkey_hashtable_lookup(struct pubkey_hashtable *table,
3788 + const u8 pubkey[NOISE_PUBLIC_KEY_LEN])
3790 + struct wg_peer *iter_peer, *peer = NULL;
3792 + rcu_read_lock_bh();
3793 + hlist_for_each_entry_rcu_bh(iter_peer, pubkey_bucket(table, pubkey),
3795 + if (!memcmp(pubkey, iter_peer->handshake.remote_static,
3796 + NOISE_PUBLIC_KEY_LEN)) {
3801 + peer = wg_peer_get_maybe_zero(peer);
3802 + rcu_read_unlock_bh();
3806 +static struct hlist_head *index_bucket(struct index_hashtable *table,
3807 + const __le32 index)
3809 + /* Since the indices are random and thus all bits are uniformly
3810 + * distributed, we can find its bucket simply by masking.
3812 + return &table->hashtable[(__force u32)index &
3813 + (HASH_SIZE(table->hashtable) - 1)];
3816 +struct index_hashtable *wg_index_hashtable_alloc(void)
3818 + struct index_hashtable *table = kvmalloc(sizeof(*table), GFP_KERNEL);
3823 + hash_init(table->hashtable);
3824 + spin_lock_init(&table->lock);
3828 +/* At the moment, we limit ourselves to 2^20 total peers, which generally might
3829 + * amount to 2^20*3 items in this hashtable. The algorithm below works by
3830 + * picking a random number and testing it. We can see that these limits mean we
3831 + * usually succeed pretty quickly:
3833 + * >>> def calculation(tries, size):
3834 + * ... return (size / 2**32)**(tries - 1) * (1 - (size / 2**32))
3836 + * >>> calculation(1, 2**20 * 3)
3837 + * 0.999267578125
3838 + * >>> calculation(2, 2**20 * 3)
3839 + * 0.0007318854331970215
3840 + * >>> calculation(3, 2**20 * 3)
3841 + * 5.360489012673497e-07
3842 + * >>> calculation(4, 2**20 * 3)
3843 + * 3.9261394135792216e-10
3845 + * At the moment, we don't do any masking, so this algorithm isn't exactly
3846 + * constant time in either the random guessing or in the hash list lookup. We
3847 + * could require a minimum of 3 tries, which would successfully mask the
3848 + * guessing. This would not, however, help with the growing hash lengths, which
3849 + * is another thing to consider moving forward.
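The probability analysis quoted in the comment above is directly runnable; this reproduces the `calculation` helper it cites, giving the chance that a random 32-bit index first lands in an unused slot on try N with up to 2**20 * 3 occupied entries:

```python
# Probability that the random-index search succeeds on exactly
# its N-th try, given `size` already-occupied slots out of 2**32.
def calculation(tries, size):
    return (size / 2**32) ** (tries - 1) * (1 - size / 2**32)
```

As the comment's values show, almost all insertions succeed on the first guess, and four tries are needed only with probability on the order of 1e-10.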
3852 +__le32 wg_index_hashtable_insert(struct index_hashtable *table,
3853 + struct index_hashtable_entry *entry)
3855 + struct index_hashtable_entry *existing_entry;
3857 + spin_lock_bh(&table->lock);
3858 + hlist_del_init_rcu(&entry->index_hash);
3859 + spin_unlock_bh(&table->lock);
3861 + rcu_read_lock_bh();
3863 +search_unused_slot:
3864 + /* First we try to find an unused slot, randomly, while unlocked. */
3865 + entry->index = (__force __le32)get_random_u32();
3866 + hlist_for_each_entry_rcu_bh(existing_entry,
3867 + index_bucket(table, entry->index),
3869 + if (existing_entry->index == entry->index)
3870 + /* If it's already in use, we continue searching. */
3871 + goto search_unused_slot;
3874 + /* Once we've found an unused slot, we lock it, and then double-check
3875 + * that nobody else stole it from us.
3877 + spin_lock_bh(&table->lock);
3878 + hlist_for_each_entry_rcu_bh(existing_entry,
3879 + index_bucket(table, entry->index),
3881 + if (existing_entry->index == entry->index) {
3882 + spin_unlock_bh(&table->lock);
3883 + /* If it was stolen, we start over. */
3884 + goto search_unused_slot;
3887 + /* Otherwise, we know we have it exclusively (since we're locked),
3890 + hlist_add_head_rcu(&entry->index_hash,
3891 + index_bucket(table, entry->index));
3892 + spin_unlock_bh(&table->lock);
3894 + rcu_read_unlock_bh();
3896 + return entry->index;
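The insertion strategy above, guess a random index while unlocked, then take the lock and re-check that nobody raced us into the same slot before committing, can be sketched in userspace Python like this (a dict stands in for the hash table; names are illustrative):

```python
import random
import threading

_lock = threading.Lock()

def insert_random_index(table: dict, entry) -> int:
    """Pick an unused random 32-bit index for entry and commit it."""
    while True:
        index = random.getrandbits(32)     # unlocked random guess
        if index in table:                 # already in use: keep searching
            continue
        with _lock:                        # lock, then double-check the slot
            if index in table:             # stolen since our unlocked check
                continue                   # releases the lock and retries
            table[index] = entry
            return index
```

The unlocked first pass keeps the common case cheap; the locked re-check makes the commit safe against concurrent inserters, mirroring the `search_unused_slot` retry loop.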
3899 +bool wg_index_hashtable_replace(struct index_hashtable *table,
3900 + struct index_hashtable_entry *old,
3901 + struct index_hashtable_entry *new)
3903 + if (unlikely(hlist_unhashed(&old->index_hash)))
3905 + spin_lock_bh(&table->lock);
3906 + new->index = old->index;
3907 + hlist_replace_rcu(&old->index_hash, &new->index_hash);
3909 + /* Calling init here NULLs out index_hash, and in fact after this
3910 + * function returns, it's theoretically possible for this to get
3911 + * reinserted elsewhere. That means the RCU lookup below might either
3912 + * terminate early or jump between buckets, in which case the packet
3913 + * simply gets dropped, which isn't terrible.
3915 + INIT_HLIST_NODE(&old->index_hash);
3916 + spin_unlock_bh(&table->lock);
3920 +void wg_index_hashtable_remove(struct index_hashtable *table,
3921 + struct index_hashtable_entry *entry)
3923 + spin_lock_bh(&table->lock);
3924 + hlist_del_init_rcu(&entry->index_hash);
3925 + spin_unlock_bh(&table->lock);
3928 +/* Returns a strong reference to an entry->peer */
3929 +struct index_hashtable_entry *
3930 +wg_index_hashtable_lookup(struct index_hashtable *table,
3931 + const enum index_hashtable_type type_mask,
3932 + const __le32 index, struct wg_peer **peer)
3934 + struct index_hashtable_entry *iter_entry, *entry = NULL;
3936 + rcu_read_lock_bh();
3937 + hlist_for_each_entry_rcu_bh(iter_entry, index_bucket(table, index),
3939 + if (iter_entry->index == index) {
3940 + if (likely(iter_entry->type & type_mask))
3941 + entry = iter_entry;
3945 + if (likely(entry)) {
3946 + entry->peer = wg_peer_get_maybe_zero(entry->peer);
3947 + if (likely(entry->peer))
3948 + *peer = entry->peer;
3952 + rcu_read_unlock_bh();
3956 +++ b/drivers/net/wireguard/peerlookup.h
3958 +/* SPDX-License-Identifier: GPL-2.0 */
3960 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
3963 +#ifndef _WG_PEERLOOKUP_H
3964 +#define _WG_PEERLOOKUP_H
3966 +#include "messages.h"
3968 +#include <linux/hashtable.h>
3969 +#include <linux/mutex.h>
3970 +#include <linux/siphash.h>
3974 +struct pubkey_hashtable {
3975 + /* TODO: move to rhashtable */
3976 + DECLARE_HASHTABLE(hashtable, 11);
3977 + siphash_key_t key;
3978 + struct mutex lock;
3981 +struct pubkey_hashtable *wg_pubkey_hashtable_alloc(void);
3982 +void wg_pubkey_hashtable_add(struct pubkey_hashtable *table,
3983 + struct wg_peer *peer);
3984 +void wg_pubkey_hashtable_remove(struct pubkey_hashtable *table,
3985 + struct wg_peer *peer);
3987 +wg_pubkey_hashtable_lookup(struct pubkey_hashtable *table,
3988 + const u8 pubkey[NOISE_PUBLIC_KEY_LEN]);
3990 +struct index_hashtable {
3991 + /* TODO: move to rhashtable */
3992 + DECLARE_HASHTABLE(hashtable, 13);
3996 +enum index_hashtable_type {
3997 + INDEX_HASHTABLE_HANDSHAKE = 1U << 0,
3998 + INDEX_HASHTABLE_KEYPAIR = 1U << 1
4001 +struct index_hashtable_entry {
4002 + struct wg_peer *peer;
4003 + struct hlist_node index_hash;
4004 + enum index_hashtable_type type;
4008 +struct index_hashtable *wg_index_hashtable_alloc(void);
4009 +__le32 wg_index_hashtable_insert(struct index_hashtable *table,
4010 + struct index_hashtable_entry *entry);
4011 +bool wg_index_hashtable_replace(struct index_hashtable *table,
4012 + struct index_hashtable_entry *old,
4013 + struct index_hashtable_entry *new);
4014 +void wg_index_hashtable_remove(struct index_hashtable *table,
4015 + struct index_hashtable_entry *entry);
4016 +struct index_hashtable_entry *
4017 +wg_index_hashtable_lookup(struct index_hashtable *table,
4018 + const enum index_hashtable_type type_mask,
4019 + const __le32 index, struct wg_peer **peer);
4021 +#endif /* _WG_PEERLOOKUP_H */
4023 +++ b/drivers/net/wireguard/queueing.c
4025 +// SPDX-License-Identifier: GPL-2.0
4027 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
4030 +#include "queueing.h"
4032 +struct multicore_worker __percpu *
4033 +wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
4036 + struct multicore_worker __percpu *worker =
4037 + alloc_percpu(struct multicore_worker);
4042 + for_each_possible_cpu(cpu) {
4043 + per_cpu_ptr(worker, cpu)->ptr = ptr;
4044 + INIT_WORK(&per_cpu_ptr(worker, cpu)->work, function);
4049 +int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
4050 + bool multicore, unsigned int len)
4054 + memset(queue, 0, sizeof(*queue));
4055 + ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL);
4060 + queue->worker = wg_packet_percpu_multicore_worker_alloc(
4062 + if (!queue->worker)
4065 + INIT_WORK(&queue->work, function);
4071 +void wg_packet_queue_free(struct crypt_queue *queue, bool multicore)
4074 + free_percpu(queue->worker);
4075 + WARN_ON(!__ptr_ring_empty(&queue->ring));
4076 + ptr_ring_cleanup(&queue->ring, NULL);
4079 +++ b/drivers/net/wireguard/queueing.h
4081 +/* SPDX-License-Identifier: GPL-2.0 */
4083 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
4086 +#ifndef _WG_QUEUEING_H
4087 +#define _WG_QUEUEING_H
4090 +#include <linux/types.h>
4091 +#include <linux/skbuff.h>
4092 +#include <linux/ip.h>
4093 +#include <linux/ipv6.h>
4097 +struct multicore_worker;
4098 +struct crypt_queue;
4101 +/* queueing.c APIs: */
4102 +int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
4103 + bool multicore, unsigned int len);
4104 +void wg_packet_queue_free(struct crypt_queue *queue, bool multicore);
4105 +struct multicore_worker __percpu *
4106 +wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr);
4108 +/* receive.c APIs: */
4109 +void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb);
4110 +void wg_packet_handshake_receive_worker(struct work_struct *work);
4111 +/* NAPI poll function: */
4112 +int wg_packet_rx_poll(struct napi_struct *napi, int budget);
4113 +/* Workqueue worker: */
4114 +void wg_packet_decrypt_worker(struct work_struct *work);
4117 +void wg_packet_send_queued_handshake_initiation(struct wg_peer *peer,
4119 +void wg_packet_send_handshake_response(struct wg_peer *peer);
4120 +void wg_packet_send_handshake_cookie(struct wg_device *wg,
4121 + struct sk_buff *initiating_skb,
4122 + __le32 sender_index);
4123 +void wg_packet_send_keepalive(struct wg_peer *peer);
4124 +void wg_packet_purge_staged_packets(struct wg_peer *peer);
4125 +void wg_packet_send_staged_packets(struct wg_peer *peer);
4126 +/* Workqueue workers: */
4127 +void wg_packet_handshake_send_worker(struct work_struct *work);
4128 +void wg_packet_tx_worker(struct work_struct *work);
4129 +void wg_packet_encrypt_worker(struct work_struct *work);
4131 +enum packet_state {
4132 + PACKET_STATE_UNCRYPTED,
4133 + PACKET_STATE_CRYPTED,
4139 + struct noise_keypair *keypair;
4145 +#define PACKET_CB(skb) ((struct packet_cb *)((skb)->cb))
4146 +#define PACKET_PEER(skb) (PACKET_CB(skb)->keypair->entry.peer)
4148 +/* Returns either the correct skb->protocol value, or 0 if invalid. */
4149 +static inline __be16 wg_skb_examine_untrusted_ip_hdr(struct sk_buff *skb)
4151 + if (skb_network_header(skb) >= skb->head &&
4152 + (skb_network_header(skb) + sizeof(struct iphdr)) <=
4153 + skb_tail_pointer(skb) &&
4154 + ip_hdr(skb)->version == 4)
4155 + return htons(ETH_P_IP);
4156 + if (skb_network_header(skb) >= skb->head &&
4157 + (skb_network_header(skb) + sizeof(struct ipv6hdr)) <=
4158 + skb_tail_pointer(skb) &&
4159 + ipv6_hdr(skb)->version == 6)
4160 + return htons(ETH_P_IPV6);
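`wg_skb_examine_untrusted_ip_hdr()` decides the L3 protocol of a freshly decrypted packet purely from bounds-checked header bytes, never trusting any length field inside the packet itself. The same logic in a standalone Python sketch (constants are the standard ethertypes; header sizes assume no options, as the kernel check only needs the fixed header):

```python
ETH_P_IP, ETH_P_IPV6 = 0x0800, 0x86DD
IPV4_HDR_MIN, IPV6_HDR_LEN = 20, 40  # sizeof(struct iphdr/ipv6hdr)

def examine_untrusted_ip_hdr(payload: bytes) -> int:
    """Return the ethertype for the payload, or 0 if it is invalid."""
    # The IP version lives in the high nibble of the first byte for
    # both IPv4 and IPv6; check bounds before reading it.
    if len(payload) >= IPV4_HDR_MIN and payload[0] >> 4 == 4:
        return ETH_P_IP
    if len(payload) >= IPV6_HDR_LEN and payload[0] >> 4 == 6:
        return ETH_P_IPV6
    return 0
```

Anything too short to contain even a fixed header, or carrying an unknown version nibble, is rejected as invalid.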
4164 +static inline void wg_reset_packet(struct sk_buff *skb)
4166 + const int pfmemalloc = skb->pfmemalloc;
4168 + skb_scrub_packet(skb, true);
4169 + memset(&skb->headers_start, 0,
4170 + offsetof(struct sk_buff, headers_end) -
4171 + offsetof(struct sk_buff, headers_start));
4172 + skb->pfmemalloc = pfmemalloc;
4173 + skb->queue_mapping = 0;
4178 +#ifdef CONFIG_NET_SCHED
4179 + skb->tc_index = 0;
4181 + skb_reset_redirect(skb);
4182 + skb->hdr_len = skb_headroom(skb);
4183 + skb_reset_mac_header(skb);
4184 + skb_reset_network_header(skb);
4185 + skb_reset_transport_header(skb);
4186 + skb_probe_transport_header(skb);
4187 + skb_reset_inner_headers(skb);
4190 +static inline int wg_cpumask_choose_online(int *stored_cpu, unsigned int id)
4192 + unsigned int cpu = *stored_cpu, cpu_index, i;
4194 + if (unlikely(cpu == nr_cpumask_bits ||
4195 + !cpumask_test_cpu(cpu, cpu_online_mask))) {
4196 + cpu_index = id % cpumask_weight(cpu_online_mask);
4197 + cpu = cpumask_first(cpu_online_mask);
4198 + for (i = 0; i < cpu_index; ++i)
4199 + cpu = cpumask_next(cpu, cpu_online_mask);
4200 + *stored_cpu = cpu;
4205 +/* This function is racy, in the sense that next is unlocked, so it could return
4206 + * the same CPU twice. A race-free version of this would be to instead store an
4207 + * atomic sequence number, do an increment-and-return, and then iterate through
4208 + * every possible CPU until we get to that index -- choose_cpu. However that's
4209 + * a bit slower, and it doesn't seem like this potential race actually
4210 + * introduces any performance loss, so we live with it.
4212 +static inline int wg_cpumask_next_online(int *next)
4216 + while (unlikely(!cpumask_test_cpu(cpu, cpu_online_mask)))
4217 + cpu = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
4218 + *next = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
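The round-robin walk in wg_cpumask_next_online() can be illustrated with a small userspace sketch. This is an assumption-laden model, not the kernel code: the online set is a single 64-bit mask (so at most 64 CPUs), the function name is made up, and the mask is assumed non-empty.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical userspace sketch of wg_cpumask_next_online(): return the
 * remembered CPU if it is still online (scanning forward with wraparound
 * if not), and store its online successor back into *next for the next
 * caller. Assumes online_mask has at least one bit set. */
static int next_online_cpu(int *next, uint64_t online_mask)
{
	int cpu = *next;

	/* If the remembered CPU went offline, scan forward, wrapping. */
	while (!(online_mask & (UINT64_C(1) << cpu)))
		cpu = (cpu + 1) % 64;
	/* Precompute the successor for the next call, also wrapping. */
	int succ = (cpu + 1) % 64;
	while (!(online_mask & (UINT64_C(1) << succ)))
		succ = (succ + 1) % 64;
	*next = succ;
	return cpu;
}
```

As the comment above notes, the real version tolerates a race on *next between concurrent callers; in this single-threaded sketch the distribution is exactly round-robin.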
4222 +static inline int wg_queue_enqueue_per_device_and_peer(
4223 + struct crypt_queue *device_queue, struct crypt_queue *peer_queue,
4224 + struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
4228 + atomic_set_release(&PACKET_CB(skb)->state, PACKET_STATE_UNCRYPTED);
4229 + /* We first queue this up for the peer ingestion, but the consumer
4230 + * will wait for the state to change to CRYPTED or DEAD before taking steps with it.
4232 + if (unlikely(ptr_ring_produce_bh(&peer_queue->ring, skb)))
4234 + /* Then we queue it up in the device queue, which consumes the
4235 + * packet as soon as it can.
4237 + cpu = wg_cpumask_next_online(next_cpu);
4238 + if (unlikely(ptr_ring_produce_bh(&device_queue->ring, skb)))
4240 + queue_work_on(cpu, wq, &per_cpu_ptr(device_queue->worker, cpu)->work);
4244 +static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue,
4245 + struct sk_buff *skb,
4246 + enum packet_state state)
4248 + /* We take a reference, because as soon as we call atomic_set, the
4249 + * peer can be freed from below us.
4251 + struct wg_peer *peer = wg_peer_get(PACKET_PEER(skb));
4253 + atomic_set_release(&PACKET_CB(skb)->state, state);
4254 + queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu,
4255 + peer->internal_id),
4256 + peer->device->packet_crypt_wq, &queue->work);
4257 + wg_peer_put(peer);
4260 +static inline void wg_queue_enqueue_per_peer_napi(struct sk_buff *skb,
4261 + enum packet_state state)
4263 + /* We take a reference, because as soon as we call atomic_set, the
4264 + * peer can be freed from below us.
4266 + struct wg_peer *peer = wg_peer_get(PACKET_PEER(skb));
4268 + atomic_set_release(&PACKET_CB(skb)->state, state);
4269 + napi_schedule(&peer->napi);
4270 + wg_peer_put(peer);
4274 +bool wg_packet_counter_selftest(void);
4277 +#endif /* _WG_QUEUEING_H */
4279 +++ b/drivers/net/wireguard/ratelimiter.c
4281 +// SPDX-License-Identifier: GPL-2.0
4283 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
4286 +#include "ratelimiter.h"
4287 +#include <linux/siphash.h>
4288 +#include <linux/mm.h>
4289 +#include <linux/slab.h>
4290 +#include <net/ip.h>
4292 +static struct kmem_cache *entry_cache;
4293 +static hsiphash_key_t key;
4294 +static spinlock_t table_lock = __SPIN_LOCK_UNLOCKED("ratelimiter_table_lock");
4295 +static DEFINE_MUTEX(init_lock);
4296 +static u64 init_refcnt; /* Protected by init_lock, hence not atomic. */
4297 +static atomic_t total_entries = ATOMIC_INIT(0);
4298 +static unsigned int max_entries, table_size;
4299 +static void wg_ratelimiter_gc_entries(struct work_struct *);
4300 +static DECLARE_DEFERRABLE_WORK(gc_work, wg_ratelimiter_gc_entries);
4301 +static struct hlist_head *table_v4;
4302 +#if IS_ENABLED(CONFIG_IPV6)
4303 +static struct hlist_head *table_v6;
4306 +struct ratelimiter_entry {
4307 + u64 last_time_ns, tokens, ip;
4310 + struct hlist_node hash;
4311 + struct rcu_head rcu;
4315 + PACKETS_PER_SECOND = 20,
4316 + PACKETS_BURSTABLE = 5,
4317 + PACKET_COST = NSEC_PER_SEC / PACKETS_PER_SECOND,
4318 + TOKEN_MAX = PACKET_COST * PACKETS_BURSTABLE
4321 +static void entry_free(struct rcu_head *rcu)
4323 + kmem_cache_free(entry_cache,
4324 + container_of(rcu, struct ratelimiter_entry, rcu));
4325 + atomic_dec(&total_entries);
4328 +static void entry_uninit(struct ratelimiter_entry *entry)
4330 + hlist_del_rcu(&entry->hash);
4331 + call_rcu(&entry->rcu, entry_free);
4334 +/* Calling this function with a NULL work uninits all entries. */
4335 +static void wg_ratelimiter_gc_entries(struct work_struct *work)
4337 + const u64 now = ktime_get_coarse_boottime_ns();
4338 + struct ratelimiter_entry *entry;
4339 + struct hlist_node *temp;
4342 + for (i = 0; i < table_size; ++i) {
4343 + spin_lock(&table_lock);
4344 + hlist_for_each_entry_safe(entry, temp, &table_v4[i], hash) {
4345 + if (unlikely(!work) ||
4346 + now - entry->last_time_ns > NSEC_PER_SEC)
4347 + entry_uninit(entry);
4349 +#if IS_ENABLED(CONFIG_IPV6)
4350 + hlist_for_each_entry_safe(entry, temp, &table_v6[i], hash) {
4351 + if (unlikely(!work) ||
4352 + now - entry->last_time_ns > NSEC_PER_SEC)
4353 + entry_uninit(entry);
4356 + spin_unlock(&table_lock);
4361 + queue_delayed_work(system_power_efficient_wq, &gc_work, HZ);
4364 +bool wg_ratelimiter_allow(struct sk_buff *skb, struct net *net)
4366 + /* We only take the bottom half of the net pointer, so that we can hash
4367 + * 3 words in the end. This way, siphash's len param fits into the final
4368 + * u32, and we don't incur an extra round.
4370 + const u32 net_word = (unsigned long)net;
4371 + struct ratelimiter_entry *entry;
4372 + struct hlist_head *bucket;
4375 + if (skb->protocol == htons(ETH_P_IP)) {
4376 + ip = (u64 __force)ip_hdr(skb)->saddr;
4377 + bucket = &table_v4[hsiphash_2u32(net_word, ip, &key) &
4378 + (table_size - 1)];
4380 +#if IS_ENABLED(CONFIG_IPV6)
4381 + else if (skb->protocol == htons(ETH_P_IPV6)) {
4382 + /* Only use 64 bits, so as to ratelimit the whole /64. */
4383 + memcpy(&ip, &ipv6_hdr(skb)->saddr, sizeof(ip));
4384 + bucket = &table_v6[hsiphash_3u32(net_word, ip >> 32, ip, &key) &
4385 + (table_size - 1)];
4391 + hlist_for_each_entry_rcu(entry, bucket, hash) {
4392 + if (entry->net == net && entry->ip == ip) {
4395 + /* Quasi-inspired by nft_limit.c, but this is actually a
4396 + * slightly different algorithm. Namely, we incorporate
4397 + * the burst as part of the maximum tokens, rather than
4398 + * as part of the rate.
4400 + spin_lock(&entry->lock);
4401 + now = ktime_get_coarse_boottime_ns();
4402 + tokens = min_t(u64, TOKEN_MAX,
4403 + entry->tokens + now -
4404 + entry->last_time_ns);
4405 + entry->last_time_ns = now;
4406 + ret = tokens >= PACKET_COST;
4407 + entry->tokens = ret ? tokens - PACKET_COST : tokens;
4408 + spin_unlock(&entry->lock);
4409 + rcu_read_unlock();
4413 + rcu_read_unlock();
4415 + if (atomic_inc_return(&total_entries) > max_entries)
4418 + entry = kmem_cache_alloc(entry_cache, GFP_KERNEL);
4419 + if (unlikely(!entry))
4424 + INIT_HLIST_NODE(&entry->hash);
4425 + spin_lock_init(&entry->lock);
4426 + entry->last_time_ns = ktime_get_coarse_boottime_ns();
4427 + entry->tokens = TOKEN_MAX - PACKET_COST;
4428 + spin_lock(&table_lock);
4429 + hlist_add_head_rcu(&entry->hash, bucket);
4430 + spin_unlock(&table_lock);
4434 + atomic_dec(&total_entries);
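The token-bucket arithmetic in wg_ratelimiter_allow() can be sketched in isolation. The constants match those above; everything else is an assumption for testability: time is passed in explicitly as nanoseconds (the kernel uses ktime_get_coarse_boottime_ns()), and the struct and function names are invented. Note how the burst is folded into the token ceiling (TOKEN_MAX) rather than into the rate.

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC       1000000000ULL
#define PACKETS_PER_SECOND 20
#define PACKETS_BURSTABLE  5
#define PACKET_COST        (NSEC_PER_SEC / PACKETS_PER_SECOND)
#define TOKEN_MAX          (PACKET_COST * PACKETS_BURSTABLE)

/* Hypothetical userspace model of one ratelimiter_entry's bucket. */
struct bucket {
	uint64_t last_time_ns;
	uint64_t tokens;
};

/* Accrue one token-nanosecond per elapsed nanosecond, capped at
 * TOKEN_MAX, then spend PACKET_COST if affordable. Returns 1 when the
 * packet is allowed, 0 when it should be dropped. */
static int bucket_allow(struct bucket *b, uint64_t now_ns)
{
	uint64_t tokens = b->tokens + (now_ns - b->last_time_ns);

	if (tokens > TOKEN_MAX)
		tokens = TOKEN_MAX;
	b->last_time_ns = now_ns;
	if (tokens >= PACKET_COST) {
		b->tokens = tokens - PACKET_COST;
		return 1;
	}
	b->tokens = tokens;
	return 0;
}
```

With these constants a fresh bucket admits a burst of 5 packets instantly, then exactly one packet per 50 ms thereafter.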
4438 +int wg_ratelimiter_init(void)
4440 + mutex_lock(&init_lock);
4441 + if (++init_refcnt != 1)
4444 + entry_cache = KMEM_CACHE(ratelimiter_entry, 0);
4448 + /* xt_hashlimit.c uses a slightly different algorithm for ratelimiting,
4449 + * but what it shares in common is that it uses a massive hashtable. So,
4450 + * we borrow their wisdom about good table sizes on different systems
4451 + * dependent on RAM. This calculation here comes from there.
4453 + table_size = (totalram_pages() > (1U << 30) / PAGE_SIZE) ? 8192 :
4454 + max_t(unsigned long, 16, roundup_pow_of_two(
4455 + (totalram_pages() << PAGE_SHIFT) /
4456 + (1U << 14) / sizeof(struct hlist_head)));
4457 + max_entries = table_size * 8;
4459 + table_v4 = kvzalloc(table_size * sizeof(*table_v4), GFP_KERNEL);
4460 + if (unlikely(!table_v4))
4461 + goto err_kmemcache;
4463 +#if IS_ENABLED(CONFIG_IPV6)
4464 + table_v6 = kvzalloc(table_size * sizeof(*table_v6), GFP_KERNEL);
4465 + if (unlikely(!table_v6)) {
4467 + goto err_kmemcache;
4471 + queue_delayed_work(system_power_efficient_wq, &gc_work, HZ);
4472 + get_random_bytes(&key, sizeof(key));
4474 + mutex_unlock(&init_lock);
4478 + kmem_cache_destroy(entry_cache);
4481 + mutex_unlock(&init_lock);
4485 +void wg_ratelimiter_uninit(void)
4487 + mutex_lock(&init_lock);
4488 + if (!init_refcnt || --init_refcnt)
4491 + cancel_delayed_work_sync(&gc_work);
4492 + wg_ratelimiter_gc_entries(NULL);
4495 +#if IS_ENABLED(CONFIG_IPV6)
4498 + kmem_cache_destroy(entry_cache);
4500 + mutex_unlock(&init_lock);
4503 +#include "selftest/ratelimiter.c"
4505 +++ b/drivers/net/wireguard/ratelimiter.h
4507 +/* SPDX-License-Identifier: GPL-2.0 */
4509 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
4512 +#ifndef _WG_RATELIMITER_H
4513 +#define _WG_RATELIMITER_H
4515 +#include <linux/skbuff.h>
4517 +int wg_ratelimiter_init(void);
4518 +void wg_ratelimiter_uninit(void);
4519 +bool wg_ratelimiter_allow(struct sk_buff *skb, struct net *net);
4522 +bool wg_ratelimiter_selftest(void);
4525 +#endif /* _WG_RATELIMITER_H */
4527 +++ b/drivers/net/wireguard/receive.c
4529 +// SPDX-License-Identifier: GPL-2.0
4531 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
4534 +#include "queueing.h"
4535 +#include "device.h"
4537 +#include "timers.h"
4538 +#include "messages.h"
4539 +#include "cookie.h"
4540 +#include "socket.h"
4542 +#include <linux/ip.h>
4543 +#include <linux/ipv6.h>
4544 +#include <linux/udp.h>
4545 +#include <net/ip_tunnels.h>
4547 +/* Must be called with bh disabled. */
4548 +static void update_rx_stats(struct wg_peer *peer, size_t len)
4550 + struct pcpu_sw_netstats *tstats =
4551 + get_cpu_ptr(peer->device->dev->tstats);
4553 + u64_stats_update_begin(&tstats->syncp);
4554 + ++tstats->rx_packets;
4555 + tstats->rx_bytes += len;
4556 + peer->rx_bytes += len;
4557 + u64_stats_update_end(&tstats->syncp);
4558 + put_cpu_ptr(tstats);
4561 +#define SKB_TYPE_LE32(skb) (((struct message_header *)(skb)->data)->type)
4563 +static size_t validate_header_len(struct sk_buff *skb)
4565 + if (unlikely(skb->len < sizeof(struct message_header)))
4567 + if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_DATA) &&
4568 + skb->len >= MESSAGE_MINIMUM_LENGTH)
4569 + return sizeof(struct message_data);
4570 + if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION) &&
4571 + skb->len == sizeof(struct message_handshake_initiation))
4572 + return sizeof(struct message_handshake_initiation);
4573 + if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE) &&
4574 + skb->len == sizeof(struct message_handshake_response))
4575 + return sizeof(struct message_handshake_response);
4576 + if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE) &&
4577 + skb->len == sizeof(struct message_handshake_cookie))
4578 + return sizeof(struct message_handshake_cookie);
4582 +static int prepare_skb_header(struct sk_buff *skb, struct wg_device *wg)
4584 + size_t data_offset, data_len, header_len;
4585 + struct udphdr *udp;
4587 + if (unlikely(wg_skb_examine_untrusted_ip_hdr(skb) != skb->protocol ||
4588 + skb_transport_header(skb) < skb->head ||
4589 + (skb_transport_header(skb) + sizeof(struct udphdr)) >
4590 + skb_tail_pointer(skb)))
4591 + return -EINVAL; /* Bogus IP header */
4592 + udp = udp_hdr(skb);
4593 + data_offset = (u8 *)udp - skb->data;
4594 + if (unlikely(data_offset > U16_MAX ||
4595 + data_offset + sizeof(struct udphdr) > skb->len))
4596 + /* Packet has offset at impossible location or isn't big enough
4597 + * to have UDP fields.
4600 + data_len = ntohs(udp->len);
4601 + if (unlikely(data_len < sizeof(struct udphdr) ||
4602 + data_len > skb->len - data_offset))
4603 + /* UDP packet is reporting too small of a size or lying about its size. */
4607 + data_len -= sizeof(struct udphdr);
4608 + data_offset = (u8 *)udp + sizeof(struct udphdr) - skb->data;
4609 + if (unlikely(!pskb_may_pull(skb,
4610 + data_offset + sizeof(struct message_header)) ||
4611 + pskb_trim(skb, data_len + data_offset) < 0))
4613 + skb_pull(skb, data_offset);
4614 + if (unlikely(skb->len != data_len))
4615 + /* Final len does not agree with calculated len */
4617 + header_len = validate_header_len(skb);
4618 + if (unlikely(!header_len))
4620 + __skb_push(skb, data_offset);
4621 + if (unlikely(!pskb_may_pull(skb, data_offset + header_len)))
4623 + __skb_pull(skb, data_offset);
4627 +static void wg_receive_handshake_packet(struct wg_device *wg,
4628 + struct sk_buff *skb)
4630 + enum cookie_mac_state mac_state;
4631 + struct wg_peer *peer = NULL;
4632 + /* This is global, so that our load calculation applies to the whole
4633 + * system. We don't care about races with it at all.
4635 + static u64 last_under_load;
4636 + bool packet_needs_cookie;
4639 + if (SKB_TYPE_LE32(skb) == cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE)) {
4640 + net_dbg_skb_ratelimited("%s: Receiving cookie response from %pISpfsc\n",
4641 + wg->dev->name, skb);
4642 + wg_cookie_message_consume(
4643 + (struct message_handshake_cookie *)skb->data, wg);
4647 + under_load = skb_queue_len(&wg->incoming_handshakes) >=
4648 + MAX_QUEUED_INCOMING_HANDSHAKES / 8;
4650 + last_under_load = ktime_get_coarse_boottime_ns();
4651 + else if (last_under_load)
4652 + under_load = !wg_birthdate_has_expired(last_under_load, 1);
4653 + mac_state = wg_cookie_validate_packet(&wg->cookie_checker, skb,
4655 + if ((under_load && mac_state == VALID_MAC_WITH_COOKIE) ||
4656 + (!under_load && mac_state == VALID_MAC_BUT_NO_COOKIE)) {
4657 + packet_needs_cookie = false;
4658 + } else if (under_load && mac_state == VALID_MAC_BUT_NO_COOKIE) {
4659 + packet_needs_cookie = true;
4661 + net_dbg_skb_ratelimited("%s: Invalid MAC of handshake, dropping packet from %pISpfsc\n",
4662 + wg->dev->name, skb);
4666 + switch (SKB_TYPE_LE32(skb)) {
4667 + case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION): {
4668 + struct message_handshake_initiation *message =
4669 + (struct message_handshake_initiation *)skb->data;
4671 + if (packet_needs_cookie) {
4672 + wg_packet_send_handshake_cookie(wg, skb,
4673 + message->sender_index);
4676 + peer = wg_noise_handshake_consume_initiation(message, wg);
4677 + if (unlikely(!peer)) {
4678 + net_dbg_skb_ratelimited("%s: Invalid handshake initiation from %pISpfsc\n",
4679 + wg->dev->name, skb);
4682 + wg_socket_set_peer_endpoint_from_skb(peer, skb);
4683 + net_dbg_ratelimited("%s: Receiving handshake initiation from peer %llu (%pISpfsc)\n",
4684 + wg->dev->name, peer->internal_id,
4685 + &peer->endpoint.addr);
4686 + wg_packet_send_handshake_response(peer);
4689 + case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE): {
4690 + struct message_handshake_response *message =
4691 + (struct message_handshake_response *)skb->data;
4693 + if (packet_needs_cookie) {
4694 + wg_packet_send_handshake_cookie(wg, skb,
4695 + message->sender_index);
4698 + peer = wg_noise_handshake_consume_response(message, wg);
4699 + if (unlikely(!peer)) {
4700 + net_dbg_skb_ratelimited("%s: Invalid handshake response from %pISpfsc\n",
4701 + wg->dev->name, skb);
4704 + wg_socket_set_peer_endpoint_from_skb(peer, skb);
4705 + net_dbg_ratelimited("%s: Receiving handshake response from peer %llu (%pISpfsc)\n",
4706 + wg->dev->name, peer->internal_id,
4707 + &peer->endpoint.addr);
4708 + if (wg_noise_handshake_begin_session(&peer->handshake,
4709 + &peer->keypairs)) {
4710 + wg_timers_session_derived(peer);
4711 + wg_timers_handshake_complete(peer);
4712 + /* Calling this function will either send any existing
4713 + * packets in the queue and not send a keepalive, which
4714 + * is the best case, or, if there's nothing in the
4715 + * queue, it will send a keepalive, in order to give
4716 + * immediate confirmation of the session.
4718 + wg_packet_send_keepalive(peer);
4724 + if (unlikely(!peer)) {
4725 + WARN(1, "Somehow a wrong type of packet wound up in the handshake queue!\n");
4729 + local_bh_disable();
4730 + update_rx_stats(peer, skb->len);
4731 + local_bh_enable();
4733 + wg_timers_any_authenticated_packet_received(peer);
4734 + wg_timers_any_authenticated_packet_traversal(peer);
4735 + wg_peer_put(peer);
4738 +void wg_packet_handshake_receive_worker(struct work_struct *work)
4740 + struct wg_device *wg = container_of(work, struct multicore_worker,
4742 + struct sk_buff *skb;
4744 + while ((skb = skb_dequeue(&wg->incoming_handshakes)) != NULL) {
4745 + wg_receive_handshake_packet(wg, skb);
4746 + dev_kfree_skb(skb);
4751 +static void keep_key_fresh(struct wg_peer *peer)
4753 + struct noise_keypair *keypair;
4754 + bool send = false;
4756 + if (peer->sent_lastminute_handshake)
4759 + rcu_read_lock_bh();
4760 + keypair = rcu_dereference_bh(peer->keypairs.current_keypair);
4761 + if (likely(keypair && READ_ONCE(keypair->sending.is_valid)) &&
4762 + keypair->i_am_the_initiator &&
4763 + unlikely(wg_birthdate_has_expired(keypair->sending.birthdate,
4764 + REJECT_AFTER_TIME - KEEPALIVE_TIMEOUT - REKEY_TIMEOUT)))
4766 + rcu_read_unlock_bh();
4769 + peer->sent_lastminute_handshake = true;
4770 + wg_packet_send_queued_handshake_initiation(peer, false);
4774 +static bool decrypt_packet(struct sk_buff *skb, struct noise_symmetric_key *key)
4776 + struct scatterlist sg[MAX_SKB_FRAGS + 8];
4777 + struct sk_buff *trailer;
4778 + unsigned int offset;
4781 + if (unlikely(!key))
4784 + if (unlikely(!READ_ONCE(key->is_valid) ||
4785 + wg_birthdate_has_expired(key->birthdate, REJECT_AFTER_TIME) ||
4786 + key->counter.receive.counter >= REJECT_AFTER_MESSAGES)) {
4787 + WRITE_ONCE(key->is_valid, false);
4791 + PACKET_CB(skb)->nonce =
4792 + le64_to_cpu(((struct message_data *)skb->data)->counter);
4794 + /* We ensure that the network header is part of the packet before we
4795 + * call skb_cow_data, so that no data is removed from the skb and we
4796 + * can later extract the original endpoint.
4798 + offset = skb->data - skb_network_header(skb);
4799 + skb_push(skb, offset);
4800 + num_frags = skb_cow_data(skb, 0, &trailer);
4801 + offset += sizeof(struct message_data);
4802 + skb_pull(skb, offset);
4803 + if (unlikely(num_frags < 0 || num_frags > ARRAY_SIZE(sg)))
4806 + sg_init_table(sg, num_frags);
4807 + if (skb_to_sgvec(skb, sg, 0, skb->len) <= 0)
4810 + if (!chacha20poly1305_decrypt_sg_inplace(sg, skb->len, NULL, 0,
4811 + PACKET_CB(skb)->nonce,
4815 + /* Another ugly situation of pushing and pulling the header so as to
4816 + * keep endpoint information intact.
4818 + skb_push(skb, offset);
4819 + if (pskb_trim(skb, skb->len - noise_encrypted_len(0)))
4821 + skb_pull(skb, offset);
4826 +/* This is RFC6479, a replay detection bitmap algorithm that avoids bitshifts */
4827 +static bool counter_validate(union noise_counter *counter, u64 their_counter)
4829 + unsigned long index, index_current, top, i;
4832 + spin_lock_bh(&counter->receive.lock);
4834 + if (unlikely(counter->receive.counter >= REJECT_AFTER_MESSAGES + 1 ||
4835 + their_counter >= REJECT_AFTER_MESSAGES))
4840 + if (unlikely((COUNTER_WINDOW_SIZE + their_counter) <
4841 + counter->receive.counter))
4844 + index = their_counter >> ilog2(BITS_PER_LONG);
4846 + if (likely(their_counter > counter->receive.counter)) {
4847 + index_current = counter->receive.counter >> ilog2(BITS_PER_LONG);
4848 + top = min_t(unsigned long, index - index_current,
4849 + COUNTER_BITS_TOTAL / BITS_PER_LONG);
4850 + for (i = 1; i <= top; ++i)
4851 + counter->receive.backtrack[(i + index_current) &
4852 + ((COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1)] = 0;
4853 + counter->receive.counter = their_counter;
4856 + index &= (COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1;
4857 + ret = !test_and_set_bit(their_counter & (BITS_PER_LONG - 1),
4858 + &counter->receive.backtrack[index]);
4861 + spin_unlock_bh(&counter->receive.lock);
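The RFC 6479 scheme in counter_validate() can be modeled in userspace. This sketch keeps the same shape — a ring of bitmap words holds a sliding window of seen nonces, so advancing the window zeroes whole words instead of bit-shifting — but the window size, word size, and all names are illustrative assumptions, and the REJECT_AFTER_MESSAGES cap and locking are omitted.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define WORD_BITS   64
#define RING_WORDS  16                      /* must be a power of two */
#define WINDOW_SIZE ((RING_WORDS - 1) * WORD_BITS)

/* Hypothetical userspace replay window. */
struct replay_window {
	uint64_t greatest;              /* highest counter accepted so far */
	uint64_t ring[RING_WORDS];
};

/* Returns 1 if counter is fresh (and records it), 0 on replay/too-old. */
static int replay_check(struct replay_window *w, uint64_t counter)
{
	uint64_t index = counter / WORD_BITS;

	if (counter + WINDOW_SIZE < w->greatest)
		return 0;                       /* fell out of the window */
	if (counter > w->greatest) {
		uint64_t cur = w->greatest / WORD_BITS;
		uint64_t top = index - cur;

		if (top > RING_WORDS)
			top = RING_WORDS;
		/* Zero only the words the window slides over: no shifting. */
		for (uint64_t i = 1; i <= top; i++)
			w->ring[(cur + i) % RING_WORDS] = 0;
		w->greatest = counter;
	}
	uint64_t bit = UINT64_C(1) << (counter % WORD_BITS);
	uint64_t *word = &w->ring[index % RING_WORDS];

	if (*word & bit)
		return 0;                       /* already seen: replay */
	*word |= bit;
	return 1;
}
```

Out-of-order nonces inside the window are accepted exactly once; anything older than the window, or seen before, is rejected.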
4865 +#include "selftest/counter.c"
4867 +static void wg_packet_consume_data_done(struct wg_peer *peer,
4868 + struct sk_buff *skb,
4869 + struct endpoint *endpoint)
4871 + struct net_device *dev = peer->device->dev;
4872 + unsigned int len, len_before_trim;
4873 + struct wg_peer *routed_peer;
4875 + wg_socket_set_peer_endpoint(peer, endpoint);
4877 + if (unlikely(wg_noise_received_with_keypair(&peer->keypairs,
4878 + PACKET_CB(skb)->keypair))) {
4879 + wg_timers_handshake_complete(peer);
4880 + wg_packet_send_staged_packets(peer);
4883 + keep_key_fresh(peer);
4885 + wg_timers_any_authenticated_packet_received(peer);
4886 + wg_timers_any_authenticated_packet_traversal(peer);
4888 + /* A packet with length 0 is a keepalive packet */
4889 + if (unlikely(!skb->len)) {
4890 + update_rx_stats(peer, message_data_len(0));
4891 + net_dbg_ratelimited("%s: Receiving keepalive packet from peer %llu (%pISpfsc)\n",
4892 + dev->name, peer->internal_id,
4893 + &peer->endpoint.addr);
4894 + goto packet_processed;
4897 + wg_timers_data_received(peer);
4899 + if (unlikely(skb_network_header(skb) < skb->head))
4900 + goto dishonest_packet_size;
4901 + if (unlikely(!(pskb_network_may_pull(skb, sizeof(struct iphdr)) &&
4902 + (ip_hdr(skb)->version == 4 ||
4903 + (ip_hdr(skb)->version == 6 &&
4904 + pskb_network_may_pull(skb, sizeof(struct ipv6hdr)))))))
4905 + goto dishonest_packet_type;
4908 + /* We've already verified the Poly1305 auth tag, which means this packet
4909 + * was not modified in transit. We can therefore tell the networking
4910 + * stack that all checksums of every layer of encapsulation have already
4911 + * been checked "by the hardware" and it is therefore unnecessary to check
4912 + * again in software.
4914 + skb->ip_summed = CHECKSUM_UNNECESSARY;
4915 + skb->csum_level = ~0; /* All levels */
4916 + skb->protocol = wg_skb_examine_untrusted_ip_hdr(skb);
4917 + if (skb->protocol == htons(ETH_P_IP)) {
4918 + len = ntohs(ip_hdr(skb)->tot_len);
4919 + if (unlikely(len < sizeof(struct iphdr)))
4920 + goto dishonest_packet_size;
4921 + if (INET_ECN_is_ce(PACKET_CB(skb)->ds))
4922 + IP_ECN_set_ce(ip_hdr(skb));
4923 + } else if (skb->protocol == htons(ETH_P_IPV6)) {
4924 + len = ntohs(ipv6_hdr(skb)->payload_len) +
4925 + sizeof(struct ipv6hdr);
4926 + if (INET_ECN_is_ce(PACKET_CB(skb)->ds))
4927 + IP6_ECN_set_ce(skb, ipv6_hdr(skb));
4929 + goto dishonest_packet_type;
4932 + if (unlikely(len > skb->len))
4933 + goto dishonest_packet_size;
4934 + len_before_trim = skb->len;
4935 + if (unlikely(pskb_trim(skb, len)))
4936 + goto packet_processed;
4938 + routed_peer = wg_allowedips_lookup_src(&peer->device->peer_allowedips,
4940 + wg_peer_put(routed_peer); /* We don't need the extra reference. */
4942 + if (unlikely(routed_peer != peer))
4943 + goto dishonest_packet_peer;
4945 + if (unlikely(napi_gro_receive(&peer->napi, skb) == GRO_DROP)) {
4946 + ++dev->stats.rx_dropped;
4947 + net_dbg_ratelimited("%s: Failed to give packet to userspace from peer %llu (%pISpfsc)\n",
4948 + dev->name, peer->internal_id,
4949 + &peer->endpoint.addr);
4951 + update_rx_stats(peer, message_data_len(len_before_trim));
4955 +dishonest_packet_peer:
4956 + net_dbg_skb_ratelimited("%s: Packet has unallowed src IP (%pISc) from peer %llu (%pISpfsc)\n",
4957 + dev->name, skb, peer->internal_id,
4958 + &peer->endpoint.addr);
4959 + ++dev->stats.rx_errors;
4960 + ++dev->stats.rx_frame_errors;
4961 + goto packet_processed;
4962 +dishonest_packet_type:
4963 + net_dbg_ratelimited("%s: Packet is neither ipv4 nor ipv6 from peer %llu (%pISpfsc)\n",
4964 + dev->name, peer->internal_id, &peer->endpoint.addr);
4965 + ++dev->stats.rx_errors;
4966 + ++dev->stats.rx_frame_errors;
4967 + goto packet_processed;
4968 +dishonest_packet_size:
4969 + net_dbg_ratelimited("%s: Packet has incorrect size from peer %llu (%pISpfsc)\n",
4970 + dev->name, peer->internal_id, &peer->endpoint.addr);
4971 + ++dev->stats.rx_errors;
4972 + ++dev->stats.rx_length_errors;
4973 + goto packet_processed;
4975 + dev_kfree_skb(skb);
4978 +int wg_packet_rx_poll(struct napi_struct *napi, int budget)
4980 + struct wg_peer *peer = container_of(napi, struct wg_peer, napi);
4981 + struct crypt_queue *queue = &peer->rx_queue;
4982 + struct noise_keypair *keypair;
4983 + struct endpoint endpoint;
4984 + enum packet_state state;
4985 + struct sk_buff *skb;
4986 + int work_done = 0;
4989 + if (unlikely(budget <= 0))
4992 + while ((skb = __ptr_ring_peek(&queue->ring)) != NULL &&
4993 + (state = atomic_read_acquire(&PACKET_CB(skb)->state)) !=
4994 + PACKET_STATE_UNCRYPTED) {
4995 + __ptr_ring_discard_one(&queue->ring);
4996 + peer = PACKET_PEER(skb);
4997 + keypair = PACKET_CB(skb)->keypair;
5000 + if (unlikely(state != PACKET_STATE_CRYPTED))
5003 + if (unlikely(!counter_validate(&keypair->receiving.counter,
5004 + PACKET_CB(skb)->nonce))) {
5005 + net_dbg_ratelimited("%s: Packet has invalid nonce %llu (max %llu)\n",
5006 + peer->device->dev->name,
5007 + PACKET_CB(skb)->nonce,
5008 + keypair->receiving.counter.receive.counter);
5012 + if (unlikely(wg_socket_endpoint_from_skb(&endpoint, skb)))
5015 + wg_reset_packet(skb);
5016 + wg_packet_consume_data_done(peer, skb, &endpoint);
5020 + wg_noise_keypair_put(keypair, false);
5021 + wg_peer_put(peer);
5022 + if (unlikely(free))
5023 + dev_kfree_skb(skb);
5025 + if (++work_done >= budget)
5029 + if (work_done < budget)
5030 + napi_complete_done(napi, work_done);
5035 +void wg_packet_decrypt_worker(struct work_struct *work)
5037 + struct crypt_queue *queue = container_of(work, struct multicore_worker,
5039 + struct sk_buff *skb;
5041 + while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) {
5042 + enum packet_state state = likely(decrypt_packet(skb,
5043 + &PACKET_CB(skb)->keypair->receiving)) ?
5044 + PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
5045 + wg_queue_enqueue_per_peer_napi(skb, state);
5049 +static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb)
5051 + __le32 idx = ((struct message_data *)skb->data)->key_idx;
5052 + struct wg_peer *peer = NULL;
5055 + rcu_read_lock_bh();
5056 + PACKET_CB(skb)->keypair =
5057 + (struct noise_keypair *)wg_index_hashtable_lookup(
5058 + wg->index_hashtable, INDEX_HASHTABLE_KEYPAIR, idx,
5060 + if (unlikely(!wg_noise_keypair_get(PACKET_CB(skb)->keypair)))
5063 + if (unlikely(READ_ONCE(peer->is_dead)))
5066 + ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue,
5067 + &peer->rx_queue, skb,
5068 + wg->packet_crypt_wq,
5069 + &wg->decrypt_queue.last_cpu);
5070 + if (unlikely(ret == -EPIPE))
5071 + wg_queue_enqueue_per_peer_napi(skb, PACKET_STATE_DEAD);
5072 + if (likely(!ret || ret == -EPIPE)) {
5073 + rcu_read_unlock_bh();
5077 + wg_noise_keypair_put(PACKET_CB(skb)->keypair, false);
5079 + rcu_read_unlock_bh();
5080 + wg_peer_put(peer);
5081 + dev_kfree_skb(skb);
5084 +void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb)
5086 + if (unlikely(prepare_skb_header(skb, wg) < 0))
5088 + switch (SKB_TYPE_LE32(skb)) {
5089 + case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION):
5090 + case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE):
5091 + case cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE): {
5094 + if (skb_queue_len(&wg->incoming_handshakes) >
5095 + MAX_QUEUED_INCOMING_HANDSHAKES ||
5096 + unlikely(!rng_is_initialized())) {
5097 + net_dbg_skb_ratelimited("%s: Dropping handshake packet from %pISpfsc\n",
5098 + wg->dev->name, skb);
5101 + skb_queue_tail(&wg->incoming_handshakes, skb);
5102 + /* Queues up a call to packet_process_queued_handshake_packets(skb). */
5105 + cpu = wg_cpumask_next_online(&wg->incoming_handshake_cpu);
5106 + queue_work_on(cpu, wg->handshake_receive_wq,
5107 + &per_cpu_ptr(wg->incoming_handshakes_worker, cpu)->work);
5110 + case cpu_to_le32(MESSAGE_DATA):
5111 + PACKET_CB(skb)->ds = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
5112 + wg_packet_consume_data(wg, skb);
5115 + net_dbg_skb_ratelimited("%s: Invalid packet from %pISpfsc\n",
5116 + wg->dev->name, skb);
5122 + dev_kfree_skb(skb);
5125 +++ b/drivers/net/wireguard/selftest/allowedips.c
5127 +// SPDX-License-Identifier: GPL-2.0
5129 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
5131 + * This contains some basic static unit tests for the allowedips data structure.
5132 + * It also has two additional modes that are disabled and meant to be used by
5133 + * folks directly playing with this file. If you define the macro
5134 + * DEBUG_PRINT_TRIE_GRAPHVIZ to be 1, then every time there's a full tree in
5135 + * memory, it will be printed out as KERN_DEBUG in a format that can be passed
5136 + * to graphviz (the dot command) to visualize it. If you define the macro
5137 + * DEBUG_RANDOM_TRIE to be 1, then there will be an extremely costly set of
5138 + * randomized tests done against a trivial implementation, which may take
5139 + * upwards of a half-hour to complete. There's no set of users who should be
5140 + * enabling these, and the only developers that should go anywhere near these
5141 + * knobs are the ones who are reading this comment.
5146 +#include <linux/siphash.h>
5148 +static __init void swap_endian_and_apply_cidr(u8 *dst, const u8 *src, u8 bits,
5151 + swap_endian(dst, src, bits);
5152 + memset(dst + (cidr + 7) / 8, 0, bits / 8 - (cidr + 7) / 8);
5154 + dst[(cidr + 7) / 8 - 1] &= ~0U << ((8 - (cidr % 8)) % 8);
5157 +static __init void print_node(struct allowedips_node *node, u8 bits)
5159 + char *fmt_connection = KERN_DEBUG "\t\"%p/%d\" -> \"%p/%d\";\n";
5160 + char *fmt_declaration = KERN_DEBUG
5161 + "\t\"%p/%d\"[style=%s, color=\"#%06x\"];\n";
5162 + char *style = "dotted";
5163 + u8 ip1[16], ip2[16];
5167 + fmt_connection = KERN_DEBUG "\t\"%pI4/%d\" -> \"%pI4/%d\";\n";
5168 + fmt_declaration = KERN_DEBUG
5169 + "\t\"%pI4/%d\"[style=%s, color=\"#%06x\"];\n";
5170 + } else if (bits == 128) {
5171 + fmt_connection = KERN_DEBUG "\t\"%pI6/%d\" -> \"%pI6/%d\";\n";
5172 + fmt_declaration = KERN_DEBUG
5173 + "\t\"%pI6/%d\"[style=%s, color=\"#%06x\"];\n";
5176 + hsiphash_key_t key = { { 0 } };
5178 + memcpy(&key, &node->peer, sizeof(node->peer));
5179 + color = hsiphash_1u32(0xdeadbeef, &key) % 200 << 16 |
5180 + hsiphash_1u32(0xbabecafe, &key) % 200 << 8 |
5181 + hsiphash_1u32(0xabad1dea, &key) % 200;
5184 + swap_endian_and_apply_cidr(ip1, node->bits, bits, node->cidr);
5185 + printk(fmt_declaration, ip1, node->cidr, style, color);
5186 + if (node->bit[0]) {
5187 + swap_endian_and_apply_cidr(ip2,
5188 + rcu_dereference_raw(node->bit[0])->bits, bits,
5190 + printk(fmt_connection, ip1, node->cidr, ip2,
5191 + rcu_dereference_raw(node->bit[0])->cidr);
5192 + print_node(rcu_dereference_raw(node->bit[0]), bits);
5194 + if (node->bit[1]) {
5195 + swap_endian_and_apply_cidr(ip2,
5196 + rcu_dereference_raw(node->bit[1])->bits,
5197 + bits, node->cidr);
5198 + printk(fmt_connection, ip1, node->cidr, ip2,
5199 + rcu_dereference_raw(node->bit[1])->cidr);
5200 + print_node(rcu_dereference_raw(node->bit[1]), bits);
5204 +static __init void print_tree(struct allowedips_node __rcu *top, u8 bits)
5206 + printk(KERN_DEBUG "digraph trie {\n");
5207 + print_node(rcu_dereference_raw(top), bits);
5208 + printk(KERN_DEBUG "}\n");
5213 + NUM_RAND_ROUTES = 400,
5214 + NUM_MUTATED_ROUTES = 100,
5215 + NUM_QUERIES = NUM_RAND_ROUTES * NUM_MUTATED_ROUTES * 30
5218 +struct horrible_allowedips {
5219 + struct hlist_head head;
5222 +struct horrible_allowedips_node {
5223 + struct hlist_node table;
5224 + union nf_inet_addr ip;
5225 + union nf_inet_addr mask;
5230 +static __init void horrible_allowedips_init(struct horrible_allowedips *table)
5232 + INIT_HLIST_HEAD(&table->head);
5235 +static __init void horrible_allowedips_free(struct horrible_allowedips *table)
5237 + struct horrible_allowedips_node *node;
5238 + struct hlist_node *h;
5240 + hlist_for_each_entry_safe(node, h, &table->head, table) {
5241 + hlist_del(&node->table);
5246 +static __init inline union nf_inet_addr horrible_cidr_to_mask(u8 cidr)
5248 + union nf_inet_addr mask;
5250 + memset(&mask, 0x00, 128 / 8);
5251 + memset(&mask, 0xff, cidr / 8);
5253 + mask.all[cidr / 32] = (__force u32)htonl(
5254 + (0xFFFFFFFFUL << (32 - (cidr % 32))) & 0xFFFFFFFFUL);
5258 +static __init inline u8 horrible_mask_to_cidr(union nf_inet_addr subnet)
5260 + return hweight32(subnet.all[0]) + hweight32(subnet.all[1]) +
5261 + hweight32(subnet.all[2]) + hweight32(subnet.all[3]);
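The mask helpers used by this reference implementation reduce to simple bit arithmetic, sketched here for the 32-bit (IPv4) case. The function names, the host-byte-order simplification, and the portable popcount loop (standing in for the kernel's hweight32()) are assumptions for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of horrible_cidr_to_mask() for one 32-bit word:
 * a /cidr prefix is cidr leading one-bits. */
static uint32_t cidr_to_mask32(uint8_t cidr)
{
	if (cidr == 0)
		return 0;       /* avoid the undefined 32-bit shift by 32 */
	return ~UINT32_C(0) << (32 - cidr);
}

/* Hypothetical sketch of horrible_mask_to_cidr(): recover the prefix
 * length by counting set bits, as hweight32() does in the kernel. */
static uint8_t mask_to_cidr32(uint32_t mask)
{
	uint8_t count = 0;

	while (mask) {          /* portable popcount */
		count += mask & 1;
		mask >>= 1;
	}
	return count;
}
```

The two functions are inverses for any contiguous netmask, which is why popcount alone suffices here.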
5264 +static __init inline void
5265 +horrible_mask_self(struct horrible_allowedips_node *node)
5267 + if (node->ip_version == 4) {
5268 + node->ip.ip &= node->mask.ip;
5269 + } else if (node->ip_version == 6) {
5270 + node->ip.ip6[0] &= node->mask.ip6[0];
5271 + node->ip.ip6[1] &= node->mask.ip6[1];
5272 + node->ip.ip6[2] &= node->mask.ip6[2];
5273 + node->ip.ip6[3] &= node->mask.ip6[3];
5277 +static __init inline bool
5278 +horrible_match_v4(const struct horrible_allowedips_node *node,
5279 + struct in_addr *ip)
5281 + return (ip->s_addr & node->mask.ip) == node->ip.ip;
5284 +static __init inline bool
5285 +horrible_match_v6(const struct horrible_allowedips_node *node,
5286 + struct in6_addr *ip)
5288 + return (ip->in6_u.u6_addr32[0] & node->mask.ip6[0]) ==
5289 + node->ip.ip6[0] &&
5290 + (ip->in6_u.u6_addr32[1] & node->mask.ip6[1]) ==
5291 + node->ip.ip6[1] &&
5292 + (ip->in6_u.u6_addr32[2] & node->mask.ip6[2]) ==
5293 + node->ip.ip6[2] &&
5294 + (ip->in6_u.u6_addr32[3] & node->mask.ip6[3]) == node->ip.ip6[3];
5298 +horrible_insert_ordered(struct horrible_allowedips *table,
5299 + struct horrible_allowedips_node *node)
5301 + struct horrible_allowedips_node *other = NULL, *where = NULL;
5302 + u8 my_cidr = horrible_mask_to_cidr(node->mask);
5304 + hlist_for_each_entry(other, &table->head, table) {
5305 + if (!memcmp(&other->mask, &node->mask,
5306 + sizeof(union nf_inet_addr)) &&
5307 + !memcmp(&other->ip, &node->ip,
5308 + sizeof(union nf_inet_addr)) &&
5309 + other->ip_version == node->ip_version) {
5310 + other->value = node->value;
5315 + if (horrible_mask_to_cidr(other->mask) <= my_cidr)
5318 + if (!other && !where)
5319 + hlist_add_head(&node->table, &table->head);
5321 + hlist_add_behind(&node->table, &where->table);
5323 + hlist_add_before(&node->table, &where->table);
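The ordering maintained by horrible_insert_ordered() above (descending CIDR) is what lets the linear lookups below return the most specific match first. A minimal userspace sketch of the same idea, using a plain array instead of an hlist (hypothetical types, illustration only):

```c
#include <assert.h>

/* Keep entries sorted by descending CIDR so the first match found by a
 * linear scan is the longest (most specific) prefix. */
struct entry {
	unsigned int cidr;
	int value;
};

static void insert_sorted(struct entry *tbl, unsigned int *n, struct entry e)
{
	unsigned int i = 0, j;

	while (i < *n && tbl[i].cidr > e.cidr)
		++i;
	for (j = *n; j > i; --j)
		tbl[j] = tbl[j - 1];	/* shift the tail to make room */
	tbl[i] = e;
	++(*n);
}
```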
5327 +horrible_allowedips_insert_v4(struct horrible_allowedips *table,
5328 + struct in_addr *ip, u8 cidr, void *value)
5330 + struct horrible_allowedips_node *node = kzalloc(sizeof(*node),
5333 + if (unlikely(!node))
5335 + node->ip.in = *ip;
5336 + node->mask = horrible_cidr_to_mask(cidr);
5337 + node->ip_version = 4;
5338 + node->value = value;
5339 + horrible_mask_self(node);
5340 + horrible_insert_ordered(table, node);
5345 +horrible_allowedips_insert_v6(struct horrible_allowedips *table,
5346 + struct in6_addr *ip, u8 cidr, void *value)
5348 + struct horrible_allowedips_node *node = kzalloc(sizeof(*node),
5351 + if (unlikely(!node))
5353 + node->ip.in6 = *ip;
5354 + node->mask = horrible_cidr_to_mask(cidr);
5355 + node->ip_version = 6;
5356 + node->value = value;
5357 + horrible_mask_self(node);
5358 + horrible_insert_ordered(table, node);
5362 +static __init void *
5363 +horrible_allowedips_lookup_v4(struct horrible_allowedips *table,
5364 + struct in_addr *ip)
5366 + struct horrible_allowedips_node *node;
5369 + hlist_for_each_entry(node, &table->head, table) {
5370 + if (node->ip_version != 4)
5372 + if (horrible_match_v4(node, ip)) {
5373 + ret = node->value;
5380 +static __init void *
5381 +horrible_allowedips_lookup_v6(struct horrible_allowedips *table,
5382 + struct in6_addr *ip)
5384 + struct horrible_allowedips_node *node;
5387 + hlist_for_each_entry(node, &table->head, table) {
5388 + if (node->ip_version != 6)
5390 + if (horrible_match_v6(node, ip)) {
5391 + ret = node->value;
5398 +static __init bool randomized_test(void)
5400 + unsigned int i, j, k, mutate_amount, cidr;
5401 + u8 ip[16], mutate_mask[16], mutated[16];
5402 + struct wg_peer **peers, *peer;
5403 + struct horrible_allowedips h;
5404 + DEFINE_MUTEX(mutex);
5405 + struct allowedips t;
5408 + mutex_init(&mutex);
5410 + wg_allowedips_init(&t);
5411 + horrible_allowedips_init(&h);
5413 + peers = kcalloc(NUM_PEERS, sizeof(*peers), GFP_KERNEL);
5414 + if (unlikely(!peers)) {
5415 + pr_err("allowedips random self-test malloc: FAIL\n");
5418 + for (i = 0; i < NUM_PEERS; ++i) {
5419 + peers[i] = kzalloc(sizeof(*peers[i]), GFP_KERNEL);
5420 + if (unlikely(!peers[i])) {
5421 + pr_err("allowedips random self-test malloc: FAIL\n");
5424 + kref_init(&peers[i]->refcount);
5427 + mutex_lock(&mutex);
5429 + for (i = 0; i < NUM_RAND_ROUTES; ++i) {
5430 + prandom_bytes(ip, 4);
5431 + cidr = prandom_u32_max(32) + 1;
5432 + peer = peers[prandom_u32_max(NUM_PEERS)];
5433 + if (wg_allowedips_insert_v4(&t, (struct in_addr *)ip, cidr,
5434 + peer, &mutex) < 0) {
5435 + pr_err("allowedips random self-test malloc: FAIL\n");
5438 + if (horrible_allowedips_insert_v4(&h, (struct in_addr *)ip,
5439 + cidr, peer) < 0) {
5440 + pr_err("allowedips random self-test malloc: FAIL\n");
5443 + for (j = 0; j < NUM_MUTATED_ROUTES; ++j) {
5444 + memcpy(mutated, ip, 4);
5445 + prandom_bytes(mutate_mask, 4);
5446 + mutate_amount = prandom_u32_max(32);
5447 + for (k = 0; k < mutate_amount / 8; ++k)
5448 + mutate_mask[k] = 0xff;
5449 + mutate_mask[k] = 0xff
5450 + << ((8 - (mutate_amount % 8)) % 8);
5451 + for (; k < 4; ++k)
5452 + mutate_mask[k] = 0;
5453 + for (k = 0; k < 4; ++k)
5454 + mutated[k] = (mutated[k] & mutate_mask[k]) |
5455 + (~mutate_mask[k] &
5456 + prandom_u32_max(256));
5457 + cidr = prandom_u32_max(32) + 1;
5458 + peer = peers[prandom_u32_max(NUM_PEERS)];
5459 + if (wg_allowedips_insert_v4(&t,
5460 + (struct in_addr *)mutated,
5461 + cidr, peer, &mutex) < 0) {
5462 +				pr_err("allowedips random self-test malloc: FAIL\n");
5465 + if (horrible_allowedips_insert_v4(&h,
5466 + (struct in_addr *)mutated, cidr, peer)) {
5467 + pr_err("allowedips random self-test malloc: FAIL\n");
5473 + for (i = 0; i < NUM_RAND_ROUTES; ++i) {
5474 + prandom_bytes(ip, 16);
5475 + cidr = prandom_u32_max(128) + 1;
5476 + peer = peers[prandom_u32_max(NUM_PEERS)];
5477 + if (wg_allowedips_insert_v6(&t, (struct in6_addr *)ip, cidr,
5478 + peer, &mutex) < 0) {
5479 + pr_err("allowedips random self-test malloc: FAIL\n");
5482 + if (horrible_allowedips_insert_v6(&h, (struct in6_addr *)ip,
5483 + cidr, peer) < 0) {
5484 + pr_err("allowedips random self-test malloc: FAIL\n");
5487 + for (j = 0; j < NUM_MUTATED_ROUTES; ++j) {
5488 + memcpy(mutated, ip, 16);
5489 + prandom_bytes(mutate_mask, 16);
5490 + mutate_amount = prandom_u32_max(128);
5491 + for (k = 0; k < mutate_amount / 8; ++k)
5492 + mutate_mask[k] = 0xff;
5493 + mutate_mask[k] = 0xff
5494 + << ((8 - (mutate_amount % 8)) % 8);
5495 +			for (; k < 16; ++k)
5496 +				mutate_mask[k] = 0;
5497 +			for (k = 0; k < 16; ++k)
5498 + mutated[k] = (mutated[k] & mutate_mask[k]) |
5499 + (~mutate_mask[k] &
5500 + prandom_u32_max(256));
5501 + cidr = prandom_u32_max(128) + 1;
5502 + peer = peers[prandom_u32_max(NUM_PEERS)];
5503 + if (wg_allowedips_insert_v6(&t,
5504 + (struct in6_addr *)mutated,
5505 + cidr, peer, &mutex) < 0) {
5506 + pr_err("allowedips random self-test malloc: FAIL\n");
5509 + if (horrible_allowedips_insert_v6(
5510 + &h, (struct in6_addr *)mutated, cidr,
5512 + pr_err("allowedips random self-test malloc: FAIL\n");
5518 + mutex_unlock(&mutex);
5520 + if (IS_ENABLED(DEBUG_PRINT_TRIE_GRAPHVIZ)) {
5521 + print_tree(t.root4, 32);
5522 + print_tree(t.root6, 128);
5525 + for (i = 0; i < NUM_QUERIES; ++i) {
5526 + prandom_bytes(ip, 4);
5527 + if (lookup(t.root4, 32, ip) !=
5528 + horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip)) {
5529 + pr_err("allowedips random self-test: FAIL\n");
5534 + for (i = 0; i < NUM_QUERIES; ++i) {
5535 + prandom_bytes(ip, 16);
5536 + if (lookup(t.root6, 128, ip) !=
5537 + horrible_allowedips_lookup_v6(&h, (struct in6_addr *)ip)) {
5538 + pr_err("allowedips random self-test: FAIL\n");
5545 + mutex_lock(&mutex);
5547 + wg_allowedips_free(&t, &mutex);
5548 + mutex_unlock(&mutex);
5549 + horrible_allowedips_free(&h);
5551 + for (i = 0; i < NUM_PEERS; ++i)
5558 +static __init inline struct in_addr *ip4(u8 a, u8 b, u8 c, u8 d)
5560 + static struct in_addr ip;
5561 + u8 *split = (u8 *)&ip;
5570 +static __init inline struct in6_addr *ip6(u32 a, u32 b, u32 c, u32 d)
5572 + static struct in6_addr ip;
5573 + __be32 *split = (__be32 *)&ip;
5575 + split[0] = cpu_to_be32(a);
5576 + split[1] = cpu_to_be32(b);
5577 + split[2] = cpu_to_be32(c);
5578 + split[3] = cpu_to_be32(d);
5582 +static __init struct wg_peer *init_peer(void)
5584 + struct wg_peer *peer = kzalloc(sizeof(*peer), GFP_KERNEL);
5588 + kref_init(&peer->refcount);
5589 + INIT_LIST_HEAD(&peer->allowedips_list);
5593 +#define insert(version, mem, ipa, ipb, ipc, ipd, cidr) \
5594 + wg_allowedips_insert_v##version(&t, ip##version(ipa, ipb, ipc, ipd), \
5595 + cidr, mem, &mutex)
5597 +#define maybe_fail() do { \
5600 + pr_info("allowedips self-test %zu: FAIL\n", i); \
5601 + success = false; \
5605 +#define test(version, mem, ipa, ipb, ipc, ipd) do { \
5606 + bool _s = lookup(t.root##version, (version) == 4 ? 32 : 128, \
5607 + ip##version(ipa, ipb, ipc, ipd)) == (mem); \
5611 +#define test_negative(version, mem, ipa, ipb, ipc, ipd) do { \
5612 + bool _s = lookup(t.root##version, (version) == 4 ? 32 : 128, \
5613 + ip##version(ipa, ipb, ipc, ipd)) != (mem); \
5617 +#define test_boolean(cond) do { \
5618 + bool _s = (cond); \
5622 +bool __init wg_allowedips_selftest(void)
5624 + bool found_a = false, found_b = false, found_c = false, found_d = false,
5625 + found_e = false, found_other = false;
5626 + struct wg_peer *a = init_peer(), *b = init_peer(), *c = init_peer(),
5627 + *d = init_peer(), *e = init_peer(), *f = init_peer(),
5628 + *g = init_peer(), *h = init_peer();
5629 + struct allowedips_node *iter_node;
5630 + bool success = false;
5631 + struct allowedips t;
5632 + DEFINE_MUTEX(mutex);
5633 + struct in6_addr ip;
5634 + size_t i = 0, count = 0;
5637 + mutex_init(&mutex);
5638 + mutex_lock(&mutex);
5639 + wg_allowedips_init(&t);
5641 + if (!a || !b || !c || !d || !e || !f || !g || !h) {
5642 + pr_err("allowedips self-test malloc: FAIL\n");
5646 + insert(4, a, 192, 168, 4, 0, 24);
5647 + insert(4, b, 192, 168, 4, 4, 32);
5648 + insert(4, c, 192, 168, 0, 0, 16);
5649 + insert(4, d, 192, 95, 5, 64, 27);
5650 + /* replaces previous entry, and maskself is required */
5651 + insert(4, c, 192, 95, 5, 65, 27);
5652 + insert(6, d, 0x26075300, 0x60006b00, 0, 0xc05f0543, 128);
5653 + insert(6, c, 0x26075300, 0x60006b00, 0, 0, 64);
5654 + insert(4, e, 0, 0, 0, 0, 0);
5655 + insert(6, e, 0, 0, 0, 0, 0);
5656 + /* replaces previous entry */
5657 + insert(6, f, 0, 0, 0, 0, 0);
5658 + insert(6, g, 0x24046800, 0, 0, 0, 32);
5659 + /* maskself is required */
5660 + insert(6, h, 0x24046800, 0x40040800, 0xdeadbeef, 0xdeadbeef, 64);
5661 + insert(6, a, 0x24046800, 0x40040800, 0xdeadbeef, 0xdeadbeef, 128);
5662 + insert(6, c, 0x24446800, 0x40e40800, 0xdeaebeef, 0xdefbeef, 128);
5663 + insert(6, b, 0x24446800, 0xf0e40800, 0xeeaebeef, 0, 98);
5664 + insert(4, g, 64, 15, 112, 0, 20);
5665 + /* maskself is required */
5666 + insert(4, h, 64, 15, 123, 211, 25);
5667 + insert(4, a, 10, 0, 0, 0, 25);
5668 + insert(4, b, 10, 0, 0, 128, 25);
5669 + insert(4, a, 10, 1, 0, 0, 30);
5670 + insert(4, b, 10, 1, 0, 4, 30);
5671 + insert(4, c, 10, 1, 0, 8, 29);
5672 + insert(4, d, 10, 1, 0, 16, 29);
5674 + if (IS_ENABLED(DEBUG_PRINT_TRIE_GRAPHVIZ)) {
5675 + print_tree(t.root4, 32);
5676 + print_tree(t.root6, 128);
5681 + test(4, a, 192, 168, 4, 20);
5682 + test(4, a, 192, 168, 4, 0);
5683 + test(4, b, 192, 168, 4, 4);
5684 + test(4, c, 192, 168, 200, 182);
5685 + test(4, c, 192, 95, 5, 68);
5686 + test(4, e, 192, 95, 5, 96);
5687 + test(6, d, 0x26075300, 0x60006b00, 0, 0xc05f0543);
5688 + test(6, c, 0x26075300, 0x60006b00, 0, 0xc02e01ee);
5689 + test(6, f, 0x26075300, 0x60006b01, 0, 0);
5690 + test(6, g, 0x24046800, 0x40040806, 0, 0x1006);
5691 + test(6, g, 0x24046800, 0x40040806, 0x1234, 0x5678);
5692 + test(6, f, 0x240467ff, 0x40040806, 0x1234, 0x5678);
5693 + test(6, f, 0x24046801, 0x40040806, 0x1234, 0x5678);
5694 + test(6, h, 0x24046800, 0x40040800, 0x1234, 0x5678);
5695 + test(6, h, 0x24046800, 0x40040800, 0, 0);
5696 + test(6, h, 0x24046800, 0x40040800, 0x10101010, 0x10101010);
5697 + test(6, a, 0x24046800, 0x40040800, 0xdeadbeef, 0xdeadbeef);
5698 + test(4, g, 64, 15, 116, 26);
5699 + test(4, g, 64, 15, 127, 3);
5700 + test(4, g, 64, 15, 123, 1);
5701 + test(4, h, 64, 15, 123, 128);
5702 + test(4, h, 64, 15, 123, 129);
5703 + test(4, a, 10, 0, 0, 52);
5704 + test(4, b, 10, 0, 0, 220);
5705 + test(4, a, 10, 1, 0, 2);
5706 + test(4, b, 10, 1, 0, 6);
5707 + test(4, c, 10, 1, 0, 10);
5708 + test(4, d, 10, 1, 0, 20);
5710 + insert(4, a, 1, 0, 0, 0, 32);
5711 + insert(4, a, 64, 0, 0, 0, 32);
5712 + insert(4, a, 128, 0, 0, 0, 32);
5713 + insert(4, a, 192, 0, 0, 0, 32);
5714 + insert(4, a, 255, 0, 0, 0, 32);
5715 + wg_allowedips_remove_by_peer(&t, a, &mutex);
5716 + test_negative(4, a, 1, 0, 0, 0);
5717 + test_negative(4, a, 64, 0, 0, 0);
5718 + test_negative(4, a, 128, 0, 0, 0);
5719 + test_negative(4, a, 192, 0, 0, 0);
5720 + test_negative(4, a, 255, 0, 0, 0);
5722 + wg_allowedips_free(&t, &mutex);
5723 + wg_allowedips_init(&t);
5724 + insert(4, a, 192, 168, 0, 0, 16);
5725 + insert(4, a, 192, 168, 0, 0, 24);
5726 + wg_allowedips_remove_by_peer(&t, a, &mutex);
5727 + test_negative(4, a, 192, 168, 0, 1);
5729 + /* These will hit the WARN_ON(len >= 128) in free_node if something
5732 + for (i = 0; i < 128; ++i) {
5733 + part = cpu_to_be64(~(1LLU << (i % 64)));
5734 + memset(&ip, 0xff, 16);
5735 + memcpy((u8 *)&ip + (i < 64) * 8, &part, 8);
5736 + wg_allowedips_insert_v6(&t, &ip, 128, a, &mutex);
5739 + wg_allowedips_free(&t, &mutex);
5741 + wg_allowedips_init(&t);
5742 + insert(4, a, 192, 95, 5, 93, 27);
5743 + insert(6, a, 0x26075300, 0x60006b00, 0, 0xc05f0543, 128);
5744 + insert(4, a, 10, 1, 0, 20, 29);
5745 + insert(6, a, 0x26075300, 0x6d8a6bf8, 0xdab1f1df, 0xc05f1523, 83);
5746 + insert(6, a, 0x26075300, 0x6d8a6bf8, 0xdab1f1df, 0xc05f1523, 21);
5747 + list_for_each_entry(iter_node, &a->allowedips_list, peer_list) {
5748 + u8 cidr, ip[16] __aligned(__alignof(u64));
5749 + int family = wg_allowedips_read_node(iter_node, ip, &cidr);
5753 + if (cidr == 27 && family == AF_INET &&
5754 + !memcmp(ip, ip4(192, 95, 5, 64), sizeof(struct in_addr)))
5756 + else if (cidr == 128 && family == AF_INET6 &&
5757 + !memcmp(ip, ip6(0x26075300, 0x60006b00, 0, 0xc05f0543),
5758 + sizeof(struct in6_addr)))
5760 + else if (cidr == 29 && family == AF_INET &&
5761 + !memcmp(ip, ip4(10, 1, 0, 16), sizeof(struct in_addr)))
5763 + else if (cidr == 83 && family == AF_INET6 &&
5764 + !memcmp(ip, ip6(0x26075300, 0x6d8a6bf8, 0xdab1e000, 0),
5765 + sizeof(struct in6_addr)))
5767 + else if (cidr == 21 && family == AF_INET6 &&
5768 + !memcmp(ip, ip6(0x26075000, 0, 0, 0),
5769 + sizeof(struct in6_addr)))
5772 + found_other = true;
5774 + test_boolean(count == 5);
5775 + test_boolean(found_a);
5776 + test_boolean(found_b);
5777 + test_boolean(found_c);
5778 + test_boolean(found_d);
5779 + test_boolean(found_e);
5780 + test_boolean(!found_other);
5782 + if (IS_ENABLED(DEBUG_RANDOM_TRIE) && success)
5783 + success = randomized_test();
5786 + pr_info("allowedips self-tests: pass\n");
5789 + wg_allowedips_free(&t, &mutex);
5798 + mutex_unlock(&mutex);
5803 +#undef test_negative
5811 +++ b/drivers/net/wireguard/selftest/counter.c
5813 +// SPDX-License-Identifier: GPL-2.0
5815 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
5819 +bool __init wg_packet_counter_selftest(void)
5821 + unsigned int test_num = 0, i;
5822 + union noise_counter counter;
5823 + bool success = true;
5825 +#define T_INIT do { \
5826 + memset(&counter, 0, sizeof(union noise_counter)); \
5827 + spin_lock_init(&counter.receive.lock); \
5829 +#define T_LIM (COUNTER_WINDOW_SIZE + 1)
5830 +#define T(n, v) do { \
5832 + if (counter_validate(&counter, n) != (v)) { \
5833 + pr_err("nonce counter self-test %u: FAIL\n", \
5835 + success = false; \
5840 + /* 1 */ T(0, true);
5841 + /* 2 */ T(1, true);
5842 + /* 3 */ T(1, false);
5843 + /* 4 */ T(9, true);
5844 + /* 5 */ T(8, true);
5845 + /* 6 */ T(7, true);
5846 + /* 7 */ T(7, false);
5847 + /* 8 */ T(T_LIM, true);
5848 + /* 9 */ T(T_LIM - 1, true);
5849 + /* 10 */ T(T_LIM - 1, false);
5850 + /* 11 */ T(T_LIM - 2, true);
5851 + /* 12 */ T(2, true);
5852 + /* 13 */ T(2, false);
5853 + /* 14 */ T(T_LIM + 16, true);
5854 + /* 15 */ T(3, false);
5855 + /* 16 */ T(T_LIM + 16, false);
5856 + /* 17 */ T(T_LIM * 4, true);
5857 + /* 18 */ T(T_LIM * 4 - (T_LIM - 1), true);
5858 + /* 19 */ T(10, false);
5859 + /* 20 */ T(T_LIM * 4 - T_LIM, false);
5860 + /* 21 */ T(T_LIM * 4 - (T_LIM + 1), false);
5861 + /* 22 */ T(T_LIM * 4 - (T_LIM - 2), true);
5862 + /* 23 */ T(T_LIM * 4 + 1 - T_LIM, false);
5863 + /* 24 */ T(0, false);
5864 + /* 25 */ T(REJECT_AFTER_MESSAGES, false);
5865 + /* 26 */ T(REJECT_AFTER_MESSAGES - 1, true);
5866 + /* 27 */ T(REJECT_AFTER_MESSAGES, false);
5867 + /* 28 */ T(REJECT_AFTER_MESSAGES - 1, false);
5868 + /* 29 */ T(REJECT_AFTER_MESSAGES - 2, true);
5869 + /* 30 */ T(REJECT_AFTER_MESSAGES + 1, false);
5870 + /* 31 */ T(REJECT_AFTER_MESSAGES + 2, false);
5871 + /* 32 */ T(REJECT_AFTER_MESSAGES - 2, false);
5872 + /* 33 */ T(REJECT_AFTER_MESSAGES - 3, true);
5873 + /* 34 */ T(0, false);
5876 + for (i = 1; i <= COUNTER_WINDOW_SIZE; ++i)
5882 + for (i = 2; i <= COUNTER_WINDOW_SIZE + 1; ++i)
5888 + for (i = COUNTER_WINDOW_SIZE + 1; i-- > 0;)
5892 + for (i = COUNTER_WINDOW_SIZE + 2; i-- > 1;)
5897 + for (i = COUNTER_WINDOW_SIZE + 1; i-- > 1;)
5899 + T(COUNTER_WINDOW_SIZE + 1, true);
5903 + for (i = COUNTER_WINDOW_SIZE + 1; i-- > 1;)
5906 + T(COUNTER_WINDOW_SIZE + 1, true);
5913 + pr_info("nonce counter self-tests: pass\n");
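Reviewer note: the T() cases above exercise RFC 6479-style sliding-window replay protection. A simplified userspace model of the semantics under test (no locking and no REJECT_AFTER_MESSAGES cap; WINDOW is a stand-in for COUNTER_WINDOW_SIZE):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define WINDOW 2048 /* stand-in for COUNTER_WINDOW_SIZE */

struct replay {
	uint64_t next;			/* one past the highest counter seen */
	uint64_t bitmap[WINDOW / 64];
};

/* Accept n only if it is new and not older than the window. */
static int replay_check(struct replay *r, uint64_t n)
{
	uint64_t i, bit = n % WINDOW;

	if (n >= r->next) {
		/* Advance the window, clearing slots that now map to
		 * not-yet-seen counters. */
		if (n - r->next + 1 >= WINDOW)
			memset(r->bitmap, 0, sizeof(r->bitmap));
		else
			for (i = r->next; i <= n; ++i)
				r->bitmap[(i % WINDOW) / 64] &=
					~(1ULL << (i % 64));
		r->next = n + 1;
	} else if (r->next - n > WINDOW) {
		return 0; /* too old: fell out of the window */
	}
	if (r->bitmap[bit / 64] & (1ULL << (bit % 64)))
		return 0; /* replay */
	r->bitmap[bit / 64] |= 1ULL << (bit % 64);
	return 1;
}
```

This reproduces the pattern of the first several numbered cases: fresh counters pass, repeats fail, and anything that has fallen behind the window fails.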
5918 +++ b/drivers/net/wireguard/selftest/ratelimiter.c
5920 +// SPDX-License-Identifier: GPL-2.0
5922 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
5927 +#include <linux/jiffies.h>
5929 +static const struct {
5931 + unsigned int msec_to_sleep_before;
5932 +} expected_results[] __initconst = {
5933 + [0 ... PACKETS_BURSTABLE - 1] = { true, 0 },
5934 + [PACKETS_BURSTABLE] = { false, 0 },
5935 + [PACKETS_BURSTABLE + 1] = { true, MSEC_PER_SEC / PACKETS_PER_SECOND },
5936 + [PACKETS_BURSTABLE + 2] = { false, 0 },
5937 + [PACKETS_BURSTABLE + 3] = { true, (MSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
5938 + [PACKETS_BURSTABLE + 4] = { true, 0 },
5939 + [PACKETS_BURSTABLE + 5] = { false, 0 }
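The expected_results[] table above encodes classic token-bucket behavior: a burst of PACKETS_BURSTABLE packets passes, then one packet per MSEC_PER_SEC / PACKETS_PER_SECOND refill interval. A hypothetical userspace model using nanosecond token accounting (the constants mirror the ratelimiter's 20 packets per second with a burst of 5):

```c
#include <assert.h>
#include <stdint.h>

#define PACKETS_PER_SECOND 20ULL
#define PACKETS_BURSTABLE 5ULL
#define NSEC_PER_SEC 1000000000ULL
#define TOKEN_MAX (PACKETS_BURSTABLE * NSEC_PER_SEC)

struct bucket {
	uint64_t last_ns;
	uint64_t tokens; /* credit in ns; one packet costs NSEC_PER_SEC */
};

static int bucket_allow(struct bucket *b, uint64_t now_ns)
{
	uint64_t tokens = b->tokens +
			  (now_ns - b->last_ns) * PACKETS_PER_SECOND;

	if (tokens > TOKEN_MAX)
		tokens = TOKEN_MAX; /* cap the burst */
	b->last_ns = now_ns;
	if (tokens >= NSEC_PER_SEC) {
		b->tokens = tokens - NSEC_PER_SEC;
		return 1;
	}
	b->tokens = tokens;
	return 0;
}
```

A new bucket starting with TOKEN_MAX credit allows exactly PACKETS_BURSTABLE packets immediately, denies the next, and allows one more after a single refill period, which is exactly the shape the table asserts.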
5942 +static __init unsigned int maximum_jiffies_at_index(int index)
5944 + unsigned int total_msecs = 2 * MSEC_PER_SEC / PACKETS_PER_SECOND / 3;
5947 + for (i = 0; i <= index; ++i)
5948 + total_msecs += expected_results[i].msec_to_sleep_before;
5949 + return msecs_to_jiffies(total_msecs);
5952 +static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
5953 + struct sk_buff *skb6, struct ipv6hdr *hdr6,
5956 + unsigned long loop_start_time;
5959 + wg_ratelimiter_gc_entries(NULL);
5961 + loop_start_time = jiffies;
5963 + for (i = 0; i < ARRAY_SIZE(expected_results); ++i) {
5964 + if (expected_results[i].msec_to_sleep_before)
5965 + msleep(expected_results[i].msec_to_sleep_before);
5967 + if (time_is_before_jiffies(loop_start_time +
5968 + maximum_jiffies_at_index(i)))
5969 + return -ETIMEDOUT;
5970 + if (wg_ratelimiter_allow(skb4, &init_net) !=
5971 + expected_results[i].result)
5975 + hdr4->saddr = htonl(ntohl(hdr4->saddr) + i + 1);
5976 + if (time_is_before_jiffies(loop_start_time +
5977 + maximum_jiffies_at_index(i)))
5978 + return -ETIMEDOUT;
5979 + if (!wg_ratelimiter_allow(skb4, &init_net))
5983 + hdr4->saddr = htonl(ntohl(hdr4->saddr) - i - 1);
5985 +#if IS_ENABLED(CONFIG_IPV6)
5986 + hdr6->saddr.in6_u.u6_addr32[2] = htonl(i);
5987 + hdr6->saddr.in6_u.u6_addr32[3] = htonl(i);
5988 + if (time_is_before_jiffies(loop_start_time +
5989 + maximum_jiffies_at_index(i)))
5990 + return -ETIMEDOUT;
5991 + if (wg_ratelimiter_allow(skb6, &init_net) !=
5992 + expected_results[i].result)
5996 + hdr6->saddr.in6_u.u6_addr32[0] =
5997 + htonl(ntohl(hdr6->saddr.in6_u.u6_addr32[0]) + i + 1);
5998 + if (time_is_before_jiffies(loop_start_time +
5999 + maximum_jiffies_at_index(i)))
6000 + return -ETIMEDOUT;
6001 + if (!wg_ratelimiter_allow(skb6, &init_net))
6005 + hdr6->saddr.in6_u.u6_addr32[0] =
6006 + htonl(ntohl(hdr6->saddr.in6_u.u6_addr32[0]) - i - 1);
6008 + if (time_is_before_jiffies(loop_start_time +
6009 + maximum_jiffies_at_index(i)))
6010 + return -ETIMEDOUT;
6016 +static __init int capacity_test(struct sk_buff *skb4, struct iphdr *hdr4,
6021 + wg_ratelimiter_gc_entries(NULL);
6024 + if (atomic_read(&total_entries))
6028 + for (i = 0; i <= max_entries; ++i) {
6029 + hdr4->saddr = htonl(i);
6030 + if (wg_ratelimiter_allow(skb4, &init_net) != (i != max_entries))
6037 +bool __init wg_ratelimiter_selftest(void)
6039 + enum { TRIALS_BEFORE_GIVING_UP = 5000 };
6040 + bool success = false;
6041 + int test = 0, trials;
6042 + struct sk_buff *skb4, *skb6;
6043 + struct iphdr *hdr4;
6044 + struct ipv6hdr *hdr6;
6046 + if (IS_ENABLED(CONFIG_KASAN) || IS_ENABLED(CONFIG_UBSAN))
6049 + BUILD_BUG_ON(MSEC_PER_SEC % PACKETS_PER_SECOND != 0);
6051 + if (wg_ratelimiter_init())
6054 + if (wg_ratelimiter_init()) {
6055 + wg_ratelimiter_uninit();
6059 + if (wg_ratelimiter_init()) {
6060 + wg_ratelimiter_uninit();
6061 + wg_ratelimiter_uninit();
6066 + skb4 = alloc_skb(sizeof(struct iphdr), GFP_KERNEL);
6067 + if (unlikely(!skb4))
6069 + skb4->protocol = htons(ETH_P_IP);
6070 + hdr4 = (struct iphdr *)skb_put(skb4, sizeof(*hdr4));
6071 + hdr4->saddr = htonl(8182);
6072 + skb_reset_network_header(skb4);
6075 +#if IS_ENABLED(CONFIG_IPV6)
6076 + skb6 = alloc_skb(sizeof(struct ipv6hdr), GFP_KERNEL);
6077 + if (unlikely(!skb6)) {
6081 + skb6->protocol = htons(ETH_P_IPV6);
6082 + hdr6 = (struct ipv6hdr *)skb_put(skb6, sizeof(*hdr6));
6083 + hdr6->saddr.in6_u.u6_addr32[0] = htonl(1212);
6084 + hdr6->saddr.in6_u.u6_addr32[1] = htonl(289188);
6085 + skb_reset_network_header(skb6);
6089 + for (trials = TRIALS_BEFORE_GIVING_UP;;) {
6090 + int test_count = 0, ret;
6092 + ret = timings_test(skb4, hdr4, skb6, hdr6, &test_count);
6093 + if (ret == -ETIMEDOUT) {
6095 + test += test_count;
6100 + } else if (ret < 0) {
6101 + test += test_count;
6104 + test += test_count;
6109 + for (trials = TRIALS_BEFORE_GIVING_UP;;) {
6110 + int test_count = 0;
6112 + if (capacity_test(skb4, hdr4, &test_count) < 0) {
6114 + test += test_count;
6120 + test += test_count;
6128 +#if IS_ENABLED(CONFIG_IPV6)
6132 + wg_ratelimiter_uninit();
6133 + wg_ratelimiter_uninit();
6134 + wg_ratelimiter_uninit();
6135 + /* Uninit one extra time to check underflow detection. */
6136 + wg_ratelimiter_uninit();
6139 + pr_info("ratelimiter self-tests: pass\n");
6141 + pr_err("ratelimiter self-test %d: FAIL\n", test);
6147 +++ b/drivers/net/wireguard/send.c
6149 +// SPDX-License-Identifier: GPL-2.0
6151 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
6154 +#include "queueing.h"
6155 +#include "timers.h"
6156 +#include "device.h"
6158 +#include "socket.h"
6159 +#include "messages.h"
6160 +#include "cookie.h"
6162 +#include <linux/uio.h>
6163 +#include <linux/inetdevice.h>
6164 +#include <linux/socket.h>
6165 +#include <net/ip_tunnels.h>
6166 +#include <net/udp.h>
6167 +#include <net/sock.h>
6169 +static void wg_packet_send_handshake_initiation(struct wg_peer *peer)
6171 + struct message_handshake_initiation packet;
6173 + if (!wg_birthdate_has_expired(atomic64_read(&peer->last_sent_handshake),
6175 + return; /* This function is rate limited. */
6177 + atomic64_set(&peer->last_sent_handshake, ktime_get_coarse_boottime_ns());
6178 + net_dbg_ratelimited("%s: Sending handshake initiation to peer %llu (%pISpfsc)\n",
6179 + peer->device->dev->name, peer->internal_id,
6180 + &peer->endpoint.addr);
6182 + if (wg_noise_handshake_create_initiation(&packet, &peer->handshake)) {
6183 + wg_cookie_add_mac_to_packet(&packet, sizeof(packet), peer);
6184 + wg_timers_any_authenticated_packet_traversal(peer);
6185 + wg_timers_any_authenticated_packet_sent(peer);
6186 + atomic64_set(&peer->last_sent_handshake,
6187 + ktime_get_coarse_boottime_ns());
6188 + wg_socket_send_buffer_to_peer(peer, &packet, sizeof(packet),
6190 + wg_timers_handshake_initiated(peer);
6194 +void wg_packet_handshake_send_worker(struct work_struct *work)
6196 + struct wg_peer *peer = container_of(work, struct wg_peer,
6197 + transmit_handshake_work);
6199 + wg_packet_send_handshake_initiation(peer);
6200 + wg_peer_put(peer);
6203 +void wg_packet_send_queued_handshake_initiation(struct wg_peer *peer,
6207 + peer->timer_handshake_attempts = 0;
6209 + rcu_read_lock_bh();
6210 + /* We check last_sent_handshake here in addition to the actual function
6211 + * we're queueing up, so that we don't queue things if not strictly
6214 + if (!wg_birthdate_has_expired(atomic64_read(&peer->last_sent_handshake),
6216 + unlikely(READ_ONCE(peer->is_dead)))
6219 + wg_peer_get(peer);
6220 + /* Queues up calling packet_send_queued_handshakes(peer), where we do a
6221 + * peer_put(peer) after:
6223 + if (!queue_work(peer->device->handshake_send_wq,
6224 + &peer->transmit_handshake_work))
6225 + /* If the work was already queued, we want to drop the
6226 + * extra reference:
6228 + wg_peer_put(peer);
6230 + rcu_read_unlock_bh();
6233 +void wg_packet_send_handshake_response(struct wg_peer *peer)
6235 + struct message_handshake_response packet;
6237 + atomic64_set(&peer->last_sent_handshake, ktime_get_coarse_boottime_ns());
6238 + net_dbg_ratelimited("%s: Sending handshake response to peer %llu (%pISpfsc)\n",
6239 + peer->device->dev->name, peer->internal_id,
6240 + &peer->endpoint.addr);
6242 + if (wg_noise_handshake_create_response(&packet, &peer->handshake)) {
6243 + wg_cookie_add_mac_to_packet(&packet, sizeof(packet), peer);
6244 + if (wg_noise_handshake_begin_session(&peer->handshake,
6245 + &peer->keypairs)) {
6246 + wg_timers_session_derived(peer);
6247 + wg_timers_any_authenticated_packet_traversal(peer);
6248 + wg_timers_any_authenticated_packet_sent(peer);
6249 + atomic64_set(&peer->last_sent_handshake,
6250 + ktime_get_coarse_boottime_ns());
6251 + wg_socket_send_buffer_to_peer(peer, &packet,
6258 +void wg_packet_send_handshake_cookie(struct wg_device *wg,
6259 + struct sk_buff *initiating_skb,
6260 + __le32 sender_index)
6262 + struct message_handshake_cookie packet;
6264 + net_dbg_skb_ratelimited("%s: Sending cookie response for denied handshake message for %pISpfsc\n",
6265 + wg->dev->name, initiating_skb);
6266 + wg_cookie_message_create(&packet, initiating_skb, sender_index,
6267 + &wg->cookie_checker);
6268 + wg_socket_send_buffer_as_reply_to_skb(wg, initiating_skb, &packet,
6272 +static void keep_key_fresh(struct wg_peer *peer)
6274 + struct noise_keypair *keypair;
6275 + bool send = false;
6277 + rcu_read_lock_bh();
6278 + keypair = rcu_dereference_bh(peer->keypairs.current_keypair);
6279 + if (likely(keypair && READ_ONCE(keypair->sending.is_valid)) &&
6280 + (unlikely(atomic64_read(&keypair->sending.counter.counter) >
6281 + REKEY_AFTER_MESSAGES) ||
6282 + (keypair->i_am_the_initiator &&
6283 + unlikely(wg_birthdate_has_expired(keypair->sending.birthdate,
6284 + REKEY_AFTER_TIME)))))
6286 + rcu_read_unlock_bh();
6289 + wg_packet_send_queued_handshake_initiation(peer, false);
6292 +static unsigned int calculate_skb_padding(struct sk_buff *skb)
6294 + /* We do this modulo business with the MTU, just in case the networking
6295 + * layer gives us a packet that's bigger than the MTU. In that case, we
6296 + * wouldn't want the final subtraction to overflow in the case of the
6297 + * padded_size being clamped.
6299 + unsigned int last_unit = skb->len % PACKET_CB(skb)->mtu;
6300 + unsigned int padded_size = ALIGN(last_unit, MESSAGE_PADDING_MULTIPLE);
6302 + if (padded_size > PACKET_CB(skb)->mtu)
6303 + padded_size = PACKET_CB(skb)->mtu;
6304 + return padded_size - last_unit;
6307 +static bool encrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair)
6309 + unsigned int padding_len, plaintext_len, trailer_len;
6310 + struct scatterlist sg[MAX_SKB_FRAGS + 8];
6311 + struct message_data *header;
6312 + struct sk_buff *trailer;
6315 + /* Calculate lengths. */
6316 + padding_len = calculate_skb_padding(skb);
6317 + trailer_len = padding_len + noise_encrypted_len(0);
6318 + plaintext_len = skb->len + padding_len;
6320 + /* Expand data section to have room for padding and auth tag. */
6321 + num_frags = skb_cow_data(skb, trailer_len, &trailer);
6322 + if (unlikely(num_frags < 0 || num_frags > ARRAY_SIZE(sg)))
6325 + /* Set the padding to zeros, and make sure it and the auth tag are part
6328 + memset(skb_tail_pointer(trailer), 0, padding_len);
6330 + /* Expand head section to have room for our header and the network
6331 + * stack's headers.
6333 + if (unlikely(skb_cow_head(skb, DATA_PACKET_HEAD_ROOM) < 0))
6336 + /* Finalize checksum calculation for the inner packet, if required. */
6337 + if (unlikely(skb->ip_summed == CHECKSUM_PARTIAL &&
6338 + skb_checksum_help(skb)))
6341 + /* Only after checksumming can we safely add on the padding at the end
6344 + skb_set_inner_network_header(skb, 0);
6345 + header = (struct message_data *)skb_push(skb, sizeof(*header));
6346 + header->header.type = cpu_to_le32(MESSAGE_DATA);
6347 + header->key_idx = keypair->remote_index;
6348 + header->counter = cpu_to_le64(PACKET_CB(skb)->nonce);
6349 + pskb_put(skb, trailer, trailer_len);
6351 + /* Now we can encrypt the scattergather segments */
6352 + sg_init_table(sg, num_frags);
6353 + if (skb_to_sgvec(skb, sg, sizeof(struct message_data),
6354 + noise_encrypted_len(plaintext_len)) <= 0)
6356 + return chacha20poly1305_encrypt_sg_inplace(sg, plaintext_len, NULL, 0,
6357 + PACKET_CB(skb)->nonce,
6358 + keypair->sending.key);
6361 +void wg_packet_send_keepalive(struct wg_peer *peer)
6363 + struct sk_buff *skb;
6365 + if (skb_queue_empty(&peer->staged_packet_queue)) {
6366 + skb = alloc_skb(DATA_PACKET_HEAD_ROOM + MESSAGE_MINIMUM_LENGTH,
6368 + if (unlikely(!skb))
6370 + skb_reserve(skb, DATA_PACKET_HEAD_ROOM);
6371 + skb->dev = peer->device->dev;
6372 + PACKET_CB(skb)->mtu = skb->dev->mtu;
6373 + skb_queue_tail(&peer->staged_packet_queue, skb);
6374 + net_dbg_ratelimited("%s: Sending keepalive packet to peer %llu (%pISpfsc)\n",
6375 + peer->device->dev->name, peer->internal_id,
6376 + &peer->endpoint.addr);
6379 + wg_packet_send_staged_packets(peer);
6382 +static void wg_packet_create_data_done(struct sk_buff *first,
6383 + struct wg_peer *peer)
6385 + struct sk_buff *skb, *next;
6386 + bool is_keepalive, data_sent = false;
6388 + wg_timers_any_authenticated_packet_traversal(peer);
6389 + wg_timers_any_authenticated_packet_sent(peer);
6390 + skb_list_walk_safe(first, skb, next) {
6391 + is_keepalive = skb->len == message_data_len(0);
6392 + if (likely(!wg_socket_send_skb_to_peer(peer, skb,
6393 + PACKET_CB(skb)->ds) && !is_keepalive))
6397 + if (likely(data_sent))
6398 + wg_timers_data_sent(peer);
6400 + keep_key_fresh(peer);
6403 +void wg_packet_tx_worker(struct work_struct *work)
6405 + struct crypt_queue *queue = container_of(work, struct crypt_queue,
6407 + struct noise_keypair *keypair;
6408 + enum packet_state state;
6409 + struct sk_buff *first;
6410 + struct wg_peer *peer;
6412 + while ((first = __ptr_ring_peek(&queue->ring)) != NULL &&
6413 + (state = atomic_read_acquire(&PACKET_CB(first)->state)) !=
6414 + PACKET_STATE_UNCRYPTED) {
6415 + __ptr_ring_discard_one(&queue->ring);
6416 + peer = PACKET_PEER(first);
6417 + keypair = PACKET_CB(first)->keypair;
6419 + if (likely(state == PACKET_STATE_CRYPTED))
6420 + wg_packet_create_data_done(first, peer);
6422 + kfree_skb_list(first);
6424 + wg_noise_keypair_put(keypair, false);
6425 + wg_peer_put(peer);
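The peek-then-discard loop above is what preserves transmit order: packets enter the ring in submission order, encryption workers mark them done out of order, and the consumer only advances while the head packet has left PACKET_STATE_UNCRYPTED. A toy userspace model of that ordering discipline (hypothetical types; no atomics or real concurrency):

```c
#include <assert.h>

#define RING_SIZE 8

enum state { UNCRYPTED, CRYPTED, DEAD };

struct job {
	int id;
	enum state st;
};

struct ring {
	struct job jobs[RING_SIZE];
	unsigned int head, tail; /* consume at head, produce at tail */
};

static void submit(struct ring *r, int id)
{
	r->jobs[r->tail++ % RING_SIZE] = (struct job){ id, UNCRYPTED };
}

/* A "worker" finishes job id, in any order. */
static void complete(struct ring *r, int id, enum state st)
{
	for (unsigned int i = r->head; i != r->tail; ++i)
		if (r->jobs[i % RING_SIZE].id == id)
			r->jobs[i % RING_SIZE].st = st;
}

/* Pop the next job in submission order, or -1 if the head job is not
 * yet finished - completion order never reorders consumption. */
static int drain_one(struct ring *r)
{
	if (r->head == r->tail ||
	    r->jobs[r->head % RING_SIZE].st == UNCRYPTED)
		return -1;
	return r->jobs[r->head++ % RING_SIZE].id;
}
```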
6429 +void wg_packet_encrypt_worker(struct work_struct *work)
6431 + struct crypt_queue *queue = container_of(work, struct multicore_worker,
6433 + struct sk_buff *first, *skb, *next;
6435 + while ((first = ptr_ring_consume_bh(&queue->ring)) != NULL) {
6436 + enum packet_state state = PACKET_STATE_CRYPTED;
6438 + skb_list_walk_safe(first, skb, next) {
6439 + if (likely(encrypt_packet(skb,
6440 + PACKET_CB(first)->keypair))) {
6441 + wg_reset_packet(skb);
6443 + state = PACKET_STATE_DEAD;
6447 + wg_queue_enqueue_per_peer(&PACKET_PEER(first)->tx_queue, first,
6453 +static void wg_packet_create_data(struct sk_buff *first)
6455 + struct wg_peer *peer = PACKET_PEER(first);
6456 + struct wg_device *wg = peer->device;
6457 + int ret = -EINVAL;
6459 + rcu_read_lock_bh();
6460 + if (unlikely(READ_ONCE(peer->is_dead)))
6463 + ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue,
6464 + &peer->tx_queue, first,
6465 + wg->packet_crypt_wq,
6466 + &wg->encrypt_queue.last_cpu);
6467 + if (unlikely(ret == -EPIPE))
6468 + wg_queue_enqueue_per_peer(&peer->tx_queue, first,
6469 + PACKET_STATE_DEAD);
6471 + rcu_read_unlock_bh();
6472 + if (likely(!ret || ret == -EPIPE))
6474 + wg_noise_keypair_put(PACKET_CB(first)->keypair, false);
6475 + wg_peer_put(peer);
6476 + kfree_skb_list(first);
6479 +void wg_packet_purge_staged_packets(struct wg_peer *peer)
6481 + spin_lock_bh(&peer->staged_packet_queue.lock);
6482 + peer->device->dev->stats.tx_dropped += peer->staged_packet_queue.qlen;
6483 + __skb_queue_purge(&peer->staged_packet_queue);
6484 + spin_unlock_bh(&peer->staged_packet_queue.lock);
6487 +void wg_packet_send_staged_packets(struct wg_peer *peer)
6489 + struct noise_symmetric_key *key;
6490 + struct noise_keypair *keypair;
6491 + struct sk_buff_head packets;
6492 + struct sk_buff *skb;
6494 + /* Steal the current queue into our local one. */
6495 + __skb_queue_head_init(&packets);
6496 + spin_lock_bh(&peer->staged_packet_queue.lock);
6497 + skb_queue_splice_init(&peer->staged_packet_queue, &packets);
6498 + spin_unlock_bh(&peer->staged_packet_queue.lock);
6499 + if (unlikely(skb_queue_empty(&packets)))
6502 + /* First we make sure we have a valid reference to a valid key. */
6503 + rcu_read_lock_bh();
6504 + keypair = wg_noise_keypair_get(
6505 + rcu_dereference_bh(peer->keypairs.current_keypair));
6506 + rcu_read_unlock_bh();
6507 + if (unlikely(!keypair))
6509 + key = &keypair->sending;
6510 + if (unlikely(!READ_ONCE(key->is_valid)))
6512 + if (unlikely(wg_birthdate_has_expired(key->birthdate,
6513 + REJECT_AFTER_TIME)))
6516 + /* After we know we have a somewhat valid key, we now try to assign
6517 + * nonces to all of the packets in the queue. If we can't assign nonces
6518 + * for all of them, we just consider it a failure and wait for the next
6519 + * handshake.
6521 + skb_queue_walk(&packets, skb) {
6522 + /* 0 for no outer TOS: no leak. TODO: at some later point, we
6523 + * might consider using flowi->tos as outer instead.
6525 + PACKET_CB(skb)->ds = ip_tunnel_ecn_encap(0, ip_hdr(skb), skb);
6526 + PACKET_CB(skb)->nonce =
6527 + atomic64_inc_return(&key->counter.counter) - 1;
6528 + if (unlikely(PACKET_CB(skb)->nonce >= REJECT_AFTER_MESSAGES))
6532 + packets.prev->next = NULL;
6533 + wg_peer_get(keypair->entry.peer);
6534 + PACKET_CB(packets.next)->keypair = keypair;
6535 + wg_packet_create_data(packets.next);
6539 + WRITE_ONCE(key->is_valid, false);
6541 + wg_noise_keypair_put(keypair, false);
6543 + /* We orphan the packets if we're waiting on a handshake, so that they
6544 + * don't block a socket's pool.
6546 + skb_queue_walk(&packets, skb)
6548 + /* Then we put them back on the top of the queue. We're not too
6549 + * concerned about accidentally getting things a little out of order if
6550 + * packets are being added really fast, because this queue is for before
6551 + * packets can even be sent and it's small anyway.
6553 + spin_lock_bh(&peer->staged_packet_queue.lock);
6554 + skb_queue_splice(&packets, &peer->staged_packet_queue);
6555 + spin_unlock_bh(&peer->staged_packet_queue.lock);
6557 + /* If we're exiting because there's something wrong with the key, it
6558 + * means we should initiate a new handshake.
6560 + wg_packet_send_queued_handshake_initiation(peer, false);
6563 +++ b/drivers/net/wireguard/socket.c
6565 +// SPDX-License-Identifier: GPL-2.0
6567 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
6570 +#include "device.h"
6572 +#include "socket.h"
6573 +#include "queueing.h"
6574 +#include "messages.h"
6576 +#include <linux/ctype.h>
6577 +#include <linux/net.h>
6578 +#include <linux/if_vlan.h>
6579 +#include <linux/if_ether.h>
6580 +#include <linux/inetdevice.h>
6581 +#include <net/udp_tunnel.h>
6582 +#include <net/ipv6.h>
6584 +static int send4(struct wg_device *wg, struct sk_buff *skb,
6585 + struct endpoint *endpoint, u8 ds, struct dst_cache *cache)
6587 + struct flowi4 fl = {
6588 + .saddr = endpoint->src4.s_addr,
6589 + .daddr = endpoint->addr4.sin_addr.s_addr,
6590 + .fl4_dport = endpoint->addr4.sin_port,
6591 + .flowi4_mark = wg->fwmark,
6592 + .flowi4_proto = IPPROTO_UDP
6594 + struct rtable *rt = NULL;
6595 + struct sock *sock;
6598 + skb_mark_not_on_list(skb);
6599 + skb->dev = wg->dev;
6600 + skb->mark = wg->fwmark;
6602 + rcu_read_lock_bh();
6603 + sock = rcu_dereference_bh(wg->sock4);
6605 + if (unlikely(!sock)) {
6610 + fl.fl4_sport = inet_sk(sock)->inet_sport;
6613 + rt = dst_cache_get_ip4(cache, &fl.saddr);
6616 + security_sk_classify_flow(sock, flowi4_to_flowi(&fl));
6617 + if (unlikely(!inet_confirm_addr(sock_net(sock), NULL, 0,
6618 + fl.saddr, RT_SCOPE_HOST))) {
6619 + endpoint->src4.s_addr = 0;
6620 + *(__force __be32 *)&endpoint->src_if4 = 0;
6623 + dst_cache_reset(cache);
6625 + rt = ip_route_output_flow(sock_net(sock), &fl, sock);
6626 + if (unlikely(endpoint->src_if4 && ((IS_ERR(rt) &&
6627 + PTR_ERR(rt) == -EINVAL) || (!IS_ERR(rt) &&
6628 + rt->dst.dev->ifindex != endpoint->src_if4)))) {
6629 + endpoint->src4.s_addr = 0;
6630 + *(__force __be32 *)&endpoint->src_if4 = 0;
6633 + dst_cache_reset(cache);
6636 + rt = ip_route_output_flow(sock_net(sock), &fl, sock);
6638 + if (unlikely(IS_ERR(rt))) {
6639 + ret = PTR_ERR(rt);
6640 + net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
6641 + wg->dev->name, &endpoint->addr, ret);
6643 + } else if (unlikely(rt->dst.dev == skb->dev)) {
6646 + net_dbg_ratelimited("%s: Avoiding routing loop to %pISpfsc\n",
6647 + wg->dev->name, &endpoint->addr);
6651 + dst_cache_set_ip4(cache, &rt->dst, fl.saddr);
6654 + skb->ignore_df = 1;
6655 + udp_tunnel_xmit_skb(rt, sock, skb, fl.saddr, fl.daddr, ds,
6656 + ip4_dst_hoplimit(&rt->dst), 0, fl.fl4_sport,
6657 + fl.fl4_dport, false, false);
6663 + rcu_read_unlock_bh();
6667 +static int send6(struct wg_device *wg, struct sk_buff *skb,
6668 + struct endpoint *endpoint, u8 ds, struct dst_cache *cache)
6670 +#if IS_ENABLED(CONFIG_IPV6)
6671 + struct flowi6 fl = {
6672 + .saddr = endpoint->src6,
6673 + .daddr = endpoint->addr6.sin6_addr,
6674 + .fl6_dport = endpoint->addr6.sin6_port,
6675 + .flowi6_mark = wg->fwmark,
6676 + .flowi6_oif = endpoint->addr6.sin6_scope_id,
6677 + .flowi6_proto = IPPROTO_UDP
6678 + /* TODO: addr->sin6_flowinfo */
6680 + struct dst_entry *dst = NULL;
6681 + struct sock *sock;
6684 + skb_mark_not_on_list(skb);
6685 + skb->dev = wg->dev;
6686 + skb->mark = wg->fwmark;
6688 + rcu_read_lock_bh();
6689 + sock = rcu_dereference_bh(wg->sock6);
6691 + if (unlikely(!sock)) {
6696 + fl.fl6_sport = inet_sk(sock)->inet_sport;
6699 + dst = dst_cache_get_ip6(cache, &fl.saddr);
6702 + security_sk_classify_flow(sock, flowi6_to_flowi(&fl));
6703 + if (unlikely(!ipv6_addr_any(&fl.saddr) &&
6704 + !ipv6_chk_addr(sock_net(sock), &fl.saddr, NULL, 0))) {
6705 + endpoint->src6 = fl.saddr = in6addr_any;
6707 + dst_cache_reset(cache);
6709 + dst = ipv6_stub->ipv6_dst_lookup_flow(sock_net(sock), sock, &fl,
6711 + if (unlikely(IS_ERR(dst))) {
6712 + ret = PTR_ERR(dst);
6713 + net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
6714 + wg->dev->name, &endpoint->addr, ret);
6716 + } else if (unlikely(dst->dev == skb->dev)) {
6719 + net_dbg_ratelimited("%s: Avoiding routing loop to %pISpfsc\n",
6720 + wg->dev->name, &endpoint->addr);
6724 + dst_cache_set_ip6(cache, dst, &fl.saddr);
6727 + skb->ignore_df = 1;
6728 + udp_tunnel6_xmit_skb(dst, sock, skb, skb->dev, &fl.saddr, &fl.daddr, ds,
6729 + ip6_dst_hoplimit(dst), 0, fl.fl6_sport,
6730 + fl.fl6_dport, false);
6736 + rcu_read_unlock_bh();
6739 + return -EAFNOSUPPORT;
6743 +int wg_socket_send_skb_to_peer(struct wg_peer *peer, struct sk_buff *skb, u8 ds)
6745 + size_t skb_len = skb->len;
6746 + int ret = -EAFNOSUPPORT;
6748 + read_lock_bh(&peer->endpoint_lock);
6749 + if (peer->endpoint.addr.sa_family == AF_INET)
6750 + ret = send4(peer->device, skb, &peer->endpoint, ds,
6751 + &peer->endpoint_cache);
6752 + else if (peer->endpoint.addr.sa_family == AF_INET6)
6753 + ret = send6(peer->device, skb, &peer->endpoint, ds,
6754 + &peer->endpoint_cache);
6756 + dev_kfree_skb(skb);
6758 + peer->tx_bytes += skb_len;
6759 + read_unlock_bh(&peer->endpoint_lock);
6764 +int wg_socket_send_buffer_to_peer(struct wg_peer *peer, void *buffer,
6765 + size_t len, u8 ds)
6767 + struct sk_buff *skb = alloc_skb(len + SKB_HEADER_LEN, GFP_ATOMIC);
6769 + if (unlikely(!skb))
6772 + skb_reserve(skb, SKB_HEADER_LEN);
6773 + skb_set_inner_network_header(skb, 0);
6774 + skb_put_data(skb, buffer, len);
6775 + return wg_socket_send_skb_to_peer(peer, skb, ds);
6778 +int wg_socket_send_buffer_as_reply_to_skb(struct wg_device *wg,
6779 + struct sk_buff *in_skb, void *buffer,
6783 + struct sk_buff *skb;
6784 + struct endpoint endpoint;
6786 + if (unlikely(!in_skb))
6788 + ret = wg_socket_endpoint_from_skb(&endpoint, in_skb);
6789 + if (unlikely(ret < 0))
6792 + skb = alloc_skb(len + SKB_HEADER_LEN, GFP_ATOMIC);
6793 + if (unlikely(!skb))
6795 + skb_reserve(skb, SKB_HEADER_LEN);
6796 + skb_set_inner_network_header(skb, 0);
6797 + skb_put_data(skb, buffer, len);
6799 + if (endpoint.addr.sa_family == AF_INET)
6800 + ret = send4(wg, skb, &endpoint, 0, NULL);
6801 + else if (endpoint.addr.sa_family == AF_INET6)
6802 + ret = send6(wg, skb, &endpoint, 0, NULL);
6803 + /* No other possibilities if the endpoint is valid, which it is,
6804 + * as we checked above.
6810 +int wg_socket_endpoint_from_skb(struct endpoint *endpoint,
6811 + const struct sk_buff *skb)
6813 + memset(endpoint, 0, sizeof(*endpoint));
6814 + if (skb->protocol == htons(ETH_P_IP)) {
6815 + endpoint->addr4.sin_family = AF_INET;
6816 + endpoint->addr4.sin_port = udp_hdr(skb)->source;
6817 + endpoint->addr4.sin_addr.s_addr = ip_hdr(skb)->saddr;
6818 + endpoint->src4.s_addr = ip_hdr(skb)->daddr;
6819 + endpoint->src_if4 = skb->skb_iif;
6820 + } else if (skb->protocol == htons(ETH_P_IPV6)) {
6821 + endpoint->addr6.sin6_family = AF_INET6;
6822 + endpoint->addr6.sin6_port = udp_hdr(skb)->source;
6823 + endpoint->addr6.sin6_addr = ipv6_hdr(skb)->saddr;
6824 + endpoint->addr6.sin6_scope_id = ipv6_iface_scope_id(
6825 + &ipv6_hdr(skb)->saddr, skb->skb_iif);
6826 + endpoint->src6 = ipv6_hdr(skb)->daddr;
6833 +static bool endpoint_eq(const struct endpoint *a, const struct endpoint *b)
6835 + return (a->addr.sa_family == AF_INET && b->addr.sa_family == AF_INET &&
6836 + a->addr4.sin_port == b->addr4.sin_port &&
6837 + a->addr4.sin_addr.s_addr == b->addr4.sin_addr.s_addr &&
6838 + a->src4.s_addr == b->src4.s_addr && a->src_if4 == b->src_if4) ||
6839 + (a->addr.sa_family == AF_INET6 &&
6840 + b->addr.sa_family == AF_INET6 &&
6841 + a->addr6.sin6_port == b->addr6.sin6_port &&
6842 + ipv6_addr_equal(&a->addr6.sin6_addr, &b->addr6.sin6_addr) &&
6843 + a->addr6.sin6_scope_id == b->addr6.sin6_scope_id &&
6844 + ipv6_addr_equal(&a->src6, &b->src6)) ||
6845 + unlikely(!a->addr.sa_family && !b->addr.sa_family);
6848 +void wg_socket_set_peer_endpoint(struct wg_peer *peer,
6849 + const struct endpoint *endpoint)
6851 + /* First we check unlocked, in order to optimize, since it's pretty rare
6852 + * that an endpoint will change. If we happen to be mid-write, and two
6853 + * CPUs wind up writing the same thing or something slightly different,
6854 + * it doesn't really matter much either.
6856 + if (endpoint_eq(endpoint, &peer->endpoint))
6858 + write_lock_bh(&peer->endpoint_lock);
6859 + if (endpoint->addr.sa_family == AF_INET) {
6860 + peer->endpoint.addr4 = endpoint->addr4;
6861 + peer->endpoint.src4 = endpoint->src4;
6862 + peer->endpoint.src_if4 = endpoint->src_if4;
6863 + } else if (endpoint->addr.sa_family == AF_INET6) {
6864 + peer->endpoint.addr6 = endpoint->addr6;
6865 + peer->endpoint.src6 = endpoint->src6;
6869 + dst_cache_reset(&peer->endpoint_cache);
6871 + write_unlock_bh(&peer->endpoint_lock);
6874 +void wg_socket_set_peer_endpoint_from_skb(struct wg_peer *peer,
6875 + const struct sk_buff *skb)
6877 + struct endpoint endpoint;
6879 + if (!wg_socket_endpoint_from_skb(&endpoint, skb))
6880 + wg_socket_set_peer_endpoint(peer, &endpoint);
6883 +void wg_socket_clear_peer_endpoint_src(struct wg_peer *peer)
6885 + write_lock_bh(&peer->endpoint_lock);
6886 + memset(&peer->endpoint.src6, 0, sizeof(peer->endpoint.src6));
6887 + dst_cache_reset(&peer->endpoint_cache);
6888 + write_unlock_bh(&peer->endpoint_lock);
6891 +static int wg_receive(struct sock *sk, struct sk_buff *skb)
6893 + struct wg_device *wg;
6895 + if (unlikely(!sk))
6897 + wg = sk->sk_user_data;
6898 + if (unlikely(!wg))
6900 + wg_packet_receive(wg, skb);
6908 +static void sock_free(struct sock *sock)
6910 + if (unlikely(!sock))
6912 + sk_clear_memalloc(sock);
6913 + udp_tunnel_sock_release(sock->sk_socket);
6916 +static void set_sock_opts(struct socket *sock)
6918 + sock->sk->sk_allocation = GFP_ATOMIC;
6919 + sock->sk->sk_sndbuf = INT_MAX;
6920 + sk_set_memalloc(sock->sk);
6923 +int wg_socket_init(struct wg_device *wg, u16 port)
6926 + struct udp_tunnel_sock_cfg cfg = {
6927 + .sk_user_data = wg,
6929 + .encap_rcv = wg_receive
6931 + struct socket *new4 = NULL, *new6 = NULL;
6932 + struct udp_port_cfg port4 = {
6933 + .family = AF_INET,
6934 + .local_ip.s_addr = htonl(INADDR_ANY),
6935 + .local_udp_port = htons(port),
6936 + .use_udp_checksums = true
6938 +#if IS_ENABLED(CONFIG_IPV6)
6940 + struct udp_port_cfg port6 = {
6941 + .family = AF_INET6,
6942 + .local_ip6 = IN6ADDR_ANY_INIT,
6943 + .use_udp6_tx_checksums = true,
6944 + .use_udp6_rx_checksums = true,
6945 + .ipv6_v6only = true
6949 +#if IS_ENABLED(CONFIG_IPV6)
6953 + ret = udp_sock_create(wg->creating_net, &port4, &new4);
6955 + pr_err("%s: Could not create IPv4 socket\n", wg->dev->name);
6958 + set_sock_opts(new4);
6959 + setup_udp_tunnel_sock(wg->creating_net, new4, &cfg);
6961 +#if IS_ENABLED(CONFIG_IPV6)
6962 + if (ipv6_mod_enabled()) {
6963 + port6.local_udp_port = inet_sk(new4->sk)->inet_sport;
6964 + ret = udp_sock_create(wg->creating_net, &port6, &new6);
6966 + udp_tunnel_sock_release(new4);
6967 + if (ret == -EADDRINUSE && !port && retries++ < 100)
6969 + pr_err("%s: Could not create IPv6 socket\n",
6973 + set_sock_opts(new6);
6974 + setup_udp_tunnel_sock(wg->creating_net, new6, &cfg);
6978 + wg_socket_reinit(wg, new4->sk, new6 ? new6->sk : NULL);
6982 +void wg_socket_reinit(struct wg_device *wg, struct sock *new4,
6983 + struct sock *new6)
6985 + struct sock *old4, *old6;
6987 + mutex_lock(&wg->socket_update_lock);
6988 + old4 = rcu_dereference_protected(wg->sock4,
6989 + lockdep_is_held(&wg->socket_update_lock));
6990 + old6 = rcu_dereference_protected(wg->sock6,
6991 + lockdep_is_held(&wg->socket_update_lock));
6992 + rcu_assign_pointer(wg->sock4, new4);
6993 + rcu_assign_pointer(wg->sock6, new6);
6995 + wg->incoming_port = ntohs(inet_sk(new4)->inet_sport);
6996 + mutex_unlock(&wg->socket_update_lock);
6997 + synchronize_rcu();
6998 + synchronize_net();
7003 +++ b/drivers/net/wireguard/socket.h
7005 +/* SPDX-License-Identifier: GPL-2.0 */
7007 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
7010 +#ifndef _WG_SOCKET_H
7011 +#define _WG_SOCKET_H
7013 +#include <linux/netdevice.h>
7014 +#include <linux/udp.h>
7015 +#include <linux/if_vlan.h>
7016 +#include <linux/if_ether.h>
7018 +int wg_socket_init(struct wg_device *wg, u16 port);
7019 +void wg_socket_reinit(struct wg_device *wg, struct sock *new4,
7020 + struct sock *new6);
7021 +int wg_socket_send_buffer_to_peer(struct wg_peer *peer, void *data,
7022 + size_t len, u8 ds);
7023 +int wg_socket_send_skb_to_peer(struct wg_peer *peer, struct sk_buff *skb,
7025 +int wg_socket_send_buffer_as_reply_to_skb(struct wg_device *wg,
7026 + struct sk_buff *in_skb,
7027 + void *out_buffer, size_t len);
7029 +int wg_socket_endpoint_from_skb(struct endpoint *endpoint,
7030 + const struct sk_buff *skb);
7031 +void wg_socket_set_peer_endpoint(struct wg_peer *peer,
7032 + const struct endpoint *endpoint);
7033 +void wg_socket_set_peer_endpoint_from_skb(struct wg_peer *peer,
7034 + const struct sk_buff *skb);
7035 +void wg_socket_clear_peer_endpoint_src(struct wg_peer *peer);
7037 +#if defined(CONFIG_DYNAMIC_DEBUG) || defined(DEBUG)
7038 +#define net_dbg_skb_ratelimited(fmt, dev, skb, ...) do { \
7039 + struct endpoint __endpoint; \
7040 + wg_socket_endpoint_from_skb(&__endpoint, skb); \
7041 + net_dbg_ratelimited(fmt, dev, &__endpoint.addr, \
7045 +#define net_dbg_skb_ratelimited(fmt, dev, skb, ...)
7048 +#endif /* _WG_SOCKET_H */
7050 +++ b/drivers/net/wireguard/timers.c
7052 +// SPDX-License-Identifier: GPL-2.0
7054 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
7057 +#include "timers.h"
7058 +#include "device.h"
7060 +#include "queueing.h"
7061 +#include "socket.h"
7064 + * - Timer for retransmitting the handshake if we don't hear back after
7065 + * `REKEY_TIMEOUT + jitter` ms.
7067 + * - Timer for sending an empty packet if we have received a packet but have
7068 + * not sent one for `KEEPALIVE_TIMEOUT` ms.
7070 + * - Timer for initiating a new handshake if we have sent a packet but have
7071 + * not received one (even an empty one) for `(KEEPALIVE_TIMEOUT + REKEY_TIMEOUT) +
7074 + * - Timer for zeroing out all ephemeral keys after `(REJECT_AFTER_TIME * 3)` ms
7075 + * if no new keys have been received.
7077 + * - Timer for, if enabled, sending an empty authenticated packet every user-
7078 + * specified number of seconds.
7081 +static inline void mod_peer_timer(struct wg_peer *peer,
7082 + struct timer_list *timer,
7083 + unsigned long expires)
7085 + rcu_read_lock_bh();
7086 + if (likely(netif_running(peer->device->dev) &&
7087 + !READ_ONCE(peer->is_dead)))
7088 + mod_timer(timer, expires);
7089 + rcu_read_unlock_bh();
7092 +static void wg_expired_retransmit_handshake(struct timer_list *timer)
7094 + struct wg_peer *peer = from_timer(peer, timer,
7095 + timer_retransmit_handshake);
7097 + if (peer->timer_handshake_attempts > MAX_TIMER_HANDSHAKES) {
7098 + pr_debug("%s: Handshake for peer %llu (%pISpfsc) did not complete after %d attempts, giving up\n",
7099 + peer->device->dev->name, peer->internal_id,
7100 + &peer->endpoint.addr, MAX_TIMER_HANDSHAKES + 2);
7102 + del_timer(&peer->timer_send_keepalive);
7103 + /* We drop all packets without a keypair and don't try again
7104 + * if we've tried unsuccessfully for too long to make a handshake.
7106 + wg_packet_purge_staged_packets(peer);
7108 + /* We set a timer for destroying any residue that might be left
7109 + * of a partial exchange.
7111 + if (!timer_pending(&peer->timer_zero_key_material))
7112 + mod_peer_timer(peer, &peer->timer_zero_key_material,
7113 + jiffies + REJECT_AFTER_TIME * 3 * HZ);
7115 + ++peer->timer_handshake_attempts;
7116 + pr_debug("%s: Handshake for peer %llu (%pISpfsc) did not complete after %d seconds, retrying (try %d)\n",
7117 + peer->device->dev->name, peer->internal_id,
7118 + &peer->endpoint.addr, REKEY_TIMEOUT,
7119 + peer->timer_handshake_attempts + 1);
7121 + /* We clear the endpoint source address, in case this is
7122 + * the cause of trouble.
7124 + wg_socket_clear_peer_endpoint_src(peer);
7126 + wg_packet_send_queued_handshake_initiation(peer, true);
7130 +static void wg_expired_send_keepalive(struct timer_list *timer)
7132 + struct wg_peer *peer = from_timer(peer, timer, timer_send_keepalive);
7134 + wg_packet_send_keepalive(peer);
7135 + if (peer->timer_need_another_keepalive) {
7136 + peer->timer_need_another_keepalive = false;
7137 + mod_peer_timer(peer, &peer->timer_send_keepalive,
7138 + jiffies + KEEPALIVE_TIMEOUT * HZ);
7142 +static void wg_expired_new_handshake(struct timer_list *timer)
7144 + struct wg_peer *peer = from_timer(peer, timer, timer_new_handshake);
7146 + pr_debug("%s: Retrying handshake with peer %llu (%pISpfsc) because we stopped hearing back after %d seconds\n",
7147 + peer->device->dev->name, peer->internal_id,
7148 + &peer->endpoint.addr, KEEPALIVE_TIMEOUT + REKEY_TIMEOUT);
7149 + /* We clear the endpoint source address, in case this is the cause of trouble.
7152 + wg_socket_clear_peer_endpoint_src(peer);
7153 + wg_packet_send_queued_handshake_initiation(peer, false);
7156 +static void wg_expired_zero_key_material(struct timer_list *timer)
7158 + struct wg_peer *peer = from_timer(peer, timer, timer_zero_key_material);
7160 + rcu_read_lock_bh();
7161 + if (!READ_ONCE(peer->is_dead)) {
7162 + wg_peer_get(peer);
7163 + if (!queue_work(peer->device->handshake_send_wq,
7164 + &peer->clear_peer_work))
7165 + /* If the work was already on the queue, we want to drop
7166 + * the extra reference.
7168 + wg_peer_put(peer);
7170 + rcu_read_unlock_bh();
7173 +static void wg_queued_expired_zero_key_material(struct work_struct *work)
7175 + struct wg_peer *peer = container_of(work, struct wg_peer,
7178 + pr_debug("%s: Zeroing out all keys for peer %llu (%pISpfsc), since we haven't received a new one in %d seconds\n",
7179 + peer->device->dev->name, peer->internal_id,
7180 + &peer->endpoint.addr, REJECT_AFTER_TIME * 3);
7181 + wg_noise_handshake_clear(&peer->handshake);
7182 + wg_noise_keypairs_clear(&peer->keypairs);
7183 + wg_peer_put(peer);
7186 +static void wg_expired_send_persistent_keepalive(struct timer_list *timer)
7188 + struct wg_peer *peer = from_timer(peer, timer,
7189 + timer_persistent_keepalive);
7191 + if (likely(peer->persistent_keepalive_interval))
7192 + wg_packet_send_keepalive(peer);
7195 +/* Should be called after an authenticated data packet is sent. */
7196 +void wg_timers_data_sent(struct wg_peer *peer)
7198 + if (!timer_pending(&peer->timer_new_handshake))
7199 + mod_peer_timer(peer, &peer->timer_new_handshake,
7200 + jiffies + (KEEPALIVE_TIMEOUT + REKEY_TIMEOUT) * HZ +
7201 + prandom_u32_max(REKEY_TIMEOUT_JITTER_MAX_JIFFIES));
7204 +/* Should be called after an authenticated data packet is received. */
7205 +void wg_timers_data_received(struct wg_peer *peer)
7207 + if (likely(netif_running(peer->device->dev))) {
7208 + if (!timer_pending(&peer->timer_send_keepalive))
7209 + mod_peer_timer(peer, &peer->timer_send_keepalive,
7210 + jiffies + KEEPALIVE_TIMEOUT * HZ);
7212 + peer->timer_need_another_keepalive = true;
7216 +/* Should be called after any type of authenticated packet is sent, whether
7217 + * keepalive, data, or handshake.
7219 +void wg_timers_any_authenticated_packet_sent(struct wg_peer *peer)
7221 + del_timer(&peer->timer_send_keepalive);
7224 +/* Should be called after any type of authenticated packet is received, whether
7225 + * keepalive, data, or handshake.
7227 +void wg_timers_any_authenticated_packet_received(struct wg_peer *peer)
7229 + del_timer(&peer->timer_new_handshake);
7232 +/* Should be called after a handshake initiation message is sent. */
7233 +void wg_timers_handshake_initiated(struct wg_peer *peer)
7235 + mod_peer_timer(peer, &peer->timer_retransmit_handshake,
7236 + jiffies + REKEY_TIMEOUT * HZ +
7237 + prandom_u32_max(REKEY_TIMEOUT_JITTER_MAX_JIFFIES));
7240 +/* Should be called after a handshake response message is received and processed
7241 + * or when getting key confirmation via the first data message.
7243 +void wg_timers_handshake_complete(struct wg_peer *peer)
7245 + del_timer(&peer->timer_retransmit_handshake);
7246 + peer->timer_handshake_attempts = 0;
7247 + peer->sent_lastminute_handshake = false;
7248 + ktime_get_real_ts64(&peer->walltime_last_handshake);
7251 +/* Should be called after an ephemeral key is created, which is before sending a
7252 + * handshake response or after receiving a handshake response.
7254 +void wg_timers_session_derived(struct wg_peer *peer)
7256 + mod_peer_timer(peer, &peer->timer_zero_key_material,
7257 + jiffies + REJECT_AFTER_TIME * 3 * HZ);
7260 +/* Should be called before a packet with authentication, whether
7261 + * keepalive, data, or handshake, is sent, or after one is received.
7263 +void wg_timers_any_authenticated_packet_traversal(struct wg_peer *peer)
7265 + if (peer->persistent_keepalive_interval)
7266 + mod_peer_timer(peer, &peer->timer_persistent_keepalive,
7267 + jiffies + peer->persistent_keepalive_interval * HZ);
7270 +void wg_timers_init(struct wg_peer *peer)
7272 + timer_setup(&peer->timer_retransmit_handshake,
7273 + wg_expired_retransmit_handshake, 0);
7274 + timer_setup(&peer->timer_send_keepalive, wg_expired_send_keepalive, 0);
7275 + timer_setup(&peer->timer_new_handshake, wg_expired_new_handshake, 0);
7276 + timer_setup(&peer->timer_zero_key_material,
7277 + wg_expired_zero_key_material, 0);
7278 + timer_setup(&peer->timer_persistent_keepalive,
7279 + wg_expired_send_persistent_keepalive, 0);
7280 + INIT_WORK(&peer->clear_peer_work, wg_queued_expired_zero_key_material);
7281 + peer->timer_handshake_attempts = 0;
7282 + peer->sent_lastminute_handshake = false;
7283 + peer->timer_need_another_keepalive = false;
7286 +void wg_timers_stop(struct wg_peer *peer)
7288 + del_timer_sync(&peer->timer_retransmit_handshake);
7289 + del_timer_sync(&peer->timer_send_keepalive);
7290 + del_timer_sync(&peer->timer_new_handshake);
7291 + del_timer_sync(&peer->timer_zero_key_material);
7292 + del_timer_sync(&peer->timer_persistent_keepalive);
7293 + flush_work(&peer->clear_peer_work);
7296 +++ b/drivers/net/wireguard/timers.h
7298 +/* SPDX-License-Identifier: GPL-2.0 */
7300 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
7303 +#ifndef _WG_TIMERS_H
7304 +#define _WG_TIMERS_H
7306 +#include <linux/ktime.h>
7310 +void wg_timers_init(struct wg_peer *peer);
7311 +void wg_timers_stop(struct wg_peer *peer);
7312 +void wg_timers_data_sent(struct wg_peer *peer);
7313 +void wg_timers_data_received(struct wg_peer *peer);
7314 +void wg_timers_any_authenticated_packet_sent(struct wg_peer *peer);
7315 +void wg_timers_any_authenticated_packet_received(struct wg_peer *peer);
7316 +void wg_timers_handshake_initiated(struct wg_peer *peer);
7317 +void wg_timers_handshake_complete(struct wg_peer *peer);
7318 +void wg_timers_session_derived(struct wg_peer *peer);
7319 +void wg_timers_any_authenticated_packet_traversal(struct wg_peer *peer);
7321 +static inline bool wg_birthdate_has_expired(u64 birthday_nanoseconds,
7322 + u64 expiration_seconds)
7324 + return (s64)(birthday_nanoseconds + expiration_seconds * NSEC_PER_SEC)
7325 + <= (s64)ktime_get_coarse_boottime_ns();
7328 +#endif /* _WG_TIMERS_H */
7330 +++ b/drivers/net/wireguard/version.h
7332 +#define WIREGUARD_VERSION "1.0.0"
7334 +++ b/include/uapi/linux/wireguard.h
7336 +/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR MIT */
7338 + * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
7343 + * The below enums and macros are for interfacing with WireGuard, using generic
7344 + * netlink, with family WG_GENL_NAME and version WG_GENL_VERSION. It defines two
7345 + * methods: get and set. Note that while they share many common attributes,
7346 + * these two functions actually accept a slightly different set of inputs and outputs.
7349 + * WG_CMD_GET_DEVICE
7350 + * -----------------
7352 + * May only be called via NLM_F_REQUEST | NLM_F_DUMP. The command should contain
7353 + * one but not both of:
7355 + * WGDEVICE_A_IFINDEX: NLA_U32
7356 + * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMSIZ - 1
7358 + * The kernel will then return several messages (NLM_F_MULTI) containing the
7359 + * following tree of nested items:
7361 + * WGDEVICE_A_IFINDEX: NLA_U32
7362 + * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMSIZ - 1
7363 + * WGDEVICE_A_PRIVATE_KEY: NLA_EXACT_LEN, len WG_KEY_LEN
7364 + * WGDEVICE_A_PUBLIC_KEY: NLA_EXACT_LEN, len WG_KEY_LEN
7365 + * WGDEVICE_A_LISTEN_PORT: NLA_U16
7366 + * WGDEVICE_A_FWMARK: NLA_U32
7367 + * WGDEVICE_A_PEERS: NLA_NESTED
7369 + * WGPEER_A_PUBLIC_KEY: NLA_EXACT_LEN, len WG_KEY_LEN
7370 + * WGPEER_A_PRESHARED_KEY: NLA_EXACT_LEN, len WG_KEY_LEN
7371 + * WGPEER_A_ENDPOINT: NLA_MIN_LEN(struct sockaddr), struct sockaddr_in or struct sockaddr_in6
7372 + * WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL: NLA_U16
7373 + * WGPEER_A_LAST_HANDSHAKE_TIME: NLA_EXACT_LEN, struct __kernel_timespec
7374 + * WGPEER_A_RX_BYTES: NLA_U64
7375 + * WGPEER_A_TX_BYTES: NLA_U64
7376 + * WGPEER_A_ALLOWEDIPS: NLA_NESTED
7378 + * WGALLOWEDIP_A_FAMILY: NLA_U16
7379 + * WGALLOWEDIP_A_IPADDR: NLA_MIN_LEN(struct in_addr), struct in_addr or struct in6_addr
7380 + * WGALLOWEDIP_A_CIDR_MASK: NLA_U8
7386 + * WGPEER_A_PROTOCOL_VERSION: NLA_U32
7391 + * It is possible that not all of the allowed IPs of a single peer will
7392 + * fit within a single netlink message. In that case, the same peer will
7393 + * be written in the following message, except it will only contain
7394 + * WGPEER_A_PUBLIC_KEY and WGPEER_A_ALLOWEDIPS. This may occur several
7395 + * times in a row for the same peer. It is then up to the receiver to
7396 + * coalesce adjacent peers. Likewise, it is possible that not all peers
7397 + * will fit within a single message. So, subsequent peers will be sent
7398 + * in following messages, except those will only contain WGDEVICE_A_IFNAME
7399 + * and WGDEVICE_A_PEERS. It is then up to the receiver to coalesce these
7400 + * messages to form the complete list of peers.
7402 + * Since this is an NLM_F_DUMP command, the final message will always be
7403 + * NLMSG_DONE, even if an error occurs. However, this NLMSG_DONE message
7404 + * contains an integer error code. It is either zero or a negative error
7405 + * code corresponding to the errno.
7407 + * WG_CMD_SET_DEVICE
7408 + * -----------------
7410 + * May only be called via NLM_F_REQUEST. The command should contain the
7411 + * following tree of nested items, containing one but not both of
7412 + * WGDEVICE_A_IFINDEX and WGDEVICE_A_IFNAME:
7414 + * WGDEVICE_A_IFINDEX: NLA_U32
7415 + * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMSIZ - 1
7416 + * WGDEVICE_A_FLAGS: NLA_U32, 0 or WGDEVICE_F_REPLACE_PEERS if all current
7417 + * peers should be removed prior to adding the list below.
7418 + * WGDEVICE_A_PRIVATE_KEY: len WG_KEY_LEN, all zeros to remove
7419 + * WGDEVICE_A_LISTEN_PORT: NLA_U16, 0 to choose randomly
7420 + * WGDEVICE_A_FWMARK: NLA_U32, 0 to disable
7421 + * WGDEVICE_A_PEERS: NLA_NESTED
7423 + * WGPEER_A_PUBLIC_KEY: len WG_KEY_LEN
7424 + * WGPEER_A_FLAGS: NLA_U32, 0 and/or WGPEER_F_REMOVE_ME if the
7425 + * specified peer should not exist at the end of the
7426 + * operation, rather than added/updated and/or
7427 + * WGPEER_F_REPLACE_ALLOWEDIPS if all current allowed
7428 + * IPs of this peer should be removed prior to adding
7429 + * the list below and/or WGPEER_F_UPDATE_ONLY if the
7430 + * peer should only be set if it already exists.
7431 + * WGPEER_A_PRESHARED_KEY: len WG_KEY_LEN, all zeros to remove
7432 + * WGPEER_A_ENDPOINT: struct sockaddr_in or struct sockaddr_in6
7433 + * WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL: NLA_U16, 0 to disable
7434 + * WGPEER_A_ALLOWEDIPS: NLA_NESTED
7436 + * WGALLOWEDIP_A_FAMILY: NLA_U16
7437 + * WGALLOWEDIP_A_IPADDR: struct in_addr or struct in6_addr
7438 + * WGALLOWEDIP_A_CIDR_MASK: NLA_U8
7444 + * WGPEER_A_PROTOCOL_VERSION: NLA_U32, should not be set or used at
7445 + * all by most users of this API, as the
7446 + * most recent protocol will be used when
7447 + * this is unset. Otherwise, must be set to 1.
7453 + * It is possible that the amount of configuration data exceeds
7454 + * the maximum message length accepted by the kernel. In that case, several
7455 + * messages should be sent one after another, with each successive one
7456 + * filling in information not contained in the prior. Note that if
7457 + * WGDEVICE_F_REPLACE_PEERS is specified in the first message, it probably
7458 + * should not be specified in fragments that come after, so that the list
7459 + * of peers is only cleared the first time but appended to afterward. Likewise for
7460 + * peers, if WGPEER_F_REPLACE_ALLOWEDIPS is specified in the first message
7461 + * of a peer, it likely should not be specified in subsequent fragments.
7463 + * If an error occurs, the kernel replies with NLMSG_ERROR, containing an errno.
7466 +#ifndef _WG_UAPI_WIREGUARD_H
7467 +#define _WG_UAPI_WIREGUARD_H
7469 +#define WG_GENL_NAME "wireguard"
7470 +#define WG_GENL_VERSION 1
7472 +#define WG_KEY_LEN 32
7475 + WG_CMD_GET_DEVICE,
7476 + WG_CMD_SET_DEVICE,
7479 +#define WG_CMD_MAX (__WG_CMD_MAX - 1)
7481 +enum wgdevice_flag {
7482 + WGDEVICE_F_REPLACE_PEERS = 1U << 0,
7483 + __WGDEVICE_F_ALL = WGDEVICE_F_REPLACE_PEERS
7485 +enum wgdevice_attribute {
7486 + WGDEVICE_A_UNSPEC,
7487 + WGDEVICE_A_IFINDEX,
7488 + WGDEVICE_A_IFNAME,
7489 + WGDEVICE_A_PRIVATE_KEY,
7490 + WGDEVICE_A_PUBLIC_KEY,
7492 + WGDEVICE_A_LISTEN_PORT,
7493 + WGDEVICE_A_FWMARK,
7497 +#define WGDEVICE_A_MAX (__WGDEVICE_A_LAST - 1)
7500 + WGPEER_F_REMOVE_ME = 1U << 0,
7501 + WGPEER_F_REPLACE_ALLOWEDIPS = 1U << 1,
7502 + WGPEER_F_UPDATE_ONLY = 1U << 2,
7503 + __WGPEER_F_ALL = WGPEER_F_REMOVE_ME | WGPEER_F_REPLACE_ALLOWEDIPS |
7504 + WGPEER_F_UPDATE_ONLY
7506 +enum wgpeer_attribute {
7508 + WGPEER_A_PUBLIC_KEY,
7509 + WGPEER_A_PRESHARED_KEY,
7511 + WGPEER_A_ENDPOINT,
7512 + WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL,
7513 + WGPEER_A_LAST_HANDSHAKE_TIME,
7514 + WGPEER_A_RX_BYTES,
7515 + WGPEER_A_TX_BYTES,
7516 + WGPEER_A_ALLOWEDIPS,
7517 + WGPEER_A_PROTOCOL_VERSION,
7520 +#define WGPEER_A_MAX (__WGPEER_A_LAST - 1)
7522 +enum wgallowedip_attribute {
7523 + WGALLOWEDIP_A_UNSPEC,
7524 + WGALLOWEDIP_A_FAMILY,
7525 + WGALLOWEDIP_A_IPADDR,
7526 + WGALLOWEDIP_A_CIDR_MASK,
7527 + __WGALLOWEDIP_A_LAST
7529 +#define WGALLOWEDIP_A_MAX (__WGALLOWEDIP_A_LAST - 1)
7531 +#endif /* _WG_UAPI_WIREGUARD_H */
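The multi-message fragmentation rule described in the header's documentation (WGDEVICE_F_REPLACE_PEERS only on the first fragment) can be sketched as a pure-bash simulation. This is an illustration only, not the real netlink marshalling, which wg(8) performs internally; the peer names and batch size here are made up:

```shell
# Simulate splitting a large peer list into netlink-sized batches.
# WGDEVICE_F_REPLACE_PEERS is set only on the first fragment, so that
# later fragments append to, rather than clear, the kernel's peer list.
peers=(peerA peerB peerC peerD peerE)   # stand-ins for real public keys
batch_size=2
messages=()

for (( start = 0; start < ${#peers[@]}; start += batch_size )); do
        flags=""
        (( start == 0 )) && flags="WGDEVICE_F_REPLACE_PEERS"
        messages+=( "flags=[$flags] peers=[${peers[*]:start:batch_size}]" )
done

printf '%s\n' "${messages[@]}"
```

The same reasoning applies per-peer for WGPEER_F_REPLACE_ALLOWEDIPS: clear once, then append.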
7533 +++ b/tools/testing/selftests/wireguard/netns.sh
7536 +# SPDX-License-Identifier: GPL-2.0
7538 +# Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
7540 +# This script tests the below topology:
7542 +# ┌─────────────────────┐ ┌──────────────────────────────────┐ ┌─────────────────────┐
7543 +# │ $ns1 namespace │ │ $ns0 namespace │ │ $ns2 namespace │
7545 +# │┌────────┐ │ │ ┌────────┐ │ │ ┌────────┐│
7546 +# ││ wg0 │───────────┼───┼────────────│ lo │────────────┼───┼───────────│ wg0 ││
7547 +# │├────────┴──────────┐│ │ ┌───────┴────────┴────────┐ │ │┌──────────┴────────┤│
7548 +# ││192.168.241.1/24 ││ │ │(ns1) (ns2) │ │ ││192.168.241.2/24 ││
7549 +# ││fd00::1/24 ││ │ │127.0.0.1:1 127.0.0.1:2│ │ ││fd00::2/24 ││
7550 +# │└───────────────────┘│ │ │[::]:1 [::]:2 │ │ │└───────────────────┘│
7551 +# └─────────────────────┘ │ └─────────────────────────┘ │ └─────────────────────┘
7552 +# └──────────────────────────────────┘
7554 +# After the topology is prepared we run a series of TCP/UDP iperf3 tests between the
7555 +# wireguard peers in $ns1 and $ns2. Note that $ns0 is the endpoint for the wg0
7556 +# interfaces in $ns1 and $ns2. See https://www.wireguard.com/netns/ for further
7557 +# details on how this is accomplished.
7561 +export WG_HIDE_KEYS=never
7562 +netns0="wg-test-$$-0"
7563 +netns1="wg-test-$$-1"
7564 +netns2="wg-test-$$-2"
7565 +pretty() { echo -e "\x1b[32m\x1b[1m[+] ${1:+NS$1: }${2}\x1b[0m" >&3; }
7566 +pp() { pretty "" "$*"; "$@"; }
7567 +maybe_exec() { if [[ $BASHPID -eq $$ ]]; then "$@"; else exec "$@"; fi; }
7568 +n0() { pretty 0 "$*"; maybe_exec ip netns exec $netns0 "$@"; }
7569 +n1() { pretty 1 "$*"; maybe_exec ip netns exec $netns1 "$@"; }
7570 +n2() { pretty 2 "$*"; maybe_exec ip netns exec $netns2 "$@"; }
7571 +ip0() { pretty 0 "ip $*"; ip -n $netns0 "$@"; }
7572 +ip1() { pretty 1 "ip $*"; ip -n $netns1 "$@"; }
7573 +ip2() { pretty 2 "ip $*"; ip -n $netns2 "$@"; }
7574 +sleep() { read -t "$1" -N 0 || true; }
7575 +waitiperf() { pretty "${1//*-}" "wait for iperf:5201"; while [[ $(ss -N "$1" -tlp 'sport = 5201') != *iperf3* ]]; do sleep 0.1; done; }
7576 +waitncatudp() { pretty "${1//*-}" "wait for udp:1111"; while [[ $(ss -N "$1" -ulp 'sport = 1111') != *ncat* ]]; do sleep 0.1; done; }
7577 +waitncattcp() { pretty "${1//*-}" "wait for tcp:1111"; while [[ $(ss -N "$1" -tlp 'sport = 1111') != *ncat* ]]; do sleep 0.1; done; }
7578 +waitiface() { pretty "${1//*-}" "wait for $2 to come up"; ip netns exec "$1" bash -c "while [[ \$(< \"/sys/class/net/$2/operstate\") != up ]]; do read -t .1 -N 0 || true; done;"; }

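Two of the helpers above rely on bash subtleties worth noting: `sleep` is redefined with `read -t` so no external process is forked per delay, and `maybe_exec` compares $BASHPID against $$ to detect whether it is running in a subshell (where exec would be safe). A minimal demonstration of both tricks; the `<> <(:)` redirection is an assumption added here so the timeout works even when stdin is not a terminal:

```shell
# Sleep without spawning a process: read with a timeout from a pipe that
# never produces data. Opening the process substitution read-write keeps
# the pipe from hitting EOF; "|| true" swallows read's timeout status.
snooze() { read -t "$1" -N 0 <> <(:) || true; }

start=$SECONDS
snooze 2
elapsed=$(( SECONDS - start ))

# maybe_exec's check: $$ is the main shell's pid and never changes, while
# $BASHPID is the current process, so the two differ inside a subshell.
in_subshell=$(if [[ $BASHPID -eq $$ ]]; then echo no; else echo yes; fi)
```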
7583 + printf "$orig_message_cost" > /proc/sys/net/core/message_cost
7584 + ip0 link del dev wg0
7585 + ip1 link del dev wg0
7586 + ip2 link del dev wg0
7587 + local to_kill="$(ip netns pids $netns0) $(ip netns pids $netns1) $(ip netns pids $netns2)"
7588 + [[ -n $to_kill ]] && kill $to_kill
7589 + pp ip netns del $netns1
7590 + pp ip netns del $netns2
7591 + pp ip netns del $netns0
7595 +orig_message_cost="$(< /proc/sys/net/core/message_cost)"
7597 +printf 0 > /proc/sys/net/core/message_cost
7599 +ip netns del $netns0 2>/dev/null || true
7600 +ip netns del $netns1 2>/dev/null || true
7601 +ip netns del $netns2 2>/dev/null || true
7602 +pp ip netns add $netns0
7603 +pp ip netns add $netns1
7604 +pp ip netns add $netns2
7605 +ip0 link set up dev lo
7607 +ip0 link add dev wg0 type wireguard
7608 +ip0 link set wg0 netns $netns1
7609 +ip0 link add dev wg0 type wireguard
7610 +ip0 link set wg0 netns $netns2
7611 +key1="$(pp wg genkey)"
7612 +key2="$(pp wg genkey)"
7613 +key3="$(pp wg genkey)"
7614 +pub1="$(pp wg pubkey <<<"$key1")"
7615 +pub2="$(pp wg pubkey <<<"$key2")"
7616 +pub3="$(pp wg pubkey <<<"$key3")"
7617 +psk="$(pp wg genpsk)"
7618 +[[ -n $key1 && -n $key2 && -n $psk ]]
7620 +configure_peers() {
7621 + ip1 addr add 192.168.241.1/24 dev wg0
7622 + ip1 addr add fd00::1/24 dev wg0
7624 + ip2 addr add 192.168.241.2/24 dev wg0
7625 + ip2 addr add fd00::2/24 dev wg0
7628 + private-key <(echo "$key1") \
7631 + preshared-key <(echo "$psk") \
7632 + allowed-ips 192.168.241.2/32,fd00::2/128
7634 + private-key <(echo "$key2") \
7637 + preshared-key <(echo "$psk") \
7638 + allowed-ips 192.168.241.1/32,fd00::1/128
7640 + ip1 link set up dev wg0
7641 + ip2 link set up dev wg0
7647 + n2 ping -c 10 -f -W 1 192.168.241.1
7648 + n1 ping -c 10 -f -W 1 192.168.241.2
7651 + n2 ping6 -c 10 -f -W 1 fd00::1
7652 + n1 ping6 -c 10 -f -W 1 fd00::2
7655 + n2 iperf3 -s -1 -B 192.168.241.2 &
7657 + n1 iperf3 -Z -t 3 -c 192.168.241.2
7660 + n1 iperf3 -s -1 -B fd00::1 &
7662 + n2 iperf3 -Z -t 3 -c fd00::1
7665 + n1 iperf3 -s -1 -B 192.168.241.1 &
7667 + n2 iperf3 -Z -t 3 -b 0 -u -c 192.168.241.1
7670 + n2 iperf3 -s -1 -B fd00::2 &
7672 + n1 iperf3 -Z -t 3 -b 0 -u -c fd00::2
7675 +[[ $(ip1 link show dev wg0) =~ mtu\ ([0-9]+) ]] && orig_mtu="${BASH_REMATCH[1]}"
7676 +big_mtu=$(( 34816 - 1500 + $orig_mtu ))
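The MTU capture above uses bash's `=~` operator with BASH_REMATCH; against a synthetic `ip link show`-style line (fabricated here for illustration), the same pattern and arithmetic behave like this:

```shell
# Synthetic stand-in for the output of `ip link show dev wg0`
link_line="7: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN"

# Same capture as the test script: the numeric MTU lands in BASH_REMATCH[1]
if [[ $link_line =~ mtu\ ([0-9]+) ]]; then
        orig_mtu="${BASH_REMATCH[1]}"
fi

# Derive the jumbo MTU relative to a 1500-byte baseline, so the same
# headroom is kept whatever orig_mtu the interface came up with.
big_mtu=$(( 34816 - 1500 + orig_mtu ))
```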
7678 +# Test using IPv4 as outer transport
7679 +n1 wg set wg0 peer "$pub2" endpoint 127.0.0.1:2
7680 +n2 wg set wg0 peer "$pub1" endpoint 127.0.0.1:1
7681 +# Before running the tests, we first make sure that the stats counters and timestamper are working
7682 +n2 ping -c 10 -f -W 1 192.168.241.1
7683 +{ read _; read _; read _; read rx_bytes _; read _; read tx_bytes _; } < <(ip2 -stats link show dev wg0)
7684 +(( rx_bytes == 1372 && (tx_bytes == 1428 || tx_bytes == 1460) ))
7685 +{ read _; read _; read _; read rx_bytes _; read _; read tx_bytes _; } < <(ip1 -stats link show dev wg0)
7686 +(( tx_bytes == 1372 && (rx_bytes == 1428 || rx_bytes == 1460) ))
7687 +read _ rx_bytes tx_bytes < <(n2 wg show wg0 transfer)
7688 +(( rx_bytes == 1372 && (tx_bytes == 1428 || tx_bytes == 1460) ))
7689 +read _ rx_bytes tx_bytes < <(n1 wg show wg0 transfer)
7690 +(( tx_bytes == 1372 && (rx_bytes == 1428 || rx_bytes == 1460) ))
7691 +read _ timestamp < <(n1 wg show wg0 latest-handshakes)
7692 +(( timestamp != 0 ))
7695 +ip1 link set wg0 mtu $big_mtu
7696 +ip2 link set wg0 mtu $big_mtu
7699 +ip1 link set wg0 mtu $orig_mtu
7700 +ip2 link set wg0 mtu $orig_mtu
7702 +# Test using IPv6 as outer transport
7703 +n1 wg set wg0 peer "$pub2" endpoint [::1]:2
7704 +n2 wg set wg0 peer "$pub1" endpoint [::1]:1
7706 +ip1 link set wg0 mtu $big_mtu
7707 +ip2 link set wg0 mtu $big_mtu
7710 +# Test that route MTUs work with the padding
7711 +ip1 link set wg0 mtu 1300
7712 +ip2 link set wg0 mtu 1300
7713 +n1 wg set wg0 peer "$pub2" endpoint 127.0.0.1:2
7714 +n2 wg set wg0 peer "$pub1" endpoint 127.0.0.1:1
7715 +n0 iptables -A INPUT -m length --length 1360 -j DROP
7716 +n1 ip route add 192.168.241.2/32 dev wg0 mtu 1299
7717 +n2 ip route add 192.168.241.1/32 dev wg0 mtu 1299
7718 +n2 ping -c 1 -W 1 -s 1269 192.168.241.1
7719 +n2 ip route delete 192.168.241.1/32 dev wg0 mtu 1299
7720 +n1 ip route delete 192.168.241.2/32 dev wg0 mtu 1299
7721 +n0 iptables -F INPUT
7723 +ip1 link set wg0 mtu $orig_mtu
7724 +ip2 link set wg0 mtu $orig_mtu
7726 +# Test using IPv4 that roaming works
7727 +ip0 -4 addr del 127.0.0.1/8 dev lo
7728 +ip0 -4 addr add 127.212.121.99/8 dev lo
7729 +n1 wg set wg0 listen-port 9999
7730 +n1 wg set wg0 peer "$pub2" endpoint 127.0.0.1:2
7731 +n1 ping6 -W 1 -c 1 fd00::2
7732 +[[ $(n2 wg show wg0 endpoints) == "$pub1 127.212.121.99:9999" ]]
7734 +# Test using IPv6 that roaming works
7735 +n1 wg set wg0 listen-port 9998
7736 +n1 wg set wg0 peer "$pub2" endpoint [::1]:2
7737 +n1 ping -W 1 -c 1 192.168.241.2
7738 +[[ $(n2 wg show wg0 endpoints) == "$pub1 [::1]:9998" ]]
7740 +# Test that crypto-RP filter works
7741 +n1 wg set wg0 peer "$pub2" allowed-ips 192.168.241.0/24
7742 +exec 4< <(n1 ncat -l -u -p 1111)
7744 +waitncatudp $netns1
7745 +n2 ncat -u 192.168.241.1 1111 <<<"X"
7746 +read -r -N 1 -t 1 out <&4 && [[ $out == "X" ]]
7748 +more_specific_key="$(pp wg genkey | pp wg pubkey)"
7749 +n1 wg set wg0 peer "$more_specific_key" allowed-ips 192.168.241.2/32
7750 +n2 wg set wg0 listen-port 9997
7751 +exec 4< <(n1 ncat -l -u -p 1111)
7753 +waitncatudp $netns1
7754 +n2 ncat -u 192.168.241.1 1111 <<<"X"
7755 +! read -r -N 1 -t 1 out <&4 || false
7757 +n1 wg set wg0 peer "$more_specific_key" remove
7758 +[[ $(n1 wg show wg0 endpoints) == "$pub2 [::1]:9997" ]]
7760 +# Test that we can change private keys and immediately handshake
7761 +n1 wg set wg0 private-key <(echo "$key1") peer "$pub2" preshared-key <(echo "$psk") allowed-ips 192.168.241.2/32 endpoint 127.0.0.1:2
7762 +n2 wg set wg0 private-key <(echo "$key2") listen-port 2 peer "$pub1" preshared-key <(echo "$psk") allowed-ips 192.168.241.1/32
7763 +n1 ping -W 1 -c 1 192.168.241.2
7764 +n1 wg set wg0 private-key <(echo "$key3")
7765 +n2 wg set wg0 peer "$pub3" preshared-key <(echo "$psk") allowed-ips 192.168.241.1/32 peer "$pub1" remove
7766 +n1 ping -W 1 -c 1 192.168.241.2
7771 +# Test using NAT. We now change the topology to this:
7772 +# ┌────────────────────────────────────────┐ ┌────────────────────────────────────────────────┐ ┌────────────────────────────────────────┐
7773 +# │ $ns1 namespace │ │ $ns0 namespace │ │ $ns2 namespace │
7775 +# │ ┌─────┐ ┌─────┐ │ │ ┌──────┐ ┌──────┐ │ │ ┌─────┐ ┌─────┐ │
7776 +# │ │ wg0 │─────────────│vethc│───────────┼────┼────│vethrc│ │vethrs│──────────────┼─────┼──│veths│────────────│ wg0 │ │
7777 +# │ ├─────┴──────────┐ ├─────┴──────────┐│ │ ├──────┴─────────┐ ├──────┴────────────┐ │ │ ├─────┴──────────┐ ├─────┴──────────┐ │
7778 +# │ │192.168.241.1/24│ │192.168.1.100/24││ │ │192.168.1.1/24 │ │10.0.0.1/24 │ │ │ │10.0.0.100/24 │ │192.168.241.2/24│ │
7779 +# │ │fd00::1/24 │ │ ││ │ │ │ │SNAT:192.168.1.0/24│ │ │ │ │ │fd00::2/24 │ │
7780 +# │ └────────────────┘ └────────────────┘│ │ └────────────────┘ └───────────────────┘ │ │ └────────────────┘ └────────────────┘ │
7781 +# └────────────────────────────────────────┘ └────────────────────────────────────────────────┘ └────────────────────────────────────────┘
7783 +ip1 link add dev wg0 type wireguard
7784 +ip2 link add dev wg0 type wireguard
7787 +ip0 link add vethrc type veth peer name vethc
7788 +ip0 link add vethrs type veth peer name veths
7789 +ip0 link set vethc netns $netns1
7790 +ip0 link set veths netns $netns2
7791 +ip0 link set vethrc up
7792 +ip0 link set vethrs up
7793 +ip0 addr add 192.168.1.1/24 dev vethrc
7794 +ip0 addr add 10.0.0.1/24 dev vethrs
7795 +ip1 addr add 192.168.1.100/24 dev vethc
7796 +ip1 link set vethc up
7797 +ip1 route add default via 192.168.1.1
7798 +ip2 addr add 10.0.0.100/24 dev veths
7799 +ip2 link set veths up
7800 +waitiface $netns0 vethrc
7801 +waitiface $netns0 vethrs
7802 +waitiface $netns1 vethc
7803 +waitiface $netns2 veths
7805 +n0 bash -c 'printf 1 > /proc/sys/net/ipv4/ip_forward'
7806 +n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout'
7807 +n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout_stream'
7808 +n0 iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 10.0.0.0/24 -j SNAT --to 10.0.0.1
7810 +n1 wg set wg0 peer "$pub2" endpoint 10.0.0.100:2 persistent-keepalive 1
7811 +n1 ping -W 1 -c 1 192.168.241.2
7812 +n2 ping -W 1 -c 1 192.168.241.1
7813 +[[ $(n2 wg show wg0 endpoints) == "$pub1 10.0.0.1:1" ]]
7814 +# Demonstrate that n2 can still send packets to n1, since persistent-keepalive will prevent the connection tracking entry from expiring (to see entries: `n0 conntrack -L`).
7816 +n2 ping -W 1 -c 1 192.168.241.1
7817 +n1 wg set wg0 peer "$pub2" persistent-keepalive 0
7819 +# Set up wg-quick(8)-style policy routing for the default route, making sure vethc has a v6 address to tease out bugs.
7820 +ip1 -6 addr add fc00::9/96 dev vethc
7821 +ip1 -6 route add default via fc00::1
7822 +ip2 -4 addr add 192.168.99.7/32 dev wg0
7823 +ip2 -6 addr add abab::1111/128 dev wg0
7824 +n1 wg set wg0 fwmark 51820 peer "$pub2" allowed-ips 192.168.99.7,abab::1111
7825 +ip1 -6 route add default dev wg0 table 51820
7826 +ip1 -6 rule add not fwmark 51820 table 51820
7827 +ip1 -6 rule add table main suppress_prefixlength 0
7828 +ip1 -4 route add default dev wg0 table 51820
7829 +ip1 -4 rule add not fwmark 51820 table 51820
7830 +ip1 -4 rule add table main suppress_prefixlength 0
7831 +# suppress_prefixlength only got added in 3.12, and we want to support 3.10+.
7832 +if [[ $(ip1 -4 rule show all) == *suppress_prefixlength* ]]; then
7833 + # Flood the pings instead of sending just one, to trigger routing table reference counting bugs.
7834 + n1 ping -W 1 -c 100 -f 192.168.99.7
7835 + n1 ping -W 1 -c 100 -f abab::1111
7838 +n0 iptables -t nat -F
7839 +ip0 link del vethrc
7840 +ip0 link del vethrs
7844 +# Test that saddr routing is sticky but not too sticky, changing to this topology:
7845 +# ┌────────────────────────────────────────┐ ┌────────────────────────────────────────┐
7846 +# │ $ns1 namespace │ │ $ns2 namespace │
7848 +# │ ┌─────┐ ┌─────┐ │ │ ┌─────┐ ┌─────┐ │
7849 +# │ │ wg0 │─────────────│veth1│───────────┼────┼──│veth2│────────────│ wg0 │ │
7850 +# │ ├─────┴──────────┐ ├─────┴──────────┐│ │ ├─────┴──────────┐ ├─────┴──────────┐ │
7851 +# │ │192.168.241.1/24│ │10.0.0.1/24 ││ │ │10.0.0.2/24 │ │192.168.241.2/24│ │
7852 +# │ │fd00::1/24 │ │fd00:aa::1/96 ││ │ │fd00:aa::2/96 │ │fd00::2/24 │ │
7853 +# │ └────────────────┘ └────────────────┘│ │ └────────────────┘ └────────────────┘ │
7854 +# └────────────────────────────────────────┘ └────────────────────────────────────────┘
7856 +ip1 link add dev wg0 type wireguard
7857 +ip2 link add dev wg0 type wireguard
7859 +ip1 link add veth1 type veth peer name veth2
7860 +ip1 link set veth2 netns $netns2
7861 +n1 bash -c 'printf 0 > /proc/sys/net/ipv6/conf/all/accept_dad'
7862 +n2 bash -c 'printf 0 > /proc/sys/net/ipv6/conf/all/accept_dad'
7863 +n1 bash -c 'printf 0 > /proc/sys/net/ipv6/conf/veth1/accept_dad'
7864 +n2 bash -c 'printf 0 > /proc/sys/net/ipv6/conf/veth2/accept_dad'
7865 +n1 bash -c 'printf 1 > /proc/sys/net/ipv4/conf/veth1/promote_secondaries'
7867 +# First we check that we aren't overly sticky and can fail over to new IPs when old ones are removed
7868 +ip1 addr add 10.0.0.1/24 dev veth1
7869 +ip1 addr add fd00:aa::1/96 dev veth1
7870 +ip2 addr add 10.0.0.2/24 dev veth2
7871 +ip2 addr add fd00:aa::2/96 dev veth2
7872 +ip1 link set veth1 up
7873 +ip2 link set veth2 up
7874 +waitiface $netns1 veth1
7875 +waitiface $netns2 veth2
7876 +n1 wg set wg0 peer "$pub2" endpoint 10.0.0.2:2
7877 +n1 ping -W 1 -c 1 192.168.241.2
7878 +ip1 addr add 10.0.0.10/24 dev veth1
7879 +ip1 addr del 10.0.0.1/24 dev veth1
7880 +n1 ping -W 1 -c 1 192.168.241.2
7881 +n1 wg set wg0 peer "$pub2" endpoint [fd00:aa::2]:2
7882 +n1 ping -W 1 -c 1 192.168.241.2
7883 +ip1 addr add fd00:aa::10/96 dev veth1
7884 +ip1 addr del fd00:aa::1/96 dev veth1
7885 +n1 ping -W 1 -c 1 192.168.241.2
7887 +# Now we show that we can successfully do reply-to-sender routing
7888 +ip1 link set veth1 down
7889 +ip2 link set veth2 down
7890 +ip1 addr flush dev veth1
7891 +ip2 addr flush dev veth2
7892 +ip1 addr add 10.0.0.1/24 dev veth1
7893 +ip1 addr add 10.0.0.2/24 dev veth1
7894 +ip1 addr add fd00:aa::1/96 dev veth1
7895 +ip1 addr add fd00:aa::2/96 dev veth1
7896 +ip2 addr add 10.0.0.3/24 dev veth2
7897 +ip2 addr add fd00:aa::3/96 dev veth2
7898 +ip1 link set veth1 up
7899 +ip2 link set veth2 up
7900 +waitiface $netns1 veth1
7901 +waitiface $netns2 veth2
7902 +n2 wg set wg0 peer "$pub1" endpoint 10.0.0.1:1
7903 +n2 ping -W 1 -c 1 192.168.241.1
7904 +[[ $(n2 wg show wg0 endpoints) == "$pub1 10.0.0.1:1" ]]
7905 +n2 wg set wg0 peer "$pub1" endpoint [fd00:aa::1]:1
7906 +n2 ping -W 1 -c 1 192.168.241.1
7907 +[[ $(n2 wg show wg0 endpoints) == "$pub1 [fd00:aa::1]:1" ]]
7908 +n2 wg set wg0 peer "$pub1" endpoint 10.0.0.2:1
7909 +n2 ping -W 1 -c 1 192.168.241.1
7910 +[[ $(n2 wg show wg0 endpoints) == "$pub1 10.0.0.2:1" ]]
7911 +n2 wg set wg0 peer "$pub1" endpoint [fd00:aa::2]:1
7912 +n2 ping -W 1 -c 1 192.168.241.1
7913 +[[ $(n2 wg show wg0 endpoints) == "$pub1 [fd00:aa::2]:1" ]]
7915 +# What happens if the inbound destination address belongs to a different interface than the default route?
7916 +ip1 link add dummy0 type dummy
7917 +ip1 addr add 10.50.0.1/24 dev dummy0
7918 +ip1 link set dummy0 up
7919 +ip2 route add 10.50.0.0/24 dev veth2
7920 +n2 wg set wg0 peer "$pub1" endpoint 10.50.0.1:1
7921 +n2 ping -W 1 -c 1 192.168.241.1
7922 +[[ $(n2 wg show wg0 endpoints) == "$pub1 10.50.0.1:1" ]]
7924 +ip1 link del dummy0
7925 +ip1 addr flush dev veth1
7926 +ip2 addr flush dev veth2
7927 +ip1 route flush dev veth1
7928 +ip2 route flush dev veth2
7930 +# Now we see what happens if another interface route takes precedence over an ongoing one
7931 +ip1 link add veth3 type veth peer name veth4
7932 +ip1 link set veth4 netns $netns2
7933 +ip1 addr add 10.0.0.1/24 dev veth1
7934 +ip2 addr add 10.0.0.2/24 dev veth2
7935 +ip1 addr add 10.0.0.3/24 dev veth3
7936 +ip1 link set veth1 up
7937 +ip2 link set veth2 up
7938 +ip1 link set veth3 up
7939 +ip2 link set veth4 up
7940 +waitiface $netns1 veth1
7941 +waitiface $netns2 veth2
7942 +waitiface $netns1 veth3
7943 +waitiface $netns2 veth4
7944 +ip1 route flush dev veth1
7945 +ip1 route flush dev veth3
7946 +ip1 route add 10.0.0.0/24 dev veth1 src 10.0.0.1 metric 2
7947 +n1 wg set wg0 peer "$pub2" endpoint 10.0.0.2:2
7948 +n1 ping -W 1 -c 1 192.168.241.2
7949 +[[ $(n2 wg show wg0 endpoints) == "$pub1 10.0.0.1:1" ]]
7950 +ip1 route add 10.0.0.0/24 dev veth3 src 10.0.0.3 metric 1
7951 +n1 bash -c 'printf 0 > /proc/sys/net/ipv4/conf/veth1/rp_filter'
7952 +n2 bash -c 'printf 0 > /proc/sys/net/ipv4/conf/veth4/rp_filter'
7953 +n1 bash -c 'printf 0 > /proc/sys/net/ipv4/conf/all/rp_filter'
7954 +n2 bash -c 'printf 0 > /proc/sys/net/ipv4/conf/all/rp_filter'
7955 +n1 ping -W 1 -c 1 192.168.241.2
7956 +[[ $(n2 wg show wg0 endpoints) == "$pub1 10.0.0.3:1" ]]
7963 +# We test that Netlink/IPC is working properly by doing things that usually cause split responses
7964 +ip0 link add dev wg0 type wireguard
7965 +config=( "[Interface]" "PrivateKey=$(wg genkey)" "[Peer]" "PublicKey=$(wg genkey)" )
7966 +for a in {1..255}; do
7967 + for b in {0..255}; do
7968 + config+=( "AllowedIPs=$a.$b.0.0/16,$a::$b/128" )
7971 +n0 wg setconf wg0 <(printf '%s\n' "${config[@]}")
7973 +for ip in $(n0 wg show wg0 allowed-ips); do
7976 +((i == 255*256*2+1))
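The `+1` in the expected count deserves a note: the loop word-splits the entire `wg show wg0 allowed-ips` output, whose first field is the peer's base64 public key, so the total is 255*256 config entries x 2 allowed IPs, plus one word for the key itself. A small simulation of that counting, with a made-up key and a deliberately tiny address range:

```shell
# Simulated "wg show <if> allowed-ips" output: public key first, then
# the allowed IPs. Word-splitting the whole output therefore counts the
# key as one extra word beyond the IPs.
fake_output() {
        local a b
        printf 'FAKEKEY='                 # stand-in for a base64 public key
        for a in 1 2; do
                for b in 0 1 2 3; do
                        printf ' %s.%s.0.0/16 %s::%s/128' "$a" "$b" "$a" "$b"
                done
        done
        printf '\n'
}

i=0
for word in $(fake_output); do
        (( ++i ))
done
# 2 values of a x 4 values of b x 2 IPs each, plus 1 for the key:
(( i == 2*4*2 + 1 ))
```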
7978 +ip0 link add dev wg0 type wireguard
7979 +config=( "[Interface]" "PrivateKey=$(wg genkey)" )
7980 +for a in {1..40}; do
7981 + config+=( "[Peer]" "PublicKey=$(wg genkey)" )
7982 + for b in {1..52}; do
7983 + config+=( "AllowedIPs=$a.$b.0.0/16" )
7986 +n0 wg setconf wg0 <(printf '%s\n' "${config[@]}")
7988 +while read -r line; do
7990 + for ip in $line; do
7995 +done < <(n0 wg show wg0 allowed-ips)
7998 +ip0 link add wg0 type wireguard
8000 +for i in {1..29}; do
8001 + config+=( "[Peer]" "PublicKey=$(wg genkey)" )
8003 +config+=( "[Peer]" "PublicKey=$(wg genkey)" "AllowedIPs=255.2.3.4/32,abcd::255/128" )
8004 +n0 wg setconf wg0 <(printf '%s\n' "${config[@]}")
8005 +n0 wg showconf wg0 > /dev/null
8009 +for i in {1..197}; do
8010 + allowedips+=( abcd::$i )
8014 +allowedips="${allowedips[*]}"
8016 +ip0 link add wg0 type wireguard
8017 +n0 wg set wg0 peer "$pub1"
8018 +n0 wg set wg0 peer "$pub2" allowed-ips "$allowedips"
8020 + read -r pub allowedips
8021 + [[ $pub == "$pub1" && $allowedips == "(none)" ]]
8022 + read -r pub allowedips
8023 + [[ $pub == "$pub2" ]]
8025 + for _ in $allowedips; do
8029 +} < <(n0 wg show wg0 allowed-ips)
8032 +! n0 wg show doesnotexist || false
8034 +ip0 link add wg0 type wireguard
8035 +n0 wg set wg0 private-key <(echo "$key1") peer "$pub2" preshared-key <(echo "$psk")
8036 +[[ $(n0 wg show wg0 private-key) == "$key1" ]]
8037 +[[ $(n0 wg show wg0 preshared-keys) == "$pub2 $psk" ]]
8038 +n0 wg set wg0 private-key /dev/null peer "$pub2" preshared-key /dev/null
8039 +[[ $(n0 wg show wg0 private-key) == "(none)" ]]
8040 +[[ $(n0 wg show wg0 preshared-keys) == "$pub2 (none)" ]]
8041 +n0 wg set wg0 peer "$pub2"
8042 +n0 wg set wg0 private-key <(echo "$key2")
8043 +[[ $(n0 wg show wg0 public-key) == "$pub2" ]]
8044 +[[ -z $(n0 wg show wg0 peers) ]]
8045 +n0 wg set wg0 peer "$pub2"
8046 +[[ -z $(n0 wg show wg0 peers) ]]
8047 +n0 wg set wg0 private-key <(echo "$key1")
8048 +n0 wg set wg0 peer "$pub2"
8049 +[[ $(n0 wg show wg0 peers) == "$pub2" ]]
8050 +n0 wg set wg0 private-key <(echo "/${key1:1}")
8051 +[[ $(n0 wg show wg0 private-key) == "+${key1:1}" ]]
8052 +n0 wg set wg0 peer "$pub2" allowed-ips 0.0.0.0/0,10.0.0.0/8,100.0.0.0/10,172.16.0.0/12,192.168.0.0/16
8053 +n0 wg set wg0 peer "$pub2" allowed-ips 0.0.0.0/0
8054 +n0 wg set wg0 peer "$pub2" allowed-ips ::/0,1700::/111,5000::/4,e000::/37,9000::/75
8055 +n0 wg set wg0 peer "$pub2" allowed-ips ::/0
8059 +while read -t 0.1 -r line 2>/dev/null || [[ $? -ne 142 ]]; do
8060 + [[ $line =~ .*(wg[0-9]+:\ [A-Z][a-z]+\ [0-9]+)\ .*(created|destroyed).* ]] || continue
8061 + objects["${BASH_REMATCH[1]}"]+="${BASH_REMATCH[2]}"
8064 +for object in "${!objects[@]}"; do
8065 + if [[ ${objects["$object"]} != *createddestroyed ]]; then
8066 + echo "Error: $object: merely ${objects["$object"]}" >&3
8070 +[[ $alldeleted -eq 1 ]]
8071 +pretty "" "Objects that were created were also destroyed."
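The leak check above can be exercised in isolation. The sketch below feeds the same matcher synthetic log lines (fabricated purely for illustration; the real script reads the kernel log), showing how the associative array pairs each object's lifecycle events:

```shell
# Each matched object key accumulates its lifecycle words in log order;
# a healthy object ends up as exactly "createddestroyed".
declare -A objects
sample_log=(
        "wg0: Peer 1 created"
        "wg0: Peer 2 created"
        "wg0: Peer 1 destroyed"
        "wg0: Peer 2 destroyed"
)
for line in "${sample_log[@]}"; do
        [[ $line =~ .*(wg[0-9]+:\ [A-Z][a-z]+\ [0-9]+)\ .*(created|destroyed).* ]] || continue
        objects["${BASH_REMATCH[1]}"]+="${BASH_REMATCH[2]}"
done

leaked=0
for object in "${!objects[@]}"; do
        [[ ${objects[$object]} == createddestroyed ]] || leaked=1
done
```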