+config KERNEL_RPI_AXIPERF
+ bool "Compile the kernel with Raspberry Pi AXI performance monitors"
+ default y
+ depends on KERNEL_PERF_EVENTS && TARGET_bcm27xx
+
+config KERNEL_UBSAN
+ bool "Compile the kernel with undefined behaviour sanity checker"
+ help
+ This option enables the undefined behaviour sanity checker.
+ Compile-time instrumentation is used to detect various undefined
+ behaviours at runtime. Various types of checks may be enabled
+ via the boot parameter ubsan_handle
+ (see: Documentation/dev-tools/ubsan.rst).
+
+config KERNEL_UBSAN_SANITIZE_ALL
+ bool "Enable instrumentation for the entire kernel"
+ depends on KERNEL_UBSAN
+ default y
+ help
+ This option activates instrumentation for the entire kernel.
+ If you don't enable this option, you have to explicitly specify
+ UBSAN_SANITIZE := y for the files/directories you want to check for UB.
+ Enabling this option significantly increases the kernel image
+ size.
+
+config KERNEL_UBSAN_ALIGNMENT
+ bool "Enable checking of pointer alignment"
+ depends on KERNEL_UBSAN
+ help
+ This option enables detection of unaligned memory accesses.
+ Enabling this option on architectures that support unaligned
+ accesses may produce a lot of false positives.
+
+config KERNEL_UBSAN_BOUNDS
+ bool "Perform array index bounds checking"
+ depends on KERNEL_UBSAN
+ help
+ This option enables detection of directly indexed out of bounds array
+ accesses, where the array size is known at compile time. Note that
+ this does not protect against array overflows via bad calls to the
+ {str,mem}*cpy() family of functions (that is addressed by
+ FORTIFY_SOURCE).
+
+config KERNEL_UBSAN_NULL
+ bool "Enable checking of null pointers"
+ depends on KERNEL_UBSAN
+ help
+ This option enables detection of memory accesses via a
+ null pointer.
+
+config KERNEL_UBSAN_TRAP
+ bool "On Sanitizer warnings, abort the running kernel code"
+ depends on KERNEL_UBSAN
+ help
+ Building kernels with Sanitizer features enabled tends to grow the
+ kernel size by around 5%, due to adding all the debugging text on
+ failure paths. To avoid this, Sanitizer instrumentation can just
+ issue a trap. This reduces the kernel size overhead but turns all
+ warnings (including potentially harmless conditions) into full
+ exceptions that abort the running kernel code (regardless of context,
+ locks held, etc), which may destabilize the system. For some system
+ builders this is an acceptable trade-off.
+
+config KERNEL_KASAN
+ bool "Compile the kernel with KASan: runtime memory debugger"
+ select KERNEL_SLUB_DEBUG
+ depends on (x86_64 || aarch64)
+ help
+ Enables KASAN (Kernel Address Sanitizer), a runtime memory debugger
+ designed to find out-of-bounds accesses and use-after-free bugs.
+ This is strictly a debugging feature and it requires a gcc version
+ of 4.9.2 or later. Detection of out-of-bounds accesses to stack or
+ global variables requires gcc 5.0 or later.
+ This feature consumes about 1/8 of available memory and causes a
+ ~3x performance slowdown.
+ For better error detection enable CONFIG_STACKTRACE.
+ Currently CONFIG_KASAN doesn't work with CONFIG_DEBUG_SLAB
+ (the resulting kernel does not boot).
+
+config KERNEL_KASAN_EXTRA
+ bool "KAsan: extra checks"
+ depends on KERNEL_KASAN && KERNEL_DEBUG_KERNEL
+ help
+ This enables further checks in the kernel address sanitizer. For now
+ it only includes the address-use-after-scope check, which can lead
+ to excessive kernel stack usage, frame size warnings and longer
+ compile times.
+ See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 for details.
+
+config KERNEL_KASAN_VMALLOC
+ bool "Back mappings in vmalloc space with real shadow memory"
+ depends on KERNEL_KASAN
+ help
+ By default, the shadow region for vmalloc space is the read-only
+ zero page. This means that KASAN cannot detect errors involving
+ vmalloc space.
+
+ Enabling this option will hook in to vmap/vmalloc and back those
+ mappings with real shadow memory allocated on demand. This allows
+ for KASAN to detect more sorts of errors (and to support vmapped
+ stacks), but at the cost of higher memory usage.
+
+ This option depends on HAVE_ARCH_KASAN_VMALLOC, but we can't
+ depend on that in here, so it is possible that enabling this
+ will have no effect.
+
+if KERNEL_KASAN
+choice
+ prompt "KASAN mode"
+ depends on KERNEL_KASAN
+ default KERNEL_KASAN_GENERIC
+ help
+ KASAN has three modes:
+
+ 1. Generic KASAN (supported by many architectures, enabled with
+ CONFIG_KASAN_GENERIC, similar to userspace ASan),
+ 2. Software Tag-Based KASAN (arm64 only, based on software memory
+ tagging, enabled with CONFIG_KASAN_SW_TAGS, similar to userspace
+ HWASan), and
+ 3. Hardware Tag-Based KASAN (arm64 only, based on hardware memory
+ tagging, enabled with CONFIG_KASAN_HW_TAGS).
+
+config KERNEL_KASAN_GENERIC
+ bool "Generic KASAN"
+ select KERNEL_SLUB_DEBUG
+ help
+ Enables Generic KASAN.
+
+ Consumes about 1/8th of available memory at kernel start and adds an
+ overhead of ~50% for dynamic allocations.
+ The performance slowdown is ~3x.
+
+config KERNEL_KASAN_SW_TAGS
+ bool "Software Tag-Based KASAN"
+ depends on aarch64
+ select KERNEL_SLUB_DEBUG
+ help
+ Enables Software Tag-Based KASAN.
+
+ Supported only on arm64 CPUs and relies on Top Byte Ignore.
+
+ Consumes about 1/16th of available memory at kernel start and
+ adds an overhead of ~20% for dynamic allocations.
+
+ May potentially introduce problems related to pointer casting and
+ comparison, as it embeds a tag into the top byte of each pointer.
+
+config KERNEL_KASAN_HW_TAGS
+ bool "Hardware Tag-Based KASAN"
+ depends on aarch64
+ select KERNEL_SLUB_DEBUG
+ select KERNEL_ARM64_MTE
+ help
+ Enables Hardware Tag-Based KASAN.
+
+ Supported only on arm64 CPUs starting from ARMv8.5 and relies on
+ Memory Tagging Extension and Top Byte Ignore.
+
+ Consumes about 1/32nd of available memory.
+
+ May potentially introduce problems related to pointer casting and
+ comparison, as it embeds a tag into the top byte of each pointer.
+
+endchoice
+
+config KERNEL_ARM64_MTE
+ def_bool n
+
+endif
+
+choice
+ prompt "Instrumentation type"
+ depends on KERNEL_KASAN
+ depends on !KERNEL_KASAN_HW_TAGS
+ default KERNEL_KASAN_OUTLINE
+
+config KERNEL_KASAN_OUTLINE
+ bool "Outline instrumentation"
+ help
+ Before every memory access, the compiler inserts a call to
+ __asan_load*/__asan_store*. These functions check the shadow
+ memory. This is slower than inline instrumentation, but it
+ doesn't bloat the kernel's .text section as much as inline
+ instrumentation does.
+
+config KERNEL_KASAN_INLINE
+ bool "Inline instrumentation"
+ help
+ The compiler directly inserts code that checks the shadow memory
+ before each memory access. This is faster than outline
+ instrumentation (in some workloads it gives about a 2x boost),
+ but makes the kernel's .text section much bigger.
+ This requires a gcc version of 5.0 or later.
+
+endchoice
+
+config KERNEL_KCOV
+ bool "Compile the kernel with code coverage for fuzzing"
+ select KERNEL_DEBUG_FS
+ help
+ KCOV exposes kernel code coverage information in a form suitable
+ for coverage-guided fuzzing (randomized testing).
+
+ If RANDOMIZE_BASE is enabled, PC values will not be stable across
+ different machines and across reboots. If you need stable PC values,
+ disable RANDOMIZE_BASE.
+
+ For more details, see Documentation/kcov.txt.
+
+config KERNEL_KCOV_ENABLE_COMPARISONS
+ bool "Enable comparison operands collection by KCOV"
+ depends on KERNEL_KCOV
+ help
+ KCOV also exposes operands of every comparison in the instrumented
+ code along with operand sizes and PCs of the comparison instructions.
+ These operands can be used by fuzzing engines to improve the quality
+ of fuzzing coverage.
+
+config KERNEL_KCOV_INSTRUMENT_ALL
+ bool "Instrument all code by default"
+ depends on KERNEL_KCOV
+ default y if KERNEL_KCOV
+ help
+ If you are doing generic system call fuzzing (like e.g. syzkaller),
+ then you will want to instrument the whole kernel and you should
+ say y here. If you are doing more targeted fuzzing (like e.g.
+ filesystem fuzzing with AFL) then you will want to enable coverage
+ for more specific subsets of files, and should say n here.
+