CVE-2024-26670

Severity CVSS v4.0:
Pending analysis
Type:
CWE-787 Out-of-bounds Write
Publication date:
02/04/2024
Last modified:
01/10/2025

Description

In the Linux kernel, the following vulnerability has been resolved:

arm64: entry: fix ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD

Currently the ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD workaround isn't
quite right, as it is supposed to be applied after the last explicit
memory access, but is immediately followed by an LDR.

The ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD workaround is used to
handle Cortex-A520 erratum 2966298 and Cortex-A510 erratum 3117295,
which are described in:

* https://developer.arm.com/documentation/SDEN2444153/0600/?lang=en
* https://developer.arm.com/documentation/SDEN1873361/1600/?lang=en

In both cases the workaround is described as:

| If pagetable isolation is disabled, the context switch logic in the
| kernel can be updated to execute the following sequence on affected
| cores before exiting to EL0, and after all explicit memory accesses:
|
| 1. A non-shareable TLBI to any context and/or address, including
|    unused contexts or addresses, such as a `TLBI VALE1 Xzr`.
|
| 2. A DSB NSH to guarantee completion of the TLBI.

The important part being that the TLBI+DSB must be placed "after all
explicit memory accesses".

Unfortunately, as-implemented, the TLBI+DSB is immediately followed by
an LDR, as we have:

| alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
|     tlbi    vale1, xzr
|     dsb     nsh
| alternative_else_nop_endif
| alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
|     ldr     lr, [sp, #S_LR]
|     add     sp, sp, #PT_REGS_SIZE      // restore sp
|     eret
| alternative_else_nop_endif
|
| [ ... KPTI exception return path ... ]

This patch fixes this by reworking the logic to place the TLBI+DSB
immediately before the ERET, after all explicit memory accesses.

The ERET is currently in a separate alternative block, and alternatives
cannot be nested. To account for this, the alternative block for
ARM64_UNMAP_KERNEL_AT_EL0 is replaced with a single alternative branch
to skip the KPTI logic, with the new shape of the logic being:

| alternative_insn "b .L_skip_tramp_exit_\@", nop, ARM64_UNMAP_KERNEL_AT_EL0
| [ ... KPTI exception return path ... ]
| .L_skip_tramp_exit_\@:
|
|     ldr     lr, [sp, #S_LR]
|     add     sp, sp, #PT_REGS_SIZE      // restore sp
|
| alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
|     tlbi    vale1, xzr
|     dsb     nsh
| alternative_else_nop_endif
|     eret

The new structure means that the workaround is only applied when KPTI is
not in use; this is fine as noted in the documented implications of the
erratum:

| Pagetable isolation between EL0 and higher level ELs prevents the
| issue from occurring.

... and as per the workaround description quoted above, the workaround
is only necessary "If pagetable isolation is disabled".

Vulnerable products and versions

CPE From Up to
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 6.6 (including) 6.6.15 (excluding)
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 6.7 (including) 6.7.3 (excluding)
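The two rows above define half-open version ranges: from the listed version (including) up to the fix release (excluding). A minimal sketch of how a triage script might test a kernel version against these ranges — `parse_version`, `AFFECTED_RANGES`, and `is_affected` are hypothetical names, not part of this advisory or any CVE tooling:

```python
# Hypothetical helper: check a kernel version string against the
# affected ranges listed in the CPE table above.

def parse_version(v: str) -> tuple:
    # "6.6.15" -> (6, 6, 15); missing components default to 0,
    # so "6.6" compares as (6, 6, 0).
    parts = [int(p) for p in v.split(".")]
    return tuple(parts + [0] * (3 - len(parts)))

# (from_inclusive, up_to_exclusive) pairs taken from the table above.
AFFECTED_RANGES = [
    ("6.6", "6.6.15"),
    ("6.7", "6.7.3"),
]

def is_affected(version: str) -> bool:
    v = parse_version(version)
    return any(
        parse_version(lo) <= v < parse_version(hi)
        for lo, hi in AFFECTED_RANGES
    )

print(is_affected("6.6.14"))  # True: inside [6.6, 6.6.15)
print(is_affected("6.6.15"))  # False: 6.6.15 is the fixed release
print(is_affected("6.7.2"))   # True: inside [6.7, 6.7.3)
```

Note this only compares upstream version numbers; distribution kernels often backport fixes without changing the base version, so a real check would also need to consult the distributor's advisories.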