CVE-2025-38670

Severity CVSS v4.0:
Pending analysis
Type:
Unavailable / Other
Publication date:
22/08/2025
Last modified:
23/12/2025

Description

In the Linux kernel, the following vulnerability has been resolved:

arm64/entry: Mask DAIF in cpu_switch_to(), call_on_irq_stack()

`cpu_switch_to()` and `call_on_irq_stack()` manipulate SP to change to different stacks, along with the Shadow Call Stack if it is enabled. Those two stack changes cannot be done atomically, and both functions can be interrupted by SErrors or Debug Exceptions which, though unlikely, is very much broken: if interrupted, we can end up with mismatched stacks and Shadow Call Stack, leading to clobbered stacks.

In `cpu_switch_to()`, it can happen when SP_EL0 points to the new task but x18 still points to the old task's SCS. When the interrupt handler tries to save the task's SCS pointer, it will save the old task's SCS pointer (x18) into the new task struct (pointed to by SP_EL0), clobbering it.

In `call_on_irq_stack()`, it can happen when switching from the task stack to the IRQ stack and when switching back. In both cases, we can be interrupted when the SCS pointer points to the IRQ SCS but SP points to the task stack. The nested interrupt handler pushes its return addresses on the IRQ SCS. It then detects that SP points to the task stack, calls `call_on_irq_stack()` and clobbers the task SCS pointer with the IRQ SCS pointer, which it will also use!

This leads to tasks returning to addresses on the wrong SCS, or even on the IRQ SCS, triggering kernel panics via CONFIG_VMAP_STACK or FPAC if enabled.

This is possible on a default config, but unlikely. However, when CONFIG_ARM64_PSEUDO_NMI is enabled, DAIF is left unmasked and the GIC is instead responsible for filtering which interrupts the CPU should receive, based on priority. Given the goal of emulating NMIs, pseudo-NMIs can be received by the CPU even in `cpu_switch_to()` and `call_on_irq_stack()`, possibly *very* frequently depending on the system configuration and workload, leading to unpredictable kernel panics.

Completely mask DAIF in `cpu_switch_to()` and restore it when returning. Do the same in `call_on_irq_stack()`, but restore and mask around the branch. Mask DAIF even if CONFIG_SHADOW_CALL_STACK is not enabled, for consistency of behaviour between all configurations.

Introduce and use an assembly macro for saving and masking DAIF, as the existing one saves but only masks IF.
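As a rough illustration of that last point, a save-and-mask helper covering all of DAIF could be written as the arm64 assembly sketch below. The macro names (`save_and_disable_daif`, `restore_daif`) are assumptions for illustration and are not confirmed to match the identifiers used in the actual patch; only the `mrs`/`msr` instructions and the DAIFSet accessor are standard architecture features.

```
	/*
	 * Illustrative sketch (assumed names): save the current DAIF bits,
	 * then mask Debug, SError, IRQ and FIQ, unlike a helper that saves
	 * DAIF but only masks IRQ and FIQ.
	 */
	.macro	save_and_disable_daif, flags
	mrs	\flags, daif		// save current exception mask bits
	msr	daifset, #0xf		// set D, A, I and F: mask everything
	.endm

	.macro	restore_daif, flags
	msr	daif, \flags		// restore the previously saved masks
	.endm
```

Wrapping the stack and Shadow Call Stack switches between such a save/mask and restore pair is what makes the two pointer updates appear atomic with respect to SErrors, Debug Exceptions and pseudo-NMIs.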

Impact