CVE-2025-21839
Publication date:
07/03/2025
In the Linux kernel, the following vulnerability has been resolved:

KVM: x86: Load DR6 with guest value only before entering .vcpu_run() loop

Move the conditional loading of hardware DR6 with the guest's DR6 value
out of the core .vcpu_run() loop to fix a bug where KVM can load hardware
with a stale vcpu->arch.dr6.

When the guest accesses a DR and host userspace isn't debugging the guest,
KVM disables DR interception and loads the guest's values into hardware on
VM-Enter and saves them on VM-Exit. This allows the guest to access DRs
at will, e.g. so that a sequence of DR accesses to configure a breakpoint
only generates one VM-Exit.

For DR0-DR3, the logic/behavior is identical between VMX and SVM, and also
identical between KVM_DEBUGREG_BP_ENABLED (userspace debugging the guest)
and KVM_DEBUGREG_WONT_EXIT (guest using DRs), and so KVM handles loading
DR0-DR3 in common code, _outside_ of the core kvm_x86_ops.vcpu_run() loop.

But for DR6, the guest's value doesn't need to be loaded into hardware for
KVM_DEBUGREG_BP_ENABLED, and SVM provides a dedicated VMCB field whereas
VMX requires software to manually load the guest value, and so loading the
guest's value into DR6 is handled by {svm,vmx}_vcpu_run(), i.e. is done
_inside_ the core run loop.
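
The split described above can be sketched roughly as follows (a pseudocode
sketch with hypothetical function names, not the actual KVM call graph):

```c
/* Rough sketch only; these names do not match real KVM functions. */
void vcpu_run_outer(void)            /* common x86 code */
{
    load_guest_dr0_dr3();            /* identical for VMX and SVM: outside the loop */
    for (;;) {
        vendor_vcpu_run();           /* {svm,vmx}_vcpu_run(): loads guest DR6 here */
        if (!handled_in_fastpath())
            break;                   /* fastpath exits never leave the loop... */
    }
    save_guest_drs();                /* ...so this save of DR6 is skipped for them */
}
```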

Unfortunately, saving the guest values on VM-Exit is initiated by common
x86, again outside of the core run loop. If the guest modifies DR6 (in
hardware, when DR interception is disabled), and then the next VM-Exit is
a fastpath VM-Exit, KVM will reload hardware DR6 with vcpu->arch.dr6 and
clobber the guest's actual value.

The bug shows up primarily with nested VMX because KVM handles the VMX
preemption timer in the fastpath, and the window between hardware DR6
being modified (in guest context) and DR6 being read by guest software is
orders of magnitude larger in a nested setup. E.g. in non-nested, the
VMX preemption timer would need to fire precisely between #DB injection
and the #DB handler's read of DR6, whereas with a KVM-on-KVM setup, the
window where hardware DR6 is "dirty" extends all the way from L1 writing
DR6 to VMRESUME (in L1).

L1's view:
==========

 CPU 0/KVM-7289 [023] d.... 2925.640961: kvm_entry: vcpu 0
A: L1 Writes DR6
 CPU 0/KVM-7289 [023] d.... 2925.640963: : Set DRs, DR6 = 0xffff0ff1

B: CPU 0/KVM-7289 [023] d.... 2925.640967: kvm_exit: vcpu 0 reason EXTERNAL_INTERRUPT intr_info 0x800000ec

D: L1 reads DR6, arch.dr6 = 0
 CPU 0/KVM-7289 [023] d.... 2925.640969: : Sync DRs, DR6 = 0xffff0ff0

 CPU 0/KVM-7289 [023] d.... 2925.640976: kvm_entry: vcpu 0
L2 reads DR6, L1 disables DR interception
 CPU 0/KVM-7289 [023] d.... 2925.640980: kvm_exit: vcpu 0 reason DR_ACCESS info1 0x0000000000000216
 CPU 0/KVM-7289 [023] d.... 2925.640983: kvm_entry: vcpu 0

 CPU 0/KVM-7289 [023] d.... 2925.640983: : Set DRs, DR6 = 0xffff0ff0

L2 detects failure
 CPU 0/KVM-7289 [023] d.... 2925.640987: kvm_exit: vcpu 0 reason HLT
L1 reads DR6 (confirms failure)
 CPU 0/KVM-7289 [023] d.... 2925.640990: : Sync DRs, DR6 = 0xffff0ff0

L0's view:
==========
L2 reads DR6, arch.dr6 = 0
 CPU 23/KVM-5046 [001] d.... 3410.005610: kvm_exit: vcpu 23 reason DR_ACCESS info1 0x0000000000000216
 CPU 23/KVM-5046 [001] ..... 3410.005610: kvm_nested_vmexit: vcpu 23 reason DR_ACCESS info1 0x0000000000000216

L2 => L1 nested VM-Exit
 CPU 23/KVM-5046 [001] ..... 3410.005610: kvm_nested_vmexit_inject: reason: DR_ACCESS ext_inf1: 0x0000000000000216

 CPU 23/KVM-5046 [001] d.... 3410.005610: kvm_entry: vcpu 23
 CPU 23/KVM-5046 [001] d.... 3410.005611: kvm_exit: vcpu 23 reason VMREAD
 CPU 23/KVM-5046 [001] d.... 3410.005611: kvm_entry: vcpu 23
 CPU 23/KVM-5046 [001] d.... 3410.
---truncated---
Severity CVSS v4.0: Pending analysis
Last modification:
09/05/2025