CVE-2025-71078
Severity:
Pending analysis
Type:
Not available / Other
Publication date:
13/01/2026
Last modified:
13/01/2026
Description
In the Linux kernel, the following vulnerability has been resolved:

powerpc/64s/slb: Fix SLB multihit issue during SLB preload

On systems using the hash MMU, there is a software SLB preload cache that
mirrors the entries loaded into the hardware SLB buffer. This preload
cache is subject to periodic eviction, typically after every 256 context
switches, to remove old entries.
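
The aging mechanism works roughly as follows: a per-thread 8-bit counter
(load_slb in the timeline below) is bumped on every context switch, and each
time it wraps, i.e. every 256 switches, the oldest entry is dropped from the
preload cache. The sketch below models this in C; it is a simplified reading
of the arch/powerpc/mm/book3s64/slb.c logic with field names abridged, not a
verbatim copy.

    #define SLB_PRELOAD_NR 16U

    /* Simplified per-thread preload cache (names abridged from the
     * real thread_info fields). */
    struct slb_preload_cache {
            unsigned char nr;                    /* entries currently cached     */
            unsigned char tail;                  /* index of the oldest entry    */
            unsigned int  esid[SLB_PRELOAD_NR];  /* cached effective segment IDs */
    };

    /* Age out the oldest entry, as done once every 256 context switches. */
    static void preload_age(struct slb_preload_cache *cache)
    {
            if (!cache->nr)
                    return;
            cache->nr--;
            cache->tail = (cache->tail + 1) % SLB_PRELOAD_NR;
    }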

To optimize performance, the kernel skips switch_mmu_context() in
switch_mm_irqs_off() when the prev and next mm_struct are the same.
However, on hash MMU systems, this can lead to inconsistencies between
the hardware SLB and the software preload cache.
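
Condensed, the fast path looks like the sketch below (an abridged reading of
arch/powerpc/mm/mmu_context.c, with the unrelated bookkeeping elided). Because
the early return fires whenever prev == next, switch_slb() never runs on this
path and whatever the hardware SLB currently holds is left untouched.

    struct mm_struct;
    struct task_struct;
    void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next,
                            struct task_struct *tsk);

    void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
                            struct task_struct *tsk)
    {
            /* ... cpumask and PGD bookkeeping elided ... */

            /* Nothing to switch if the address space is unchanged. */
            if (prev == next)
                    return;

            /* Full switch: on hash MMU this reaches switch_slb(), which
             * flushes the HW SLB and repopulates it from the preload
             * cache. */
            switch_mmu_context(prev, next, tsk);
    }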

If an SLB entry for a process is evicted from the software cache on one
CPU, and the same process later runs on another CPU without executing
switch_mmu_context(), the hardware SLB may retain stale entries. If the
kernel then attempts to reload that entry, it can trigger an SLB
multi-hit error.
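
A multi-hit means two valid hardware SLB entries translate the same effective
segment ID (ESID). The toy user-space model below (an illustration only, not
kernel code) shows the end state the bug produces: the stale entry that was
never invalidated and the freshly preloaded one both match the same ESID.

    #include <stdbool.h>
    #include <stdio.h>

    #define SLB_ENTRIES 32

    struct slb_entry { bool valid; unsigned long esid; };
    static struct slb_entry hw_slb[SLB_ENTRIES];

    /* Count valid entries matching an ESID; more than one match is the
     * condition the hardware reports as an SLB multi-hit. */
    static int slb_matches(unsigned long esid)
    {
            int hits = 0;
            for (int i = 0; i < SLB_ENTRIES; i++)
                    if (hw_slb[i].valid && hw_slb[i].esid == esid)
                            hits++;
            return hits;
    }

    int main(void)
    {
            hw_slb[0] = (struct slb_entry){ true, 0x100 }; /* stale: never invalidated on cpu-0 */
            hw_slb[1] = (struct slb_entry){ true, 0x100 }; /* re-added from the preload cache   */
            printf("valid entries for ESID 0x100: %d (multi-hit)\n",
                   slb_matches(0x100));
            return 0;
    }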

The following timeline shows how stale SLB entries are created and can
cause a multi-hit error when a process moves between CPUs without an
MMU context switch.

CPU 0                                   CPU 1
-----                                   -----
Process P
  exec                                  swapper/1
    load_elf_binary
      begin_new_exc
        activate_mm
          switch_mm_irqs_off
            switch_mmu_context
              switch_slb
              /*
               * This invalidates all
               * the entries in the HW
               * and sets up the new HW
               * SLB entries as per the
               * preload cache.
               */
  context_switch
  sched_migrate_task migrates process P to cpu-1

Process swapper/0                       context switch (to process P)
(uses mm_struct of Process P)           switch_mm_irqs_off()
                                          switch_slb
                                            load_slb++
                                            /*
                                             * load_slb becomes 0 here
                                             * and we evict an entry from
                                             * the preload cache with
                                             * preload_age(). We still
                                             * keep HW SLB and preload
                                             * cache in sync, because all
                                             * HW SLB entries get evicted
                                             * anyway in switch_slb
                                             * during SLBIA. We then only
                                             * add those entries back to
                                             * the HW SLB which are
                                             * currently present in the
                                             * preload cache (after
                                             * eviction).
                                             */
                                          load_elf_binary continues...
                                            setup_new_exec()
                                              slb_setup_new_exec()

                                        sched_switch event
                                        sched_migrate_task migrates
                                        process P to cpu-0

context_switch from swapper/0 to Process P
  switch_mm_irqs_off()
  /*
   * Since both prev and next mm_struct are the same, we don't call
   * switch_mmu_context(). This causes the HW SLB and SW preload cache
   * to go out of sync in preload_new_slb_context, because an SLB entry
   * was evicted from both the HW SLB and the preload cache on cpu-1.
   * Later, in preload_new_slb_context(), when we try to add the same
   * preload entry again, we add it to the SW preload cache and then to
   * the HW SLB. Since on cpu-0 this entry was never invalidated, adding
   * it to the HW SLB causes an SLB multi-hit error.
   */
load_elf_binary cont

---truncated---
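
The commit message is truncated above; the commits referenced below contain
the actual upstream fix. Purely as a conceptual illustration of how duplicate
insertion can be avoided, and not necessarily what the upstream patch does,
one could invalidate any existing hardware translation for the effective
address with slbie before reinstalling it with slbmte, so a stale entry left
behind by the skipped context switch cannot coexist with the new one. The
helper names below are hypothetical.

    /* Hypothetical helpers; slbie/slbmte are the real PowerPC
     * instructions for invalidating and installing SLB entries. */
    static inline void slb_invalidate_ea(unsigned long ea)
    {
            /* Drop any existing SLB entry for this effective address. */
            asm volatile("slbie %0" : : "r" (ea) : "memory");
    }

    static void preload_install_entry(unsigned long ea, unsigned long vsid_data,
                                      unsigned long esid_data)
    {
            slb_invalidate_ea(ea);  /* remove a stale duplicate first */
            /* Install the new entry: VSID data in RS, ESID data in RB. */
            asm volatile("slbmte %0, %1" : : "r" (vsid_data), "r" (esid_data)
                         : "memory");
    }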
Impact
References to solutions, tools, and information
- https://git.kernel.org/stable/c/00312419f0863964625d6dcda8183f96849412c6
- https://git.kernel.org/stable/c/4ae1e46d8a290319f33f71a2710a1382ba5431e8
- https://git.kernel.org/stable/c/895123c309a34d2cfccf7812b41e17261a3a6f37
- https://git.kernel.org/stable/c/b13a3dbfa196af68eae2031f209743735ad416bf
- https://git.kernel.org/stable/c/c9f865022a1823d814032a09906e91e4701a35fc



