Instituto Nacional de Ciberseguridad. Sección INCIBE-CERT

CVE-2025-68341

Severity:
Pending analysis
Type:
Not available / Other
Publication date:
23/12/2025
Last modified:
23/12/2025

Description

In the Linux kernel, the following vulnerability has been resolved:

veth: reduce XDP no_direct return section to fix race

As explained in commit fa349e396e48 ("veth: Fix race with AF_XDP exposing old or uninitialized descriptors"), for veth there is a chance that, after napi_complete_done(), another CPU can start another NAPI instance running veth_poll(). For NAPI itself this is handled correctly, since the napi_schedule_prep() check prevents multiple instances from being scheduled, but the remaining code in veth_poll() can run concurrently with the newly started NAPI instance.

The problem/race is that xdp_clear_return_frame_no_direct() isn't designed to be nested.

Prior to commit 401cb7dae813 ("net: Reference bpf_redirect_info via task_struct on PREEMPT_RT."), the temporary BPF net context bpf_redirect_info was stored per CPU, where this wasn't an issue. Since that commit, the BPF context is stored in the 'current' task_struct. When running veth in threaded-NAPI mode, the kthread becomes the storage area. A race then exists between two concurrent veth_poll() calls, one exiting NAPI and one running new NAPI, both using the same BPF net context.

The race occurs when another CPU enters the xdp_set_return_frame_no_direct() section before the exiting veth_poll() calls the clear function, xdp_clear_return_frame_no_direct().

Impact