CVE-2026-43465

Severity CVSS v4.0:
Pending analysis
Type:
Unavailable / Other
Publication date:
08/05/2026
Last modified:
12/05/2026

Description

In the Linux kernel, the following vulnerability has been resolved:

net/mlx5e: RX, Fix XDP multi-buf frag counting for striding RQ

XDP multi-buf programs can modify the layout of the XDP buffer when the program calls bpf_xdp_pull_data() or bpf_xdp_adjust_tail(). The referenced commit in the fixes tag corrected the assumption in the mlx5 driver that the XDP buffer layout doesn't change during program execution. However, this fix introduced another issue: the dropped fragments still need to be counted on the driver side to avoid page fragment reference counting issues.

The issue was discovered by the drivers/net/xdp.py selftest, more specifically the test_xdp_native_tx_mb:
- The mlx5 driver allocates a page_pool page and initializes it with a frag counter of 64 (pp_ref_count=64) and the internal frag counter set to 0.
- The test sends one packet with no payload.
- On RX (mlx5e_skb_from_cqe_mpwrq_nonlinear()), mlx5 configures the XDP buffer with the packet data starting in the first fragment, which is the page mentioned above.
- The XDP program runs and calls bpf_xdp_pull_data(), which moves the header into the linear part of the XDP buffer. As the packet doesn't contain more data, the program drops the tail fragment since it no longer contains any payload (pp_ref_count=63).
- The mlx5 driver skips counting this fragment.
  The internal frag counter remains 0.
- mlx5 releases all 64 fragments of the page, but the page's pp_ref_count is 63 => negative reference counting error.

Resulting splat during the test:

WARNING: CPU: 0 PID: 188225 at ./include/net/page_pool/helpers.h:297 mlx5e_page_release_fragmented.isra.0+0xbd/0xe0 [mlx5_core]
Modules linked in: [...]
CPU: 0 UID: 0 PID: 188225 Comm: ip Not tainted 6.18.0-rc7_for_upstream_min_debug_2025_12_08_11_44 #1 NONE
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:mlx5e_page_release_fragmented.isra.0+0xbd/0xe0 [mlx5_core]
[...]
Call Trace:
 mlx5e_free_rx_mpwqe+0x20a/0x250 [mlx5_core]
 mlx5e_dealloc_rx_mpwqe+0x37/0xb0 [mlx5_core]
 mlx5e_free_rx_descs+0x11a/0x170 [mlx5_core]
 mlx5e_close_rq+0x78/0xa0 [mlx5_core]
 mlx5e_close_queues+0x46/0x2a0 [mlx5_core]
 mlx5e_close_channel+0x24/0x90 [mlx5_core]
 mlx5e_close_channels+0x5d/0xf0 [mlx5_core]
 mlx5e_safe_switch_params+0x2ec/0x380 [mlx5_core]
 mlx5e_change_mtu+0x11d/0x490 [mlx5_core]
 mlx5e_change_nic_mtu+0x19/0x30 [mlx5_core]
 netif_set_mtu_ext+0xfc/0x240
 do_setlink.isra.0+0x226/0x1100
 rtnl_newlink+0x7a9/0xba0
 rtnetlink_rcv_msg+0x220/0x3c0
 netlink_rcv_skb+0x4b/0xf0
 netlink_unicast+0x255/0x380
 netlink_sendmsg+0x1f3/0x420
 __sock_sendmsg+0x38/0x60
 ____sys_sendmsg+0x1e8/0x240
 ___sys_sendmsg+0x7c/0xb0
 [...]
 __sys_sendmsg+0x5f/0xb0
 do_syscall_64+0x55/0xc70

The problem applies to XDP_PASS as well, which is handled in a different code path in the driver.

This patch fixes the issue by doing page frag counting on all the original XDP buffer fragments for all relevant XDP actions (XDP_TX, XDP_REDIRECT and XDP_PASS).
This essentially reverts to the original counting behavior from before the commit in the fixes tag.

As frag_page still points to the original tail, the nr_frags parameter to xdp_update_skb_frags_info() needs to be calculated differently to reflect the new nr_frags.