CVE-2024-53058

Severity CVSS v4.0:
Pending analysis
Type:
Unavailable / Other
Publication date:
19/11/2024
Last modified:
03/11/2025

Description

In the Linux kernel, the following vulnerability has been resolved:

net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data

In case the non-paged data of a SKB carries the protocol header and protocol payload to be transmitted on a platform where the DMA AXI address width is configured to 40-bit/48-bit, or the size of the non-paged data is bigger than TSO_MAX_BUFF_SIZE on a platform where the DMA AXI address width is configured to 32-bit, then this SKB requires at least two DMA transmit descriptors to serve it.

For example, three descriptors are allocated to split one DMA buffer mapped from one piece of non-paged data:
tx_desc: dma_desc[N + 0], dma_desc[N + 1], dma_desc[N + 2].
Then three elements of tx_q->tx_skbuff_dma[] will be allocated to hold extra information to be reused in stmmac_tx_clean():
tx_q->tx_skbuff_dma[N + 0], tx_q->tx_skbuff_dma[N + 1], tx_q->tx_skbuff_dma[N + 2].
Now we focus on tx_q->tx_skbuff_dma[entry].buf, which is the DMA buffer address returned by the DMA mapping call. stmmac_tx_clean() will try to unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf is a valid buffer address.

The expected behavior, which saves the DMA buffer address of this non-paged data to tx_q->tx_skbuff_dma[entry].buf, is:
tx_q->tx_skbuff_dma[N + 0].buf = NULL;
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();
Unfortunately, the current code misbehaves like this:
tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = NULL;

On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is obviously a valid buffer address, so the DMA buffer is unmapped immediately. In a rare case, the DMA engine may not yet have finished the pending dma_desc[N + 1] and dma_desc[N + 2]. Things then go horribly wrong: DMA will access an unmapped/unreferenced memory region, and corrupted data will be transmitted or an IOMMU fault will be triggered.

In contrast, the for-loop that maps SKB fragments behaves perfectly as expected, and that is how the driver should actually handle both non-paged data and paged frags.

This patch corrects the DMA map/unmap sequence by fixing the array index for tx_q->tx_skbuff_dma[entry].buf when assigning the DMA buffer address.

Tested and verified on DWXGMAC CORE 3.20a

Vulnerable products and versions

CPE From Up to
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 4.7 (including) 5.15.171 (excluding)
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 5.16 (including) 6.1.116 (excluding)
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 6.2 (including) 6.6.60 (excluding)
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 6.7 (including) 6.11.7 (excluding)
cpe:2.3:o:linux:linux_kernel:6.12:rc1:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:6.12:rc2:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:6.12:rc3:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:6.12:rc4:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:6.12:rc5:*:*:*:*:*:*