

In the Linux kernel, the following vulnerability has been resolved:<br /> <br /> IB/hfi1: Fix bugs with non-PAGE_SIZE-end multi-iovec user SDMA requests<br /> <br /> hfi1 user SDMA request processing has two bugs that can cause data<br /> corruption for user SDMA requests that have multiple payload iovecs<br /> where an iovec other than the tail iovec does not run up to the page<br /> boundary for the buffer pointed to by that iovec.<br /> <br /> Here are the specific bugs:<br /> 1. user_sdma_txadd() does not use struct user_sdma_iovec-&gt;iov.iov_len.<br /> Rather, user_sdma_txadd() will add up to PAGE_SIZE bytes from iovec<br /> to the packet, even if some of those bytes are past<br /> iovec-&gt;iov.iov_len and are thus not intended to be in the packet.<br /> 2. user_sdma_txadd() and user_sdma_send_pkts() fail to advance to the<br /> next iovec in user_sdma_request-&gt;iovs when the current iovec<br /> is shorter than PAGE_SIZE and does not contain enough data to complete the<br /> packet. The transmitted packet will contain the wrong data from the<br /> iovec pages.<br /> <br /> This has not been an issue with SDMA packets from hfi1 Verbs or PSM2<br /> because they only produce iovecs that end short of PAGE_SIZE as the tail<br /> iovec of an SDMA request.<br /> <br /> Fixing these bugs exposes other bugs with the SDMA pin cache<br /> (struct mmu_rb_handler) that get in the way of supporting user SDMA requests<br /> with multiple payload iovecs whose buffers do not end at PAGE_SIZE. So<br /> this commit fixes those issues as well.<br /> <br /> Here are the mmu_rb_handler bugs that non-PAGE_SIZE-end multi-iovec<br /> payload user SDMA requests can hit:<br /> 1. Overlapping memory ranges in mmu_rb_handler will result in duplicate<br /> pinnings.<br /> 2. 
When extending an existing mmu_rb_handler entry (struct mmu_rb_node),<br /> the mmu_rb code (1) removes the existing entry under a lock, (2)<br /> releases that lock, pins the new pages, (3) then reacquires the lock<br /> to insert the extended mmu_rb_node.<br /> <br /> If someone else comes in and inserts an overlapping entry between (2)<br /> and (3), the insert in (3) will fail.<br /> <br /> The failure path code in this case unpins _all_ pages in either the<br /> original mmu_rb_node or the new mmu_rb_node that was inserted between<br /> (2) and (3).<br /> 3. In hfi1_mmu_rb_remove_unless_exact(), mmu_rb_node-&gt;refcount is<br /> incremented outside of mmu_rb_handler-&gt;lock. As a result, mmu_rb_node<br /> could be evicted by another thread that gets mmu_rb_handler-&gt;lock and<br /> checks mmu_rb_node-&gt;refcount before mmu_rb_node-&gt;refcount is<br /> incremented.<br /> 4. Related to #2 above, the SDMA request submission failure path does not<br /> check mmu_rb_node-&gt;refcount before freeing the mmu_rb_node object.<br /> <br /> If there are other SDMA requests in progress whose iovecs have<br /> pointers to the now-freed mmu_rb_node(s), those pointers to the<br /> now-freed mmu_rb nodes will be dereferenced when those SDMA requests<br /> complete.
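The first two bugs above both come down to packet assembly honoring iov_len and advancing across iovecs even when a buffer ends short of a page boundary. The following is a minimal user-space sketch of the corrected copy loop, not the actual hfi1 code; the names (`sim_iovec`, `fill_packet`, `SIM_PAGE_SIZE`) are illustrative stand-ins for the kernel's structures.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define SIM_PAGE_SIZE 16 /* illustrative stand-in for PAGE_SIZE */

/* Simplified stand-in for struct user_sdma_iovec. */
struct sim_iovec {
    const unsigned char *base;
    size_t len;    /* analogous to iov.iov_len */
    size_t offset; /* bytes already consumed from this iovec */
};

/*
 * Copy up to pkt_len payload bytes into pkt. Never reads past an
 * iovec's len (bug 1), and advances to the next iovec whenever the
 * current one is exhausted, even if it ends short of a page (bug 2).
 * Returns the number of bytes actually copied.
 */
static size_t fill_packet(unsigned char *pkt, size_t pkt_len,
                          struct sim_iovec *iovs, size_t niovs)
{
    size_t copied = 0;
    size_t i = 0;

    while (copied < pkt_len && i < niovs) {
        struct sim_iovec *iov = &iovs[i];
        size_t remaining = iov->len - iov->offset;
        size_t chunk = pkt_len - copied;

        /* Clamp to the iovec's real length, not to PAGE_SIZE:
         * copying up to a full page here regardless of len is
         * exactly bug 1 described in the advisory. */
        if (chunk > remaining)
            chunk = remaining;
        memcpy(pkt + copied, iov->base + iov->offset, chunk);
        iov->offset += chunk;
        copied += chunk;

        /* Advance when this iovec is exhausted, even mid-page;
         * failing to do so is bug 2. */
        if (iov->offset == iov->len)
            i++;
    }
    return copied;
}
```

With a 3-byte first iovec (well short of the 16-byte simulated page) followed by an 8-byte iovec, an 8-byte packet correctly takes 3 bytes from the first buffer and 5 from the second, rather than reading past the first buffer's end.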
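mmu_rb bug 3 is a classic check-then-use race: because the refcount was incremented outside mmu_rb_handler-&gt;lock, an evictor holding the lock could observe a zero refcount and free a node another thread was about to use. Below is a minimal pthreads sketch of the corrected locking discipline; the types and functions (`sim_rb_node`, `sim_rb_get`, `sim_rb_evict`) are illustrative, not the driver's actual API.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Simplified stand-ins for struct mmu_rb_node / mmu_rb_handler. */
struct sim_rb_node {
    int refcount;
};

struct sim_rb_handler {
    pthread_mutex_t lock;
    struct sim_rb_node *node; /* stand-in for the interval tree */
};

/*
 * Fixed pattern: bump refcount while holding the handler lock, so an
 * evictor that checks refcount under the same lock can never see a
 * zero count on a node a requester is about to use.
 */
static struct sim_rb_node *sim_rb_get(struct sim_rb_handler *h)
{
    struct sim_rb_node *node;

    pthread_mutex_lock(&h->lock);
    node = h->node;
    if (node)
        node->refcount++; /* incremented INSIDE the lock */
    pthread_mutex_unlock(&h->lock);
    return node;
}

/*
 * Evictor: only detaches a node whose refcount is zero, checked under
 * the same lock. Returns the victim (for the caller to unpin/free
 * outside the lock) or NULL if the node is still in use.
 */
static struct sim_rb_node *sim_rb_evict(struct sim_rb_handler *h)
{
    struct sim_rb_node *victim = NULL;

    pthread_mutex_lock(&h->lock);
    if (h->node && h->node->refcount == 0) {
        victim = h->node;
        h->node = NULL;
    }
    pthread_mutex_unlock(&h->lock);
    return victim;
}
```

Because both the increment and the refcount check happen under the same lock, a pinned node (refcount &gt; 0) can never be selected for eviction, which also addresses bug 4's requirement that the failure path check the refcount before freeing.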