CVE-2021-46987
Severity (CVSS v4.0): Pending analysis
Type: Unavailable / Other
Publication date: 28/02/2024
Last modified: 03/11/2025
Description
In the Linux kernel, the following vulnerability has been resolved:

btrfs: fix deadlock when cloning inline extents and using qgroups

There are a few exceptional cases where cloning an inline extent needs to
copy the inline extent data into a page of the destination inode.

When this happens, we end up starting a transaction while having a dirty
page for the destination inode and while having the range locked in the
destination's inode iotree too. Because when reserving metadata space
for a transaction we may need to flush existing delalloc in case there is
not enough free space, we have a mechanism in place to prevent a deadlock,
which was introduced in commit 3d45f221ce627d ("btrfs: fix deadlock when
cloning inline extent and low on free metadata space").

However, when using qgroups, a transaction also reserves metadata qgroup
space, which can also result in flushing delalloc in case there is not
enough available space at the moment. When this happens we deadlock, since
flushing delalloc requires locking the file range in the inode's iotree
and the range was already locked at the very beginning of the clone
operation, before attempting to start the transaction.

When this issue happens, stack traces like the following are reported:

[72747.556262] task:kworker/u81:9 state:D stack: 0 pid: 225 ppid: 2 flags:0x00004000
[72747.556268] Workqueue: writeback wb_workfn (flush-btrfs-1142)
[72747.556271] Call Trace:
[72747.556273] __schedule+0x296/0x760
[72747.556277] schedule+0x3c/0xa0
[72747.556279] io_schedule+0x12/0x40
[72747.556284] __lock_page+0x13c/0x280
[72747.556287] ? generic_file_readonly_mmap+0x70/0x70
[72747.556325] extent_write_cache_pages+0x22a/0x440 [btrfs]
[72747.556331] ? __set_page_dirty_nobuffers+0xe7/0x160
[72747.556358] ? set_extent_buffer_dirty+0x5e/0x80 [btrfs]
[72747.556362] ? update_group_capacity+0x25/0x210
[72747.556366] ? cpumask_next_and+0x1a/0x20
[72747.556391] extent_writepages+0x44/0xa0 [btrfs]
[72747.556394] do_writepages+0x41/0xd0
[72747.556398] __writeback_single_inode+0x39/0x2a0
[72747.556403] writeback_sb_inodes+0x1ea/0x440
[72747.556407] __writeback_inodes_wb+0x5f/0xc0
[72747.556410] wb_writeback+0x235/0x2b0
[72747.556414] ? get_nr_inodes+0x35/0x50
[72747.556417] wb_workfn+0x354/0x490
[72747.556420] ? newidle_balance+0x2c5/0x3e0
[72747.556424] process_one_work+0x1aa/0x340
[72747.556426] worker_thread+0x30/0x390
[72747.556429] ? create_worker+0x1a0/0x1a0
[72747.556432] kthread+0x116/0x130
[72747.556435] ? kthread_park+0x80/0x80
[72747.556438] ret_from_fork+0x1f/0x30

[72747.566958] Workqueue: btrfs-flush_delalloc btrfs_work_helper [btrfs]
[72747.566961] Call Trace:
[72747.566964] __schedule+0x296/0x760
[72747.566968] ? finish_wait+0x80/0x80
[72747.566970] schedule+0x3c/0xa0
[72747.566995] wait_extent_bit.constprop.68+0x13b/0x1c0 [btrfs]
[72747.566999] ? finish_wait+0x80/0x80
[72747.567024] lock_extent_bits+0x37/0x90 [btrfs]
[72747.567047] btrfs_invalidatepage+0x299/0x2c0 [btrfs]
[72747.567051] ? find_get_pages_range_tag+0x2cd/0x380
[72747.567076] __extent_writepage+0x203/0x320 [btrfs]
[72747.567102] extent_write_cache_pages+0x2bb/0x440 [btrfs]
[72747.567106] ? update_load_avg+0x7e/0x5f0
[72747.567109] ? enqueue_entity+0xf4/0x6f0
[72747.567134] extent_writepages+0x44/0xa0 [btrfs]
[72747.567137] ? enqueue_task_fair+0x93/0x6f0
[72747.567140] do_writepages+0x41/0xd0
[72747.567144] __filemap_fdatawrite_range+0xc7/0x100
[72747.567167] btrfs_run_delalloc_work+0x17/0x40 [btrfs]
[72747.567195] btrfs_work_helper+0xc2/0x300 [btrfs]
[72747.567200] process_one_work+0x1aa/0x340
[72747.567202] worker_thread+0x30/0x390
[72747.567205] ? create_worker+0x1a0/0x1a0
[72747.567208] kthread+0x116/0x130
[72747.567211] ? kthread_park+0x80/0x80
[72747.567214] ret_from_fork+0x1f/0x30

[72747.569686] task:fsstress state:D stack:
---truncated---
Impact
Base Score (CVSS 3.x): 5.50
Severity (CVSS 3.x): MEDIUM
Vulnerable products and versions
| CPE | From | Up to |
|---|---|---|
| cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* | 5.9 (including) | 5.11.22 (excluding) |
| cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* | 5.12 (including) | 5.12.5 (excluding) |
| cpe:2.3:o:linux:linux_kernel:5.13:rc1:*:*:*:*:*:* | | |
For the complete list of CPE names with products and versions, see this page.
References to Advisories, Solutions, and Tools
- https://git.kernel.org/stable/c/96157707c0420e3d3edfe046f1cc797fee117ade
- https://git.kernel.org/stable/c/d5347827d0b4b2250cbce6eccaa1c81dc78d8651
- https://git.kernel.org/stable/c/f8fbbd06fab9b75dcd68d850fe318ac3bc128974
- https://git.kernel.org/stable/c/f9baa501b4fd6962257853d46ddffbc21f27e344
- https://lists.debian.org/debian-lts-announce/2025/10/msg00007.html