CVE-2025-68756
Publication date:
05/01/2026
In the Linux kernel, the following vulnerability has been resolved:

block: Use RCU in blk_mq_[un]quiesce_tagset() instead of set->tag_list_lock

The blk_mq_{add,del}_queue_tag_set() functions add queues to and remove
queues from a tagset; they also make sure that the tagset and its queues
are marked as shared when two or more queues are attached to the same
tagset. A tagset starts out unshared, and when the number of attached
queues reaches two, blk_mq_add_queue_tag_set() marks it as shared along
with all the queues attached to it. When the number of attached queues
drops back to one, blk_mq_del_queue_tag_set() needs to mark both the
tagset and the remaining queue as unshared.

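As a rough illustration, the add-side bookkeeping looks like the
following condensed sketch (abbreviated from block/blk-mq.c; the
upstream function carries more detail, so treat this as a paraphrase
rather than the exact code):

static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
				     struct request_queue *q)
{
	mutex_lock(&set->tag_list_lock);

	/* A second queue joining the tagset makes everything shared. */
	if (!list_empty(&set->tag_list) &&
	    !(set->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
		set->flags |= BLK_MQ_F_TAG_QUEUE_SHARED;
		/* Freezes every attached queue before flipping the flag. */
		blk_mq_update_tag_set_shared(set, true);
	}
	if (set->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
		queue_set_hctx_shared(q, true);
	list_add_tail(&q->tag_set_list, &set->tag_list);

	mutex_unlock(&set->tag_list_lock);
}
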
Both functions need to freeze the queues currently in the tagset before
setting or unsetting the BLK_MQ_F_TAG_QUEUE_SHARED flag. While doing
so, both functions hold the set->tag_list_lock mutex, which makes
sense: we do not want queues to be added or deleted in the process.
This used to work fine until commit 98d81f0df70c ("nvme: use
blk_mq_[un]quiesce_tagset") made the nvme driver quiesce the tagset
instead of quiescing individual queues. blk_mq_quiesce_tagset()
quiesces the queues in set->tag_list while also holding
set->tag_list_lock.

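Condensed, the two pre-fix paths that end up colliding look roughly
like this (again abbreviated from block/blk-mq.c, with the bodies
heavily trimmed):

/* Path 1: timeout handling, reached via nvme_dev_disable() */
void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
{
	struct request_queue *q;

	mutex_lock(&set->tag_list_lock);	/* blocks behind path 2 */
	list_for_each_entry(q, &set->tag_list, tag_set_list)
		blk_mq_quiesce_queue_nowait(q);
	blk_mq_wait_quiesce_done(set);
	mutex_unlock(&set->tag_list_lock);
}

/* Path 2: queue teardown, reached via del_gendisk() */
static void blk_mq_del_queue_tag_set(struct request_queue *q)
{
	struct blk_mq_tag_set *set = q->tag_set;

	mutex_lock(&set->tag_list_lock);
	list_del(&q->tag_set_list);
	if (list_is_singular(&set->tag_list)) {
		/* Freezing waits for in-flight requests, including the
		 * timed-out one that path 1 is still handling. */
		blk_mq_update_tag_set_shared(set, false);
	}
	mutex_unlock(&set->tag_list_lock);
	INIT_LIST_HEAD(&q->tag_set_list);
}
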
This results in a deadlock between two threads, with the following
stack traces:

__schedule+0x47c/0xbb0
? timerqueue_add+0x66/0xb0
schedule+0x1c/0xa0
schedule_preempt_disabled+0xa/0x10
__mutex_lock.constprop.0+0x271/0x600
blk_mq_quiesce_tagset+0x25/0xc0
nvme_dev_disable+0x9c/0x250
nvme_timeout+0x1fc/0x520
blk_mq_handle_expired+0x5c/0x90
bt_iter+0x7e/0x90
blk_mq_queue_tag_busy_iter+0x27e/0x550
? __blk_mq_complete_request_remote+0x10/0x10
? __blk_mq_complete_request_remote+0x10/0x10
? __call_rcu_common.constprop.0+0x1c0/0x210
blk_mq_timeout_work+0x12d/0x170
process_one_work+0x12e/0x2d0
worker_thread+0x288/0x3a0
? rescuer_thread+0x480/0x480
kthread+0xb8/0xe0
? kthread_park+0x80/0x80
ret_from_fork+0x2d/0x50
? kthread_park+0x80/0x80
ret_from_fork_asm+0x11/0x20

__schedule+0x47c/0xbb0
? xas_find+0x161/0x1a0
schedule+0x1c/0xa0
blk_mq_freeze_queue_wait+0x3d/0x70
? destroy_sched_domains_rcu+0x30/0x30
blk_mq_update_tag_set_shared+0x44/0x80
blk_mq_exit_queue+0x141/0x150
del_gendisk+0x25a/0x2d0
nvme_ns_remove+0xc9/0x170
nvme_remove_namespaces+0xc7/0x100
nvme_remove+0x62/0x150
pci_device_remove+0x23/0x60
device_release_driver_internal+0x159/0x200
unbind_store+0x99/0xa0
kernfs_fop_write_iter+0x112/0x1e0
vfs_write+0x2b1/0x3d0
ksys_write+0x4e/0xb0
do_syscall_64+0x5b/0x160
entry_SYSCALL_64_after_hwframe+0x4b/0x53

The top stack trace shows nvme_timeout() being called to handle an NVMe
command timeout. The timeout handler is trying to disable the
controller, and as a first step it needs to call
blk_mq_quiesce_tagset() to tell blk-mq not to invoke the queue callback
handlers. The thread is stuck waiting for set->tag_list_lock as it
tries to walk the queues in set->tag_list.

The lock is held by the second thread, shown in the bottom stack trace,
which is waiting for one of the queues to be frozen. The queue usage
counter can only drop to zero once nvme_timeout() finishes, and that
will never happen because the first thread will wait on the mutex
forever.

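The wait on the freeze side is easy to see in
blk_mq_freeze_queue_wait(), whose body boils down to a single wait
(condensed from block/blk-mq.c):

void blk_mq_freeze_queue_wait(struct request_queue *q)
{
	/* Sleeps until every reference to the queue is dropped; the
	 * timed-out request pins q_usage_counter until its timeout
	 * handler returns. */
	wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
}
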
Given that [un]quiescing a queue is an operation that does not need to
sleep, update blk_mq_[un]quiesce_tagset() to use RCU instead of taking
set->tag_list_lock, and update blk_mq_{add,del}_queue_tag_set() to use
RCU-safe list operations. Also, delete the
INIT_LIST_HEAD(&q->tag_set_list) call in blk_mq_del_queue_tag_set(),
because the list entry cannot be re-initialized while the list may
still be traversed under RCU. The deleted queue will not be added to or
removed from a tagset again, and it will be freed in blk_free_queue()
after the end of the RCU grace period.
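
After the fix, the quiesce path can walk the list without the mutex. A
condensed sketch of the new shape (a paraphrase, with the per-queue
skip-quiesce check and other details omitted):

void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
{
	struct request_queue *q;

	/* blk_mq_quiesce_queue_nowait() does not sleep, so the walk
	 * can run inside an RCU read-side critical section. */
	rcu_read_lock();
	list_for_each_entry_rcu(q, &set->tag_list, tag_set_list)
		blk_mq_quiesce_queue_nowait(q);
	rcu_read_unlock();

	/* The sleeping wait happens outside the RCU section. */
	blk_mq_wait_quiesce_done(set);
}

On the add/del side, list_add_tail() becomes list_add_tail_rcu() and
list_del() becomes list_del_rcu(), so concurrent RCU readers always see
a consistent list.
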
Severity CVSS v4.0: Pending analysis
Last modification:
05/01/2026