CVE-2023-53730
Publication date:
22/10/2025
In the Linux kernel, the following vulnerability has been resolved:<br />
<br />
blk-iocost: use spin_lock_irqsave in adjust_inuse_and_calc_cost<br />
<br />
adjust_inuse_and_calc_cost() uses spin_lock_irq(), so IRQs are<br />
unconditionally re-enabled on unlock. A DEADLOCK can occur if the caller<br />
already holds other locks with IRQs disabled before invoking it.<br />
<br />
Fix it by using spin_lock_irqsave() instead, which keeps the IRQ state<br />
on unlock consistent with what it was before the lock was taken.<br />
<br />
================================<br />
WARNING: inconsistent lock state<br />
5.10.0-02758-g8e5f91fd772f #26 Not tainted<br />
--------------------------------<br />
inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.<br />
kworker/2:3/388 [HC0[0]:SC0[0]:HE0:SE1] takes:<br />
ffff888118c00c28 (&bfqd->lock){?.-.}-{2:2}, at: spin_lock_irq<br />
ffff888118c00c28 (&bfqd->lock){?.-.}-{2:2}, at: bfq_bio_merge+0x141/0x390<br />
{IN-HARDIRQ-W} state was registered at:<br />
__lock_acquire+0x3d7/0x1070<br />
lock_acquire+0x197/0x4a0<br />
__raw_spin_lock_irqsave<br />
_raw_spin_lock_irqsave+0x3b/0x60<br />
bfq_idle_slice_timer_body<br />
bfq_idle_slice_timer+0x53/0x1d0<br />
__run_hrtimer+0x477/0xa70<br />
__hrtimer_run_queues+0x1c6/0x2d0<br />
hrtimer_interrupt+0x302/0x9e0<br />
local_apic_timer_interrupt<br />
__sysvec_apic_timer_interrupt+0xfd/0x420<br />
run_sysvec_on_irqstack_cond<br />
sysvec_apic_timer_interrupt+0x46/0xa0<br />
asm_sysvec_apic_timer_interrupt+0x12/0x20<br />
irq event stamp: 837522<br />
hardirqs last enabled at (837521): [] __raw_spin_unlock_irqrestore<br />
hardirqs last enabled at (837521): [] _raw_spin_unlock_irqrestore+0x3d/0x40<br />
hardirqs last disabled at (837522): [] __raw_spin_lock_irq<br />
hardirqs last disabled at (837522): [] _raw_spin_lock_irq+0x43/0x50<br />
softirqs last enabled at (835852): [] __do_softirq+0x558/0x8ec<br />
softirqs last disabled at (835845): [] asm_call_irq_on_stack+0xf/0x20<br />
<br />
other info that might help us debug this:<br />
Possible unsafe locking scenario:<br />
<br />
CPU0<br />
----<br />
lock(&bfqd->lock);<br />
<br />
lock(&bfqd->lock);<br />
<br />
*** DEADLOCK ***<br />
<br />
3 locks held by kworker/2:3/388:<br />
#0: ffff888107af0f38 ((wq_completion)kthrotld){+.+.}-{0:0}, at: process_one_work+0x742/0x13f0<br />
#1: ffff8881176bfdd8 ((work_completion)(&td->dispatch_work)){+.+.}-{0:0}, at: process_one_work+0x777/0x13f0<br />
#2: ffff888118c00c28 (&bfqd->lock){?.-.}-{2:2}, at: spin_lock_irq<br />
#2: ffff888118c00c28 (&bfqd->lock){?.-.}-{2:2}, at: bfq_bio_merge+0x141/0x390<br />
<br />
stack backtrace:<br />
CPU: 2 PID: 388 Comm: kworker/2:3 Not tainted 5.10.0-02758-g8e5f91fd772f #26<br />
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014<br />
Workqueue: kthrotld blk_throtl_dispatch_work_fn<br />
Call Trace:<br />
__dump_stack lib/dump_stack.c:77 [inline]<br />
dump_stack+0x107/0x167<br />
print_usage_bug<br />
valid_state<br />
mark_lock_irq.cold+0x32/0x3a<br />
mark_lock+0x693/0xbc0<br />
mark_held_locks+0x9e/0xe0<br />
__trace_hardirqs_on_caller<br />
lockdep_hardirqs_on_prepare.part.0+0x151/0x360<br />
trace_hardirqs_on+0x5b/0x180<br />
__raw_spin_unlock_irq<br />
_raw_spin_unlock_irq+0x24/0x40<br />
spin_unlock_irq<br />
adjust_inuse_and_calc_cost+0x4fb/0x970<br />
ioc_rqos_merge+0x277/0x740<br />
__rq_qos_merge+0x62/0xb0<br />
rq_qos_merge<br />
bio_attempt_back_merge+0x12c/0x4a0<br />
blk_mq_sched_try_merge+0x1b6/0x4d0<br />
bfq_bio_merge+0x24a/0x390<br />
__blk_mq_sched_bio_merge+0xa6/0x460<br />
blk_mq_sched_bio_merge<br />
blk_mq_submit_bio+0x2e7/0x1ee0<br />
__submit_bio_noacct_mq+0x175/0x3b0<br />
submit_bio_noacct+0x1fb/0x270<br />
blk_throtl_dispatch_work_fn+0x1ef/0x2b0<br />
process_one_work+0x83e/0x13f0<br />
process_scheduled_works<br />
worker_thread+0x7e3/0xd80<br />
kthread+0x353/0x470<br />
ret_from_fork+0x1f/0x30
Severity CVSS v4.0: Pending analysis
Last modification:
15/04/2026