CVE-2022-50149
Publication date:
18/06/2025
In the Linux kernel, the following vulnerability has been resolved:

driver core: fix potential deadlock in __driver_attach

The __driver_attach() function has the same A-A deadlock problem as the
one fixed by commit b232b02bf3c2 ("driver core: fix deadlock in
__device_attach"), and the stack is similar to the one listed in that
commit.

In __driver_attach(), the lock-holding logic is as follows:
...
__driver_attach
  if (driver_allows_async_probing(drv))
    device_lock(dev)  // take the device lock
    async_schedule_dev(__driver_attach_async_helper, dev);  // func
      async_schedule_node
        async_schedule_node_domain(func)
          entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC);
          /* On allocation failure or when the work limit is hit,
             func is executed synchronously; but
             __driver_attach_async_helper will take the device lock
             as well, which leads to an A-A deadlock. */
          if (!entry || atomic_read(&entry_count) > MAX_WORK)
            func;
          else
            queue_work_node(node, system_unbound_wq, &entry->work)
    device_unlock(dev)

As shown above, when async probing is allowed but the async work cannot
be queued (because of an out-of-memory condition or the work limit),
func is executed synchronously instead. This leads to an A-A deadlock,
because __driver_attach_async_helper() also takes the device lock.

Reproduce:
The deadlock can be reproduced by forcing the condition
(!entry || atomic_read(&entry_count) > MAX_WORK) to hold, as in the
trace below:

[ 370.785650] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 370.787154] task:swapper/0 state:D stack: 0 pid: 1 ppid: 0 flags:0x00004000
[ 370.788865] Call Trace:
[ 370.789374] <TASK>
[ 370.789841] __schedule+0x482/0x1050
[ 370.790613] schedule+0x92/0x1a0
[ 370.791290] schedule_preempt_disabled+0x2c/0x50
[ 370.792256] __mutex_lock.isra.0+0x757/0xec0
[ 370.793158] __mutex_lock_slowpath+0x1f/0x30
[ 370.794079] mutex_lock+0x50/0x60
[ 370.794795] __device_driver_lock+0x2f/0x70
[ 370.795677] ? driver_probe_device+0xd0/0xd0
[ 370.796576] __driver_attach_async_helper+0x1d/0xd0
[ 370.797318] ? driver_probe_device+0xd0/0xd0
[ 370.797957] async_schedule_node_domain+0xa5/0xc0
[ 370.798652] async_schedule_node+0x19/0x30
[ 370.799243] __driver_attach+0x246/0x290
[ 370.799828] ? driver_allows_async_probing+0xa0/0xa0
[ 370.800548] bus_for_each_dev+0x9d/0x130
[ 370.801132] driver_attach+0x22/0x30
[ 370.801666] bus_add_driver+0x290/0x340
[ 370.802246] driver_register+0x88/0x140
[ 370.802817] ? virtio_scsi_init+0x116/0x116
[ 370.803425] scsi_register_driver+0x1a/0x30
[ 370.804057] init_sd+0x184/0x226
[ 370.804533] do_one_initcall+0x71/0x3a0
[ 370.805107] kernel_init_freeable+0x39a/0x43a
[ 370.805759] ? rest_init+0x150/0x150
[ 370.806283] kernel_init+0x26/0x230
[ 370.806799] ret_from_fork+0x1f/0x30

To fix the deadlock, move the async_schedule_dev() call outside the
device_lock()/device_unlock() section. As can be seen in
async_schedule_node_domain(), the workqueue passed to queue_work_node()
is system_unbound_wq, which accepts concurrent work, so moving the call
does not change the logic and no longer leads to the deadlock.
Severity CVSS v4.0: Pending analysis
Last modification:
17/11/2025