CVE-2021-47408
Publication date:
21/05/2024
In the Linux kernel, the following vulnerability has been resolved:

netfilter: conntrack: serialize hash resizes and cleanups

Syzbot was able to trigger the following warning [1]

No repro found by syzbot yet, but I was able to trigger a similar issue
by having 2 scripts running in parallel, changing conntrack hash sizes,
and:

for j in `seq 1 1000` ; do unshare -n /bin/true >/dev/null ; done

It would take more than 5 minutes for net_namespace structures
to be cleaned up.

This is because nf_ct_iterate_cleanup() has to restart every time
a resize happens.

By adding a mutex, we can serialize hash resizes and cleanups
and also make get_next_corpse() faster by skipping over empty
buckets.

Even without resizes in the picture, this patch considerably
speeds up network namespace dismantles.

[1]
INFO: task syz-executor.0:8312 can't die for more than 144 seconds.
task:syz-executor.0 state:R running task stack:25672 pid: 8312 ppid: 6573 flags:0x00004006
Call Trace:
context_switch kernel/sched/core.c:4955 [inline]
__schedule+0x940/0x26f0 kernel/sched/core.c:6236
preempt_schedule_common+0x45/0xc0 kernel/sched/core.c:6408
preempt_schedule_thunk+0x16/0x18 arch/x86/entry/thunk_64.S:35
__local_bh_enable_ip+0x109/0x120 kernel/softirq.c:390
local_bh_enable include/linux/bottom_half.h:32 [inline]
get_next_corpse net/netfilter/nf_conntrack_core.c:2252 [inline]
nf_ct_iterate_cleanup+0x15a/0x450 net/netfilter/nf_conntrack_core.c:2275
nf_conntrack_cleanup_net_list+0x14c/0x4f0 net/netfilter/nf_conntrack_core.c:2469
ops_exit_list+0x10d/0x160 net/core/net_namespace.c:171
setup_net+0x639/0xa30 net/core/net_namespace.c:349
copy_net_ns+0x319/0x760 net/core/net_namespace.c:470
create_new_namespaces+0x3f6/0xb20 kernel/nsproxy.c:110
unshare_nsproxy_namespaces+0xc1/0x1f0 kernel/nsproxy.c:226
ksys_unshare+0x445/0x920 kernel/fork.c:3128
__do_sys_unshare kernel/fork.c:3202 [inline]
__se_sys_unshare kernel/fork.c:3200 [inline]
__x64_sys_unshare+0x2d/0x40 kernel/fork.c:3200
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f63da68e739
RSP: 002b:00007f63d7c05188 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 00007f63da792f80 RCX: 00007f63da68e739
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000040000000
RBP: 00007f63da6e8cc4 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f63da792f80
R13: 00007fff50b75d3f R14: 00007f63d7c05300 R15: 0000000000022000

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8b980020 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6446
2 locks held by kworker/u4:2/153:
#0: ffff888010c69138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c69138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c69138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c69138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:634 [inline]
#0: ffff888010c69138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:661 [inline]
#0: ffff888010c69138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2268
#1: ffffc9000140fdb0 ((kfence_timer).work){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2272
1 lock held by systemd-udevd/2970:
1 lock held by in:imklog/6258:
#0: ffff88807f970ff0 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:990
3 locks held by kworker/1:6/8158:
1 lock held by syz-executor.0/8312:
2 locks held by kworker/u4:13/9320:
1 lock held by
---truncated---
Severity CVSS v4.0: Pending analysis
Last modification:
25/09/2025