CVE-2024-56779
Publication date:
08/01/2025
In the Linux kernel, the following vulnerability has been resolved:<br />
<br />
nfsd: fix nfs4_openowner leak when concurrent nfsd4_open occur<br />
<br />
A forced umount (umount -f) will attempt to kill all rpc_tasks, even though the<br />
umount operation may ultimately fail if some files remain open.<br />
Consequently, if a process attempts to open a file during the forced umount,<br />
it can end up sending two rpc_tasks to the NFS server.<br />
<br />
NFS CLIENT<br />
thread1                                  thread2<br />
open("file")<br />
...<br />
nfs4_do_open<br />
 _nfs4_do_open<br />
  _nfs4_open_and_get_state<br />
   _nfs4_proc_open<br />
    nfs4_run_open_task<br />
     /* rpc_task1 */<br />
     rpc_run_task<br />
     rpc_wait_for_completion_task<br />
<br />
                                         umount -f<br />
                                          nfs_umount_begin<br />
                                           rpc_killall_tasks<br />
                                            rpc_signal_task<br />
     rpc_task1 been wakeup<br />
     and return -512<br />
 _nfs4_do_open // while loop<br />
  ...<br />
   nfs4_run_open_task<br />
    /* rpc_task2 */<br />
    rpc_run_task<br />
    rpc_wait_for_completion_task<br />
<br />
While processing an open request, nfsd will first attempt to find or<br />
allocate an nfs4_openowner. If it finds an nfs4_openowner that is not<br />
marked NFS4_OO_CONFIRMED, that nfs4_openowner is released. Since the<br />
client can send two rpc_tasks opening the same file simultaneously, and<br />
since two instances of nfsd can run concurrently, this situation can<br />
lead to memory leaks. Additionally, when we echo 0 to<br />
/proc/fs/nfsd/threads, a warning is triggered.<br />
<br />
NFS SERVER<br />
nfsd1          nfsd2          echo 0 > /proc/fs/nfsd/threads<br />
<br />
nfsd4_open<br />
 nfsd4_process_open1<br />
  find_or_alloc_open_stateowner<br />
   // alloc oo1, stateid1<br />
               nfsd4_open<br />
                nfsd4_process_open1<br />
                 find_or_alloc_open_stateowner<br />
                  // find oo1, without NFS4_OO_CONFIRMED<br />
                  release_openowner<br />
                   unhash_openowner_locked<br />
                    list_del_init(&oo->oo_perclient)<br />
                    // cannot find this oo<br />
                    // from client, LEAK!!!<br />
                  alloc_stateowner // alloc oo2<br />
<br />
 nfsd4_process_open2<br />
  init_open_stateid<br />
   // associate oo1<br />
   // with stateid1, stateid1 LEAK!!!<br />
  nfs4_get_vfs_file<br />
   // alloc nfsd_file1 and nfsd_file_mark1<br />
   // all LEAK!!!<br />
<br />
               nfsd4_process_open2<br />
               ...<br />
<br />
                              write_threads<br />
                               ...<br />
                               nfsd_destroy_serv<br />
                                nfsd_shutdown_net<br />
                                 nfs4_state_shutdown_net<br />
                                  nfs4_state_destroy_net<br />
                                   destroy_client<br />
                                    __destroy_client<br />
                                    // won't find oo1!!!<br />
                                nfsd_shutdown_generic<br />
                                 nfsd_file_cache_shutdown<br />
                                  kmem_cache_destroy<br />
                                  for nfsd_file_slab<br />
                                  and nfsd_file_mark_slab<br />
                                  // bark since nfsd_file1<br />
                                  // and nfsd_file_mark1<br />
                                  // still alive<br />
<br />
=======================================================================<br />
BUG nfsd_file (Not tainted): Objects remaining in nfsd_file on<br />
__kmem_cache_shutdown()<br />
-----------------------------------------------------------------------<br />
<br />
Slab 0xffd4000004438a80 objects=34 used=1 fp=0xff11000110e2ad28<br />
flags=0x17ffffc0000240(workingset|head|node=0|zone=2|lastcpupid=0x1fffff)<br />
CPU: 4 UID: 0 PID: 757 Comm: sh Not tainted 6.12.0-rc6+ #19<br />
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS<br />
1.16.1-2.fc37 04/01/2014<br />
Call Trace:<br />
<br />
dum<br />
---truncated---
Severity CVSS v4.0: Pending analysis
Last modification:
03/11/2025