Vulnerabilities

To inform, warn and help professionals dealing with the latest security vulnerabilities in technology systems, we have made a database available to interested users. It is maintained in Spanish and includes all of the latest documented and recognised vulnerabilities.

This repository, with over 75,000 records, is based on information from the NVD (National Vulnerability Database), which INCIBE translates into Spanish under a partnership agreement.

On occasion this list will show vulnerabilities that have not yet been translated, as entries are added while the INCIBE team is still carrying out the translation. The CVE (Common Vulnerabilities and Exposures) standard for information security vulnerability names is used to support the exchange of information between different tools and databases.

All collected vulnerabilities are linked to their information sources, as well as to any patches or solutions provided by manufacturers and developers. Advanced searches are supported: results can be narrowed down by criteria such as vulnerability type, manufacturer and impact level, among others.

Through RSS feeds or newsletters you can be informed daily about the latest vulnerabilities added to the repository. Below is a list, updated daily, of the most recently added vulnerabilities.

CVE-2024-57985

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

firmware: qcom: scm: Cleanup global '__scm' on probe failures

If SCM driver fails the probe, it should not leave global '__scm' variable assigned, because external users of this driver will assume the probe finished successfully. For example TZMEM parts ('__scm->mempool') are initialized later in the probe, but users of it (__scm_smc_call()) rely on the '__scm' variable.

This fixes theoretical NULL pointer exception, triggered via introducing probe deferral in SCM driver with call trace:

qcom_tzmem_alloc+0x70/0x1ac (P)
qcom_tzmem_alloc+0x64/0x1ac (L)
qcom_scm_assign_mem+0x78/0x194
qcom_rmtfs_mem_probe+0x2d4/0x38c
platform_probe+0x68/0xc8
Severity CVSS v4.0: Pending analysis
Last modification:
27/02/2025

CVE-2024-57982

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

xfrm: state: fix out-of-bounds read during lookup

lookup and resize can run in parallel.

The xfrm_state_hash_generation seqlock ensures a retry, but the hash functions can observe a hmask value that is too large for the new hlist array.

rehash does:
rcu_assign_pointer(net->xfrm.state_bydst, ndst) [..]
net->xfrm.state_hmask = nhashmask;

While state lookup does:
h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h, bydst) {

This is only safe in case the update to state_bydst is larger than net->xfrm.xfrm_state_hmask (or if the lookup function gets serialized via state spinlock again).

Fix this by prefetching state_hmask and the associated pointers. The xfrm_state_hash_generation seqlock retry will ensure that the pointer and the hmask will be consistent.

The existing helpers, like xfrm_dst_hash(), are now unsafe for RCU side, add lockdep assertions to document that they are only safe for insert side.

xfrm_state_lookup_byaddr() uses the spinlock rather than RCU. AFAICS this is an oversight from back when state lookup was converted to RCU, this lock should be replaced with RCU in a future patch.
Severity CVSS v4.0: Pending analysis
Last modification:
07/03/2025

CVE-2024-57983

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

mailbox: th1520: Fix memory corruption due to incorrect array size

The functions th1520_mbox_suspend_noirq and th1520_mbox_resume_noirq are intended to save and restore the interrupt mask registers in the MBOX ICU0. However, the array used to store these registers was incorrectly sized, leading to memory corruption when accessing all four registers.

This commit corrects the array size to accommodate all four interrupt mask registers, preventing memory corruption during suspend and resume operations.
Severity CVSS v4.0: Pending analysis
Last modification:
07/03/2025

CVE-2024-57973

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

rdma/cxgb4: Prevent potential integer overflow on 32bit

The "gl->tot_len" variable is controlled by the user. It comes from process_responses(). On 32bit systems, the "gl->tot_len + sizeof(struct cpl_pass_accept_req) + sizeof(struct rss_header)" addition could have an integer wrapping bug. Use size_add() to prevent this.
Severity CVSS v4.0: Pending analysis
Last modification:
13/03/2025

CVE-2024-57977

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

memcg: fix soft lockup in the OOM process

A soft lockup was found in a production system with about 56,000 tasks in the OOM cgroup; the lockup was triggered while traversing them.

watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [VM Thread:1503066]
CPU: 2 PID: 1503066 Comm: VM Thread Kdump: loaded Tainted: G
Hardware name: Huawei Cloud OpenStack Nova, BIOS
RIP: 0010:console_unlock+0x343/0x540
RSP: 0000:ffffb751447db9a0 EFLAGS: 00000247 ORIG_RAX: ffffffffffffff13
RAX: 0000000000000001 RBX: 0000000000000000 RCX: 00000000ffffffff
RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000247
RBP: ffffffffafc71f90 R08: 0000000000000000 R09: 0000000000000040
R10: 0000000000000080 R11: 0000000000000000 R12: ffffffffafc74bd0
R13: ffffffffaf60a220 R14: 0000000000000247 R15: 0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f2fe6ad91f0 CR3: 00000004b2076003 CR4: 0000000000360ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
vprintk_emit+0x193/0x280
printk+0x52/0x6e
dump_task+0x114/0x130
mem_cgroup_scan_tasks+0x76/0x100
dump_header+0x1fe/0x210
oom_kill_process+0xd1/0x100
out_of_memory+0x125/0x570
mem_cgroup_out_of_memory+0xb5/0xd0
try_charge+0x720/0x770
mem_cgroup_try_charge+0x86/0x180
mem_cgroup_try_charge_delay+0x1c/0x40
do_anonymous_page+0xb5/0x390
handle_mm_fault+0xc4/0x1f0

Because thousands of processes are in the OOM cgroup, it takes a long time to traverse all of them; as a result, this led to a soft lockup in the OOM process.

To fix this, call 'cond_resched' in the 'mem_cgroup_scan_tasks' function every 1000 iterations. For global OOM, call 'touch_softlockup_watchdog' every 1000 iterations to avoid this issue.
Severity CVSS v4.0: Pending analysis
Last modification:
13/03/2025

CVE-2024-57978

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

media: imx-jpeg: Fix potential error pointer dereference in detach_pm()

The problem is on the first line:

if (jpeg->pd_dev[i] && !pm_runtime_suspended(jpeg->pd_dev[i]))

If jpeg->pd_dev[i] is an error pointer, then passing it to pm_runtime_suspended() will lead to an Oops. The other conditions check for both error pointers and NULL, but it would be more clear to use the IS_ERR_OR_NULL() check for that.
Severity CVSS v4.0: Pending analysis
Last modification:
13/03/2025

CVE-2024-57974

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

udp: Deal with race between UDP socket address change and rehash

If a UDP socket changes its local address while it's receiving datagrams, as a result of connect(), there is a period during which a lookup operation might fail to find it, after the address is changed but before the secondary hash (port and address) and the four-tuple hash (local and remote ports and addresses) are updated.

Secondary hash chains were introduced by commit 30fff9231fad ("udp: bind() optimisation") and, as a result, a rehash operation became needed to make a bound socket reachable again after a connect().

This operation was introduced by commit 719f835853a9 ("udp: add rehash on connect()") which isn't however a complete fix: the socket will be found once the rehashing completes, but not while it's pending.

This is noticeable with a socat(1) server in UDP4-LISTEN mode, and a client sending datagrams to it. After the server receives the first datagram (cf. _xioopen_ipdgram_listen()), it issues a connect() to the address of the sender, in order to set up a directed flow.

Now, if the client, running on a different CPU thread, happens to send a (subsequent) datagram while the server's socket changes its address, but is not rehashed yet, this will result in a failed lookup and a port unreachable error delivered to the client, as apparent from the following reproducer:

LEN=$(($(cat /proc/sys/net/core/wmem_default) / 4))
dd if=/dev/urandom bs=1 count=${LEN} of=tmp.in

while :; do
taskset -c 1 socat UDP4-LISTEN:1337,null-eof OPEN:tmp.out,create,trunc &
sleep 0.1 || sleep 1
taskset -c 2 socat OPEN:tmp.in UDP4:localhost:1337,shut-null
wait
done

where the client will eventually get ECONNREFUSED on a write() (typically the second or third one of a given iteration):

2024/11/13 21:28:23 socat[46901] E write(6, 0x556db2e3c000, 8192): Connection refused

This issue was first observed as a seldom failure in Podman's tests checking UDP functionality while using pasta(1) to connect the container's network namespace, which leads us to a reproducer with the lookup error resulting in an ICMP packet on a tap device:

LOCAL_ADDR="$(ip -j -4 addr show|jq -rM '.[] | .addr_info[0] | select(.scope == "global").local')"

while :; do
./pasta --config-net -p pasta.pcap -u 1337 socat UDP4-LISTEN:1337,null-eof OPEN:tmp.out,create,trunc &
sleep 0.2 || sleep 1
socat OPEN:tmp.in UDP4:${LOCAL_ADDR}:1337,shut-null
wait
cmp tmp.in tmp.out
done

Once this fails:

tmp.in tmp.out differ: char 8193, line 29

we can finally have a look at what's going on:

$ tshark -r pasta.pcap
1 0.000000 :: → ff02::16 ICMPv6 110 Multicast Listener Report Message v2
2 0.168690 88.198.0.161 → 88.198.0.164 UDP 8234 60260 → 1337 Len=8192
3 0.168767 88.198.0.161 → 88.198.0.164 UDP 8234 60260 → 1337 Len=8192
4 0.168806 88.198.0.161 → 88.198.0.164 UDP 8234 60260 → 1337 Len=8192
5 0.168827 c6:47:05:8d:dc:04 → Broadcast ARP 42 Who has 88.198.0.161? Tell 88.198.0.164
6 0.168851 9a:55:9a:55:9a:55 → c6:47:05:8d:dc:04 ARP 42 88.198.0.161 is at 9a:55:9a:55:9a:55
7 0.168875 88.198.0.161 → 88.198.0.164 UDP 8234 60260 → 1337 Len=8192
8 0.168896 88.198.0.164 → 88.198.0.161 ICMP 590 Destination unreachable (Port unreachable)
9 0.168926 88.198.0.161 → 88.198.0.164 UDP 8234 60260 → 1337 Len=8192
10 0.168959 88.198.0.161 → 88.198.0.164 UDP 8234 60260 → 1337 Len=8192
11 0.168989 88.198.0.161 → 88.198.0.164 UDP 4138 60260 → 1337 Len=4096
12 0.169010 88.198.0.161 → 88.198.0.164 UDP 42 60260 → 1337 Len=0

On the third datagram received, the network namespace of the container initiates an ARP lookup to deliver the ICMP message.

In another variant of this reproducer, starting the client with:

strace -f pasta --config-net -u 1337 socat UDP4-LISTEN:1337,null-eof OPEN:tmp.out,create,tru
---truncated---
Severity CVSS v4.0: Pending analysis
Last modification:
27/02/2025

CVE-2024-57975

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

btrfs: do proper folio cleanup when run_delalloc_nocow() failed

[BUG]
With CONFIG_DEBUG_VM set, test case generic/476 has some chance to crash with the following VM_BUG_ON_FOLIO():

BTRFS error (device dm-3): cow_file_range failed, start 1146880 end 1253375 len 106496 ret -28
BTRFS error (device dm-3): run_delalloc_nocow failed, start 1146880 end 1253375 len 106496 ret -28
page: refcount:4 mapcount:0 mapping:00000000592787cc index:0x12 pfn:0x10664
aops:btrfs_aops [btrfs] ino:101 dentry name(?):"f1774"
flags: 0x2fffff80004028(uptodate|lru|private|node=0|zone=2|lastcpupid=0xfffff)
page dumped because: VM_BUG_ON_FOLIO(!folio_test_locked(folio))
------------[ cut here ]------------
kernel BUG at mm/page-writeback.c:2992!
Internal error: Oops - BUG: 00000000f2000800 [#1] SMP
CPU: 2 UID: 0 PID: 3943513 Comm: kworker/u24:15 Tainted: G OE 6.12.0-rc7-custom+ #87
Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
Hardware name: QEMU KVM Virtual Machine, BIOS unknown 2/2/2022
Workqueue: events_unbound btrfs_async_reclaim_data_space [btrfs]
pc : folio_clear_dirty_for_io+0x128/0x258
lr : folio_clear_dirty_for_io+0x128/0x258
Call trace:
folio_clear_dirty_for_io+0x128/0x258
btrfs_folio_clamp_clear_dirty+0x80/0xd0 [btrfs]
__process_folios_contig+0x154/0x268 [btrfs]
extent_clear_unlock_delalloc+0x5c/0x80 [btrfs]
run_delalloc_nocow+0x5f8/0x760 [btrfs]
btrfs_run_delalloc_range+0xa8/0x220 [btrfs]
writepage_delalloc+0x230/0x4c8 [btrfs]
extent_writepage+0xb8/0x358 [btrfs]
extent_write_cache_pages+0x21c/0x4e8 [btrfs]
btrfs_writepages+0x94/0x150 [btrfs]
do_writepages+0x74/0x190
filemap_fdatawrite_wbc+0x88/0xc8
start_delalloc_inodes+0x178/0x3a8 [btrfs]
btrfs_start_delalloc_roots+0x174/0x280 [btrfs]
shrink_delalloc+0x114/0x280 [btrfs]
flush_space+0x250/0x2f8 [btrfs]
btrfs_async_reclaim_data_space+0x180/0x228 [btrfs]
process_one_work+0x164/0x408
worker_thread+0x25c/0x388
kthread+0x100/0x118
ret_from_fork+0x10/0x20
Code: 910a8021 a90363f7 a9046bf9 94012379 (d4210000)
---[ end trace 0000000000000000 ]---

[CAUSE]
The first two lines of extra debug messages show the problem is caused by the error handling of run_delalloc_nocow().

E.g. we have the following dirtied range (4K blocksize 4K page size):

0                  16K                32K
|//////////////////////////////////////|
| Pre-allocated    |

And the range [0, 16K) has a preallocated extent.

- Enter run_delalloc_nocow() for range [0, 16K)
  Which found range [0, 16K) is preallocated, can do the proper NOCOW write.

- Enter fallback_to_cow() for range [16K, 32K)
  Since the range [16K, 32K) is not backed by a preallocated extent, we have to go COW.

- cow_file_range() failed for range [16K, 32K)
  So cow_file_range() will do the clean up by clearing folio dirty, unlock the folios.

  Now the folios in range [16K, 32K) are unlocked.

- Enter extent_clear_unlock_delalloc() from run_delalloc_nocow()
  Which is called with PAGE_START_WRITEBACK to start page writeback. But folios can only be marked writeback when they are properly locked, thus this triggered the VM_BUG_ON_FOLIO().

Furthermore there is another hidden but common bug that run_delalloc_nocow() is not clearing the folio dirty flags in its error handling path. This is the common bug shared between run_delalloc_nocow() and cow_file_range().

[FIX]
- Clear folio dirty for range [@start, @cur_offset)
  Introduce a helper, cleanup_dirty_folios(), which will find and lock the folio in the range, clear the dirty flag and start/end the writeback, with the extra handling for the @locked_folio.

- Introduce a helper to clear folio dirty, start and end writeback

- Introduce a helper to record the last failed COW range end
  This is to trace which range we should skip, to avoid double unlocking.

- Skip the failed COW range for the e
---truncated---
Severity CVSS v4.0: Pending analysis
Last modification:
27/02/2025

CVE-2024-57953

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

rtc: tps6594: Fix integer overflow on 32bit systems

The problem is this multiply in tps6594_rtc_set_offset()

tmp = offset * TICKS_PER_HOUR;

The "tmp" variable is an s64 but "offset" is a long in the (-277774)-277774 range. On 32bit systems a long can hold numbers up to approximately two billion. The number of TICKS_PER_HOUR is really large, (32768 * 3600) or roughly a hundred million. When you start multiplying by a hundred million it doesn't take long to overflow the two billion mark.

Probably the safest way to fix this is to change the type of TICKS_PER_HOUR to long long because it's such a large number.
Severity CVSS v4.0: Pending analysis
Last modification:
07/03/2025

CVE-2024-57976

Publication date:
27/02/2025
In the Linux kernel, the following vulnerability has been resolved:

btrfs: do proper folio cleanup when cow_file_range() failed

[BUG]
When testing with COW fixup marked as BUG_ON() (this is involved with the new pin_user_pages*() change, which should not result in new out-of-band dirty pages), I hit a crash triggered by the BUG_ON() from hitting the COW fixup path.

This BUG_ON() happens just after a failed btrfs_run_delalloc_range():

BTRFS error (device dm-2): failed to run delalloc range, root 348 ino 405 folio 65536 submit_bitmap 6-15 start 90112 len 106496: -28
------------[ cut here ]------------
kernel BUG at fs/btrfs/extent_io.c:1444!
Internal error: Oops - BUG: 00000000f2000800 [#1] SMP
CPU: 0 UID: 0 PID: 434621 Comm: kworker/u24:8 Tainted: G OE 6.12.0-rc7-custom+ #86
Hardware name: QEMU KVM Virtual Machine, BIOS unknown 2/2/2022
Workqueue: events_unbound btrfs_async_reclaim_data_space [btrfs]
pc : extent_writepage_io+0x2d4/0x308 [btrfs]
lr : extent_writepage_io+0x2d4/0x308 [btrfs]
Call trace:
extent_writepage_io+0x2d4/0x308 [btrfs]
extent_writepage+0x218/0x330 [btrfs]
extent_write_cache_pages+0x1d4/0x4b0 [btrfs]
btrfs_writepages+0x94/0x150 [btrfs]
do_writepages+0x74/0x190
filemap_fdatawrite_wbc+0x88/0xc8
start_delalloc_inodes+0x180/0x3b0 [btrfs]
btrfs_start_delalloc_roots+0x174/0x280 [btrfs]
shrink_delalloc+0x114/0x280 [btrfs]
flush_space+0x250/0x2f8 [btrfs]
btrfs_async_reclaim_data_space+0x180/0x228 [btrfs]
process_one_work+0x164/0x408
worker_thread+0x25c/0x388
kthread+0x100/0x118
ret_from_fork+0x10/0x20
Code: aa1403e1 9402f3ef aa1403e0 9402f36f (d4210000)
---[ end trace 0000000000000000 ]---

[CAUSE]
That failure is mostly from cow_file_range(), where we can hit -ENOSPC.

Although the -ENOSPC is already a bug related to our space reservation code, let's just focus on the error handling.

For example, we have the following dirty range [0, 64K) of an inode, with 4K sector size and 4K page size:

0        16K       32K       48K       64K
|///////////////////////////////////////|
|#######################################|

Where |///| means pages are still dirty, and |###| means the extent io tree has the EXTENT_DELALLOC flag.

- Enter extent_writepage() for page 0

- Enter btrfs_run_delalloc_range() for range [0, 64K)

- Enter cow_file_range() for range [0, 64K)

- Function btrfs_reserve_extent() only reserved one 16K extent
  So we created extent map and ordered extent for range [0, 16K)

0        16K       32K       48K       64K
|////////|//////////////////////////////|
|        |##############################|

  And range [0, 16K) has its delalloc flag cleared. But since we haven't yet submitted any bio, the 4 pages involved are still dirty.

- Function btrfs_reserve_extent() returns with -ENOSPC
  Now we have to run error cleanup, which will clear all EXTENT_DELALLOC* flags and clear the dirty flags for the remaining ranges:

0        16K       32K       48K       64K
|////////|                              |
|        |                              |

  Note that range [0, 16K) still has its pages dirty.

- Some time later, writeback is triggered again for the range [0, 16K) since the page range still has dirty flags.

- btrfs_run_delalloc_range() will do nothing because there is no EXTENT_DELALLOC flag.

- extent_writepage_io() finds page 0 has no ordered flag
  Which falls into the COW fixup path, triggering the BUG_ON().

Unfortunately this error handling bug dates back to the introduction of btrfs. Thankfully with the abuse of COW fixup, at least it won't crash the kernel.

[FIX]
Instead of immediately unlocking the extent and folios, we keep the extent and folios locked until either erroring out or the whole delalloc range finished.

When the whole delalloc range finished without error, we just unlock the whole range with PAGE_SET_ORDERED (and PAGE_UNLOCK for !keep_locked cases)
---truncated---
Severity CVSS v4.0: Pending analysis
Last modification:
06/07/2025

CVE-2025-1460

Publication date:
26/02/2025
Rejected reason: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.
Severity CVSS v4.0: Pending analysis
Last modification:
26/02/2025

CVE-2025-1728

Publication date:
26/02/2025
Rejected reason: ** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. Reason: This candidate was issued in error. Notes: All references and descriptions in this candidate have been removed to prevent accidental usage.
Severity CVSS v4.0: Pending analysis
Last modification:
26/02/2025