National Cybersecurity Institute (INCIBE). INCIBE-CERT section

Vulnerabilities

With the aim of informing, warning, and assisting professionals about the latest security vulnerabilities in technology systems, we make available to interested users a database with Spanish-language information on each of the latest documented and known vulnerabilities.

This repository, with more than 75,000 records, is based on information from the NVD (National Vulnerability Database), under a collaboration agreement through which INCIBE translates the included information into Spanish. At times this listing will show vulnerabilities that have not yet been translated, since new entries arrive while the INCIBE team carries out the translation process.

The CVE (Common Vulnerabilities and Exposures) naming standard is used in order to ease the exchange of information between different databases and tools. Each listed vulnerability links to various information sources as well as to available patches or solutions provided by vendors and developers. Advanced searches can be performed, with the option of selecting different criteria such as vulnerability type, vendor, and impact type, among others, in order to narrow the results.

Through RSS subscription or newsletters, you can stay informed daily about the latest vulnerabilities added to the repository.

CVE-2025-39753

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

gfs2: Set .migrate_folio in gfs2_{rgrp,meta}_aops

Clears up the warning added in 7ee3647243e5 ("migrate: Remove call to ->writepage") that occurs in various xfstests, causing "something found in dmesg" failures.

[ 341.136573] gfs2_meta_aops does not implement migrate_folio
[ 341.136953] WARNING: CPU: 1 PID: 36 at mm/migrate.c:944 move_to_new_folio+0x2f8/0x300
Severity: Pending analysis
Last modified:
15/09/2025

CVE-2025-39754

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

mm/smaps: fix race between smaps_hugetlb_range and migration

smaps_hugetlb_range() handles the pte without holding ptl, and may be concurrent with migration, leading to BUG_ON in pfn_swap_entry_to_page(). The race is as follows:

smaps_hugetlb_range            migrate_pages
  huge_ptep_get
                               remove_migration_ptes
                               folio_unlock
  pfn_swap_entry_folio
    BUG_ON

To fix it, hold the ptl lock in smaps_hugetlb_range().
Severity: Pending analysis
Last modified:
15/09/2025

CVE-2025-39758

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

RDMA/siw: Fix the sendmsg byte count in siw_tcp_sendpages

Ever since commit c2ff29e99a76 ("siw: Inline do_tcp_sendpages()"), we have been doing this:

static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
                             size_t size)
[...]
        /* Calculate the number of bytes we need to push, for this page
         * specifically */
        size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);
        /* If we can't splice it, then copy it in, as normal */
        if (!sendpage_ok(page[i]))
                msg.msg_flags &= ~MSG_SPLICE_PAGES;
        /* Set the bvec pointing to the page, with len $bytes */
        bvec_set_page(&bvec, page[i], bytes, offset);
        /* Set the iter to $size, aka the size of the whole sendpages (!!!) */
        iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
try_page_again:
        lock_sock(sk);
        /* Sendmsg with $size size (!!!) */
        rv = tcp_sendmsg_locked(sk, &msg, size);

This means we've been sending oversized iov_iters and tcp_sendmsg calls for a while. This has been a benign bug because sendpage_ok() always returned true. With the recent slab allocator changes being slowly introduced into next (that disallow sendpage on large kmalloc allocations), we have recently hit out-of-bounds crashes, due to slight differences in iov_iter behavior between the MSG_SPLICE_PAGES and "regular" copy paths:

(MSG_SPLICE_PAGES)
skb_splice_from_iter
  iov_iter_extract_pages
    iov_iter_extract_bvec_pages
      uses i->nr_segs to correctly stop in its tracks before OoB'ing everywhere
  skb_splice_from_iter gets a "short" read

(!MSG_SPLICE_PAGES)
skb_copy_to_page_nocache copy=iov_iter_count
[...]
  copy_from_iter
    /* this doesn't help */
    if (unlikely(iter->count < count))
        count = iter->count;
    iterate_bvec
      ... and we run off the bvecs

Fix this by properly setting the iov_iter's byte count, plus sending the correct byte count to tcp_sendmsg_locked.
Severity: Pending analysis
Last modified:
15/09/2025

CVE-2025-39756

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

fs: Prevent file descriptor table allocations exceeding INT_MAX

When sysctl_nr_open is set to a very high value (for example, 1073741816 as set by systemd), processes attempting to use file descriptors near the limit can trigger massive memory allocation attempts that exceed INT_MAX, resulting in a WARNING in mm/slub.c:

WARNING: CPU: 0 PID: 44 at mm/slub.c:5027 __kvmalloc_node_noprof+0x21a/0x288

This happens because kvmalloc_array() and kvmalloc() check if the requested size exceeds INT_MAX and emit a warning when the allocation is not flagged with __GFP_NOWARN.

Specifically, when nr_open is set to 1073741816 (0x3ffffff8) and a process calls dup2(oldfd, 1073741880), the kernel attempts to allocate:
- File descriptor array: 1073741880 * 8 bytes = 8,589,935,040 bytes
- Multiple bitmaps: ~400MB
- Total allocation size: > 8GB (exceeding INT_MAX = 2,147,483,647)

Reproducer:
1. Set /proc/sys/fs/nr_open to 1073741816:
   # echo 1073741816 > /proc/sys/fs/nr_open

2. Run a program that uses a high file descriptor:
   #include <sys/resource.h>
   #include <unistd.h>

   int main() {
       struct rlimit rlim = {1073741824, 1073741824};
       setrlimit(RLIMIT_NOFILE, &rlim);
       dup2(2, 1073741880); // Triggers the warning
       return 0;
   }

3. Observe WARNING in dmesg at mm/slub.c:5027

systemd commit a8b627a introduced automatic bumping of fs.nr_open to the maximum possible value. The rationale was that systems with memory control groups (memcg) no longer need separate file descriptor limits since memory is properly accounted. However, this change overlooked that:

1. The kernel's allocation functions still enforce INT_MAX as a maximum size regardless of memcg accounting
2. Programs and tests that legitimately test file descriptor limits can inadvertently trigger massive allocations
3. The resulting allocations (>8GB) are impractical and will always fail

systemd's algorithm starts with INT_MAX and keeps halving the value until the kernel accepts it. On most systems, this results in nr_open being set to 1073741816 (0x3ffffff8), which is just under 1GB of file descriptors.

While processes rarely use file descriptors near this limit in normal operation, certain selftests (like tools/testing/selftests/core/unshare_test.c) and programs that test file descriptor limits can trigger this issue.

Fix this by adding a check in alloc_fdtable() to ensure the requested allocation size does not exceed INT_MAX. This causes the operation to fail with -EMFILE instead of triggering a kernel warning and avoids the impractical >8GB memory allocation request.
Severity: Pending analysis
Last modified:
03/11/2025

CVE-2025-39757

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

ALSA: usb-audio: Validate UAC3 cluster segment descriptors

UAC3 class segment descriptors need to be verified: their sizes must match the declared lengths and fit within the allocated buffer sizes, too. Otherwise malicious firmware may lead to unexpected OOB accesses.
Severity: Pending analysis
Last modified:
03/11/2025

CVE-2025-39759

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

btrfs: qgroup: fix race between quota disable and quota rescan ioctl

There's a race between a task disabling quotas and another running the rescan ioctl that can result in a use-after-free of qgroup records from the fs_info->qgroup_tree rbtree.

This happens as follows:

1) Task A enters btrfs_ioctl_quota_rescan() -> btrfs_qgroup_rescan();

2) Task B enters btrfs_quota_disable() and calls btrfs_qgroup_wait_for_completion(), which does nothing because at that point fs_info->qgroup_rescan_running is false (it wasn't set yet by task A);

3) Task B calls btrfs_free_qgroup_config(), which starts freeing qgroups from fs_info->qgroup_tree without taking the lock fs_info->qgroup_lock;

4) Task A enters qgroup_rescan_zero_tracking(), which starts iterating the fs_info->qgroup_tree tree while holding fs_info->qgroup_lock, but task B is freeing qgroup records from that tree without holding the lock, resulting in a use-after-free.

Fix this by taking fs_info->qgroup_lock at btrfs_free_qgroup_config(). Also, at btrfs_qgroup_rescan() don't start the rescan worker if quotas were already disabled.
Severity: Pending analysis
Last modified:
03/11/2025

CVE-2025-39760

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

usb: core: config: Prevent OOB read in SS endpoint companion parsing

usb_parse_ss_endpoint_companion() checks the descriptor type before the length, enabling a potential read outside of the buffer.

Fix this up by checking the size first, before looking at any of the fields in the descriptor.
Severity: Pending analysis
Last modified:
03/11/2025

CVE-2025-39747

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

drm/msm: Add error handling for krealloc in metadata setup

Function msm_ioctl_gem_info_set_metadata() now checks for krealloc failure and returns -ENOMEM, avoiding a potential NULL pointer dereference. It explicitly avoids __GFP_NOFAIL due to deadlock risks and allocation constraints.

Patchwork: https://patchwork.freedesktop.org/patch/661235/
Severity: Pending analysis
Last modified:
15/09/2025

CVE-2025-39748

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

bpf: Forget ranges when refining tnum after JSET

Syzbot reported a kernel warning due to a range invariant violation on the following BPF program.

0: call bpf_get_netns_cookie
1: if r0 == 0 goto
2: if r0 & 0xffffffff goto

The issue is on the path where we fall through both jumps.

That path is unreachable at runtime: after insn 1, we know r0 != 0, but with the sign extension on the jset, we would only fall through insn 2 if r0 == 0. Unfortunately, is_branch_taken() isn't currently able to figure this out, so the verifier walks all branches. The verifier then refines the register bounds using the second condition and we end up with inconsistent bounds on this unreachable path:

1: if r0 == 0 goto
r0: u64=[0x1, 0xffffffffffffffff] var_off=(0, 0xffffffffffffffff)
2: if r0 & 0xffffffff goto
r0 before reg_bounds_sync: u64=[0x1, 0xffffffffffffffff] var_off=(0, 0)
r0 after reg_bounds_sync: u64=[0x1, 0] var_off=(0, 0)

Improving the range refinement for JSET to cover all cases is tricky. We also don't expect many users to rely on JSET given LLVM doesn't generate those instructions. So instead of improving the range refinement for JSETs, Eduard suggested we forget the ranges whenever we're narrowing tnums after a JSET. This patch implements that approach.
Severity: Pending analysis
Last modified:
15/09/2025

CVE-2025-39750

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

wifi: ath12k: Correct tid cleanup when tid setup fails

Currently, if any error occurs during ath12k_dp_rx_peer_tid_setup(), the tid value is already incremented, even though the corresponding TID is not actually allocated. Proceeding to ath12k_dp_rx_peer_tid_delete() starting from that unallocated tid might lead to freeing an unallocated TID and cause a potential crash or out-of-bounds access.

Hence, fix this by correctly decrementing tid before cleanup, to match only the successfully allocated TIDs.

Also, remove tid-- from the failure case of ath12k_dp_rx_peer_frag_setup(), as decrementing the tid before cleanup in the loop will take care of this.

Compile tested only.
Severity: Pending analysis
Last modified:
15/09/2025

CVE-2025-39751

Publication date:
11/09/2025
Language:
English
*** Pending translation *** Rejected reason: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.
Severity: Pending analysis
Last modified:
06/10/2025

CVE-2025-39749

Publication date:
11/09/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

rcu: Protect ->defer_qs_iw_pending from data race

On kernels built with CONFIG_IRQ_WORK=y, when rcu_read_unlock() is invoked within an interrupts-disabled region of code [1], it will invoke rcu_read_unlock_special(), which uses an irq-work handler to force the system to notice when the RCU read-side critical section actually ends. That end won't happen until interrupts are enabled at the soonest.

In some kernels, such as those booted with rcutree.use_softirq=y, the irq-work handler is used unconditionally.

The per-CPU rcu_data structure's ->defer_qs_iw_pending field is updated by the irq-work handler and is both read and updated by rcu_read_unlock_special(). This resulted in the following KCSAN splat:

------------------------------------------------------------------------

BUG: KCSAN: data-race in rcu_preempt_deferred_qs_handler / rcu_read_unlock_special

read to 0xffff96b95f42d8d8 of 1 bytes by task 90 on cpu 8:
rcu_read_unlock_special+0x175/0x260
__rcu_read_unlock+0x92/0xa0
rt_spin_unlock+0x9b/0xc0
__local_bh_enable+0x10d/0x170
__local_bh_enable_ip+0xfb/0x150
rcu_do_batch+0x595/0xc40
rcu_cpu_kthread+0x4e9/0x830
smpboot_thread_fn+0x24d/0x3b0
kthread+0x3bd/0x410
ret_from_fork+0x35/0x40
ret_from_fork_asm+0x1a/0x30

write to 0xffff96b95f42d8d8 of 1 bytes by task 88 on cpu 8:
rcu_preempt_deferred_qs_handler+0x1e/0x30
irq_work_single+0xaf/0x160
run_irq_workd+0x91/0xc0
smpboot_thread_fn+0x24d/0x3b0
kthread+0x3bd/0x410
ret_from_fork+0x35/0x40
ret_from_fork_asm+0x1a/0x30

no locks held by irq_work/8/88.
irq event stamp: 200272
hardirqs last enabled at (200272): [] finish_task_switch+0x131/0x320
hardirqs last disabled at (200271): [] __schedule+0x129/0xd70
softirqs last enabled at (0): [] copy_process+0x4df/0x1cc0
softirqs last disabled at (0): [] 0x0

------------------------------------------------------------------------

The problem is that irq-work handlers run with interrupts enabled, which means that rcu_preempt_deferred_qs_handler() could be interrupted, and that interrupt handler might contain an RCU read-side critical section, which might invoke rcu_read_unlock_special(). In the strict KCSAN mode of operation used by RCU, this constitutes a data race on the ->defer_qs_iw_pending field.

This commit therefore disables interrupts across the portion of rcu_preempt_deferred_qs_handler() that updates the ->defer_qs_iw_pending field. This suffices because this handler is not a fast path.
Severity: Pending analysis
Last modified:
03/11/2025