Instituto Nacional de Ciberseguridad. INCIBE-CERT section

Vulnerabilities

With the aim of informing, warning and helping professionals about the latest security vulnerabilities in technology systems, we make available to interested users a database with Spanish-language information on each of the latest documented and known vulnerabilities.

This repository, with more than 75,000 records, is based on information from the NVD (National Vulnerability Database) under a collaboration agreement through which INCIBE translates the included information into Spanish. At times this list will show vulnerabilities that have not yet been translated, since they are collected during the period in which the INCIBE team carries out the translation process.

The CVE (Common Vulnerabilities and Exposures) vulnerability naming standard is used in order to ease the exchange of information between different databases and tools. Each listed vulnerability links to various information sources, as well as to available patches or fixes provided by vendors and developers. Advanced searches can be performed by selecting criteria such as vulnerability type, vendor and impact type, among others, to narrow the results.

Through an RSS subscription or newsletters you can stay informed daily about the latest vulnerabilities added to the repository.

CVE-2025-71078

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

powerpc/64s/slb: Fix SLB multihit issue during SLB preload

On systems using the hash MMU, there is a software SLB preload cache that
mirrors the entries loaded into the hardware SLB buffer. This preload
cache is subject to periodic eviction — typically after every 256 context
switches — to remove old entries.

To optimize performance, the kernel skips switch_mmu_context() in
switch_mm_irqs_off() when the prev and next mm_struct are the same.
However, on hash MMU systems, this can lead to inconsistencies between
the hardware SLB and the software preload cache.

If an SLB entry for a process is evicted from the software cache on one
CPU, and the same process later runs on another CPU without executing
switch_mmu_context(), the hardware SLB may retain stale entries. If the
kernel then attempts to reload that entry, it can trigger an SLB
multi-hit error.

The following timeline shows how stale SLB entries are created and can
cause a multi-hit error when a process moves between CPUs without a
MMU context switch.

CPU 0                                   CPU 1
-----                                   -----
Process P
exec swapper/1
load_elf_binary
 begin_new_exc
  activate_mm
   switch_mm_irqs_off
    switch_mmu_context
     switch_slb
      /*
       * This invalidates all
       * the entries in the HW
       * and setup the new HW
       * SLB entries as per the
       * preload cache.
       */
context_switch
sched_migrate_task                      migrates process P to cpu-1

Process swapper/0                       context switch (to process P)
(uses mm_struct of Process P)           switch_mm_irqs_off()
                                        switch_slb
                                         load_slb++
                                         /*
                                          * load_slb becomes 0 here
                                          * and we evict an entry from
                                          * the preload cache with
                                          * preload_age(). We still
                                          * keep HW SLB and preload
                                          * cache in sync, that is
                                          * because all HW SLB entries
                                          * anyways gets evicted in
                                          * switch_slb during SLBIA.
                                          * We then only add those
                                          * entries back in HW SLB,
                                          * which are currently
                                          * present in preload_cache
                                          * (after eviction).
                                          */
                                        load_elf_binary continues...
                                        setup_new_exec()
                                         slb_setup_new_exec()

                                        sched_switch event
                                        sched_migrate_task migrates
                                        process P to cpu-0

context_switch from swapper/0 to Process P
 switch_mm_irqs_off()
 /*
  * Since both prev and next mm struct are same we don't call
  * switch_mmu_context(). This will cause the HW SLB and SW preload
  * cache to go out of sync in preload_new_slb_context. Because there
  * was an SLB entry which was evicted from both HW and preload cache
  * on cpu-1. Now later in preload_new_slb_context(), when we will try
  * to add the same preload entry again, we will add this to the SW
  * preload cache and then will add it to the HW SLB. Since on cpu-0
  * this entry was never invalidated, hence adding this entry to the HW
  * SLB will cause a SLB multi-hit error.
  */
load_elf_binary cont
---truncated---
Severity: pending analysis
Last modified:
19/01/2026

CVE-2025-71079

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

net: nfc: fix deadlock between nfc_unregister_device and rfkill_fop_write

A deadlock can occur between nfc_unregister_device() and rfkill_fop_write()
due to lock ordering inversion between device_lock and rfkill_global_mutex.

The problematic lock order is:

Thread A (rfkill_fop_write):
  rfkill_fop_write()
    mutex_lock(&rfkill_global_mutex)
    rfkill_set_block()
      nfc_rfkill_set_block()
        nfc_dev_down()
          device_lock(&dev->dev)

Thread B (nfc_unregister_device):
  device_lock(&dev->dev)
  rfkill_unregister()
    mutex_lock(&rfkill_global_mutex)

---truncated--- rfkill_global_mutex via rfkill_register) is safe because during
registration the device is not yet in rfkill_list, so no concurrent
rfkill operations can occur on this device.
Severity: pending analysis
Last modified:
19/01/2026

CVE-2025-71081

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

ASoC: stm32: sai: fix OF node leak on probe

The reference taken to the sync provider OF node when probing the
platform device is currently only dropped if the set_sync() callback
fails during DAI probe.

Make sure to drop the reference on platform probe failures (e.g. probe
deferral) and on driver unbind.

This also avoids a potential use-after-free in case the DAI is ever
reprobed without first rebinding the platform driver.
Severity: pending analysis
Last modified:
19/01/2026

CVE-2025-71082

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

Bluetooth: btusb: revert use of devm_kzalloc in btusb

This reverts commit 98921dbd00c4e ("Bluetooth: Use devm_kzalloc in
btusb.c file").

In btusb_probe(), we use devm_kzalloc() to allocate the btusb data. This
ties the lifetime of all the btusb data to the binding of a driver to
one interface, INTF. In a driver that binds to other interfaces, ISOC
and DIAG, this is an accident waiting to happen.

The issue is revealed in btusb_disconnect(), where calling
usb_driver_release_interface(&btusb_driver, data->intf) will have devm
free the data that is also being used by the other interfaces of the
driver that may not be released yet.

To fix this, revert the use of devm and go back to freeing memory
explicitly.
Severity: pending analysis
Last modified:
19/01/2026

CVE-2025-71083

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

drm/ttm: Avoid NULL pointer deref for evicted BOs

It is possible for a BO to exist that is not currently associated with a
resource, e.g. because it has been evicted.

When devcoredump tries to read the contents of all BOs for dumping, we need
to expect this as well -- in this case, ENODATA is recorded instead of the
buffer contents.
Severity: pending analysis
Last modified:
19/01/2026

CVE-2025-71067

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

ntfs: set dummy blocksize to read boot_block when mounting

When mounting, sb->s_blocksize is used to read the boot_block without
being defined or validated. Set a dummy blocksize before attempting to
read the boot_block.

The issue can be triggered with the following syz reproducer:

mkdirat(0xffffffffffffff9c, &(0x7f0000000080)='./file1\x00', 0x0)
r4 = openat$nullb(0xffffffffffffff9c, &(0x7f0000000040), 0x121403, 0x0)
ioctl$FS_IOC_SETFLAGS(r4, 0x40081271, &(0x7f0000000980)=0x4000)
mount(&(0x7f0000000140)=@nullb, &(0x7f0000000040)='./cgroup\x00',
      &(0x7f0000000000)='ntfs3\x00', 0x2208004, 0x0)
syz_clone(0x88200200, 0x0, 0x0, 0x0, 0x0, 0x0)

Here, the ioctl sets the bdev block size to 16384. During mount,
get_tree_bdev_flags() calls sb_set_blocksize(sb, block_size(bdev)),
but since block_size(bdev) > PAGE_SIZE, sb_set_blocksize() leaves
sb->s_blocksize at zero.

Later, ntfs_init_from_boot() attempts to read the boot_block while
sb->s_blocksize is still zero, which triggers the bug.

[almaz.alexandrovich@paragon-software.com: changed comment style, added
return value handling]
Severity: pending analysis
Last modified:
14/01/2026

CVE-2025-71070

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

ublk: clean up user copy references on ublk server exit

If a ublk server process releases a ublk char device file, any requests
dispatched to the ublk server but not yet completed will retain a ref
value of UBLK_REFCOUNT_INIT. Before commit e63d2228ef83 ("ublk: simplify
aborting ublk request"), __ublk_fail_req() would decrement the reference
count before completing the failed request. However, that commit
optimized __ublk_fail_req() to call __ublk_complete_rq() directly
without decrementing the request reference count.

The leaked reference count incorrectly allows user copy and zero copy
operations on the completed ublk request. It also triggers the
WARN_ON_ONCE(refcount_read(&io->ref)) warnings in ublk_queue_reinit()
and ublk_deinit_queue().

Commit c5c5eb24ed61 ("ublk: avoid ublk_io_release() called after ublk
char dev is closed") already fixed the issue for ublk devices using
UBLK_F_SUPPORT_ZERO_COPY or UBLK_F_AUTO_BUF_REG. However, the reference
count leak also affects UBLK_F_USER_COPY, the other reference-counted
data copy mode. Fix the condition in ublk_check_and_reset_active_ref()
to include all reference-counted data copy modes. This ensures that any
ublk requests still owned by the ublk server when it exits have their
reference counts reset to 0.
Severity: pending analysis
Last modified:
14/01/2026

CVE-2025-71071

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

iommu/mediatek: fix use-after-free on probe deferral

The driver is dropping the references taken to the larb devices during
probe after successful lookup as well as on errors. This can
potentially lead to a use-after-free in case a larb device has not yet
been bound to its driver so that the iommu driver probe defers.

Fix this by keeping the references as expected while the iommu driver is
bound.
Severity: pending analysis
Last modified:
14/01/2026

CVE-2025-71072

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

shmem: fix recovery on rename failures

maple_tree insertions can fail if we are seriously short on memory;
simple_offset_rename() does not recover well if it runs into that.
The same goes for simple_offset_rename_exchange().

Moreover, shmem_whiteout() expects that if it succeeds, the caller will
progress to d_move(), i.e. that shmem_rename2() won't fail past the
successful call of shmem_whiteout().

Not hard to fix, fortunately - mtree_store() can't fail if the index we
are trying to store into is already present in the tree as a singleton.

For simple_offset_rename_exchange() that's enough - we just need to be
careful about the order of operations.

For simple_offset_rename() the solution is to preinsert the target into the
tree for new_dir; the rest can be done without any potentially failing
operations.

That preinsertion has to be done in shmem_rename2() rather than in
simple_offset_rename() itself - otherwise we'd need to deal with the
possibility of failure after successful shmem_whiteout().
Severity: pending analysis
Last modified:
14/01/2026

CVE-2025-71073

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

Input: lkkbd - disable pending work before freeing device

lkkbd_interrupt() schedules lk->tq via schedule_work(), and the work
handler lkkbd_reinit() dereferences the lkkbd structure and its
serio/input_dev fields.

lkkbd_disconnect() and error paths in lkkbd_connect() free the lkkbd
structure without preventing the reinit work from being queued again
until serio_close() returns. This can allow the work handler to run
after the structure has been freed, leading to a potential use-after-free.

Use disable_work_sync() instead of cancel_work_sync() to ensure the
reinit work cannot be re-queued, and call it both in lkkbd_disconnect()
and in lkkbd_connect() error paths after serio_open().
Severity: pending analysis
Last modified:
14/01/2026

CVE-2025-71068

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

svcrdma: bound check rq_pages index in inline path

svc_rdma_copy_inline_range indexed rqstp->rq_pages[rc_curpage] without
verifying rc_curpage stays within the allocated page array. Add guards
before the first use and after advancing to a new page.
Severity: pending analysis
Last modified:
19/01/2026

CVE-2025-71069

Publication date:
13/01/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

f2fs: invalidate dentry cache on failed whiteout creation

F2FS can mount filesystems with corrupted directory depth values that
get runtime-clamped to MAX_DIR_HASH_DEPTH. When RENAME_WHITEOUT
operations are performed on such directories, f2fs_rename performs
directory modifications (updating target entry and deleting source
entry) before attempting to add the whiteout entry via f2fs_add_link.

If f2fs_add_link fails due to the corrupted directory structure, the
function returns an error to VFS, but the partial directory
modifications have already been committed to disk. VFS assumes the
entire rename operation failed and does not update the dentry cache,
leaving stale mappings.

In the error path, VFS does not call d_move() to update the dentry
cache. This results in new_dentry still pointing to the old inode
(new_inode) which has already had its i_nlink decremented to zero.
The stale cache causes subsequent operations to incorrectly reference
the freed inode.

This causes subsequent operations to use cached dentry information that
no longer matches the on-disk state. When a second rename targets the
same entry, VFS attempts to decrement i_nlink on the stale inode, which
may already have i_nlink=0, triggering a WARNING in drop_nlink().

Example sequence:
1. First rename (RENAME_WHITEOUT): file2 → file1
   - f2fs updates file1 entry on disk (points to inode 8)
   - f2fs deletes file2 entry on disk
   - f2fs_add_link(whiteout) fails (corrupted directory)
   - Returns error to VFS
   - VFS does not call d_move() due to error
   - VFS cache still has: file1 → inode 7 (stale!)
   - inode 7 has i_nlink=0 (already decremented)

2. Second rename: file3 → file1
   - VFS uses stale cache: file1 → inode 7
   - Tries to drop_nlink on inode 7 (i_nlink already 0)
   - WARNING in drop_nlink()

Fix this by explicitly invalidating old_dentry and new_dentry when
f2fs_add_link fails during whiteout creation. This forces VFS to
refresh from disk on subsequent operations, ensuring cache consistency
even when the rename partially succeeds.

Reproducer:
1. Mount F2FS image with corrupted i_current_depth
2. renameat2(file2, file1, RENAME_WHITEOUT)
3. renameat2(file3, file1, 0)
4. System triggers WARNING in drop_nlink()
Severity: pending analysis
Last modified:
19/01/2026