Instituto Nacional de Ciberseguridad (INCIBE). INCIBE-CERT Section

Vulnerabilities

In order to inform, warn, and help professionals keep up with the latest security vulnerabilities in technology systems, we offer interested users a database with information in Spanish on each of the latest documented and known vulnerabilities.

This repository, with more than 75,000 records, is based on information from the NVD (National Vulnerability Database), under a collaboration agreement through which INCIBE translates the included information into Spanish. At times this list will show vulnerabilities that have not yet been translated, because they arrive while the INCIBE team is still carrying out the translation process.

The CVE (Common Vulnerabilities and Exposures) vulnerability naming standard is used in order to facilitate the exchange of information between different databases and tools. Each listed vulnerability links to various sources of information, as well as to available patches or solutions provided by vendors and developers. Advanced searches are possible, with the option to select different criteria, such as vulnerability type, vendor, or impact type, among others, in order to narrow the results.

Via RSS subscription or newsletters, you can stay informed daily of the latest vulnerabilities added to the repository.

CVE-2025-31970

Publication date:
06/05/2026
Language:
English
*** Pending translation *** HCL DFXAnalytics is affected by an insecure security header configuration vulnerability: the Content-Security-Policy does not define strict directives for object-src and base-uri, which could allow an attacker to exploit injection vectors such as Cross-Site Scripting (XSS).
Severity CVSS v3.1: MEDIUM
Last modified:
06/05/2026
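A hardened policy covering the missing directives might look like the following sketch. The directive values are illustrative assumptions for a typical web application, not HCL's actual recommended configuration:

```http
Content-Security-Policy: default-src 'self'; object-src 'none'; base-uri 'self'
```

Setting `object-src 'none'` blocks plugin-based injection vectors, and `base-uri 'self'` prevents an injected `<base>` tag from redirecting relative URLs to an attacker-controlled origin.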

CVE-2026-6860

Publication date:
06/05/2026
Language:
English
*** Pending translation *** A TCP client can perform a TLS handshake and present the server name extension with a server name that is accepted by a server wildcard name; e.g., if the server is configured with a certificate accepting *.example.com, any xyz.example.com, where xyz is a valid label, can be used.
Severity CVSS v4.0: MEDIUM
Last modified:
06/05/2026
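The wildcard behavior described above can be illustrated with a minimal matcher: a pattern like `*.example.com` accepts any single non-empty left-most label. This is a self-contained sketch of the matching rule, not the affected product's implementation:

```c
#include <stdbool.h>
#include <string.h>

/* Minimal sketch of certificate wildcard matching: "*.example.com"
 * accepts any single non-empty left-most label, so "xyz.example.com"
 * matches. Illustration only. */
static bool wildcard_match(const char *pattern, const char *host)
{
    if (pattern[0] == '*' && pattern[1] == '.') {
        const char *dot = strchr(host, '.');
        /* Require a non-empty left-most label, then compare the rest
         * of the host against the part of the pattern after "*.". */
        if (dot == NULL || dot == host)
            return false;
        return strcmp(dot + 1, pattern + 2) == 0;
    }
    return strcmp(pattern, host) == 0;
}
```

Any client able to obtain a valid name under the wildcarded domain can therefore present it in the server name extension and be accepted.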

CVE-2026-43975

Publication date:
06/05/2026
Language:
English
*** Pending translation *** FolderUploadsFileManager in Apache Wicket does not validate or sanitize the uploadFieldId parameter or the clientFileName before constructing file paths, allowing an unauthenticated attacker to write arbitrary files outside the intended upload directory or read files from arbitrary locations on the server.

This issue affects Apache Wicket: from 8.0.0 through 8.17.0, from 9.0.0 through 9.22.0, from 10.0.0 through 10.8.0.

Users are recommended to upgrade to version 10.9.0, which fixes the issue.
Severity CVSS v3.1: MEDIUM
Last modified:
06/05/2026
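The class of fix required here is validation of client-supplied names before joining them onto a base directory. The following is a general C sketch of that defense, not Apache Wicket's actual patch (which is in Java):

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of path-traversal validation: reject client-supplied file
 * names containing path separators or ".." before they are joined
 * onto the upload directory. General illustration only. */
static bool safe_upload_name(const char *name)
{
    if (name == NULL || name[0] == '\0')
        return false;
    if (strchr(name, '/') != NULL || strchr(name, '\\') != NULL)
        return false;              /* no directory components */
    if (strstr(name, "..") != NULL)
        return false;              /* no parent-directory escapes */
    return true;
}
```

Rejecting separators and `..` outright (rather than trying to normalize them) is the simpler, safer design choice for upload file names.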

CVE-2026-43646

Publication date:
06/05/2026
Language:
English
*** Pending translation *** Exposure of Sensitive Information to an Unauthorized Actor vulnerability in Apache Wicket.

This issue affects Apache Wicket: from 8.0.0 through 8.17.0, from 9.0.0 through 9.22.0, from 10.0.0 through 10.8.0.

Users are recommended to upgrade to version 10.9.0, which fixes the issue.
Severity CVSS v3.1: HIGH
Last modified:
06/05/2026

CVE-2026-43113

Publication date:
06/05/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

wifi: wl1251: validate packet IDs before indexing tx_frames

wl1251_tx_packet_cb() uses the firmware completion ID directly to index the fixed 16-entry wl->tx_frames[] array. The ID is a raw u8 from the completion block, and the callback does not currently verify that it fits the array before dereferencing it.

Reject completion IDs that fall outside wl->tx_frames[] and keep the existing NULL check in the same guard. This keeps the fix local to the trust boundary and avoids touching the rest of the completion flow.
Severity: Pending analysis
Last modified:
06/05/2026
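The pattern of the fix, bounds-checking an untrusted raw ID before using it as an array index and folding the NULL check into the same guard, can be sketched in userspace C. This is an analogy, not the actual driver code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TX_FRAMES 16  /* size of the fixed array, as in the driver */

/* Slots indexed by an ID received from an untrusted source
 * (in the driver, a raw u8 from the firmware completion block). */
static void *tx_frames[TX_FRAMES];

/* Validate the ID against the array bounds and reject empty slots
 * in the same guard, before any dereference. */
static bool lookup_tx_frame(uint8_t id, void **out)
{
    if (id >= TX_FRAMES || tx_frames[id] == NULL)
        return false;  /* out of range or no frame stored */
    *out = tx_frames[id];
    return true;
}
```

Because a u8 can hold values up to 255 while the array has only 16 entries, the bounds check is what prevents an out-of-bounds read.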

CVE-2026-43114

Publication date:
06/05/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

netfilter: nft_set_pipapo_avx2: don't return non-matching entry on expiry

A new test case fails unexpectedly when the avx2 matching functions are used.

The test first loads a randomly generated pipapo set with an 'ipv4 . port' key, i.e. nft -f foo.

This works. Then, it reloads the set after a flush:
(echo flush set t s; cat foo) | nft -f -

This is expected to work, because it's the same set after all and it was already loaded once.

But with avx2, this fails: nft reports a clashing element.

The reported clash is of the following form:

We successfully re-inserted
a . b
c . d

Then we try to insert a . d

avx2 finds the already existing a . d, which (due to 'flush set') is marked as invalid in the new generation. It skips the element and moves to the next.

Due to incorrect masking, the skip step finds the next matching element *only considering the first field*, i.e. we return the already reinserted "a . b", even though the last field is different and the entry should not have been matched.

No such error is reported for the generic C implementation (no avx2) or when the last field has to use the 'nft_pipapo_avx2_lookup_slow' fallback.

Bisection points to 7711f4bb4b36 ("netfilter: nft_set_pipapo: fix range overlap detection") but that fix merely uncovers this bug.

Before this commit, the wrong element is returned, but erroneously reported as a full, identical duplicate.

The root cause is a too-early return in the avx2 match functions. When we process the last field, we should continue to process data until the entire input size has been consumed, to make sure no stale bits remain in the map.
Severity: Pending analysis
Last modified:
06/05/2026

CVE-2026-43116

Publication date:
06/05/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

netfilter: ctnetlink: ensure safe access to master conntrack

Holding a reference on the expectation is not sufficient; the master conntrack object can just go away, making exp->master invalid.

To access exp->master safely:

- Grab the nf_conntrack_expect_lock; this gets serialized with clean_from_lists(), which also holds this lock when the master conntrack goes away.

- Hold a reference on the master conntrack via nf_conntrack_find_get(). Not so easy, since the master tuple to look up for the master conntrack is not available in the existing problematic paths.

This patch goes for extending the nf_conntrack_expect_lock section to address this issue for simplicity; in the cases described below this just slightly extends the lock section.

The add-expectation command already holds a reference to the master conntrack from ctnetlink_create_expect().

However, the delete-expectation command needs to grab the spinlock before looking up the expectation. Expand the existing spinlock section to cover the expectation lookup. Note that nf_ct_expect_iterate_net() already grabs the spinlock while iterating over the expectation table, which is correct.

The get-expectation command needs to grab the spinlock to ensure the master conntrack does not go away. This also expands the existing spinlock section to cover the expectation lookup. I needed to move the netlink skb allocation out of the spinlock to keep it GFP_KERNEL.

For the expectation events, the IPEXP_DESTROY event is already delivered under the spinlock; just move the delivery of IPEXP_NEW under the spinlock too, because the master conntrack event cache is reached through exp->master.

While at it, add lockdep annotations to help identify which code paths need to grab the spinlock.
Severity: Pending analysis
Last modified:
06/05/2026

CVE-2026-43117

Publication date:
06/05/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

btrfs: tracepoints: get correct superblock from dentry in event btrfs_sync_file()

If overlay is used on top of btrfs, dentry->d_sb translates to overlay's super block and the fsid assignment will lead to a crash.

Use file_inode(file)->i_sb to always get btrfs_sb.
Severity: Pending analysis
Last modified:
06/05/2026

CVE-2026-43120

Publication date:
06/05/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

RDMA/irdma: Fix double free related to rereg_user_mr

If IB_MR_REREG_TRANS is set during rereg_user_mr, the umem will be released and a new one will be allocated in irdma_rereg_mr_trans. If any step of irdma_rereg_mr_trans fails after the new umem is allocated, it releases the umem, but does not set iwmr->region to NULL. The problem is that this failure is propagated to the user, who will then call ibv_dereg_mr (as they should). Then, the dereg_mr path will see a non-NULL umem and attempt to call ib_umem_release again.

Fix this by setting iwmr->region to NULL after ib_umem_release.

Fixes: 5ac388db27c4 ("RDMA/irdma: Add support to re-register a memory region")
Severity: Pending analysis
Last modified:
06/05/2026
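The bug class and its fix, clearing a pointer after releasing it so a later cleanup path cannot free it twice, can be shown in a userspace sketch. The `struct mr` here is a hypothetical stand-in, not the irdma driver's actual types:

```c
#include <stdlib.h>

/* Hypothetical stand-in for a memory-region object whose buffer may
 * be released both on a failed re-registration and on deregistration. */
struct mr {
    void *region;
};

/* Release the buffer and clear the pointer. Without the NULL
 * assignment, a second call (e.g. from the dereg path after a failed
 * rereg) would free the same pointer again: a double free. */
static void release_region(struct mr *m)
{
    free(m->region);
    m->region = NULL;  /* the fix: make a repeat release harmless */
}
```

Because `free(NULL)` is defined as a no-op, clearing the pointer makes the two cleanup paths safe to run in either order.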

CVE-2026-43115

Publication date:
06/05/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

srcu: Use irq_work to start GP in tiny SRCU

Tiny SRCU's srcu_gp_start_if_needed() directly calls schedule_work(), which acquires the workqueue pool->lock.

This causes a lockdep splat when call_srcu() is called with a scheduler lock held, due to:

call_srcu() [holding pi_lock]
srcu_gp_start_if_needed()
schedule_work() -> pool->lock

workqueue_init() / create_worker() [holding pool->lock]
wake_up_process() -> try_to_wake_up() -> pi_lock

Also add irq_work_sync() to cleanup_srcu_struct() to prevent a use-after-free if a queued irq_work fires after cleanup begins.

Tested with rcutorture SRCU-T and no lockdep warnings.

[ Thanks to Boqun for a similar fix in patch "rcu: Use an intermediate irq_work to start process_srcu()" ]
Severity: Pending analysis
Last modified:
06/05/2026

CVE-2026-43118

Publication date:
06/05/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

btrfs: fix zero size inode with non-zero size after log replay

When logging that an inode exists, as part of logging a new name or logging new dir entries for a directory, we always set the generation of the logged inode item to 0. This is to signal during log replay (in overwrite_item()) that we should not set the i_size, since we only logged that an inode exists, so the i_size of the inode in the subvolume tree must be preserved (as when we log new names or that an inode exists, we don't log extents).

This works fine except when we have already logged an inode in full mode, or it's the first time we are logging an inode created in a past transaction; that inode has a new i_size of 0 and then we log a new name for the inode (due to a new hard link or a rename), in which case we log an i_size of 0 for the inode and a generation of 0, which causes the log replay code to not update the inode's i_size to 0 (in overwrite_item()).

An example scenario:

mkdir /mnt/dir
xfs_io -f -c "pwrite 0 64K" /mnt/dir/foo
sync
xfs_io -c "truncate 0" -c "fsync" /mnt/dir/foo
ln /mnt/dir/foo /mnt/dir/bar
xfs_io -c "fsync" /mnt/dir

After log replay the file remains with a size of 64K. This is because when we first log the inode, when we fsync file foo, we log its current i_size of 0, and then when we create a hard link we log the inode again in exists mode (LOG_INODE_EXISTS), but we set a generation of 0 for the inode item we add to the log tree, so during log replay overwrite_item() sees that the generation is 0 and i_size is 0, so we skip updating the inode's i_size from 64K to 0.

Fix this by making sure at fill_inode_item() we always log the real generation of the inode if it was logged in the current transaction with the i_size we logged before. Also, if an inode created in a previous transaction is logged in exists mode only, make sure we log the i_size stored in the inode item located from the commit root, so that if we log multiple times that the inode exists we get the correct i_size.

A test case for fstests will follow soon.
Severity: Pending analysis
Last modified:
06/05/2026

CVE-2026-43119

Publication date:
06/05/2026
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

Bluetooth: hci_sync: annotate data races around hdev->req_status

__hci_cmd_sync_sk() sets hdev->req_status under hdev->req_lock:

hdev->req_status = HCI_REQ_PEND;

However, several other functions read or write hdev->req_status without holding any lock:

- hci_send_cmd_sync() reads req_status in hci_cmd_work (workqueue)
- hci_cmd_sync_complete() reads/writes from HCI event completion
- hci_cmd_sync_cancel() / hci_cmd_sync_cancel_sync() read/write
- hci_abort_conn() reads in the connection abort path

Since __hci_cmd_sync_sk() runs on hdev->req_workqueue while hci_send_cmd_sync() runs on hdev->workqueue, these are different workqueues that can execute concurrently on different CPUs. The plain C accesses constitute a data race.

Add READ_ONCE()/WRITE_ONCE() annotations on all concurrent accesses to hdev->req_status to prevent potential compiler optimizations that could affect correctness (e.g., load fusing in the wait_event condition or store reordering).
Severity: Pending analysis
Last modified:
06/05/2026
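The kernel's READ_ONCE()/WRITE_ONCE() macros mark accesses so the compiler cannot fuse loads or reorder stores across them. A rough userspace analogy uses C11 relaxed atomics; this sketch is not the kernel code, and the field and states are simplified stand-ins for hdev->req_status:

```c
#include <stdatomic.h>

/* Simplified stand-ins for the request states. */
enum { REQ_IDLE, REQ_PEND, REQ_DONE };

/* Analogue of hdev->req_status: a field read and written from
 * concurrently running contexts. Marking it _Atomic and using relaxed
 * atomic accesses prevents the compiler from fusing repeated loads or
 * reordering stores, much like READ_ONCE()/WRITE_ONCE() in the kernel. */
static _Atomic int req_status = REQ_IDLE;

static void start_request(void)
{
    /* Analogue of WRITE_ONCE(hdev->req_status, HCI_REQ_PEND). */
    atomic_store_explicit(&req_status, REQ_PEND, memory_order_relaxed);
}

static int request_pending(void)
{
    /* Analogue of READ_ONCE(hdev->req_status): each call performs a
     * fresh load, so a polling loop cannot be optimized into reading
     * the value only once. */
    return atomic_load_explicit(&req_status, memory_order_relaxed)
           == REQ_PEND;
}
```

Note that, as with the kernel annotations, this addresses compiler-level data races on a single word; it does not by itself provide ordering between separate fields.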