National Cybersecurity Institute (INCIBE). INCIBE-CERT section

Vulnerabilities

To inform, warn, and assist professionals about the latest security vulnerabilities in technology systems, we offer interested users a database with information in Spanish on each of the latest documented and known vulnerabilities.

This repository, with more than 75,000 records, is based on information from the NVD (National Vulnerability Database), under a collaboration agreement through which INCIBE translates the included information into Spanish. This list will occasionally show vulnerabilities that have not yet been translated, since new entries are collected while the INCIBE team carries out the translation process.

The CVE (Common Vulnerabilities and Exposures) vulnerability naming standard is used in order to facilitate the exchange of information between different databases and tools. Each of the vulnerabilities listed links to various sources of information, as well as to available patches or solutions provided by vendors and developers. Advanced searches can be performed by selecting different criteria, such as vulnerability type, vendor, or impact type, among others, in order to narrow the results.

Through RSS feeds or newsletters, users can stay informed daily of the latest vulnerabilities added to the repository.

CVE-2022-50471

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

xen/gntdev: Accommodate VMA splitting

Prior to this commit, the gntdev driver code did not handle the following scenario correctly with paravirtualized (PV) Xen domains:

* User process sets up a gntdev mapping composed of two grant mappings (i.e., two pages shared by another Xen domain).
* User process munmap()s one of the pages.
* User process munmap()s the remaining page.
* User process exits.

In the scenario above, the user process would cause the kernel to log the following messages in dmesg for the first munmap(), and the second munmap() call would result in similar log messages:

BUG: Bad page map in process doublemap.test pte:... pmd:...
page:0000000057c97bff refcount:1 mapcount:-1 \
mapping:0000000000000000 index:0x0 pfn:...
...
page dumped because: bad pte
...
file:gntdev fault:0x0 mmap:gntdev_mmap [xen_gntdev] readpage:0x0
...
Call Trace:
 dump_stack_lvl+0x46/0x5e
 print_bad_pte.cold+0x66/0xb6
 unmap_page_range+0x7e5/0xdc0
 unmap_vmas+0x78/0xf0
 unmap_region+0xa8/0x110
 __do_munmap+0x1ea/0x4e0
 __vm_munmap+0x75/0x120
 __x64_sys_munmap+0x28/0x40
 do_syscall_64+0x38/0x90
 entry_SYSCALL_64_after_hwframe+0x61/0xcb
...

For each munmap() call, the Xen hypervisor (if built with CONFIG_DEBUG) would print out the following and trigger a general protection fault in the affected Xen PV domain:

(XEN) d0v... Attempt to implicitly unmap d0's grant PTE ...
(XEN) d0v... Attempt to implicitly unmap d0's grant PTE ...

As of this writing, the gntdev_grant_map structure's vma field (referred to as map->vma below) is mainly used for checking the start and end addresses of mappings. However, with split VMAs, these may change, and there could be more than one VMA associated with a gntdev mapping. Hence, remove the use of map->vma and rely on map->pages_vm_start for the original start address and on (map->count << PAGE_SHIFT) for the original memory-mapping size. An atomic is used to prevent inadvertent gntdev mapping re-use, instead of the map->live_grants atomic counter and/or the map->vma pointer (the latter of which is now removed). This prevents the userspace from mmap()'ing (with MAP_FIXED) a gntdev mapping over the same address range as a previously set up gntdev mapping. This scenario can be summarized with the following call-trace, which was valid prior to this commit:

mmap
 gntdev_mmap
mmap (repeat mmap with MAP_FIXED over the same address range)
 gntdev_invalidate
  unmap_grant_pages (sets 'being_removed' entries to true)
   gnttab_unmap_refs_async
 unmap_single_vma
 gntdev_mmap (maps the shared pages again)
munmap
 gntdev_invalidate
  unmap_grant_pages (no-op because 'being_removed' entries are true)
 unmap_single_vma (For PV domains, Xen reports that a granted page is being unmapped and triggers a general protection fault in the affected domain, if Xen was built with CONFIG_DEBUG)

The fix for this last scenario could be worth its own commit, but we opted for a single commit, because removing the gntdev_grant_map structure's vma field requires guarding the entry to gntdev_mmap(), and the live_grants atomic counter is not sufficient on its own to prevent the mmap() over a pre-existing mapping.
CVSS v3.1 severity: MEDIUM
Last modified:
23/01/2026

CVE-2022-50470

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

xhci: Remove device endpoints from bandwidth list when freeing the device

Endpoints are normally deleted from the bandwidth list when they are dropped, before the virt device is freed.

If the xHC host is dying or being removed, then the endpoints aren't dropped cleanly due to functions returning early to avoid interacting with a non-accessible host controller.

So check for and delete endpoints that are still on the bandwidth list when freeing the virt device.

Solves a list_del corruption kernel crash when unbinding xhci-pci, caused by xhci_mem_cleanup() when it later tried to delete already freed endpoints from the bandwidth list.

This only affects hosts that use software bandwidth checking, which currently is only the xHC in the Intel Panther Point PCH (Ivy Bridge).
CVSS v3.1 severity: HIGH
Last modified:
23/01/2026

CVE-2025-39953

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

cgroup: split cgroup_destroy_wq into 3 workqueues

A hung task can occur during [1] LTP cgroup testing when repeatedly mounting/unmounting perf_event and net_prio controllers with systemd.unified_cgroup_hierarchy=1. The hang manifests in cgroup_lock_and_drain_offline() during root destruction.

Related cases:
cgroup_fj_function_perf_event cgroup_fj_function.sh perf_event
cgroup_fj_function_net_prio cgroup_fj_function.sh net_prio

Call Trace:
 cgroup_lock_and_drain_offline+0x14c/0x1e8
 cgroup_destroy_root+0x3c/0x2c0
 css_free_rwork_fn+0x248/0x338
 process_one_work+0x16c/0x3b8
 worker_thread+0x22c/0x3b0
 kthread+0xec/0x100
 ret_from_fork+0x10/0x20

Root cause:

CPU0                                         CPU1
mount perf_event                             umount net_prio
cgroup1_get_tree                             cgroup_kill_sb
rebind_subsystems                            // root destruction enqueues
                                             // cgroup_destroy_wq
// kill all perf_event css
// one perf_event css A is dying
// css A offline enqueues cgroup_destroy_wq
                                             // root destruction will be executed first
                                             css_free_rwork_fn
                                             cgroup_destroy_root
                                             cgroup_lock_and_drain_offline
                                             // some perf descendants are dying
                                             // cgroup_destroy_wq max_active = 1
                                             // waiting for css A to die

Problem scenario:
1. CPU0 mounts perf_event (rebind_subsystems)
2. CPU1 unmounts net_prio (cgroup_kill_sb), queuing root destruction work
3. A dying perf_event CSS gets queued for offline after root destruction
4. Root destruction waits for offline completion, but offline work is blocked behind root destruction in cgroup_destroy_wq (max_active=1)

Solution:
Split cgroup_destroy_wq into three dedicated workqueues:
cgroup_offline_wq – handles CSS offline operations
cgroup_release_wq – manages resource release
cgroup_free_wq – performs final memory deallocation

This separation eliminates blocking in the CSS free path while waiting for offline operations to complete.

[1] https://github.com/linux-test-project/ltp/blob/master/runtest/controllers
CVSS v3.1 severity: MEDIUM
Last modified:
23/01/2026

CVE-2025-39952

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

wifi: wilc1000: avoid buffer overflow in WID string configuration

Fix the following copy overflow warning identified by the Smatch checker:

drivers/net/wireless/microchip/wilc1000/wlan_cfg.c:184 wilc_wlan_parse_response_frame()
error: '__memcpy()' 'cfg->s[i]->str' copy overflow (512 vs 65537)

This patch introduces a size check before accessing the memory buffer. The checks are based on the WID type of the data received from the firmware. For WID string configuration, the size limit is determined by the individual element size in 'struct wilc_cfg_str_vals', which is maintained in the 'len' field of 'struct wilc_cfg_str'.
CVSS v3.1 severity: HIGH
Last modified:
23/01/2026

CVE-2025-39951

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

um: virtio_uml: Fix use-after-free after put_device in probe

When register_virtio_device() fails in virtio_uml_probe(), the code sets vu_dev->registered = 1 even though the device was not successfully registered. This can lead to use-after-free or other issues.
CVSS v3.1 severity: HIGH
Last modified:
23/01/2026

CVE-2025-39950

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

net/tcp: Fix a NULL pointer dereference when using TCP-AO with TCP_REPAIR

A NULL pointer dereference can occur in tcp_ao_finish_connect() during a connect() system call on a socket with a TCP-AO key added and TCP_REPAIR enabled.

The function is called with skb being NULL and attempts to dereference it on tcp_hdr(skb)->seq without a prior skb validation.

Fix this by checking if skb is NULL before dereferencing it.

The commentary is taken from bpf_skops_established(), which is also called in the same flow. Unlike the function being patched, bpf_skops_established() validates the skb before dereferencing it.

/* headers added so the reproducer builds standalone
 * (struct tcp_ao_add and TCP_AO_ADD_KEY require Linux >= 6.7 UAPI headers) */
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <endian.h>
#include <linux/tcp.h>

int main(void)
{
    struct sockaddr_in sockaddr;
    struct tcp_ao_add tcp_ao;
    int sk;
    int one = 1;

    memset(&sockaddr, '\0', sizeof(sockaddr));
    memset(&tcp_ao, '\0', sizeof(tcp_ao));

    sk = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    sockaddr.sin_family = AF_INET;

    memcpy(tcp_ao.alg_name, "cmac(aes128)", 12);
    memcpy(tcp_ao.key, "ABCDEFGHABCDEFGH", 16);
    tcp_ao.keylen = 16;

    memcpy(&tcp_ao.addr, &sockaddr, sizeof(sockaddr));

    setsockopt(sk, IPPROTO_TCP, TCP_AO_ADD_KEY, &tcp_ao, sizeof(tcp_ao));
    setsockopt(sk, IPPROTO_TCP, TCP_REPAIR, &one, sizeof(one));

    sockaddr.sin_family = AF_INET;
    sockaddr.sin_port = htobe16(123);

    inet_aton("127.0.0.1", &sockaddr.sin_addr);

    connect(sk, (struct sockaddr *)&sockaddr, sizeof(sockaddr));

    return 0;
}

$ gcc tcp-ao-nullptr.c -o tcp-ao-nullptr -Wall
$ unshare -Urn

BUG: kernel NULL pointer dereference, address: 00000000000000b6
PGD 1f648d067 P4D 1f648d067 PUD 1982e8067 PMD 0
Oops: Oops: 0000 [#1] SMP NOPTI
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
RIP: 0010:tcp_ao_finish_connect (net/ipv4/tcp_ao.c:1182)
CVSS v3.1 severity: MEDIUM
Last modified:
23/01/2026

CVE-2025-39949

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

qed: Don't collect too many protection override GRC elements

In the protection override dump path, the firmware can return far too many GRC elements, resulting in attempting to write past the end of the previously-kmalloc'ed dump buffer.

This will result in a kernel panic with reason:

BUG: unable to handle kernel paging request at ADDRESS

where "ADDRESS" is just past the end of the protection override dump buffer. The start address of the buffer is:
p_hwfn->cdev->dbg_features[DBG_FEATURE_PROTECTION_OVERRIDE].dump_buf
and the size of the buffer is buf_size in the same data structure.

The panic can be arrived at from either the qede Ethernet driver path:

[exception RIP: qed_grc_dump_addr_range+0x108]
 qed_protection_override_dump at ffffffffc02662ed [qed]
 qed_dbg_protection_override_dump at ffffffffc0267792 [qed]
 qed_dbg_feature at ffffffffc026aa8f [qed]
 qed_dbg_all_data at ffffffffc026b211 [qed]
 qed_fw_fatal_reporter_dump at ffffffffc027298a [qed]
 devlink_health_do_dump at ffffffff82497f61
 devlink_health_report at ffffffff8249cf29
 qed_report_fatal_error at ffffffffc0272baf [qed]
 qede_sp_task at ffffffffc045ed32 [qede]
 process_one_work at ffffffff81d19783

or the qedf storage driver path:

[exception RIP: qed_grc_dump_addr_range+0x108]
 qed_protection_override_dump at ffffffffc068b2ed [qed]
 qed_dbg_protection_override_dump at ffffffffc068c792 [qed]
 qed_dbg_feature at ffffffffc068fa8f [qed]
 qed_dbg_all_data at ffffffffc0690211 [qed]
 qed_fw_fatal_reporter_dump at ffffffffc069798a [qed]
 devlink_health_do_dump at ffffffff8aa95e51
 devlink_health_report at ffffffff8aa9ae19
 qed_report_fatal_error at ffffffffc0697baf [qed]
 qed_hw_err_notify at ffffffffc06d32d7 [qed]
 qed_spq_post at ffffffffc06b1011 [qed]
 qed_fcoe_destroy_conn at ffffffffc06b2e91 [qed]
 qedf_cleanup_fcport at ffffffffc05e7597 [qedf]
 qedf_rport_event_handler at ffffffffc05e7bf7 [qedf]
 fc_rport_work at ffffffffc02da715 [libfc]
 process_one_work at ffffffff8a319663

Resolve this by clamping the firmware's return value to the maximum number of legal elements the firmware should return.
CVSS v3.1 severity: MEDIUM
Last modified:
25/03/2026

CVE-2025-39941

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

zram: fix slot write race condition

Parallel concurrent writes to the same zram index result in leaked zsmalloc handles. Schematically we can have something like this:

CPU0                        CPU1
zram_slot_lock()
zs_free(handle)
zram_slot_unlock()
                            zram_slot_lock()
                            zs_free(handle)
                            zram_slot_unlock()

compress                    compress
handle = zs_malloc()        handle = zs_malloc()
zram_slot_lock()
zram_set_handle(handle)
zram_slot_unlock()
                            zram_slot_lock()
                            zram_set_handle(handle)
                            zram_slot_unlock()

Either CPU0's or CPU1's zsmalloc handle will leak because zs_free() is done too early. In fact, we need to reset the zram entry right before we set its new handle, all under the same slot lock scope.
CVSS v3.1 severity: MEDIUM
Last modified:
23/01/2026

CVE-2025-39945

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

cnic: Fix use-after-free bugs in cnic_delete_task

The original code uses cancel_delayed_work() in cnic_cm_stop_bnx2x_hw(), which does not guarantee that the delayed work item 'delete_task' has fully completed if it was already running. Additionally, since the delayed work item is cyclic, the flush_workqueue() in cnic_cm_stop_bnx2x_hw() only blocks and waits for work items that were already queued to the workqueue prior to its invocation. Any work items submitted after flush_workqueue() is called are not included in the set of tasks that the flush operation awaits. This means that after the cyclic work items have finished executing, a delayed work item may still exist in the workqueue. This leads to use-after-free scenarios where the cnic_dev is deallocated by cnic_free_dev(), while delete_task remains active and attempts to dereference cnic_dev in cnic_delete_task().

A typical race condition is illustrated below:

CPU 0 (cleanup)             | CPU 1 (delayed work callback)
cnic_netdev_event()         |
cnic_stop_hw()              | cnic_delete_task()
cnic_cm_stop_bnx2x_hw()     | ...
cancel_delayed_work()       | /* the queue_delayed_work()
flush_workqueue()           |    executes after flush_workqueue() */
                            | queue_delayed_work()
cnic_free_dev(dev) // free  | cnic_delete_task() // new instance
                            | dev = cp->dev; // use

Replace cancel_delayed_work() with cancel_delayed_work_sync() to ensure that the cyclic delayed work item is properly canceled and that any ongoing execution of the work item completes before the cnic_dev is deallocated. Furthermore, since cancel_delayed_work_sync() uses __flush_work(work, true) to synchronously wait for any currently executing instance of the work item to finish, the flush_workqueue() becomes redundant and should be removed.

This bug was identified through static analysis. To reproduce the issue and validate the fix, I simulated the cnic PCI device in QEMU and introduced intentional delays, such as inserting calls to ssleep() within the cnic_delete_task() function, to increase the likelihood of triggering the bug.
CVSS v3.1 severity: HIGH
Last modified:
23/01/2026

CVE-2025-39948

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

ice: fix Rx page leak on multi-buffer frames

The ice_put_rx_mbuf() function handles calling ice_put_rx_buf() for each buffer in the current frame. This function was introduced as part of handling multi-buffer XDP support in the ice driver.

It works by iterating over the buffers from first_desc up to 1 plus the total number of fragments in the frame, cached from before the XDP program was executed.

If the hardware posts a descriptor with a size of 0, the logic used in ice_put_rx_mbuf() breaks. Such descriptors get skipped and don't get added as fragments in ice_add_xdp_frag. Since the buffer isn't counted as a fragment, we do not iterate over it in ice_put_rx_mbuf(), and thus we don't call ice_put_rx_buf().

Because we don't call ice_put_rx_buf(), we don't attempt to re-use the page or free it. This leaves a stale page in the ring, as we don't increment next_to_alloc.

The ice_reuse_rx_page() assumes that next_to_alloc has been incremented properly, and that it always points to a buffer with a NULL page. Since this function doesn't check, it will happily recycle a page over the top of the next_to_alloc buffer, losing track of the old page.

Note that this leak only occurs for multi-buffer frames. The ice_put_rx_mbuf() function always handles at least one buffer, so a single-buffer frame will always get handled correctly. It is not clear precisely why the hardware hands us descriptors with a size of 0 sometimes, but it happens somewhat regularly with "jumbo frames" used by 9K MTU.

To fix ice_put_rx_mbuf(), we need to make sure to call ice_put_rx_buf() on all buffers between first_desc and next_to_clean. Borrow the logic of a similar function in i40e used for this same purpose. Use the same logic also in ice_get_pgcnts().

Instead of iterating over just the number of fragments, use a loop which iterates until the current index reaches the next_to_clean element just past the current frame. Unlike i40e, the ice_put_rx_mbuf() function does call ice_put_rx_buf() on the last buffer of the frame indicating the end of packet.

For non-linear (multi-buffer) frames, we need to take care when adjusting the pagecnt_bias. An XDP program might release fragments from the tail of the frame, in which case that fragment page is already released. Only update the pagecnt_bias for the first descriptor and fragments still remaining post-XDP program. Take care to only access the shared info for fragmented buffers, as this avoids a significant cache miss.

The xdp_xmit value only needs to be updated if an XDP program is run, and only once per packet. Drop the xdp_xmit pointer argument from ice_put_rx_mbuf(). Instead, set xdp_xmit in the ice_clean_rx_irq() function directly. This avoids needing to pass the argument and avoids an extra bit-wise OR for each buffer in the frame.

Move the increment of the ntc local variable to ensure it is updated *before* all calls to ice_get_pgcnts() or ice_put_rx_mbuf(), as the loop logic requires the index of the element just after the current frame.

Now that we use an index pointer in the ring to identify the packet, we no longer need to track or cache the number of fragments in the rx_ring.
CVSS v3.1 severity: MEDIUM
Last modified:
25/03/2026

CVE-2025-39942

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

ksmbd: smbdirect: verify remaining_data_length respects max_fragmented_recv_size

This is inspired by the check for data_offset + data_length.
CVSS v3.1 severity: MEDIUM
Last modified:
25/03/2026

CVE-2025-39943

Publication date:
04/10/2025
Language:
English
*** Pending translation *** In the Linux kernel, the following vulnerability has been resolved:

ksmbd: smbdirect: validate data_offset and data_length field of smb_direct_data_transfer

If the data_offset and data_length fields of the smb_direct_data_transfer struct are invalid, an out-of-bounds issue could occur. This patch validates the data_offset and data_length fields in recv_done.
CVSS v3.1 severity: HIGH
Last modified:
06/04/2026