Vulnerabilities

To inform, warn and assist professionals regarding the latest security vulnerabilities in technology systems, we have made a database available to interested users. It is in Spanish and includes all of the latest documented and recognised vulnerabilities.

This repository, with over 75,000 records, is based on information from the NVD (National Vulnerability Database); under a partnership agreement, INCIBE translates the included information into Spanish.

Occasionally this list will show vulnerabilities that have not yet been translated, since entries are added while the INCIBE team is still carrying out the translation process. The CVE (Common Vulnerabilities and Exposures) standard for information security vulnerability names is used to support the exchange of information between different tools and databases.

Every vulnerability collected is linked to its information sources, as well as to any patches or solutions made available by manufacturers and developers. Advanced searches are possible: results can be narrowed down by criteria such as vulnerability type, manufacturer and impact level.

Through RSS feeds or newsletters, you can be informed daily about the latest vulnerabilities added to the repository. Below is a list, updated daily, of the latest vulnerabilities.

CVE-2025-27829

Publication date:
01/04/2025
An issue was discovered in Stormshield Network Security (SNS) 4.3.x before 4.3.35. If multicast streams are enabled on different interfaces, it may be possible to interrupt multicast traffic on some of these interfaces. That could result in a denial of the multicast routing service on the firewall.
Severity CVSS v4.0: Pending analysis
Last modification:
01/04/2025

CVE-2025-28131

Publication date:
01/04/2025
A Broken Access Control vulnerability in Nagios Network Analyzer 2024R1.0.3 allows low-privilege users with "Read-Only" access to perform administrative actions, including stopping system services and deleting critical resources. This flaw arises due to improper authorization enforcement, enabling unauthorized modifications that compromise system integrity and availability.
Severity CVSS v4.0: Pending analysis
Last modification:
11/07/2025
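
The root cause here is authorization enforced only in the client UI rather than on every request. As a minimal illustration of server-side enforcement (a generic sketch, not Nagios code; the role enum and action flag are invented):

```c
/* Minimal sketch of server-side authorization enforcement. The enum
 * and the is_admin_action flag are illustrative, not Nagios code. */
enum role { ROLE_READONLY, ROLE_ADMIN };

/* Return 1 if the caller may perform the action, 0 otherwise. The
 * check runs on the server for every request, so hiding buttons in
 * the UI is never the only line of defence. */
int can_perform(enum role caller, int is_admin_action)
{
    if (is_admin_action && caller != ROLE_ADMIN)
        return 0;  /* deny: read-only users must not stop services or delete resources */
    return 1;
}
```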

CVE-2025-28132

Publication date:
01/04/2025
A session management flaw in Nagios Network Analyzer 2024R1.0.3 allows an attacker to reuse session tokens even after a user logs out, leading to unauthorized access and account takeover. This occurs due to insufficient session expiration, where session tokens remain valid beyond logout, allowing an attacker to impersonate users and perform actions on their behalf.
Severity CVSS v4.0: Pending analysis
Last modification:
18/06/2025
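
Proper session expiration means the server forgets the token at logout, so a captured token cannot be replayed. A toy in-memory store sketching that pattern (not Nagios code; all names and sizes are illustrative):

```c
#include <string.h>

/* Toy session store: a token is valid only while present in the table. */
#define MAX_SESSIONS 8
#define TOKEN_LEN    32

static char sessions[MAX_SESSIONS][TOKEN_LEN];

/* Register a token; returns 0 on success, -1 if the table is full. */
int session_login(const char *token)
{
    for (int i = 0; i < MAX_SESSIONS; i++) {
        if (sessions[i][0] == '\0') {
            strncpy(sessions[i], token, TOKEN_LEN - 1);
            sessions[i][TOKEN_LEN - 1] = '\0';
            return 0;
        }
    }
    return -1;
}

int session_is_valid(const char *token)
{
    for (int i = 0; i < MAX_SESSIONS; i++)
        if (sessions[i][0] != '\0' && strcmp(sessions[i], token) == 0)
            return 1;
    return 0;
}

/* Server-side invalidation: the step whose absence makes token reuse
 * after logout possible. */
void session_logout(const char *token)
{
    for (int i = 0; i < MAX_SESSIONS; i++)
        if (strcmp(sessions[i], token) == 0)
            sessions[i][0] = '\0';
}
```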

CVE-2025-25041

Publication date:
01/04/2025
A vulnerability in the HPE Aruba Networking Virtual Intranet Access (VIA) client could allow malicious users to overwrite arbitrary files as NT AUTHORITY\SYSTEM (root). A successful exploit could allow the creation of a Denial-of-Service (DoS) condition affecting the Microsoft Windows operating system. This vulnerability does not affect Linux- or Android-based clients.
Severity CVSS v4.0: Pending analysis
Last modification:
03/04/2025

CVE-2025-21986

Publication date:
01/04/2025
In the Linux kernel, the following vulnerability has been resolved:

net: switchdev: Convert blocking notification chain to a raw one

A blocking notification chain uses a read-write semaphore to protect the integrity of the chain. The semaphore is acquired for writing when adding / removing notifiers to / from the chain and acquired for reading when traversing the chain and informing notifiers about an event.

In case of the blocking switchdev notification chain, recursive notifications are possible which leads to the semaphore being acquired twice for reading and to lockdep warnings being generated [1].

Specifically, this can happen when the bridge driver processes a SWITCHDEV_BRPORT_UNOFFLOADED event which causes it to emit notifications about deferred events when calling switchdev_deferred_process().

Fix this by converting the notification chain to a raw notification chain in a similar fashion to the netdev notification chain. Protect the chain using the RTNL mutex by acquiring it when modifying the chain. Events are always informed under the RTNL mutex, but add an assertion in call_switchdev_blocking_notifiers() to make sure this is not violated in the future.

Maintain the "blocking" prefix as events are always emitted from process context and listeners are allowed to block.

[1]:
WARNING: possible recursive locking detected
6.14.0-rc4-custom-g079270089484 #1 Not tainted
--------------------------------------------
ip/52731 is trying to acquire lock:
ffffffff850918d8 ((switchdev_blocking_notif_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x58/0xa0

but task is already holding lock:
ffffffff850918d8 ((switchdev_blocking_notif_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x58/0xa0

other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock((switchdev_blocking_notif_chain).rwsem);
lock((switchdev_blocking_notif_chain).rwsem);

*** DEADLOCK ***
May be due to missing lock nesting notation
3 locks held by ip/52731:
#0: ffffffff84f795b0 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x727/0x1dc0
#1: ffffffff8731f628 (&net->rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x790/0x1dc0
#2: ffffffff850918d8 ((switchdev_blocking_notif_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x58/0xa0

stack backtrace:
...
? __pfx_down_read+0x10/0x10
? __pfx_mark_lock+0x10/0x10
? __pfx_switchdev_port_attr_set_deferred+0x10/0x10
blocking_notifier_call_chain+0x58/0xa0
switchdev_port_attr_notify.constprop.0+0xb3/0x1b0
? __pfx_switchdev_port_attr_notify.constprop.0+0x10/0x10
? mark_held_locks+0x94/0xe0
? switchdev_deferred_process+0x11a/0x340
switchdev_port_attr_set_deferred+0x27/0xd0
switchdev_deferred_process+0x164/0x340
br_switchdev_port_unoffload+0xc8/0x100 [bridge]
br_switchdev_blocking_event+0x29f/0x580 [bridge]
notifier_call_chain+0xa2/0x440
blocking_notifier_call_chain+0x6e/0xa0
switchdev_bridge_port_unoffload+0xde/0x1a0
...
Severity CVSS v4.0: Pending analysis
Last modification:
03/11/2025

CVE-2025-21977

Publication date:
01/04/2025
In the Linux kernel, the following vulnerability has been resolved:

fbdev: hyperv_fb: Fix hang in kdump kernel when on Hyper-V Gen 2 VMs

Gen 2 Hyper-V VMs boot via EFI and have a standard EFI framebuffer device. When the kdump kernel runs in such a VM, loading the efifb driver may hang because of accessing the framebuffer at the wrong memory address.

The scenario occurs when the hyperv_fb driver in the original kernel moves the framebuffer to a different MMIO address because of conflicts with an already-running efifb or simplefb driver. The hyperv_fb driver then informs Hyper-V of the change, which is allowed by the Hyper-V FB VMBus device protocol. However, when the kexec command loads the kdump kernel into crash memory via the kexec_file_load() system call, the system call doesn't know the framebuffer has moved, and it sets up the kdump screen_info using the original framebuffer address. The transition to the kdump kernel does not go through the Hyper-V host, so Hyper-V does not reset the framebuffer address like it would do on a reboot. When efifb tries to run, it accesses a non-existent framebuffer address, which traps to the Hyper-V host. After many such accesses, the Hyper-V host thinks the guest is being malicious, and throttles the guest to the point that it runs very slowly or appears to have hung.

When the kdump kernel is loaded into crash memory via the kexec_load() system call, the problem does not occur. In this case, the kexec command builds the screen_info table itself in user space from data returned by the FBIOGET_FSCREENINFO ioctl against /dev/fb0, which gives it the new framebuffer location.

This problem was originally reported in 2020 [1], resulting in commit 3cb73bc3fa2a ("hyperv_fb: Update screen_info after removing old framebuffer"). This commit solved the problem by setting orig_video_isVGA to 0, so the kdump kernel was unaware of the EFI framebuffer. The efifb driver did not try to load, and no hang occurred. But in 2024, commit c25a19afb81c ("fbdev/hyperv_fb: Do not clear global screen_info") effectively reverted 3cb73bc3fa2a. Commit c25a19afb81c has no reference to 3cb73bc3fa2a, so perhaps it was done without knowing the implications that were reported with 3cb73bc3fa2a. In any case, as of commit c25a19afb81c, the original problem came back again.

Interestingly, the hyperv_drm driver does not have this problem because it never moves the framebuffer. The difference is that the hyperv_drm driver removes any conflicting framebuffers *before* allocating an MMIO address, while the hyperv_fb driver removes conflicting framebuffers *after* allocating an MMIO address. With the "after" ordering, hyperv_fb may encounter a conflict and move the framebuffer to a different MMIO address. But the conflict is essentially bogus because it is removed a few lines of code later.

Rather than fix the problem with the approach from 2020 in commit 3cb73bc3fa2a, instead slightly reorder the steps in hyperv_fb so conflicting framebuffers are removed before allocating an MMIO address. Then the default framebuffer MMIO address should always be available, and there's never any confusion about which framebuffer address the kdump kernel should use -- it's always the original address provided by the Hyper-V host. This approach is already used by the hyperv_drm driver, and is consistent with the usage guidelines at the head of the module with the function aperture_remove_conflicting_devices().

This approach also solves a related minor problem when kexec_load() is used to load the kdump kernel. With current code, unbinding and rebinding the hyperv_fb driver could result in the framebuffer moving back to the default framebuffer address, because on the rebind there are no conflicts. If such a move is done after the kdump kernel is loaded with the new framebuffer address, at kdump time it could again have the wrong address.

This problem and fix are described in terms of the kdump kernel, but it can also occur
---truncated---
Severity CVSS v4.0: Pending analysis
Last modification:
30/10/2025

CVE-2025-21983

Publication date:
01/04/2025
In the Linux kernel, the following vulnerability has been resolved:

mm/slab/kvfree_rcu: Switch to WQ_MEM_RECLAIM wq

Currently the kvfree_rcu() APIs use a system workqueue, "system_unbound_wq", to drive the RCU machinery that reclaims memory.

Recently, it has been noted that the following kernel warning can be observed:

workqueue: WQ_MEM_RECLAIM nvme-wq:nvme_scan_work is flushing !WQ_MEM_RECLAIM events_unbound:kfree_rcu_work
WARNING: CPU: 21 PID: 330 at kernel/workqueue.c:3719 check_flush_dependency+0x112/0x120
Modules linked in: intel_uncore_frequency(E) intel_uncore_frequency_common(E) skx_edac(E) ...
CPU: 21 UID: 0 PID: 330 Comm: kworker/u144:6 Tainted: G E 6.13.2-0_g925d379822da #1
Hardware name: Wiwynn Twin Lakes MP/Twin Lakes Passive MP, BIOS YMM20 02/01/2023
Workqueue: nvme-wq nvme_scan_work
RIP: 0010:check_flush_dependency+0x112/0x120
Code: 05 9a 40 14 02 01 48 81 c6 c0 00 00 00 48 8b 50 18 48 81 c7 c0 00 00 00 48 89 f9 48 ...
RSP: 0018:ffffc90000df7bd8 EFLAGS: 00010082
RAX: 000000000000006a RBX: ffffffff81622390 RCX: 0000000000000027
RDX: 00000000fffeffff RSI: 000000000057ffa8 RDI: ffff88907f960c88
RBP: 0000000000000000 R08: ffffffff83068e50 R09: 000000000002fffd
R10: 0000000000000004 R11: 0000000000000000 R12: ffff8881001a4400
R13: 0000000000000000 R14: ffff88907f420fb8 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff88907f940000(0000) knlGS:0000000000000000
CR2: 00007f60c3001000 CR3: 000000107d010005 CR4: 00000000007726f0
PKRU: 55555554
Call Trace:
? __warn+0xa4/0x140
? check_flush_dependency+0x112/0x120
? report_bug+0xe1/0x140
? check_flush_dependency+0x112/0x120
? handle_bug+0x5e/0x90
? exc_invalid_op+0x16/0x40
? asm_exc_invalid_op+0x16/0x20
? timer_recalc_next_expiry+0x190/0x190
? check_flush_dependency+0x112/0x120
? check_flush_dependency+0x112/0x120
__flush_work.llvm.1643880146586177030+0x174/0x2c0
flush_rcu_work+0x28/0x30
kvfree_rcu_barrier+0x12f/0x160
kmem_cache_destroy+0x18/0x120
bioset_exit+0x10c/0x150
disk_release.llvm.6740012984264378178+0x61/0xd0
device_release+0x4f/0x90
kobject_put+0x95/0x180
nvme_put_ns+0x23/0xc0
nvme_remove_invalid_namespaces+0xb3/0xd0
nvme_scan_work+0x342/0x490
process_scheduled_works+0x1a2/0x370
worker_thread+0x2ff/0x390
? pwq_release_workfn+0x1e0/0x1e0
kthread+0xb1/0xe0
? __kthread_parkme+0x70/0x70
ret_from_fork+0x30/0x40
? __kthread_parkme+0x70/0x70
ret_from_fork_asm+0x11/0x20

---[ end trace 0000000000000000 ]---

To address this, switch to an independent WQ_MEM_RECLAIM workqueue, so the rules are not violated from the workqueue framework's point of view.

Apart from that, since kvfree_rcu() does reclaim memory, it is worth going with a WQ_MEM_RECLAIM type of wq because it is designed for this purpose.
Severity CVSS v4.0: Pending analysis
Last modification:
30/10/2025

CVE-2025-21985

Publication date:
01/04/2025
In the Linux kernel, the following vulnerability has been resolved:

drm/amd/display: Fix out-of-bound accesses

[WHAT & HOW]
hpo_stream_to_link_encoder_mapping has size MAX_HPO_DP2_ENCODERS (=4), but location can have size up to 6. As a result, it is necessary to check location against MAX_HPO_DP2_ENCODERS.

Similarly, disp_cfg_stream_location can be used as an array index which should be 0..5, so the ASSERT's conditions should be less-than without equal.
Severity CVSS v4.0: Pending analysis
Last modification:
30/10/2025
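
The fix pattern described in this entry is a plain bounds check before indexing. A minimal sketch (only the constant name comes from the CVE text; the table contents are made up):

```c
#include <stddef.h>

#define MAX_HPO_DP2_ENCODERS 4   /* array capacity, per the CVE text */

/* Stand-in for hpo_stream_to_link_encoder_mapping; the values are
 * illustrative, not taken from the driver. */
static const int encoder_mapping[MAX_HPO_DP2_ENCODERS] = { 10, 11, 12, 13 };

/* `location` can legitimately range up to 5, so it must be validated
 * against the table size before being used as an index. Returns the
 * mapped encoder, or -1 for an out-of-bound location. */
int lookup_encoder(size_t location)
{
    if (location >= MAX_HPO_DP2_ENCODERS)
        return -1;               /* reject instead of reading past the array */
    return encoder_mapping[location];
}
```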

CVE-2025-21982

Publication date:
01/04/2025
In the Linux kernel, the following vulnerability has been resolved:

pinctrl: nuvoton: npcm8xx: Add NULL check in npcm8xx_gpio_fw

devm_kasprintf() calls can return null pointers on failure, but the return values were not checked in npcm8xx_gpio_fw(). Add a NULL check in npcm8xx_gpio_fw() to handle the kernel NULL pointer dereference error.
Severity CVSS v4.0: Pending analysis
Last modification:
01/10/2025
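
The underlying pattern is simple: any allocating formatter can return NULL, and the result must be checked before use. A user-space sketch of the same pattern (devm_kasprintf() is kernel-only, so malloc()/snprintf() stand in; the label format is invented):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Allocate a formatted label, mirroring the devm_kasprintf() usage in
 * the driver. Returns NULL on allocation failure; the caller must
 * check, just as the kernel fix adds a check in npcm8xx_gpio_fw(). */
char *gpio_label(int bank)
{
    size_t len = strlen("gpio-bank-") + 12;  /* room for any int + NUL */
    char *name = malloc(len);
    if (!name)            /* the check whose absence caused the bug */
        return NULL;
    snprintf(name, len, "gpio-bank-%d", bank);
    return name;
}
```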

CVE-2025-21984

Publication date:
01/04/2025
In the Linux kernel, the following vulnerability has been resolved:

mm: fix kernel BUG when userfaultfd_move encounters swapcache

userfaultfd_move() checks whether the PTE entry is present or a swap entry.

- If the PTE entry is present, move_present_pte() handles folio migration by setting:

src_folio->index = linear_page_index(dst_vma, dst_addr);

- If the PTE entry is a swap entry, move_swap_pte() simply copies the PTE to the new dst_addr.

This approach is incorrect because, even if the PTE is a swap entry, it can still reference a folio that remains in the swap cache.

This creates a race window between steps 2 and 4.
1. add_to_swap: The folio is added to the swapcache.
2. try_to_unmap: PTEs are converted to swap entries.
3. pageout: The folio is written back.
4. Swapcache is cleared.

If userfaultfd_move() occurs in the window between steps 2 and 4, after the swap PTE has been moved to the destination, accessing the destination triggers do_swap_page(), which may locate the folio in the swapcache. However, since the folio's index has not been updated to match the destination VMA, do_swap_page() will detect a mismatch.

This can result in two critical issues depending on the system configuration.

If KSM is disabled, both small and large folios can trigger a BUG during the add_rmap operation due to:

page_pgoff(folio, page) != linear_page_index(vma, address)

[ 13.336953] page: refcount:6 mapcount:1 mapping:00000000f43db19c index:0xffffaf150 pfn:0x4667c
[ 13.337520] head: order:2 mapcount:1 entire_mapcount:0 nr_pages_mapped:1 pincount:0
[ 13.337716] memcg:ffff00000405f000
[ 13.337849] anon flags: 0x3fffc0000020459(locked|uptodate|dirty|owner_priv_1|head|swapbacked|node=0|zone=0|lastcpupid=0xffff)
[ 13.338630] raw: 03fffc0000020459 ffff80008507b538 ffff80008507b538 ffff000006260361
[ 13.338831] raw: 0000000ffffaf150 0000000000004000 0000000600000000 ffff00000405f000
[ 13.339031] head: 03fffc0000020459 ffff80008507b538 ffff80008507b538 ffff000006260361
[ 13.339204] head: 0000000ffffaf150 0000000000004000 0000000600000000 ffff00000405f000
[ 13.339375] head: 03fffc0000000202 fffffdffc0199f01 ffffffff00000000 0000000000000001
[ 13.339546] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[ 13.339736] page dumped because: VM_BUG_ON_PAGE(page_pgoff(folio, page) != linear_page_index(vma, address))
[ 13.340190] ------------[ cut here ]------------
[ 13.340316] kernel BUG at mm/rmap.c:1380!
[ 13.340683] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
[ 13.340969] Modules linked in:
[ 13.341257] CPU: 1 UID: 0 PID: 107 Comm: a.out Not tainted 6.14.0-rc3-gcf42737e247a-dirty #299
[ 13.341470] Hardware name: linux,dummy-virt (DT)
[ 13.341671] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 13.341815] pc : __page_check_anon_rmap+0xa0/0xb0
[ 13.341920] lr : __page_check_anon_rmap+0xa0/0xb0
[ 13.342018] sp : ffff80008752bb20
[ 13.342093] x29: ffff80008752bb20 x28: fffffdffc0199f00 x27: 0000000000000001
[ 13.342404] x26: 0000000000000000 x25: 0000000000000001 x24: 0000000000000001
[ 13.342575] x23: 0000ffffaf0d0000 x22: 0000ffffaf0d0000 x21: fffffdffc0199f00
[ 13.342731] x20: fffffdffc0199f00 x19: ffff000006210700 x18: 00000000ffffffff
[ 13.342881] x17: 6c203d2120296567 x16: 6170202c6f696c6f x15: 662866666f67705f
[ 13.343033] x14: 6567617028454741 x13: 2929737365726464 x12: ffff800083728ab0
[ 13.343183] x11: ffff800082996bf8 x10: 0000000000000fd7 x9 : ffff80008011bc40
[ 13.343351] x8 : 0000000000017fe8 x7 : 00000000fffff000 x6 : ffff8000829eebf8
[ 13.343498] x5 : c0000000fffff000 x4 : 0000000000000000 x3 : 0000000000000000
[ 13.343645] x2 : 0000000000000000 x1 : ffff0000062db980 x0 : 000000000000005f
[ 13.343876] Call trace:
[ 13.344045] __page_check_anon_rmap+0xa0/0xb0 (P)
[ 13.344234] folio_add_anon_rmap_ptes+0x22c/0x320
[ 13.344333] do_swap_page+0x1060/0x1400
[ 13.344417] __handl
---truncated---
Severity CVSS v4.0: Pending analysis
Last modification:
01/10/2025

CVE-2025-21978

Publication date:
01/04/2025
In the Linux kernel, the following vulnerability has been resolved:

drm/hyperv: Fix address space leak when Hyper-V DRM device is removed

When a Hyper-V DRM device is probed, the driver allocates MMIO space for the vram and maps it cacheable. If the device is removed, or in the error path for device probing, the MMIO space is released but no unmap is done. Consequently, the kernel address space for the mapping is leaked.

Fix this by adding iounmap() calls in the device removal path, and in the error path during device probing.
Severity CVSS v4.0: Pending analysis
Last modification:
03/11/2025
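
The bug is a classic unbalanced map/unmap across exit paths. A toy user-space model (the counter and fake_* helpers are stand-ins for the real ioremap()/iounmap() MMIO APIs) shows the invariant the fix restores: every path out of probe, including the error path, releases the mapping:

```c
#include <stdlib.h>

/* Number of currently live mappings; must return to 0 when the
 * device is gone. */
static int active_mappings;

static void *fake_ioremap(void)      { active_mappings++; return malloc(1); }
static void  fake_iounmap(void *map) { active_mappings--; free(map); }

/* Simplified probe: maps the vram, then may fail later. The fix is
 * the fake_iounmap() call on the error path; without it, the kernel
 * address space for the mapping leaks. */
int probe(int fail_later, void **vram_out)
{
    void *vram = fake_ioremap();
    if (!vram)
        return -1;
    if (fail_later) {
        fake_iounmap(vram);   /* error path must undo the mapping */
        return -1;
    }
    *vram_out = vram;
    return 0;
}

/* Device removal path: the other place the fix adds an unmap. */
void remove_device(void *vram)
{
    fake_iounmap(vram);
}
```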

CVE-2025-21980

Publication date:
01/04/2025
In the Linux kernel, the following vulnerability has been resolved:

sched: address a potential NULL pointer dereference in the GRED scheduler.

If kzalloc in gred_init returns a NULL pointer, the code follows the error handling path, invoking gred_destroy. This, in turn, calls gred_offload, where memset could receive a NULL pointer as input, potentially leading to a kernel crash.

When table->opt is NULL in gred_init(), gred_change_table_def() is not called yet, so it is not necessary to call ->ndo_setup_tc() in gred_offload().
Severity CVSS v4.0: Pending analysis
Last modification:
03/11/2025
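
The fix boils down to a NULL guard on the cleanup path, so a half-initialised table can be destroyed safely. A simplified sketch (the structure and function names are stand-ins for the real qdisc code):

```c
#include <string.h>

struct gred_opt   { int limit; };
struct gred_table { struct gred_opt *opt; };

/* Cleanup helper in the spirit of gred_offload(): when ->opt was
 * never allocated (kzalloc failed during init), there is nothing to
 * reset and no ->ndo_setup_tc()-style call is needed. Returns 1 when
 * a reset was performed, 0 when it was skipped. */
int offload_reset(struct gred_table *table)
{
    if (!table || !table->opt)
        return 0;                      /* the guard the fix adds */
    memset(table->opt, 0, sizeof(*table->opt));
    return 1;
}
```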