Vulnerabilities

To inform, warn, and help professionals keep up with the latest security vulnerabilities in technology systems, we have made a database available to interested users. It is in Spanish and includes all of the latest documented and recognised vulnerabilities.

This repository, with over 75,000 records, is based on information from the NVD (National Vulnerability Database); by virtue of a partnership agreement, INCIBE translates the included information into Spanish.

Occasionally this list will show vulnerabilities that have not yet been translated, as they are added while the INCIBE team is still carrying out the translation process. The CVE (Common Vulnerabilities and Exposures) Standard for Information Security Vulnerability Names is used to support the exchange of information between different tools and databases.

All vulnerabilities collected are linked to various information sources, as well as to any available patches or solutions provided by manufacturers and developers. Advanced searches are possible: different criteria can be selected to narrow down the results, such as vulnerability type, manufacturer and impact level.

Through RSS feeds or newsletters, you can stay informed daily about the latest vulnerabilities added to the repository. Below is a list, updated daily, of the latest vulnerabilities.

CVE-2026-43401

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

cpufreq: intel_pstate: Fix NULL pointer dereference in update_cpu_qos_request()

The update_cpu_qos_request() function attempts to initialize the 'freq' variable by dereferencing 'cpudata' before verifying that the 'policy' is valid.

This issue occurs on systems booted with the "nosmt" parameter, where all_cpu_data[cpu] is NULL for the SMT sibling threads. As a result, any call to update_qos_requests() will result in a NULL pointer dereference, as the code will attempt to access pstate.turbo_freq using the NULL cpudata pointer.

Also, pstate.turbo_freq may be updated by intel_pstate_get_hwp_cap() after the 'freq' variable is initialized, so it is better to defer the 'freq' assignment until intel_pstate_get_hwp_cap() has been called.

Fix this by deferring the 'freq' assignment until after the policy and driver_data have been validated.

[ rjw: Added one paragraph to the changelog ]
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026
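The bug above is an ordering problem: a pointer is dereferenced before the structures it hangs off have been validated. A minimal userspace sketch of the fixed shape, with illustrative names and simplified structs (not the kernel's actual cpufreq API):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's per-CPU and policy data.
 * On a "nosmt" boot, driver_data can be NULL for sibling threads. */
struct cpudata { unsigned int turbo_freq; };
struct policy  { struct cpudata *driver_data; };

/* Fixed shape: validate every pointer on the path first, and defer
 * reading turbo_freq until after validation, instead of initializing
 * a local from cpudata up front. Returns -1 on invalid input. */
static int update_qos_request(struct policy *policy, unsigned int *freq_out)
{
    if (!policy || !policy->driver_data)
        return -1;                              /* reject before any dereference */

    *freq_out = policy->driver_data->turbo_freq; /* deferred read */
    return 0;
}
```

The buggy version would read `policy->driver_data->turbo_freq` into a local before the `if`, crashing when `driver_data` is NULL.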

CVE-2026-43402

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

kthread: consolidate kthread exit paths to prevent use-after-free

Guillaume reported crashes via corrupted RCU callback function pointers during KUnit testing. The crash was traced back to the pidfs rhashtable conversion, which replaced the 24-byte rb_node with an 8-byte rhash_head in struct pid, shrinking it from 160 to 144 bytes.

struct kthread (without CONFIG_BLK_CGROUP) is also 144 bytes. With CONFIG_SLAB_MERGE_DEFAULT and SLAB_HWCACHE_ALIGN, both round up to 192 bytes and share the same slab cache. struct pid.rcu.func and struct kthread.affinity_node both sit at offset 0x78.

When a kthread exits via make_task_dead() it bypasses kthread_exit() and misses the affinity_node cleanup. free_kthread_struct() frees the memory while the node is still linked into the global kthread_affinity_list. A subsequent list_del() by another kthread writes through dangling list pointers into the freed and reused memory, corrupting the pid's rcu.func pointer.

Instead of patching free_kthread_struct() to handle the missed cleanup, consolidate all kthread exit paths. Turn kthread_exit() into a macro that calls do_exit() and add kthread_do_exit(), which is called from do_exit() for any task with PF_KTHREAD set. This guarantees that kthread-specific cleanup always happens regardless of the exit path - make_task_dead(), direct do_exit(), or kthread_exit().

Replace __to_kthread() with a new tsk_is_kthread() accessor in the public header. Export do_exit(), since module code using the kthread_exit() macro now needs it directly.
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026
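The corruption above depends on two unrelated structs ending up the same size and placing a sensitive field at the same offset, so that freed-and-reused memory aliases a field of the new occupant. A contrived userspace illustration of that aliasing condition (the layouts below are invented, not the kernel's real struct pid or struct kthread):

```c
#include <assert.h>
#include <stddef.h>

/* Two unrelated types that happen to be the same size and place a
 * field at the same offset, mimicking pid.rcu.func and
 * kthread.affinity_node both landing at 0x78 in a merged slab cache. */
struct pid_like {
    char pad[0x78];
    void (*rcu_func)(void);   /* stands in for pid.rcu.func */
};

struct kthread_like {
    char pad[0x78];
    void *affinity_node;      /* stands in for kthread.affinity_node */
};

/* If these hold, the same memory reused for either type aliases the
 * two fields: a stale list_del() through one corrupts the other. */
static int fields_alias(void)
{
    return sizeof(struct pid_like) == sizeof(struct kthread_like) &&
           offsetof(struct pid_like, rcu_func) ==
           offsetof(struct kthread_like, affinity_node);
}
```

This is why the fix consolidates the exit paths rather than relying on the layouts never colliding: slab merging makes such collisions a matter of chance.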

CVE-2026-43403

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

nsfs: tighten permission checks for ns iteration ioctls

Even privileged services should not necessarily be able to see other privileged services' namespaces, so they can't leak information to each other. Use the may_see_all_namespaces() helper, which centralizes this policy until the nstree adapts.
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026

CVE-2026-43404

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

mm: Fix a hmm_range_fault() livelock / starvation problem

If hmm_range_fault() fails a folio_trylock() in do_swap_page(), trying to acquire the lock of a device-private folio for migration to RAM, the function will spin until it succeeds in grabbing the lock.

However, if the process holding the lock is depending on a work item to be completed, which is scheduled on the same CPU as the spinning hmm_range_fault(), that work item might be starved and we end up in a livelock / starvation situation which is never resolved.

This can happen, for example, if the process holding the device-private folio lock is stuck in migrate_device_unmap()->lru_add_drain_all(), since lru_add_drain_all() requires a short work item to be run on all online CPUs to complete.

The prerequisites for this to happen are:
a) Both zone device and system memory folios are considered in migrate_device_unmap(), so that there is a reason to call lru_add_drain_all() for a system memory folio while a folio lock is held on a zone device folio.
b) The zone device folio has an initial mapcount > 1, which causes at least one migration PTE entry insertion to be deferred to try_to_migrate(), which can happen after the call to lru_add_drain_all().
c) No preemption, or voluntary preemption only.

This all seems pretty unlikely to happen, but it is indeed hit by the "xe_exec_system_allocator" igt test.

Resolve this by waiting for the folio to be unlocked if the folio_trylock() fails in do_swap_page().

Rename migration_entry_wait_on_locked() to softleaf_entry_wait_unlock() and update its documentation to indicate the new use case.

Future code improvements might consider moving the lru_add_drain_all() call in migrate_device_unmap() to be called *after* all pages have migration entries inserted. That would also eliminate b) above.

v2:
- Instead of a cond_resched() in hmm_range_fault(), eliminate the problem by waiting for the folio to be unlocked in do_swap_page() (Alistair Popple, Andrew Morton)
v3:
- Add a stub migration_entry_wait_on_locked() for the !CONFIG_MIGRATION case. (Kernel Test Robot)
v4:
- Rename migrate_entry_wait_on_locked() to softleaf_entry_wait_on_locked() and update docs (Alistair Popple)
v5:
- Add a WARN_ON_ONCE() for the !CONFIG_MIGRATION version of softleaf_entry_wait_on_locked().
- Modify wording around function names in the commit message (Andrew Morton)

(cherry picked from commit a69d1ab971a624c6f112cea61536569d579c3215)
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026
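The core of the fix above is replacing a spin-on-trylock loop with a blocking wait, so the lock holder's pending work can run. A minimal userspace analogue using pthreads, where an ordinary mutex stands in for the folio lock (the names are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <pthread.h>

/* Stand-in for the folio lock contended between do_swap_page() and
 * the migration path. */
static pthread_mutex_t folio_lock = PTHREAD_MUTEX_INITIALIZER;

/* Buggy shape: `while (pthread_mutex_trylock(&folio_lock) != 0) ;`
 * spins forever, starving whatever the holder is waiting on if it is
 * scheduled on the same CPU.
 *
 * Fixed shape: try once, and on failure sleep until the lock is
 * released instead of burning the CPU. Returns 0 once acquired. */
static int acquire_for_migration(void)
{
    if (pthread_mutex_trylock(&folio_lock) == 0)
        return 0;                     /* fast path: got the lock */

    /* Slow path: block until the holder unlocks, yielding the CPU so
     * the holder's pending work items can make progress. */
    return pthread_mutex_lock(&folio_lock);
}
```

In the kernel fix the blocking wait is softleaf_entry_wait_unlock(), which sleeps until the folio is unlocked rather than retrying the trylock.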

CVE-2026-43387

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

staging: rtl8723bs: properly validate the data in rtw_get_ie_ex()

Just like in commit 154828bf9559 ("staging: rtl8723bs: fix out-of-bounds read in rtw_get_ie() parser"), we don't trust the data in the frame, so we should check the length better before acting on it.
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026
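The parser in question walks information elements (IEs), which are id/length/value records carried in untrusted frames. A minimal sketch of the validated shape, with an invented helper name (not the driver's actual rtw_get_ie_ex() signature): the advertised length must be checked against the remaining buffer before any read.

```c
#include <stddef.h>

/* Walk a buffer of IEs (1-byte id, 1-byte length, `length` bytes of
 * value) looking for `id`. Returns a pointer to the matching IE, or
 * NULL if it is absent or the buffer is malformed. */
static const unsigned char *find_ie(const unsigned char *buf, size_t len,
                                    unsigned char id)
{
    size_t i = 0;

    while (i + 2 <= len) {            /* need the id and length bytes */
        unsigned char ie_id  = buf[i];
        unsigned char ie_len = buf[i + 1];

        if (i + 2 + ie_len > len)     /* the fix: reject an IE whose */
            return NULL;              /* claimed length overruns the buffer */

        if (ie_id == id)
            return &buf[i];
        i += 2 + (size_t)ie_len;
    }
    return NULL;
}
```

Without the overrun check, a frame claiming a large `ie_len` would make the caller read past the end of the buffer, which is the out-of-bounds read class of bug being fixed.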

CVE-2026-43388

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

mm/damon/core: clear walk_control on inactive context in damos_walk()

damos_walk() sets ctx->walk_control to the caller-provided control structure before checking whether the context is running. If the context is inactive (damon_is_running() returns false), the function returns -EINVAL without clearing ctx->walk_control. This leaves a dangling pointer to a stack-allocated structure that will be freed when the caller returns.

This is structurally identical to the bug fixed in commit f9132fbc2e83 ("mm/damon/core: remove call_control in inactive contexts") for damon_call(), which had the same pattern of linking a control object and returning an error without unlinking it.

The dangling walk_control pointer can cause:
1. Use-after-free if the context is later started and kdamond dereferences ctx->walk_control (e.g., in damos_walk_cancel(), which writes to control->canceled and calls complete())
2. Permanent -EBUSY from subsequent damos_walk() calls, since the stale pointer is non-NULL

Nonetheless, the real user impact is quite limited. The use-after-free is impossible because there are no damos_walk() callers who start the context later. The permanent -EBUSY can actually confuse users, as DAMON is not running, but the symptom persists only while the context is turned off. Turning it on again makes DAMON internally use a newly generated damon_ctx object that doesn't have the invalid damos_walk_control pointer, so everything will work fine again.

Fix this by clearing ctx->walk_control under walk_control_lock before returning -EINVAL, mirroring the fix pattern from f9132fbc2e83.
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026
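The dangerous pattern here is linking a caller's stack-allocated control structure into longer-lived state and then taking an error return without unlinking it. A minimal sketch of the fixed shape, with illustrative names and no locking (the kernel fix clears the pointer under walk_control_lock):

```c
#include <stddef.h>

/* Caller-provided control, typically stack-allocated. */
struct walk_control { int canceled; };

/* Longer-lived context that remembers a pointer to the control. */
struct walk_ctx {
    int running;
    struct walk_control *walk_control;
};

/* Link the control, then validate. On the error path the pointer is
 * cleared again (the fix); leaving it set would dangle once the
 * caller's stack frame is gone. Returns -1 for the kernel's -EINVAL. */
static int do_walk(struct walk_ctx *c, struct walk_control *control)
{
    c->walk_control = control;        /* link the caller's control */

    if (!c->running) {
        c->walk_control = NULL;       /* unlink before the error return */
        return -1;
    }

    /* ... the walker consumes c->walk_control here ... */
    c->walk_control = NULL;           /* unlink on success too */
    return 0;
}
```

With the buggy version, a second call would see the stale non-NULL pointer and report busy forever, and a later dereference would be a use-after-free.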

CVE-2026-43389

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

mm: memfd_luo: always dirty all folios

A dirty folio is one which has been written to. A clean folio is its opposite. Since a clean folio has no user data, it can be freed under memory pressure.

memfd preservation with LUO saves the flag at preserve(). This is problematic. The folio might get dirtied later. Saving it at freeze() also doesn't work, since the dirty bit from the PTE is normally synced at unmap and there might still be mappings of the file at freeze().

To see why this is a problem, say a folio is clean at preserve, but gets dirtied later. The serialized state of the folio will mark it as clean. After retrieve, the next kernel will see the folio as clean and might try to reclaim it under memory pressure. This will result in losing user data.

Mark all folios of the file as dirty, and always set the MEMFD_LUO_FOLIO_DIRTY flag. This comes with the side effect of making all clean folios un-reclaimable. This is a cost that has to be paid by participants of live update. It is not expected to be a common use case to preserve a lot of clean folios anyway.

Since the value of pfolio->flags is a constant now, drop the flags variable and set it directly.
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026

CVE-2026-43390

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

nstree: tighten permission checks for listing

Even privileged services should not necessarily be able to see other privileged services' namespaces, so they can't leak information to each other. Use the may_see_all_namespaces() helper, which centralizes this policy until the nstree adapts.
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026

CVE-2026-43391

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

nsfs: tighten permission checks for handle opening

Even privileged services should not necessarily be able to see other privileged services' namespaces, so they can't leak information to each other. Use the may_see_all_namespaces() helper, which centralizes this policy until the nstree adapts.
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026

CVE-2026-43392

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

sched_ext: Fix starvation of scx_enable() under fair-class saturation

During scx_enable(), the READY -> ENABLED task switching loop changes the calling thread's sched_class from fair to ext. Since fair has higher priority than ext, saturating fair-class workloads can indefinitely starve the enable thread, hanging the system. This was introduced when the enable path switched from preempt_disable() to scx_bypass(), which doesn't protect against fair-class starvation. Note that the original preempt_disable() protection wasn't complete either - in partial switch modes, the calling thread could still be starved after preempt_enable(), as it may have been switched to the ext class.

Fix it by offloading the enable body to a dedicated system-wide RT (SCHED_FIFO) kthread, which cannot be starved by either fair- or ext-class tasks. scx_enable() lazily creates the kthread on first use and passes the ops pointer through a struct scx_enable_cmd containing the kthread_work, then synchronously waits for completion.

The workfn runs on a different kthread from sch->helper (which runs disable_work), so it can safely flush disable_work on the error path without deadlock.
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026
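The shape of the fix is offload-and-wait: run the sensitive body on a dedicated thread that the caller's own scheduling class cannot starve, and block until it finishes. A minimal userspace sketch with pthreads; the RT (SCHED_FIFO) priority aspect is omitted since it requires privileges, so only the offload-and-wait structure is shown, with invented names:

```c
#include <pthread.h>

/* Command passed to the worker, loosely mirroring the idea of
 * struct scx_enable_cmd carrying the work and its result. */
struct enable_cmd {
    int (*body)(void);   /* the enable body to run off-thread */
    int result;
};

static void *enable_workfn(void *arg)
{
    struct enable_cmd *cmd = arg;
    cmd->result = cmd->body();        /* run the body on the worker */
    return NULL;
}

/* Offload `body` to a dedicated thread and synchronously wait for it,
 * so the calling thread's scheduling class cannot starve the work.
 * Returns the body's result, or -1 if the worker could not start. */
static int enable_offload(int (*body)(void))
{
    struct enable_cmd cmd = { .body = body, .result = -1 };
    pthread_t worker;

    if (pthread_create(&worker, NULL, enable_workfn, &cmd) != 0)
        return -1;
    pthread_join(worker, NULL);       /* synchronous wait, as in the fix */
    return cmd.result;
}

static int sample_body(void) { return 42; }
```

In the kernel the worker is a lazily created SCHED_FIFO kthread, which is what actually guarantees it cannot be starved by fair- or ext-class tasks.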

CVE-2026-43393

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

btrfs: fix chunk map leak in btrfs_map_block() after btrfs_chunk_map_num_copies()

Fix a chunk map leak in btrfs_map_block(): if we return early with -EINVAL, we're not freeing the chunk map that we've just looked up.
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026
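This is the classic early-return resource leak, and the usual kernel idiom for avoiding it is a single exit label that every path funnels through. A minimal sketch with invented names (the real btrfs code releases a refcounted chunk map, not a malloc'd buffer):

```c
#include <stdlib.h>

struct chunk_map { int dummy; };

/* Stand-ins for the lookup/release pair whose calls must balance. */
static struct chunk_map *lookup_chunk_map(void)
{
    return malloc(sizeof(struct chunk_map));
}
static void free_chunk_map(struct chunk_map *m) { free(m); }

/* Every return after the lookup goes through `out`, so the early
 * -EINVAL path (the one that leaked) releases the map too.
 * Error values mirror the kernel's -ENOMEM (12) and -EINVAL (22). */
static int map_block(int mirror_num)
{
    struct chunk_map *map = lookup_chunk_map();
    int ret = 0;

    if (!map)
        return -12;                   /* nothing looked up yet */

    if (mirror_num < 0) {             /* invalid input detected late */
        ret = -22;
        goto out;                     /* the fix: release on this path too */
    }

    /* ... use map ... */
out:
    free_chunk_map(map);              /* single release point */
    return ret;
}
```

The buggy shape returned `-EINVAL` directly from the validation branch, skipping the release and leaking one reference per bad call.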

CVE-2026-43394

Publication date:
08/05/2026
In the Linux kernel, the following vulnerability has been resolved:

nfsd: Fix cred ref leak in nfsd_nl_listener_set_doit()

nfsd_nl_listener_set_doit() uses get_current_cred() without put_cred().

As we can see from other callers, svc_xprt_create_from_sa() does not require the extra refcount.

nfsd_nl_listener_set_doit() is always in process context, sendmsg(), and current->cred does not go away.

Let's use current_cred() in nfsd_nl_listener_set_doit().
Severity CVSS v4.0: Pending analysis
Last modification:
12/05/2026
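The distinction being fixed is between taking a counted reference (which must be paired with a release) and borrowing (which must not outlive the owner). A minimal userspace sketch of the two shapes; the names mirror the kernel's cred API but the structures are simplified stand-ins:

```c
/* Simplified refcounted credential object. */
struct cred { int usage; };

static struct cred task_cred = { .usage = 1 };

/* Borrow: no reference taken, valid only while the task's cred
 * cannot change out from under us (process context here). */
static const struct cred *current_cred(void) { return &task_cred; }

/* Take a reference that the caller must drop with put_cred(). */
static struct cred *get_current_cred(void)
{
    task_cred.usage++;
    return &task_cred;
}
static void put_cred(struct cred *c) { c->usage--; }

/* Fixed shape of the listener-set path: borrow the cred for the
 * duration of the call instead of taking a reference that is never
 * dropped (the leak). Returns 0 on success. */
static int listener_set(void)
{
    const struct cred *c = current_cred();  /* no refcount taken */
    (void)c;  /* ... would be passed to svc_xprt_create_from_sa() ... */
    return 0;
}
```

The buggy version called get_current_cred() here and never put_cred(), so every sendmsg() bumped the cred's usage count permanently.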