CVE-2026-31648

Severity CVSS v4.0:
Pending analysis
Type:
CWE-190: Integer Overflow or Wraparound
Publication date:
24/04/2026
Last modified:
27/04/2026

Description

In the Linux kernel, the following vulnerability has been resolved:

mm: filemap: fix nr_pages calculation overflow in filemap_map_pages()

When running stress-ng on my Arm64 machine with a v7.0-rc3 kernel, I
encountered some very strange crashes showing up as "Bad page state":

"
[  734.496287] BUG: Bad page state in process stress-ng-env  pfn:415735fb
[  734.496427] page: refcount:0 mapcount:1 mapping:0000000000000000 index:0x4cf316 pfn:0x415735fb
[  734.496434] flags: 0x57fffe000000800(owner_2|node=1|zone=2|lastcpupid=0x3ffff)
[  734.496439] raw: 057fffe000000800 0000000000000000 dead000000000122 0000000000000000
[  734.496440] raw: 00000000004cf316 0000000000000000 0000000000000000 0000000000000000
[  734.496442] page dumped because: nonzero mapcount
"

After analyzing this page's state, it is hard to understand why the
mapcount is not 0 while the refcount is 0, since this page is not where
the issue first occurred.
By enabling the CONFIG_DEBUG_VM config, I could reproduce the crash as
well and captured the first warning where the issue appears:

"
[  734.469226] page: refcount:33 mapcount:0 mapping:00000000bef2d187 index:0x81a0 pfn:0x415735c0
[  734.469304] head: order:5 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[  734.469315] memcg:ffff000807a8ec00
[  734.469320] aops:ext4_da_aops ino:100b6f dentry name(?):"stress-ng-mmaptorture-9397-0-2736200540"
[  734.469335] flags: 0x57fffe400000069(locked|uptodate|lru|head|node=1|zone=2|lastcpupid=0x3ffff)
......
[  734.469364] page dumped because: VM_WARN_ON_FOLIO((_Generic((page + nr_pages - 1),
const struct page *: (const struct folio *)_compound_head(page + nr_pages - 1), struct page *:
(struct folio *)_compound_head(page + nr_pages - 1))) != folio)
[  734.469390] ------------[ cut here ]------------
[  734.469393] WARNING: ./include/linux/rmap.h:351 at folio_add_file_rmap_ptes+0x3b8/0x468,
CPU#90: stress-ng-mlock/9430
[  734.469551] folio_add_file_rmap_ptes+0x3b8/0x468 (P)
[  734.469555] set_pte_range+0xd8/0x2f8
[  734.469566] filemap_map_folio_range+0x190/0x400
[  734.469579] filemap_map_pages+0x348/0x638
[  734.469583] do_fault_around+0x140/0x198
......
[  734.469640] el0t_64_sync+0x184/0x188
"

The code that triggers the warning is "VM_WARN_ON_FOLIO(page_folio(page +
nr_pages - 1) != folio, folio)", which indicates that set_pte_range()
tried to map beyond the large folio's size.

By adding more debug information, I found that 'nr_pages' had overflowed
in filemap_map_pages(), causing set_pte_range() to establish mappings for
a range exceeding the folio size, potentially corrupting fields of pages
that do not belong to this folio (e.g., page->_mapcount).

After the above analysis, I think the possible race is
as follows:

CPU 0                                                   CPU 1
filemap_map_pages()                                     ext4_setattr()
  // get and lock folio with old inode->i_size
  next_uptodate_folio()

  ......
                                                          // shrink the inode->i_size
                                                          i_size_write(inode, attr->ia_size);

  // calculate end_pgoff with the new inode->i_size
  file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1;
  end_pgoff = min(end_pgoff, file_end);

  ......
  // nr_pages can overflow, since xas.xa_index > end_pgoff
  end = folio_next_index(folio) - 1;
  nr_pages = min(end, end_pgoff) - xas.xa_index + 1;

  ......
  // map large folio
  filemap_map_folio_range()
                                                          ......
                                                          // truncate folios
                                                          truncate_pagecache(inode, inode->i_size);

To fix this issue, move the 'end_pgoff' calculation before
next_uptodate_folio(), so the retrieved folio stays consistent with the
file end to avoid
---truncated---

Vulnerable products and versions

CPE From Up to
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 6.1.159 (including) 6.2 (excluding)
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 6.6.117 (including) 6.6.135 (excluding)
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 6.12.1 (including) 6.12.82 (excluding)
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 6.13 (including) 6.18.23 (excluding)
cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* 6.19 (including) 6.19.13 (excluding)
cpe:2.3:o:linux:linux_kernel:6.12:-:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:7.0:rc1:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:7.0:rc2:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:7.0:rc3:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:7.0:rc4:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:7.0:rc5:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:7.0:rc6:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:7.0:rc7:*:*:*:*:*:*