Acknowledged
Created: Aug 24, 2025
Updated: Aug 26, 2025
Found In Version: 10.24.33.1
Severity: Standard
Applicable for: Wind River Linux LTS 24
Component/s: Kernel
In the Linux kernel, the following vulnerability has been resolved:

arm64/entry: Mask DAIF in cpu_switch_to(), call_on_irq_stack()

`cpu_switch_to()` and `call_on_irq_stack()` manipulate SP to change to different stacks, along with the Shadow Call Stack if it is enabled. Those two stack changes cannot be done atomically, and both functions can be interrupted by SErrors or Debug Exceptions, which, though unlikely, is very much broken: if interrupted, we can end up with mismatched stacks and Shadow Call Stack, leading to clobbered stacks.

In `cpu_switch_to()`, it can happen when SP_EL0 points to the new task, but x18 still points to the old task's SCS. When the interrupt handler tries to save the task's SCS pointer, it will save the old task's SCS pointer (x18) into the new task struct (pointed to by SP_EL0), clobbering it.

In `call_on_irq_stack()`, it can happen when switching from the task stack to the IRQ stack and when switching back. In both cases, we can be interrupted when the SCS pointer points to the IRQ SCS, but SP points to the task stack. The nested interrupt handler pushes its return addresses on the IRQ SCS. It then detects that SP points to the task stack, calls `call_on_irq_stack()`, and clobbers the task SCS pointer with the IRQ SCS pointer, which it will also use!

This leads to tasks returning to addresses on the wrong SCS, or even on the IRQ SCS, triggering kernel panics via CONFIG_VMAP_STACK or FPAC if enabled.

This is possible on a default config, but unlikely. However, when enabling CONFIG_ARM64_PSEUDO_NMI, DAIF is unmasked and the GIC is instead responsible for filtering which interrupts the CPU should receive based on priority. Given the goal of emulating NMIs, pseudo-NMIs can be received by the CPU even in `cpu_switch_to()` and `call_on_irq_stack()`, possibly *very* frequently depending on the system configuration and workload, leading to unpredictable kernel panics.

Completely mask DAIF in `cpu_switch_to()` and restore it when returning. Do the same in `call_on_irq_stack()`, but restore and mask around the branch. Mask DAIF even if CONFIG_SHADOW_CALL_STACK is not enabled, for consistency of behaviour between all configurations.

Introduce and use an assembly macro for saving and masking DAIF, as the existing one saves but only masks IF.
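The fix described above can be sketched as an assembler macro plus its use around the stack switch. This is a hedged illustration: the macro names and immediates below follow the commit's description of "saving and masking DAIF" versus the existing save-but-mask-only-IF helper, and are an assumption rather than a verbatim copy of the upstream patch.

```asm
/* Sketch only: names and encodings are assumptions based on the commit
 * text, not the exact upstream diff.
 *
 * The pre-existing helper saves DAIF but masks only the I and F bits: */
	.macro	save_and_disable_irq, flags
	mrs	\flags, daif		// save the current DAIF state
	msr	daifset, #3		// mask IRQ (I) and FIQ (F) only
	.endm

/* The new helper masks all four bits (Debug, SError, IRQ, FIQ), so
 * neither SErrors nor Debug Exceptions can land mid stack-switch: */
	.macro	save_and_disable_daif, flags
	mrs	\flags, daif		// save the current DAIF state
	msr	daifset, #0xf		// mask D, A, I and F
	.endm

	.macro	restore_irq, flags
	msr	daif, \flags		// restore the saved DAIF state
	.endm

/* Usage pattern in cpu_switch_to(): mask before touching SP and x18,
 * restore once the stack and shadow call stack are consistent again. */
	save_and_disable_daif x8
	// ... update SP_EL0 and the SCS pointer (x18) for the new task ...
	restore_irq x8
```

With DAIF fully masked, the window in which SP and x18 reference different stacks can no longer be interrupted, including by pseudo-NMIs when CONFIG_ARM64_PSEUDO_NMI is enabled.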
CVE-2025-38670 (https://nvd.nist.gov/vuln/detail/CVE-2025-38670)