Architecture brief:
*******************
- On a WRL8 hypervisor, a WRL6 guest is run.
- On the hypervisor, 10 x 1G huge pages are created and 4 of them are used as backing memory for the guest.
- The guest OS creates 1024 x 2M huge pages out of its 4G of memory.
- OVS is run on the hypervisor with 1 x 1G huge page as DPDK memory.
- A DPDK application (srxpfe) is run in the guest; it uses 256 x 2M huge pages for DPDK packet buffers and the remaining 768 x 2M huge pages for non-DPDK purposes.
- The OVS on the hypervisor and the DPDK application in the guest communicate through the shared huge page memory.

Problem:
********
- After shutting down the guest, only 3 of the 4 x 1G huge pages backing it are returned to the free pool instead of all 4.

Hypervisor details:
*******************
root@local-node:~# uname -a
Linux local-node 4.1.27-rt30-WR8.0.0.25_ovp #1 SMP Sat Dec 14 20:48:11 PST 2019 x86_64 x86_64 x86_64 GNU/Linux

root@local-node:~# cat /etc/os-release
ID="wrlinux"
NAME="Wind River Linux"
VERSION="8.0.0.25 (OVP)"
VERSION_ID="8.0.0.25"
PRETTY_NAME="Wind River Linux 8.0.0.25 (OVP)"

<MORE DETAILS CAN BE FOUND IN THE ATTACHMENT>

Guest details:
**************
root@localhost:~# uname -a
Linux localhost 3.10.55-ltsi-WR6.0.0.15_standard #1 SMP PREEMPT Mon Apr 8 17:41:22 PDT 2019 x86_64 GNU/Linux

root@localhost:~# cat /etc/host-version
HOST_VERSION=1.0.0

<MORE DETAILS CAN BE FOUND IN THE ATTACHMENT>

Check details on the hypervisor while the guest is running:
***********************************************************
The get-mem-data.sh script is described in the attached file; to see just the 1G free count you can run:

$ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages

root@local-node:~# ./get-mem-data.sh
1G total = 10
1G free = 5
2M total = 156
2M free = 156

Destroy the guest and check details on the hypervisor again:
************************************************************
root@local-node:~# virsh destroy vsrx
Domain vsrx destroyed

root@local-node:~# ./get-mem-data.sh
1G total = 10
1G free = 8
2M total = 156
2M free = 156
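
With the guest running, 5 of the 10 x 1G pages are in use (1 for OVS DPDK memory, 4 backing the guest); after virsh destroy the 1G free count should therefore return to 9, but it only reaches 8, i.e. one 1G page is not released.

For reference, the values printed above can be read directly from sysfs. The attached get-mem-data.sh is not reproduced here; the following is only a minimal sketch, assuming the script simply prints the standard per-size hugepage counters:

#!/bin/sh
# Minimal sketch (assumed behaviour, not the attached script): print the
# total and free counts for the 1G and 2M hugepage pools from sysfs.
HP=/sys/kernel/mm/hugepages
echo "1G total = $(cat $HP/hugepages-1048576kB/nr_hugepages)"
echo "1G free = $(cat $HP/hugepages-1048576kB/free_hugepages)"
echo "2M total = $(cat $HP/hugepages-2048kB/nr_hugepages)"
echo "2M free = $(cat $HP/hugepages-2048kB/free_hugepages)"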