
Need help with DPDK using Linux hugepages

BHu7
Beginner

rte_mempool_create needs contiguous memory, and we use 2 MB hugepages. We want to get 1 GB of contiguous memory or more; currently we need 1 GB.

We tried reserving 1024 pages of 2 MB, but the DPDK code for 32-bit (which we currently run) is limited to using 1 GB per page size, so it only takes the first 512 pages:

#ifndef RTE_ARCH_X86_64
	/* for 32-bit systems, limit number of hugepages to 1GB per page size */
	hpi->num_pages[0] = RTE_MIN(hpi->num_pages[0],
	                            RTE_PGSIZE_1G / hpi->hugepage_sz);
#endif

We found that these 512 pages are contiguous, so we do get 1 GB of contiguous memory. But the other 512 pages are wasted, since our application will not use them.

Then we tried reserving only 512 pages, but this time we could not get 1 GB contiguous. In fact, even when we reserve 850 or 750 pages, or some other number smaller than 1024, we cannot get 1 GB of contiguous memory.

Does anyone know why, and is there any way to work around it? We only want 1 GB of contiguous memory, but we do not want to over-reserve pages.
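For reference, a minimal sketch of how such a reservation looks when done at runtime through sysfs (the count 1024 is just one of the values we tried; the sysfs path is the standard kernel location for 2 MB pages):

echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # reserve N pages of 2 MB
grep Huge /proc/meminfo                                              # check what was actually reserved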

idata
Employee

Hi HuBugui,

Welcome to the Intel® Embedded Community.

We are working on getting an answer to your question. Have a great day and we'll be talking with you soon!

Best Regards,

Leon

Muthurajan_J_Intel

Hi,

Thank you for using DPDK.

A general observation regarding huge pages:

The allocation of hugepages should be done at boot time or as soon as possible after system boot to prevent memory from being fragmented in physical memory. To reserve hugepages at boot time, a parameter is passed to the Linux* kernel on the kernel command line.
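For example, a minimal sketch for 2 MB pages (the page count is illustrative; adjust it to the amount of memory you need) would be to append the following to the kernel command line:

hugepages=512

With a 2 MB default hugepage size this reserves 512 x 2 MB = 1 GB at boot, before physical memory has had a chance to fragment.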

As you may be aware, Intel processors support two huge page sizes: 2 MB and 1 GB.

Since you want 1 GB of contiguous memory, we recommend using the 1 GB huge page size instead of 2 MB.

Specify one 1 GB huge page in GRUB.

For 1G pages, the size must be specified explicitly and can also be optionally set as the default hugepage size for the system. For example, to reserve 1G of hugepage memory in the form of one 1G page, the following options should be passed to the kernel:

default_hugepagesz=1G hugepagesz=1G hugepages=1

Please refer to section 2.3 and its subsections in http://dpdk.org/doc/intel/dpdk-start-linux-1.7.0.pdf
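As a sketch of the accompanying step described there (the mount point name is just the convention used in the guide), after rebooting with the options above you would mount hugetlbfs so that DPDK can map the 1 GB pages:

mkdir -p /mnt/huge_1GB
mount -t hugetlbfs nodev /mnt/huge_1GB -o pagesize=1GB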

Also, let us know the following:

1) What is your kernel version?

2) How many sockets does your machine have?

3) Please let us know the output of cat /proc/cpuinfo

Thanks,

BHu7
Beginner

Hi,

Thanks a lot for the kind help.

We are running virtual machine on KVM. Kernel version is 2.6.32-220.el6.adx.x86_64.

It seems the KVM guest is unable to get the pdpe1gb flag from the host CPU, so it does not support the 1 GB hugepage size. :-(

I found that someone else has seen this issue: https://bugs.launchpad.net/qemu/+bug/1248959 (Bug #1248959, "pdpe1gb flag is missing in guest running on Intel ...")

Even when I set "Copy host CPU configuration" in KVM, the pdpe1gb flag is shown as "require".
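For reference, this is how I check for the flag on both the host and the guest (just a grep over /proc/cpuinfo; it prints pdpe1gb once if the flag is present and nothing otherwise):

grep -o pdpe1gb /proc/cpuinfo | sort -u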

The guest has 1 socket with 2 cores.

[root@VirtualADX ~]# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel Xeon E312xx (Sandy Bridge)
stepping        : 1
cpu MHz         : 2594.210
cache size      : 4096 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx lm constant_tsc unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep
bogomips        : 5188.42
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

(processor 1 reports identical values, differing only in the processor number)

Thanks

BHu7
Beginner

The host is a 12-core machine with the same kernel, but it may change.

[root@centos-139277 ~]# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 62
model name      : Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
stepping        : 4
cpu MHz         : 1200.000
cache size      : 15360 KB
physical id     : 0
siblings        : 12
core id         : 0
cpu cores       : 6
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 x2apic popcnt aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips        : 5188.42
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

Natalie_Z_Intel
Employee

Hi! Would you post the qemu/kvm command lines you use to start the VMs, and make sure you used the options below to assign host huge pages to the guest as guest huge pages: -mem-path /mnt/huge -mem-prealloc
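For illustration only, the hugepage-related portion of such a command line might look like this (all other options omitted; /mnt/huge must already be a hugetlbfs mount on the host):

qemu-kvm -m 4096 -mem-path /mnt/huge -mem-prealloc ...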

BHu7
Beginner

I tried using virt-manager and also command-line qemu-kvm with -mem-path and -mem-prealloc; it does not seem to help:

On host:

HugePages_Total:       6
HugePages_Free:        3
HugePages_Rsvd:        1
HugePages_Surp:        0
Hugepagesize:    1048576 kB
DirectMap4k:        6144 kB
DirectMap2M:     2058240 kB
DirectMap1G:    14680064 kB

start guest:

/usr/libexec/qemu-kvm -cpu host \
    -drive file=/root/img/centtos.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
    -m 4096 -smp 2,cores=1,threads=1,sockets=2 --enable-kvm -name "os2" \
    -nographic -vnc :3 -monitor unix:/tmp/vm1monitor,server,nowait \
    -net none -no-reboot \
    -mem-path /mnt/huge_1GB -mem-prealloc \
    -netdev type=tap,id=net1,script=no,downscript=no,ifname=ovsvhost80,vhost=on \
    -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
    -netdev type=tap,id=net2,script=no,downscript=no,ifname=ovsvhost81,vhost=on \
    -device virtio-net-pci,netdev=net2,mac=00:00:00:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

The guest uses 2 MB hugepages with 512 pages reserved, but still only gets 313 contiguous pages.

DannyYigan_Z_Intel

Can you use a relatively new kernel in your VMs?

DannyYigan_Z_Intel

One more suggestion is to post your DPDK-related questions on the dpdk.org mailing list, which is the official DPDK open-source community; there are plenty of DPDK experts and users there who might be able to provide suggestions for your issue.
