Re: hugepages question

From: Mark J. Bobak <mark_at_bobak.net>
Date: Fri, 15 Feb 2019 14:35:39 -0500
Message-ID: <CAFQ5ACKpwd_U7iEytMhyXN-HPSYJqRx6gvJpHSrtKUGdZ-3X5w_at_mail.gmail.com>



Hehe.....yeah, Tony gave me a sweet deal! ;-)

On Fri, Feb 15, 2019 at 1:40 PM Mark W. Farnham <mwf_at_rsiz.com> wrote:

> LOL. Did you have to pay Tony Soprano for all the extra garbage runs?
>
>
>
> Great story.
>
>
>
> From: oracle-l-bounce_at_freelists.org [mailto:
> oracle-l-bounce_at_freelists.org] On Behalf Of Mark J. Bobak
> Sent: Friday, February 15, 2019 11:25 AM
> To: jrodriguez2_at_pythian.com
> Cc: Backseat DBA; Andy Wattenhofer; oracle-l-freelist; Paul Drake;
> santos_at_pobox.com
> Subject: Re: hugepages question
>
>
>
> Once you allocate hugepages, that memory can't be used for any other
> purpose. If you've planned out how the server will be used, that's probably
> fine.
>
>
>
> Also, to add more hugepages without a reboot, you can (theoretically) run
> 'sudo sysctl -p' and then 'cat /proc/meminfo' to see whether all the newly
> requested hugepages were allocated. If not, run 'sudo sysctl -p' again, and
> check /proc/meminfo again.
>
>
>
> I remember at least one time in the past when I wanted to add a new DB and
> couldn't afford a bounce of the server, because of other production DBs on
> it. I wrote a simple script that ran "grep HugePages_Total
> /proc/meminfo |awk -F: '{ print $2 }'" and kept calling 'sudo sysctl -p'
> over and over, until the output of that command was equal to the value in
> '/etc/sysctl.conf'. It took a few minutes, but it worked.
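>
> Something like this (an untested sketch from memory, assuming bash and that
> vm.nr_hugepages has already been raised in /etc/sysctl.conf):
>
> #!/bin/bash
> # Target page count from /etc/sysctl.conf
> TARGET=$(awk -F= '/^vm.nr_hugepages/ {print $2}' /etc/sysctl.conf)
> # Keep re-applying sysctl settings until all pages are allocated
> while true; do
>     CURRENT=$(awk -F: '/HugePages_Total/ {gsub(/ /,"",$2); print $2}' /proc/meminfo)
>     [ "$CURRENT" -ge "$TARGET" ] && break
>     sudo sysctl -p > /dev/null
>     sleep 5
> done
> echo "All $TARGET hugepages allocated."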
>
>
>
> -Mark
>
>
>
>
>
> On Thu, Feb 14, 2019 at 11:53 AM Jose Rodriguez <jrodriguez2_at_pythian.com>
> wrote:
>
> I am curious to know why over-allocating HP is a bad idea.
>
> I mean, if you know you will end up increasing the size of an SGA, or the
> number of SGAs on the server, why not allocate HP ahead of time?
>
> Save some RAM for the OS, some for PGA, and the rest should go to SGA; that
> 'rest' will be in HP sooner or later, so do it sooner.
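>
> For example (hypothetical numbers): on a 256 GB host, keep ~16 GB for the
> OS, budget ~40 GB for PGA, and put the remaining ~200 GB into HP up front,
> even if today's SGAs only need part of it.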
>
> Does it make sense? Or am I missing something?
>
>
> Jose Rodriguez | Oracle Project Engineer | Pythian
> t +1 613 565 8696 ext. 1393
> m +34 607 55 49 91
> jrodriguez2_at_pythian.com
> www.pythian.com <https://www.pythian.com/>
>
>
>
>
>
> On Thu, 14 Feb 2019 at 17:19, Jeff Chirco <backseatdba_at_gmail.com> wrote:
>
> Hi Andy, yes I did reboot. I originally set it to 81000 and then
> rebooted. Yesterday I tried to increase it to 91000 without a reboot.
>
>
>
> Since it requires a reboot, that's kind of a bummer if I ever need to add
> another database to a server: I'd have to bring everything down to increase
> hugepages. I could over-allocate hugepages, but I've read that's a bad
> idea.
>
>
>
> Jeff
>
>
>
> On Thu, Feb 14, 2019 at 8:05 AM Andy Wattenhofer <watt0012_at_umn.edu> wrote:
>
> Did you reboot after the last change to hugepages?
>
>
>
> [root]# grep nr_hugepages /etc/sysctl.conf
>
> vm.nr_hugepages=91000
>
> [...]
>
> [root]# cat /proc/meminfo
>
> HugePages_Total: 81793
>
> HugePages_Free: 8
> HugePages_Rsvd: 7
> HugePages_Surp: 0
>
>
>
> Yes, you must reboot for 91000 to take effect. The huge page RAM is
> reserved at boot time.
>
>
>
> Andy
>
>
>
>
>
> On Thu, Feb 14, 2019 at 9:15 AM Jeff Chirco <backseatdba_at_gmail.com> wrote:
>
> Thanks for the replies. Forgot to mention this is Oracle Linux 7 running
> 12.2.0.1. These outputs are from my development server, but production is
> identical except the hugepages allocation is lower because it has fewer
> databases. Development and production both have 256 GB of RAM.
>
> I also noticed that when I first set hugepages using the command below, it
> increased hugepages, but not to the amount I tried to set. Not until a
> server reboot did it get set to the full amount. If I need to increase it
> again, I would prefer not to have to reboot the server.
>
> # sysctl -w vm.nr_hugepages=value
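>
> (To see how many pages actually got allocated vs. requested, something like
> 'grep HugePages_Total /proc/meminfo' after each attempt works.)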
>
>
>
> Regarding hugepages_settings.sh, I did follow Tim's notes, but I ended up
> finding an updated script under Doc ID 401749.1.
>
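> For anyone without MOS access, the core idea of that script is roughly this
> (a simplified sketch, not the exact MOS code):
>
> #!/bin/bash
> # Hugepage size in kB, from /proc/meminfo
> HPG_SZ=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
> # Sum the sizes in bytes (column 5) of all shared memory segments
> SEG_BYTES=$(ipcs -m | awk '$5 ~ /^[0-9]+$/ {sum += $5} END {print sum}')
> # Pages needed = total kB / hugepage size in kB, rounded up
> echo "Recommended: vm.nr_hugepages = $(( (SEG_BYTES / 1024 + HPG_SZ - 1) / HPG_SZ ))"
>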
>
> [root]# grep memlock /etc/security/limits.conf
> # - memlock - max locked-in-memory address space (KB)
> oracle hard memlock 237265300
> # oracle-rdbms-server-11gR2-preinstall setting for memlock hard limit is
> maximum of 128GB on x86_64 or 3GB on x86 OR 90 % of RAM
> oracle soft memlock 237265300
>
> [root]# grep nr_hugepages /etc/sysctl.conf
> vm.nr_hugepages=91000
>
>
> [root]# ulimit -l
> 64
>
> [root]# ulimit -a
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) 1029648
> max locked memory (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files (-n) 1024
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority (-r) 0
> stack size (kbytes, -s) 8192
> cpu time (seconds, -t) unlimited
> max user processes (-u) 1029648
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited
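>
> (Note: those ulimit values are root's; the memlock entries in limits.conf
> apply per user at login, so the oracle user's actual limit is better
> checked with something like: su - oracle -c 'ulimit -l')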
>
>
> Alert log from one of my databases
> **********************************************************************
> 2019-02-12T21:26:58.786205-08:00
> Dump of system resources acquired for SHARED GLOBAL AREA (SGA)
>
> 2019-02-12T21:26:58.790788-08:00
> Per process system memlock (soft) limit = 226G
> 2019-02-12T21:26:58.793885-08:00
> Expected per process system memlock (soft) limit to lock
> SHARED GLOBAL AREA (SGA) into memory: 8196M
> 2019-02-12T21:26:58.800179-08:00
> Available system pagesizes:
> 4K, 2048K
> 2019-02-12T21:26:58.806848-08:00
> Supported system pagesize(s):
> 2019-02-12T21:26:58.809846-08:00
> PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
> 2019-02-12T21:26:58.813115-08:00
> 2048K 65468 4098 4098 NONE
> 2019-02-12T21:26:58.816156-08:00
> Reason for not supporting certain system pagesizes:
> 2019-02-12T21:26:58.819060-08:00
> 4K - Large pagesizes only
> 2019-02-12T21:26:58.822058-08:00
> **********************************************************************
>
>
>
> [root]# ipcs -m
>
> ------ Shared Memory Segments --------
> key shmid owner perms bytes nattch status
> 0x00000000 589824 oracle 600 10485760 78
> 0x00000000 622593 oracle 600 3372220416 78
> 0x00000000 655362 oracle 600 8388608 78
> 0x76c55318 688131 oracle 600 2097152 78
> 0x00000000 11862020 oracle 600 10485760 120
> 0x00000000 11894789 oracle 600 3204448256 120
> 0x00000000 11927558 oracle 600 8388608 120
> 0x34ef509c 11960327 oracle 600 2097152 120
> 0x00000000 11993096 oracle 600 10485760 95
> 0x00000000 12025865 oracle 600 2298478592 95
> 0x00000000 12058634 oracle 600 8388608 95
> 0x507a78a4 12091403 oracle 600 2097152 95
> 0x00000000 12124172 oracle 600 10485760 207
> 0x00000000 12156941 oracle 600 6425673728 207
> 0x00000000 12189710 oracle 600 8388608 207
> 0x866b8aa0 12222479 oracle 600 2097152 207
> 0x00000000 12255248 oracle 600 12582912 81
> 0x00000000 12288017 oracle 600 8573157376 81
> 0x00000000 12320786 oracle 600 6291456 81
> 0x0c51f55c 12353555 oracle 600 2097152 81
> 0x00000000 12386324 oracle 600 12582912 79
> 0x00000000 12419093 oracle 600 8573157376 79
> 0x00000000 12451862 oracle 600 6291456 79
> 0x7afafd64 12484631 oracle 600 2097152 79
> 0x00000000 12779544 oracle 600 12582912 90
> 0x00000000 12812313 oracle 600 8573157376 90
> 0x00000000 12845082 oracle 600 6291456 90
> 0xdde48098 12877851 oracle 600 2097152 90
> 0x00000000 12910620 oracle 600 12582912 86
> 0x00000000 12943389 oracle 600 8573157376 86
> 0x00000000 12976158 oracle 600 6291456 86
> 0xbae4e74c 13008927 oracle 600 2097152 86
> 0x00000000 13303840 oracle 600 12582912 85
> 0x00000000 13336609 oracle 600 8573157376 85
> 0x00000000 13369378 oracle 600 6291456 85
> 0x4c7ac944 13402147 oracle 600 2097152 85
> 0x00000000 13434916 oracle 600 12582912 86
> 0x00000000 13467685 oracle 600 8573157376 86
> 0x00000000 13500454 oracle 600 6291456 86
> 0xf06d661c 13533223 oracle 600 2097152 86
> 0x00000000 13828136 oracle 600 12582912 87
> 0x00000000 13860905 oracle 600 8573157376 87
> 0x00000000 13893674 oracle 600 6291456 87
> 0xdb5acf68 13926443 oracle 600 2097152 87
> 0x00000000 13959212 oracle 600 12582912 87
> 0x00000000 13991981 oracle 600 8573157376 87
> 0x00000000 14024750 oracle 600 6291456 87
> 0xcdd9f634 14057519 oracle 600 2097152 87
> 0x00000000 14352432 oracle 600 10485760 102
> 0x00000000 14385201 oracle 600 6425673728 102
> 0x00000000 14417970 oracle 600 8388608 102
> 0x4bd6517c 14450739 oracle 600 2097152 102
> 0x00000000 14483508 oracle 600 10485760 116
> 0x00000000 14516277 oracle 600 6425673728 116
> 0x00000000 14549046 oracle 600 8388608 116
> 0x33ec8074 14581815 oracle 600 2097152 116
> 0x00000000 14876728 oracle 600 12582912 87
> 0x00000000 14909497 oracle 600 8573157376 87
> 0x00000000 14942266 oracle 600 6291456 87
> 0xb40541dc 14975035 oracle 600 2097152 87
> 0x00000000 1179975740 oracle 600 12582912 90
> 0x00000000 1180008509 oracle 600 8455716864 90
> 0x00000000 1180041278 oracle 600 34359738368 90
> 0x00000000 1180074047 oracle 600 123731968 90
> 0x00000000 15401024 oracle 600 12582912 91
> 0x00000000 15433793 oracle 600 8573157376 91
> 0x00000000 15466562 oracle 600 6291456 91
> 0x36329238 15499331 oracle 600 2097152 91
> 0x00000000 15532100 oracle 600 12582912 118
> 0x00000000 15564869 oracle 600 8573157376 118
> 0x00000000 15597638 oracle 600 6291456 118
> 0xa0206b88 15630407 oracle 600 2097152 118
> 0x00000000 16449608 oracle 600 12582912 86
> 0x00000000 16482377 oracle 600 8573157376 86
> 0x00000000 16515146 oracle 600 6291456 86
> 0xc402bb18 16547915 oracle 600 2097152 86
> 0x00000000 16318540 oracle 600 12582912 87
> 0x00000000 16351309 oracle 600 8573157376 87
> 0x00000000 16384078 oracle 600 6291456 87
> 0x261e431c 16416847 oracle 600 2097152 87
> 0x00000000 16187472 oracle 600 12582912 168
> 0x00000000 16220241 oracle 600 20937965568 168
> 0x00000000 16253010 oracle 600 56623104 168
> 0x35b8fa94 16285779 oracle 600 2097152 168
> 0x00000000 547127380 oracle 600 12582912 85
> 0x00000000 547160149 oracle 600 251658240 85
> 0x00000000 547192918 oracle 600 8321499136 85
> 0x00000000 547225687 oracle 600 4595712 85
> 0x9cd83398 547258456 oracle 600 40960 85
> 0x00000000 792199257 oracle 600 10485760 106
> 0x00000000 792232026 oracle 600 1644167168 106
> 0x00000000 792264795 oracle 600 4781506560 106
> 0x00000000 792297564 oracle 600 8388608 106
> 0xc78a1c8c 792330333 oracle 600 32768 106
> 0x6762e034 1180106846 oracle 600 28672 90
>
> [root]# cat /proc/meminfo
> MemTotal: 263623524 kB
> MemFree: 2169856 kB
> MemAvailable: 25248596 kB
> Buffers: 28 kB
> Cached: 68234496 kB
> SwapCached: 4392 kB
> Active: 56288772 kB
> Inactive: 32305392 kB
> Active(anon): 38139416 kB
> Inactive(anon): 28588332 kB
> Active(file): 18149356 kB
> Inactive(file): 3717060 kB
> Unevictable: 0 kB
> Mlocked: 0 kB
> SwapTotal: 16777212 kB
> SwapFree: 16732780 kB
> Dirty: 1020 kB
> Writeback: 0 kB
> AnonPages: 20491988 kB
> Mapped: 10605612 kB
> Shmem: 46232620 kB
> Slab: 2937544 kB
> SReclaimable: 2197872 kB
> SUnreclaim: 739672 kB
> KernelStack: 68544 kB
> PageTables: 1451148 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 64832940 kB
> Committed_AS: 93046052 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed: 772444 kB
> VmallocChunk: 34358900732 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 0 kB
> CmaTotal: 0 kB
> CmaFree: 0 kB
> HugePages_Total: 81793
> HugePages_Free: 8
> HugePages_Rsvd: 7
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> DirectMap4k: 514768 kB
> DirectMap2M: 41201664 kB
> DirectMap1G: 228589568 kB
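>
> (That's 81793 pages x 2048 kB, i.e. roughly 160 GB of the 256 GB reserved
> in hugepages.)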
>
>
>
> On Thu, Feb 14, 2019 at 6:50 AM Jeff Chirco <backseatdba_at_gmail.com> wrote:
>
> Oops, sorry, yes, Oracle Linux 7.4.
>
>
>
> On Wed, Feb 13, 2019 at 4:31 PM Paul Drake <bdbafh_at_gmail.com> wrote:
>
> OS info might be relevant.
>
>
>
> Assuming Linux:
>
>
>
> ipcs -m
>
> cat /proc/meminfo
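>
> Maybe also watch the hugepages counters while you drop and recreate the
> clone, e.g. (simple sketch):
>
> watch -n 5 'grep HugePages /proc/meminfo'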
>
>
>
>
>
>
>
> On Wed, Feb 13, 2019, 7:12 PM Jeff Chirco <backseatdba_at_gmail.com> wrote:
>
> Ok, I just recently enabled hugepages on production and development.
> Everything is going well, but I noticed on my development environment,
> which consists of a bunch of thin clones, that when I remove and recreate
> the same clone, the database fails to create with:
>
> ORA-27137: unable to allocate large pages to create a shared memory segment
>
> I have the parameter use_large_pages=ONLY
>
> So it makes sense that I would get this error when there are not enough
> pages. However, I just removed this same database and am recreating it. I
> would think that would free the pages up, but apparently not. Do pages not
> get released immediately when a database is removed? Is there something I
> should run to free them? Or should I just not have use_large_pages=ONLY
> and leave it at TRUE on dev? I had to set it to TRUE to get this database
> up.
>
>
>
> Thanks,
>
> Jeff
>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Feb 15 2019 - 20:35:39 CET
