Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: sysmap_64bit: rmap ovflo, lost

From: Don Morris <dmorris_at_cup.hp.com>
Date: Thu, 27 Apr 2006 14:43:16 GMT
Message-ID: <8M44g.6912$8_.4737@news.cpqcorp.net>


joel-garry_at_home.com wrote:
> helpful gurus:
>
> hp-ux 11.11 4 processor rpr3340 box crashed last night. Trying to
> figure out how to prevent this in the future. Oracle 9.2.0.6.
>
> The uptime was a little over 2 months. Looking at syslog, I see lots
> (>17K lines) of:
> Apr 25 20:47:34 ZEUS vmunix: sysmap_64bit: rmap ovflo, lost
> [68419543,68419559)

Ouch... kernel virtual address space is so fragmented that free ranges are getting dropped on the floor.

Next chance you get, double the nsysmap64 kernel tunable -- that's a workaround... not a definite fix.

More information:
http://docs.hp.com/en/TKP-90202/re68.html

>
> They started at the exact time my Oracle RMAN backup started. The
> script that does the backup does a number of things, such as remove old
> backup files, run the RMAN script (nocatalog), then compress some of
> the backup files onto an nfs device. The RMAN completed, system
> crashed during compress.

My bet? This backup results in lots of little I/O buffers being created which are freed in a very asynchronous manner -- with more than a few taking a *long* time to be freed.

The easiest way for kernel virtual address space to get fragmented is to have lots of little dynamic allocations made... and then lots of little pieces freed back that can't coalesce into their original larger ranges because pieces are still missing. I/O buffers (being small bits of kernel dynamic memory) are unsurprisingly good at this.
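The mechanics above can be sketched with a toy resource map. This is purely illustrative -- the class, sizes, and capacity here are invented; the real HP-UX sysmap_64bit is a fixed-size kernel table whose entry count is governed by the nsysmap64 tunable -- but it shows why non-adjacent frees fill the map and why an overflowing map "loses" address ranges:

```python
# Toy model of a kernel free-range map ("resource map"). Interleaved small
# frees never coalesce, so each one consumes a map slot until the map
# overflows and further freed ranges are simply dropped -- the analogue of
# the "sysmap_64bit: rmap ovflo, lost [a,b)" messages in syslog.

class ResourceMap:
    def __init__(self, capacity):
        self.capacity = capacity  # max number of free ranges (cf. nsysmap64)
        self.free = []            # sorted list of (start, end) half-open ranges
        self.lost = []            # ranges dropped on overflow

    def release(self, start, end):
        # If [start, end) touches an existing free range, coalesce into it.
        for i, (s, e) in enumerate(self.free):
            if e == start:
                self.free[i] = (s, end)
                return
            if end == s:
                self.free[i] = (start, e)
                return
        # Otherwise it needs its own slot -- and the map may be full.
        if len(self.free) >= self.capacity:
            self.lost.append((start, end))  # address space leaked for good
        else:
            self.free.append((start, end))
            self.free.sort()

rmap = ResourceMap(capacity=4)
# Free every *other* 16-page buffer: nothing is adjacent, nothing coalesces.
for start in range(0, 160, 32):
    rmap.release(start, start + 16)
print(rmap.free)  # [(0, 16), (32, 48), (64, 80), (96, 112)] -- map is full
print(rmap.lost)  # [(128, 144)] -- the fifth fragment is dropped
```

Doubling the capacity (as the nsysmap64 workaround does) just raises the overflow threshold; the fragments themselves remain.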

>
> Looking at
> http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=70397
> (neither of the links in there work for me), I now have the idea that
> something fragmented kernel memory. But what? I was about to write a
> script to periodically capture the largest processes while RMAN is
> running, but then I started wondering if it is not really RMAN, but
> something previous to RMAN that sets up the problem. Looking again at
> syslog, I see the rmaps happening on a few days in April at various
> times during the day, once during production day and 7 times off-hours
> (sometimes during RMAN, sometimes during compress), April 10-15, but no
> other times since boot. If it were RMAN, wouldn't I see the problem
> whenever RMAN ran? And why this time did it go nuts and crash the
> system, but not the other times?

You only see the problem when the data structures that hold free kernel virtual address space overflow. In other words, you may be fragmenting every time RMAN runs -- just not fragmenting *enough*. Alternatively, it may be that an intermittent I/O timeout or some such causes certain runs of RMAN to hold on to every Nth buffer for a long time, producing the critical fragmentation -- while your other runs manage to release the buffers together so they coalesce back up without much fragmentation.
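The two kinds of run can be sketched side by side. Again this is only an illustration -- the buffer size and counts are invented -- but it shows how freeing everything together collapses back to one range, while holding every Nth buffer leaves many isolated fragments:

```python
# Sketch of the two RMAN scenarios: adjacent frees coalesce into one big
# range, but holding back every Nth buffer (e.g. on a slow I/O) leaves the
# freed space chopped into many small, isolated fragments.

def coalesce(ranges):
    """Merge (start, end) half-open ranges into maximal contiguous ranges."""
    merged = []
    for s, e in sorted(ranges):
        if merged and merged[-1][1] == s:
            merged[-1] = (merged[-1][0], e)  # extends the previous range
        else:
            merged.append((s, e))
    return merged

BUF = 16
allocs = [(i * BUF, (i + 1) * BUF) for i in range(100)]  # 100 adjacent buffers

# Good run: every buffer is freed; it all coalesces into a single range.
print(len(coalesce(allocs)))  # 1

# Bad run: every 4th buffer is still held, so the frees can't join up.
freed = [r for i, r in enumerate(allocs) if i % 4 != 0]
print(len(coalesce(freed)))   # 25 separate fragments
```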

>
> Using the 'UNIX95= ps -e -o "vsz args" |sort' command, I see that some
> third party application processes get big: 132640K is the biggest just
> now, (those are killed off nightly if the users forget to log off - but
> later than this backup). So I tried 'ps -efl|sort -nk10|tail -10' ,
> which shows that same process as 29527 pages (and lets me see exactly
> who it is). But I don't quite get what vsz and sz are telling me, I
> guess I need to subtract some shared memory? man ps isn't too clear.

User virtual address space is managed completely differently. ps isn't going to help you here at all... (although if you're running lots and lots of processes, that can consume a lot of dynamic memory to manage the process metadata -- and can lead to fragmentation as well. That's why the nsysmap64 default is based on nproc when nproc > 800).

>
> I don't see how to figure which process is fragmenting memory. Don't
> have glance. Should I be looking for processes that get bigger and
> smaller, rather than the largest? There is a transaction monitor that
> appears to be doing that. Or should I watch for something continually
> growing? I don't know of anything that has changed on this system
> specific to this month, and don't really see how a memory leak could
> come and go and come back big when users and cron do the same thing
> day-to-day.

You really don't have the tools to find this -- HP Support does. You should have gotten a dump when the panic occurred; I highly recommend using your support channels to track down the root cause of the fragmentation and see what they recommend.

>
> Is it really going to be necessary to reboot this thing monthly?

You can mitigate it by increasing nsysmap64 as mentioned above -- but you still need Support to help figure out the root cause to make sure increasing the sysmap capacity isn't just applying a bandaid.

Don

>
> Any help appreciated, I'm trying to do as much as possible before the
> hardware folk start interrupting production.
>
> This is pretty typical swapinfo:
>
> # swapinfo -am
>              Mb      Mb      Mb   PCT  START/      Mb
> TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
> dev        4096     633    3463   15%       0       -    1  /dev/vg00/lvol2
> reserve       -    3400   -3400
> memory     6320    4499    1821   71%
>
> TIA
>
> jg
Received on Thu Apr 27 2006 - 09:43:16 CDT
