RE: RAC 'gc blocks lost' in stress test
Date: Tue, 18 Mar 2008 06:40:51 -0700 (PDT)
Thanks, Jon. Yes, we use hugepages:
$ grep Huge /proc/meminfo
Hugepagesize: 2048 kB
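For anyone checking their own systems, /proc/meminfo carries the full hugepages accounting, not just the page size. A quick sketch using standard Linux /proc paths (nothing Oracle-specific assumed):

```shell
# Show all hugepages counters: HugePages_Total/Free/Rsvd tell you
# whether the SGA actually landed in hugepages, not just the page size.
grep -i '^Huge' /proc/meminfo

# The configured pool size (the value vm.nr_hugepages sets):
cat /proc/sys/vm/nr_hugepages
```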
I find that dropped packets can climb during a stress test, but normal usage, even over months, does not trigger the problem (we have observed the same on other RACs we manage here). We work closely with the network engineers; so far they haven't found any issue.
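One way to quantify how much the counters climb during a stress run is to sample the interface drop counters before and after the test and report the delta. A minimal sketch, assuming a Linux 2.6+ kernel with sysfs; IFACE defaults to lo here only so it runs anywhere, and in practice you would point it at the interconnect NIC (bond0 in Jon's example) and run the stress test during the window:

```shell
# Sample RX/TX drop counters for one interface, wait, sample again,
# and print the deltas. A delta that keeps growing under load is the
# evidence to hand to the network team.
IFACE=${IFACE:-lo}          # substitute your interconnect NIC, e.g. bond0
INTERVAL=${INTERVAL:-5}     # sampling window in seconds

rx_before=$(cat /sys/class/net/$IFACE/statistics/rx_dropped)
tx_before=$(cat /sys/class/net/$IFACE/statistics/tx_dropped)

sleep "$INTERVAL"           # run the stress test during this window

rx_after=$(cat /sys/class/net/$IFACE/statistics/rx_dropped)
tx_after=$(cat /sys/class/net/$IFACE/statistics/tx_dropped)

echo "RX drops over ${INTERVAL}s: $((rx_after - rx_before))"
echo "TX drops over ${INTERVAL}s: $((tx_after - tx_before))"
```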
> From: "Crisler, Jon" <Jon.Crisler_at_usi.com>
> Although you don't have that many dropped blocks, ideally you should be
> at zero. We have RAC systems that can run for a month with zero dropped
> packets. Here is a quick example of a heavily used system that has only
> been running for a few days, and reports zero errors.
> bond0 Link encap:Ethernet HWaddr 00:17:08:7D:B6:54
> inet addr:10.193.4.7 Bcast:10.193.4.31 Mask:255.255.255.224
> UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
> RX packets:464186898 errors:0 dropped:0 overruns:0 frame:0
> TX packets:510848771 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:472399237908 (439.9 GiB)  TX bytes:532685382437 (496.1 GiB)
> This would be on Intel-based HP systems, gigabit Ethernet, bonded, and
> connected to Cisco 3750 switches.
> I would say that if the problem does not grow, you should be ok. If it
> grows, then have your network or platform people do a QA on the network
> hardware and configuration. Also, just for my info, do you run
> hugepages ?
Received on Tue Mar 18 2008 - 08:40:51 CDT