
Garbage collection not happening?

From: Jacy Grannis <jacygrannis_at_yahoo.com>
Date: 28 Aug 2002 13:32:18 -0700
Message-ID: <b94ddb06.0208281232.56d62cb1@posting.google.com>


I have a very curious problem that I cannot figure out a good cause for. Here's the story: we are using iPlanet 6.0sp2 on JDK 1.3.1, and Oracle 8.1.7. There is an option on our site where a user can click a link and download a .csv file containing a lot of information from the db. This .csv file can be as large as 6MB. It has, at most, about 17,000 rows, and each row has ~60 columns; the exact numbers depend on options the user has selected before choosing to do the download. If the user selects the option that pulls down the entire data set, the machine invariably runs out of memory: an OutOfMemoryError is thrown, and the server restarts itself. It seems to run out of memory in the loop where it is pulling the data out of the result set. However, I assure you it is doing nothing unusual in that loop. The loop is of the form:

MyObject mo;
List l = new ArrayList();
while (rs.next())
{
  mo = new MyObject();

  // copy each column out of the current row
  mo.setData1(rs.getString("data1"));
  mo.setData2(rs.getString("data2"));
  // ...many more similar calls...

  l.add(mo);
}
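That's it, nothing else fancy. For what it's worth, one restructuring I have considered is streaming each row straight to the .csv output instead of building up the List first. A rough sketch only (the Writer destination, the query variable, and the column names are placeholders, not our actual code; setFetchSize is just the standard JDBC 2.0 hint):

// sketch: write rows out as they are read, so at most one row's
// worth of Strings is live at a time; assumes java.io.* and
// java.sql.* imports; placeholder names throughout
void streamReport(Statement stmt, String query) throws Exception
{
  Writer csvOut = new BufferedWriter(new FileWriter("report.csv"));
  stmt.setFetchSize(100);        // bound how many rows the driver buffers
  ResultSet rs = stmt.executeQuery(query);
  while (rs.next())
  {
    StringBuffer row = new StringBuffer();
    row.append(rs.getString("data1")).append(',');
    row.append(rs.getString("data2"));
    // ...append the remaining columns...
    row.append('\n');
    csvOut.write(row.toString());
  }
  rs.close();
  csvOut.flush();
  csvOut.close();
}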
However, and here's what's strange: after processing a little over 1000 rows, all of a sudden the memory usage on the machine shoots up dramatically--several hundred MB--until the server crashes. Now, I *know* that the .csv file is only 6MB, because if I run the same method, making the same calls to the DB, as a standalone application, I end up with about 6MB of memory usage and a 6MB .csv file. So this would lead me to think that iPlanet is the issue. My initial thought was that some background thread was being spawned that was generating lots of objects the garbage collector couldn't get rid of. To find out what objects were being created, I modified java.lang.Object to keep track of all the objects created in the system. The modification involved a HashMap and WeakReferences, and I think it's actually a nifty little piece of code that I can send you if you're interested.
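The idea is roughly this (a simplified, standalone sketch of the technique, not the actual patch to java.lang.Object; the class and method names here are made up):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class InstanceTracker
{
  private static final Map counts = new HashMap();  // class name -> Integer count
  private static final Set live = new HashSet();    // keeps Entry objects reachable
  private static final ReferenceQueue queue = new ReferenceQueue();

  // a WeakReference that remembers which class it was counting for
  private static class Entry extends WeakReference
  {
    final String className;
    Entry(Object o)
    {
      super(o, queue);                              // enqueued when o is collected
      className = o.getClass().getName();
    }
  }

  // called once per construction (the real version hooks java.lang.Object)
  public static synchronized void track(Object o)
  {
    drain();
    live.add(new Entry(o));
    String name = o.getClass().getName();
    Integer n = (Integer) counts.get(name);
    int next = (n == null) ? 1 : n.intValue() + 1;
    counts.put(name, new Integer(next));
    System.out.println(next + " " + name + " instantiated");
  }

  // subtract every tracked object the collector has reclaimed since last call
  private static void drain()
  {
    Entry e;
    while ((e = (Entry) queue.poll()) != null)
    {
      live.remove(e);
      Integer n = (Integer) counts.get(e.className);
      if (n != null)
        counts.put(e.className, new Integer(n.intValue() - 1));
    }
  }
}

At any rate, what I did was output to a log file every object that was created. My output looks like this: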

32653 oracle.sql.CharacterSetByte instantiated
32649 oracle.sql.CHAR instantiated
40181 java.lang.String instantiated
32654 oracle.sql.CharacterSetByte instantiated
32650 oracle.sql.CHAR instantiated

The number is the count of objects of that type which have not yet been garbage collected. These particular lines of output are from shortly before the server crashed. As you can see, there are lots of instances of these types of objects--many more than should need to stay in memory. Perhaps, you might say, I am keeping references to these objects around in my version of java.lang.Object. Not so. If I run the same test as a standalone app, I get nearly as many of these objects, but I see the rise and fall in the number of live objects that you would expect from garbage collection. I actually ran many tests as a standalone app to see how objects were being created and destroyed. One test looked like this:

for (int i = 0; i < 10000; i++)
{
  // allocate sizable objects and immediately drop the references
  new ArrayList(10000);
  new String();
  new StringBuffer(100000);
}

When I ran this test, the number of active objects of each of those types never rose above 10; they were all garbage collected. However, I ran another test. In that test, I read from a 147kb text file, line by line, using a BufferedReader, reading the file 5 times and putting the lines in an ArrayList. Afterwards I discarded the ArrayList and started again, and I was careful to close my Readers each iteration through the loop. I repeated this entire process 5 times, so the file got read 25 times (a sketch of this test follows below). The number of Strings was much greater, and they seemed to be garbage collected much less frequently. That's fair enough--garbage collection isn't a well-defined process, and you can't expect it to run at any particular moment. It is worth noting, though, that even after all references to any Strings had been discarded, the number of Strings stayed in the tens of thousands, while all other types of objects (StringBuffer, for example) stayed under 10 or 20. Now, I wasn't interning these Strings, so why did their count stay so high?

This has bearing on my problem on the server, because it seems that on the server, when reading out of the result set, MANY objects are being created, and they aren't getting garbage collected. Does iPlanet somehow interfere with the normal behaviour of the garbage collector? Why are so very many objects remaining in memory when their usefulness should have long passed? And why does my memory use shoot up when run on the webserver, but stay relatively constant when run as a standalone app?
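For concreteness, the file-reading test mentioned above was roughly this (reconstructed from memory; the file name is a placeholder, and java.io.* and java.util.* imports are assumed):

void readFileTest() throws Exception
{
  // read a ~147kb file 5 times per iteration into one ArrayList,
  // discard the list, and repeat the whole cycle 5 times (25 reads)
  for (int outer = 0; outer < 5; outer++)
  {
    List lines = new ArrayList();
    for (int pass = 0; pass < 5; pass++)
    {
      BufferedReader in = new BufferedReader(
          new FileReader("test.txt"));   // placeholder file name
      String line;
      while ((line = in.readLine()) != null)
      {
        lines.add(line);
      }
      in.close();                        // Readers closed every pass
    }
    lines = null;                        // all references to the lines dropped
  }
}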

Here's another wrinkle. If I set up iPlanet to use 1.4.0_01 instead of 1.3.1, the memory usage does *not* shoot up. However, after about 13,000 rows, it just freezes: the CPU utilization drops, and no further processing takes place. What is going on? I haven't the faintest idea. Thanks for *any* help you can give.

Received on Wed Aug 28 2002 - 15:32:18 CDT
