Re: max_dump_file_size - what value to use?
Walt wrote:
> By default, max_dump_file_size is set to unlimited. I'd like to
> restrict it and am looking for a reasonable value. Oracle support says
> that there is no recommended value and to ask other users what they're
> using. So here I am.
>
> Background: we had a process spin out of control and write a trace file
> that eventually grew to 17 Gig (that's Gig, not Meg) when it ran out of
> disk space. I'd like to keep that from happening again.
>
> This seems like a fairly straightforward question to answer - take a
> reasonable max file size and divide by the db_block_size to arrive at a
> cap for a trace file. If we ever encounter a trace file larger than
> that, we'll lose trace output after it grows that large - I think I can
> live with that. Any other downsides to making it too small?
>
> Oracle 9.2 on Win2k3. ~17 Gig of space for "logs" (on a separate drive
> from the dbf files & the server software). In three years, we've never
> cleared out the bdump, cdump, or udump directories because there was
> plenty of space.
>
>
> //Walt
Imposing a limit is inherently dangerous. What if the system needs to
dump the system state? The dump will be truncated and consequently
useless for further analysis by support.
Also please note that a trace file is always opened in append mode,
i.e. if a thread number is reused, the file is not overwritten but
appended to.
IMO, you would be better off to
- design a script to periodically clean out older trace files (a sketch
follows below), and
- use Oracle Enterprise Manager to warn you when the free space gets
below a certain threshold (the sketch below includes a crude stand-in
for such an alert).
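A minimal sketch of such a cleanup script, assuming Python 3 on the
database host. The dump directory paths, the 14-day retention window,
and the 2 GB free-space threshold are all assumptions - substitute the
background_dump_dest/core_dump_dest/user_dump_dest locations and limits
from your own environment:

```python
import os
import shutil
import time

# Assumed paths; check your *_dump_dest init.ora parameters.
DUMP_DIRS = [
    r"E:\oracle\admin\ORCL\bdump",
    r"E:\oracle\admin\ORCL\cdump",
    r"E:\oracle\admin\ORCL\udump",
]
RETENTION_DAYS = 14                  # assumed retention window
FREE_SPACE_WARN_BYTES = 2 * 1024**3  # assumed 2 GB warning threshold

cutoff = time.time() - RETENTION_DAYS * 86400

for dump_dir in DUMP_DIRS:
    for name in os.listdir(dump_dir):
        path = os.path.join(dump_dir, name)
        # Only remove regular files older than the retention window.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)

# Crude stand-in for an OEM free-space alert: warn when the log
# drive drops below the threshold.
free = shutil.disk_usage(DUMP_DIRS[0]).free
if free < FREE_SPACE_WARN_BYTES:
    print("WARNING: only %d MB free on the trace/log drive" % (free // 1024**2))
```

Scheduled daily (e.g. via the Windows task scheduler), this keeps the
dump directories from filling up without capping any individual trace.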
In past releases the default limit was 10240 (the unit of this parameter is 512-byte OS blocks, not one database block), i.e. 10240 x 512 bytes = 5 MB, which is only useful for keeping end users from consuming all disk space.
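To make the unit concrete, here is a small sketch of the conversion,
assuming a 100 MB cap purely as an example (not a recommendation):

```python
# Convert a desired trace-file cap into the 512-byte OS blocks that
# max_dump_file_size counts in. The 100 MB cap is an assumption.
OS_BLOCK_BYTES = 512
cap_mb = 100
cap_blocks = cap_mb * 1024 * 1024 // OS_BLOCK_BYTES  # -> 204800

# Two common ways of expressing the same cap in the parameter file
# (check the parameter reference for your release):
print("max_dump_file_size = %d" % cap_blocks)  # in OS blocks
print("max_dump_file_size = %dM" % cap_mb)     # with an explicit unit

# Sanity check against the old default: 10240 blocks * 512 bytes = 5 MB.
assert 10240 * OS_BLOCK_BYTES == 5 * 1024 * 1024
```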
--
Sybrand Bakker
Senior Oracle DBA

Received on Wed Oct 25 2006 - 09:13:40 CDT