Re: max datafile size

From: Raymond <r_h_at_hetnet.nl>
Date: Thu, 1 Nov 2001 21:46:54 +0100
Message-ID: <9rsbuk$5bk$1@news1.xs4all.nl>


From metalink:

Introduction



  This article describes "2Gb" issues. It gives information on why 2Gb is
  a magical number and outlines the issues you need to know about if you
  are considering using Oracle with files larger than 2Gb in size. It also
  looks at some other file-related limits and issues.

  The article has a Unix bias, as this is where most of the 2Gb issues
  arise, but there is information relevant to other (non-Unix) platforms.
  Articles giving port-specific limits are listed in the last section.

  Topics covered include:

	Why is 2Gb a Special Number ?
	Why use 2Gb+ Datafiles ?
	Export and 2Gb
	SQL*Loader and 2Gb
	Oracle and other 2Gb issues
	Port Specific Information on "Large Files"


Why is 2Gb a Special Number ?



  Many CPUs and system call interfaces (APIs) in use today use a word
  size of 32 bits. This word size imposes limits on many operations.

  In many cases the standard APIs for file operations use a 32-bit signed
  word to represent both file size and current position within a file
  (byte displacement). A 'signed' 32-bit word uses the topmost bit as a
  sign indicator, leaving only 31 bits to represent the actual value
  (positive or negative). The largest positive number that can be
  represented in 31 bits is 0x7FFFFFFF hexadecimal, which is +2147483647
  decimal. This is ONE less than 2Gb.
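
  The following small C program (an illustration added here, not part of
  the original note) shows why: 2Gb, which is 2147483648 bytes, is one
  more than the largest value a signed 32-bit word can hold.

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
	    int32_t max_offset = INT32_MAX;     /* 0x7FFFFFFF = 2147483647 */
	    int64_t two_gb     = 2147483648LL;  /* 2Gb in bytes            */

	    printf("largest 32-bit signed offset: %d\n", max_offset);
	    printf("2Gb in bytes                : %lld\n", (long long)two_gb);
	    /* Prints "no": 2Gb cannot be represented as a 32-bit signed offset. */
	    printf("2Gb fits in a signed 32-bit word? %s\n",
	           two_gb <= (int64_t)max_offset ? "yes" : "no");
	    return 0;
	}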

  Files of 2Gb or more are generally known as 'large files'. As one might
  expect, problems can start to surface once you try to use the number
  2147483648 or higher in a 32-bit environment. To overcome this problem,
  recent versions of operating systems have defined new system calls which
  typically use 64-bit addressing for file sizes and offsets. Recent
  Oracle releases make use of these new interfaces, but there are a number
  of issues one should be aware of before deciding to use 'large files'.

  Another "special" number is 4Gb. 0xFFFFFFFF in hexadecimal can be   interpreted as an UNSIGNED value (4294967295 decimal) which is one less   than 4Gb. Adding one to this value yields 0x00000000 in the low order   4 bytes with a '1' carried over. The carried over bit is lost when using   32bit arithmetic. Hence 4Gb is another "special" number where problems   may occur. Such issues are also mentioned in this article.

What does this mean when using Oracle ?



  The 32-bit issue affects Oracle in a number of ways. In order to use
  large files you need to have:
  1. An operating system that supports 2Gb+ files or raw devices
  2. An operating system which has an API to support I/O on 2Gb+ files
  3. A version of Oracle which uses this API

  Today most platforms support large files and have 64-bit APIs for such
  files. Releases of Oracle from 7.3 onwards usually make use of these
  64-bit APIs, but the situation is very dependent on the platform,
  operating system version and the Oracle version. In some cases 'large
  file' support is present by default, while in other cases a special
  patch may be required.
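
  As a rough sketch of what such a 64-bit file API looks like to a program
  (this example is added here and assumes a Linux/glibc-style system with
  Large File Support; "bigfile.dat" is just a placeholder name), a process
  can request a 64-bit off_t and then seek past the 2Gb mark:

	/* Must be defined before any system header to get a 64-bit off_t
	   on platforms where it is not already the default. */
	#define _FILE_OFFSET_BITS 64

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
	    int fd = open("bigfile.dat", O_RDWR | O_CREAT, 0644);
	    if (fd < 0) { perror("open"); return 1; }

	    /* 3Gb: an offset that cannot even be expressed in a 32-bit off_t. */
	    off_t target = (off_t)3 * 1024 * 1024 * 1024;
	    if (lseek(fd, target, SEEK_SET) == (off_t)-1) {
	        perror("lseek");   /* fails without large file support */
	        close(fd);
	        return 1;
	    }
	    printf("positioned at %lld bytes\n", (long long)lseek(fd, 0, SEEK_CUR));
	    close(fd);
	    return 0;
	}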

  At the time of writing there are some tools within Oracle which have not
  been updated to use the new APIs, most notably tools like EXPORT and
  SQL*LOADER, but again the exact situation is platform and version
  specific.

Why use 2Gb+ Datafiles ?



  In this section we will try to summarise the advantages and
  disadvantages of using "large" files / devices for Oracle datafiles:

  Advantages of files larger than 2Gb:

	On most platforms Oracle7 supports up to 1022 datafiles.
	With files < 2Gb this limits the database size to less than 2044Gb.
	This is not an issue with Oracle8, which supports many more files
	(Oracle8 supports 1022 files PER TABLESPACE).

	In reality the maximum database size in Oracle7 would be less than
        2044Gb due to maintaining separate data in separate tablespaces.
        Some of these may be much less than 2Gb in size. Larger files
        allow this 2044Gb limit to be exceeded.

	Larger files can mean fewer files to manage for smaller databases.

	Fewer file handle resources are required.


  Disadvantages of files larger than 2Gb:

	The unit of recovery is larger. A 2Gb file may take between 15 minutes
	and 1 hour to back up / restore, depending on the backup media and
	disk speeds. An 8Gb file may take 4 times as long.

	Parallelism of backup / recovery operations may be impacted.

	There may be platform-specific limitations - Eg: Asynchronous IO
	operations may be serialised above the 2Gb mark.

	As handling of files above 2Gb may need patches, special configuration
	etc., there is an increased risk involved as opposed to smaller files.
	Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.


  Important points if using files >= 2Gb

	Check with the OS Vendor to determine if large files are supported
	and how to configure for them.

	Check with the OS Vendor what the maximum file size actually is.

	Check with Oracle support if any patches or limitations apply
	on your platform , OS version and Oracle version.

	Remember to check again if you are considering upgrading either
	Oracle or the OS in case any patches are required in the release
	you are moving to.

	Make sure any operating system limits are set correctly to allow
	access to large files for all users.

	Make sure any backup scripts can also cope with large files.

	Note that there is still a limit to the maximum file size you
	can use for datafiles above 2Gb in size. The exact limit depends
	on the DB_BLOCK_SIZE of the database and the platform. On most
	platforms (Unix, NT, VMS) the limit on file size is around
	4194302*DB_BLOCK_SIZE (see the worked example after this list).

        See the details in the Alert in [NOTE:112011.1] which describes
        problems with resizing files, especially to above 2Gb in size.
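
  As a worked example of that limit (added here for illustration; 4194302
  is the approximate block count quoted above, and the exact ceiling is
  platform dependent), the ceiling for some common block sizes is:

	#include <stdio.h>

	int main(void)
	{
	    const long long max_blocks    = 4194302LL;  /* figure quoted above */
	    const int       block_sizes[] = { 2048, 4096, 8192, 16384 };

	    for (int i = 0; i < 4; i++) {
	        long long bytes = max_blocks * (long long)block_sizes[i];
	        printf("DB_BLOCK_SIZE=%5d -> max datafile ~ %lld bytes (~%.1f Gb)\n",
	               block_sizes[i], bytes, bytes / (1024.0 * 1024.0 * 1024.0));
	    }
	    return 0;
	}

  With an 8K DB_BLOCK_SIZE this works out to roughly 32Gb per datafile.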

  Important notes generally

	Be careful when allowing files to automatically resize. It is
	sensible to always limit the MAXSIZE for AUTOEXTEND files to less
	than 2Gb if not using 'large files', and to a sensible limit
	otherwise. Note that due to [BUG:568232] it is possible to specify
	a value of MAXSIZE larger than Oracle can cope with, which may
	result in internal errors after the resize occurs. (Errors
	typically include ORA-600 [3292].)

	On many platforms Oracle datafiles have an additional header
	block at the start of the file so creating a file of 2Gb actually
	requires slightly more than 2Gb of disk space. On Unix platforms
	the additional header for datafiles is usually DB_BLOCK_SIZE bytes
	but may be larger when creating datafiles on raw devices.


  2Gb related Oracle Errors:

	These are a few of the errors which may occur when a 2Gb limit
	is present. They are not in any particular order.
	  ORA-01119 Error in creating datafile xxxx
   	  ORA-27044 unable to write header block of file
	  SVR4 Error: 22: Invalid argument
	  ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
	  ORA-27070 skgfdisp: async read/write failed
	  ORA-02237 invalid file size
	  KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
	  file limit exceed.
	  Unix error 27, EFBIG
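
	The last error above, EFBIG, is the generic Unix "File too large"
	error. Besides a genuine 2Gb filesystem limit, a common cause is a
	per-process file size limit set too low. As a small added sketch for
	POSIX systems (not part of the original note), the current limit -
	the 'ulimit -f' value - can be inspected from C with getrlimit():

	#include <stdio.h>
	#include <sys/resource.h>

	int main(void)
	{
	    struct rlimit rl;

	    /* Query the per-process maximum file size this process may create. */
	    if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
	        perror("getrlimit");
	        return 1;
	    }

	    if (rl.rlim_cur == RLIM_INFINITY)
	        printf("file size limit: unlimited\n");
	    else
	        printf("file size limit: %llu bytes\n",
	               (unsigned long long)rl.rlim_cur);
	    return 0;
	}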


Export and 2Gb



 2Gb Export File Size

  At the time of writing most versions of export use the default file
  open API when creating an export file. This means that on many
  platforms it is impossible to export a file of 2Gb or larger to a file
  system file.

  There are several options available to overcome 2Gb file limits with
  export, such as writing the export through a UNIX named pipe (so the
  output can be compressed or split as it is written) or exporting
  directly to a raw device or tape.

 Other 2Gb Export Issues



  Oracle has a maximum extent size of 2Gb. Unfortunately there is a
  problem with EXPORT on many releases of Oracle such that if you export
  a large table and specify COMPRESS=Y then it is possible for the NEXT
  storage clause of the statement in the EXPORT file to contain a size
  above 2Gb. This will cause import to fail even if IGNORE=Y is specified
  at import time. This issue is reported in [BUG:708790] and is alerted
  in [NOTE:62436.1].

  An export will typically report errors like this when it hits a 2Gb
  limit:

	. . exporting table                   BIGEXPORT
	EXP-00015: error on row 10660 of table BIGEXPORT,
		column MYCOL, datatype 96
	EXP-00002: error in writing to export file
	EXP-00002: error in writing to export file
	EXP-00000: Export terminated unsuccessfully


  There is a secondary issue reported in [BUG:185855] which indicates
  that a full database export generates a CREATE TABLESPACE command with
  the file size specified in BYTES. If the file size is above 2Gb this
  may cause an ORA-2237 error when attempting to create the file on
  IMPORT. This issue can be worked around by creating the tablespace
  prior to importing, specifying the file size in 'M' instead of in
  bytes. [BUG:490837] indicates a similar problem.

 Export to Tape



  The VOLSIZE parameter for export is limited to values less than 4Gb;
  on some platforms it may be only 2Gb. This is corrected in Oracle 8i.
  [BUG:490190] describes this problem.

SQL*Loader and 2Gb



  Typically SQL*Loader will fail when it attempts to open an input file
  larger than 2Gb, with an error of the form:
	SQL*Loader-500: Unable to open file (bigfile.dat)
	SVR4 Error: 79: Value too large for defined data type

  The examples in [NOTE:30528.1] can be modified for use with SQL*Loader
  with large input data files.
  Oracle 8.0.6 provides large file support for discard and log files in
  SQL*Loader, but the maximum input data file size still varies between
  platforms. See [BUG:948460] for details of the input file limit.
  [BUG:749600] covers the maximum discard file size.

Oracle and other 2Gb issues



  This section lists miscellaneous 2Gb issues:
