Advice on running Oracle with SGA > 20 GB

From: vsevolod afanassiev <vsevolod.afanassiev_at_gmail.com>
Date: Mon, 1 Feb 2010 14:15:33 -0800 (PST)
Message-ID: <f8e94b0a-35da-463e-afdc-829cda787ca6_at_f17g2000prh.googlegroups.com>



The amount of memory provisioned in new servers keeps growing: we have
several servers where a single OS image (Solaris, AIX, HP-UX) has access to
more than 32 GB of RAM. The question arises of how to use this memory most
efficiently to run Oracle databases, and in particular what problems one may
experience when running Oracle 9iR2 or 10gR2 with an SGA larger than 20 GB.
I am particularly interested in systems with a mixed workload, i.e. where we
see simultaneously highly efficient index-based queries, very inefficient
queries joining multiple tables (consuming in excess of 800 million buffer
gets per execution), large updates, dynamic SQL, ad-hoc queries, etc.

What would you do on a server with 100 - 200 GB of RAM that runs one Oracle
instance?
- would you configure direct I/O (bypassing the UNIX filesystem buffer
cache)?
- would you configure the SGA at around 50 - 70% of physical RAM?
- would you use an 8 KB block size or bigger blocks?
- would you allow automatic resizing of SGA components in 10g (ASMM) or use
fixed sizes?
- would you use OS features like "large pages" on AIX in combination with
LOCK_SGA?
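For reference, the 10gR2 parameters these questions touch can be sketched as
an spfile/init.ora fragment. The sizes below are illustrative placeholders
for a large-memory host, not recommendations:

```
# Illustrative 10gR2 init.ora fragment for a large-SGA instance
# (values are placeholders, not tuning advice)
sga_target=48G               # ASMM: Oracle resizes buffer cache, shared pool, etc. within this
sga_max_size=48G             # hard upper bound for the SGA
filesystemio_options=SETALL  # request both direct and asynchronous I/O on filesystems
lock_sga=TRUE                # pin the SGA in physical memory (pairs with OS large-page support)
db_block_size=8192           # default 8 KB block size; fixed at database creation time
```

Note that db_block_size cannot be changed after the database is created, and
lock_sga generally requires matching OS configuration (e.g. large-page and
pinned-memory settings on AIX) before the instance will start with it.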