
ORACLE on Linux - IO bottleneck

From: Wyvern <edersm_at_wanadoo.es>
Date: 8 Feb 2006 09:57:02 -0800
Message-ID: <1139421422.410026.26550@z14g2000cwz.googlegroups.com>


Hello,

We have Red Hat Linux AS3 (update 1) running on an 8-processor IA64 Itanium II machine with 16 GB of RAM.
Oracle 9i (an OLTP database) runs on raw devices with an Oracle block size of 8K, and two QLogic Fibre Channel
adapters (QLA2340) connect to an EMC Symmetrix disk array. We also have async I/O activated in Oracle.
The kernel version is "2.4.21-9.EL #1 SMP".
All tablespaces are created with an 8K block size.
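
For reference, the raw bindings and the kernel can be double-checked from the shell (just a quick sketch; the output depends on the layout, so it is not shown here):

# uname -r       <- should print the kernel version quoted above
# raw -qa        <- lists which /dev/raw/rawN is bound to which block device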

Some oracle params:


db_block_buffers                     integer     0
db_block_checking                    boolean     FALSE
db_block_checksum                    boolean     TRUE
db_block_size                        integer     8192
db_cache_advice                      string      ON
db_cache_size                        big integer 1090519040
db_create_file_dest                  string
db_create_online_log_dest_1          string
db_create_online_log_dest_2          string
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string
db_domain                            string
db_file_multiblock_read_count        integer     8
db_file_name_convert                 string
db_files                             integer     512
db_keep_cache_size                   big integer 0
dblink_encrypt_login                 boolean     FALSE
db_recycle_cache_size                big integer 0
dbwr_io_slaves                       integer     0
db_writer_processes                  integer     1
db_16k_cache_size                    big integer 0
db_2k_cache_size                     big integer 0
db_32k_cache_size                    big integer 0
db_4k_cache_size                     big integer 0
db_8k_cache_size                     big integer 0
filesystemio_options                 string      ASYNCH
---------------------------------------------------------------------------------------------------------------
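
(The listing above looks like plain "show parameter" output from SQL*Plus; for reference, roughly:)

# sqlplus "/ as sysdba"
SQL> show parameter db
SQL> show parameter filesystemio_options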

Look at this:
--------------------------------------------------------------------------------------------------------------

# sar (copied a little fragment, but the whole day looks similar)

00:00:00        CPU     %user     %nice   %system   %iowait     %idle

.......
10:05:00        all     29,54      0,00      6,14     27,96     36,36
10:10:00        all     46,80      0,00      5,82     15,32     32,06
10:15:00        all     30,88      0,00      2,96     17,90     48,25
10:20:00        all     32,21      0,00      7,69     19,01     41,09
10:25:00        all     37,14      0,00      6,27     38,62     17,97
10:30:00        all     33,94      0,00      7,20     29,62     29,24
10:35:00        all     52,47      0,00     10,31     28,08      9,15
10:40:00        all     55,75      0,00      5,87     13,78     24,60
10:45:00        all     26,06      0,00      7,99     12,23     53,72

--------------------------------------------------------------------------------------------------------------
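
(A quick back-of-the-envelope average of the %iowait column above, just to put a single number on it; decimal commas rewritten as points for awk:)

# echo "27.96 15.32 17.90 19.01 38.62 29.62 28.08 13.78 12.23" | awk '{ for (i = 1; i <= NF; i++) s += $i; printf "avg %%iowait = %.1f\n", s / NF }'
avg %iowait = 22.5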

# iostat 2 /dev/sdc1 (there are many disks, but this one gets most of the use)

avg-cpu:  %user   %nice    %sys %iowait   %idle
          13,02    0,00    0,40    1,85   84,73

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sdc1           1534,25      2673,31       174,09       5344        348


avg-cpu:  %user   %nice    %sys %iowait   %idle
          13,20    0,00    1,52   11,42   73,86

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sdc1           8482,14     15847,74       125,06      31680        250

--------------------------------------------------------------------------------------------------------------
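
(From the second sample above you can already estimate the average transfer size; iostat blocks should be 512-byte sectors on a 2.4 kernel, so, roughly, and again with decimal points for awk:)

# echo "15847.74 125.06 8482.14" | awk '{ printf "avg transfer = %.2f blocks = ~%.0f bytes\n", ($1 + $2) / $3, ($1 + $2) / $3 * 512 }'
avg transfer = 1.88 blocks = ~964 bytes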

# iostat -x 2 /dev/sdc1

Device:    rrqm/s  wrqm/s     r/s    w/s   rsec/s  wsec/s    rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
sdc1      6833,34  114,06 7902,86  95,05 14743,20  209,10  7371,60  104,55     1,87    34,12   4,26   0,12  93,15


avg-cpu:  %user   %nice    %sys %iowait   %idle
          15,23    0,00    2,11   11,32   71,34


Device:    rrqm/s  wrqm/s     r/s    w/s   rsec/s  wsec/s    rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
sdc1      7149,99   25,01 8151,98  35,52 15295,47  134,07  7647,73   67,03     1,88    32,95   4,03   0,11  92,75

--------------------------------------------------------------------------------------------------------------

I've only shown the /dev/sdc statistics because almost ALL the raw devices are on that device.

Well, now the questions (of course!! ;-) ):

   1.- Performance is generally poor across the board: at the OS level, at the Oracle level and at the application level. Can we assume that there is an I/O bottleneck?

   2.- Are the "wrqm/s" / "rrqm/s" values correct?

We have other databases with similar characteristics running on similar hardware but with EXT3, and we get much better performance. We also have RAW DEVICES under AIX (4.3 and 5.2) and the performance is PERFECT.

I've been reading various manuals and documentation about raw devices and Direct I/O in Linux (from Oracle and Red Hat). Everything makes me think that the Oracle block size (8K) should not affect performance drastically, because we use Direct I/O.

Well, we've been analyzing different statistics from the storage system (EMC Symmetrix), and the number of I/Os the /dev/sdc device is doing is near the hardware limit (about 8000), while the average I/O size is more or less 2K. I don't know why this happens when all the I/O against this device comes from Oracle and Oracle has an 8K db_block_size.
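
(Just to put that mismatch in numbers, using the ~2K the Symmetrix reports and the ~1K that iostat itself suggests above:)

# echo "8192 2048 964" | awk '{ printf "transfers per 8K Oracle block: %.1f (at ~2K avg), %.1f (at ~1K avg)\n", $1 / $2, $1 / $3 }'
transfers per 8K Oracle block: 4.0 (at ~2K avg), 8.5 (at ~1K avg)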

Any idea?

Maybe this message is OFF-TOPIC, but I don't think it is; sorry if it is.

Some help, please please please ....

Thanks in advance.

Received on Wed Feb 08 2006 - 11:57:02 CST
