Re: TDPO and restore speed problem

From: Peter Hitchman <pjhoraclel_at_gmail.com>
Date: Sat, 19 Oct 2013 16:55:49 +0100
Message-ID: <CAPMSPxNoOb4O9CXyWHe6B9w-+PrNziwqV=DNxT0JW9Nc616u1A_at_mail.gmail.com>



Hi,
I remember something like this from years ago; the problem was solved by altering the network packet size.
So have you checked what the OS is doing packet-wise? I am thinking of the MTU settings, to increase the packet size. But honestly, my experience with this kind of RMAN problem and TSM/TDPO is that you just keep adding channels until you get to a point where you can live with it!
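
For what it's worth, checking and raising the MTU on Linux is quick to try. The interface name below is just an example, and the switches and the host at the other end have to support jumbo frames too:

  # show the current MTU of each interface
  ip link show

  # try jumbo frames on the restore-side NIC (eth0 is illustrative)
  ip link set dev eth0 mtu 9000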
Regards
Pete

On 19 October 2013 15:29, tefetufe . <coskan_at_gmail.com> wrote:

> Hi,
> We are doing some backup/restore tests on our brand new environment using
> Tivoli TDPO.
> Using RMAN, we back up 8TB uncompressed within the same campus in 3 hours
> with 6 channels, over a 10Gb network, writing to a VTL.
>
> When we try to restore the same backup from another location using a 1Gb
> card, with 6 channels and the same RMAN settings, we are somehow limited to
> *16MB/sec* per channel (our previous restores never hit this issue when the
> DB was running on Solaris and 10G)
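>
> For reference, the restore is driven by something along these lines (the
> TDPO_OPTFILE path below is illustrative, not our exact one):
>
>   run {
>     allocate channel t1 device type sbt
>       parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
>     # channels t2 through t6 are allocated the same way
>     restore database;
>   }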
>
> No matter how many channels we run, total throughput increases linearly,
> but the per-channel limit is still 16MB/sec
>
> I'm kind of suspicious about this *16MB/sec* wall we are hitting, as we
> are not even using the whole network bandwidth. I would expect to go
> higher with fewer channels, but that is not the case: no matter what the
> number of channels is, it is always 16MB/sec
> There is no hardware multiplexing.
> We have already tried playing with the TCP window size in the TDPO
> settings and setting BACKUP_TAPE_IO_SLAVES=TRUE
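>
> Roughly, those attempts look like this (the values are examples, not
> recommendations). In the TSM client options file (dsm.sys):
>
>   TCPWINDOWSIZE 512
>   TCPBUFFSIZE   512
>
> and on the database side:
>
>   SQL> alter system set backup_tape_io_slaves=TRUE deferred;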
>
>
> Database is 11.2.0.3, using a filesystem (Veritas with CIO) on Red Hat 6
>
> What I really wonder is whether anybody has seen this per-channel
> limitation at all
>
>
> strace sample output (mostly semtimedop waits):
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  94.33    0.418938          38     10943      5620 semtimedop
>   3.16    0.014031           0     64380           times
>   1.44    0.006395           1     10617           semctl
>   0.81    0.003601           1      5442           io_submit
>   0.26    0.001169           1      1396           io_getevents
>   0.00    0.000000           0       168           getrusage
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.444134                 92946      5620 total
>
>
> perf sample output looks like this:
>
> 56.27%  oracle  oracle             [.] sxorcopychk
> 21.92%  oracle  [kernel.kallsyms]  [k] 0xffffffff8103ba4a
>  2.29%  oracle  oracle             [.] krbddoh
>  1.22%  oracle  oracle             [.] krbr1b1
>  0.98%  oracle  oracle             [.] ksfvsubmit
>  0.97%  oracle  oracle             [.] ksliwat
>  0.91%  oracle  oracle             [.] krbr1b2
>  0.89%  oracle  oracle             [.] krbrpr
>  0.64%  oracle  oracle             [.] kcbhcvbo
>  0.53%  oracle  oracle             [.] krbcdb
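>
> (From the name, sxorcopychk looks like it could be a block checksum
> routine; if so, the per-channel ceiling may be checksum CPU rather than
> the network. The current setting is easy to check:
>
>   SQL> show parameter db_block_checksum
> )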
>
>
>
> I'd appreciate hearing if anybody has hit the same issue before
>
>
> Regards
> --
> Coskan GUNDOGAR
>
>
>
> Email: coskan_at_gmail.com
> Blog: http://coskan.wordpress.com
> Twitter: http://www.twitter.com/coskan
> Linkedin: http://uk.linkedin.com/in/coskan
>
>
> --
> http://www.freelists.org/webpage/oracle-l
>
>
>

-- 
Regards

Pete


--
http://www.freelists.org/webpage/oracle-l
Received on Sat Oct 19 2013 - 17:55:49 CEST
