Re: TSM Tape performance

From: Freek D'Hooge <freek.dhooge_at_gmail.com>
Date: Fri, 29 Apr 2016 08:33:02 +0200
Message-ID: <1461911582.25419.28.camel_at_dhoogfr-lpt1>



reposting to list

Sanjay,

input_bytes_per_sec_display is the throughput for reading from the
datafiles, and output_bytes_per_sec_display is the throughput towards your
backup destination (be it tape or disk). The following documentation link
shows an example of a query that uses this information to report the backup
speed: http://docs.oracle.com/database/121/BRADV/rcmreprt.htm#BRADV90911
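
For example, something along these lines (adjust to taste; run it on each
node if you want to compare the two interfaces side by side):

    -- per-job backup rates per RMAN session
    select session_key,
           input_type,
           status,
           to_char(start_time, 'yyyy-mm-dd hh24:mi') as start_time,
           time_taken_display,
           input_bytes_per_sec_display,   -- read rate from the datafiles
           output_bytes_per_sec_display   -- write rate towards tape/disk
    from   v$rman_backup_job_details
    order  by session_key;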

I'm now a little confused about the problem. Are you saying that RMAN
backups over the public network achieve a good backup rate, but not when
using the dedicated (high speed) network?
Or is it that RMAN backups are always slow, while filesystem backups run
fine?

Kind regards,

Freek

On vr, 2016-04-29 at 00:45 +0000, Sanjay Mishra wrote:
> Thanks Stefan/Freek/Mladen
>
>
> The point is not about the size and timing. Yes, there are several good
> points in the thread, and I will also check a few important
> considerations. What I am trying to understand is why backups over the
> regular public network run fine, while the same backups over the
> dedicated high speed interface sometimes run well but mostly at a speed
> that is not even comparable to the regular network.
>
>
> e.g. an incremental (using BCT) of 500G takes 1 hr on the regular
> interface, while on the dedicated interface the same backup sometimes
> takes 45 min but often 4-5 hr, which is not acceptable for this
> configuration.
>
>
> The regular interface is always consistent in backup size/timing, while
> the dedicated interface shows big differences and is very inconsistent.
>
>
> Can someone help clarify the difference between these two columns of
> v$rman_backup_job_details:
> - INPUT_BYTES_PER_SEC_DISPLAY
> - OUTPUT_BYTES_PER_SEC_DISPLAY
>
>
> I want to compare the backup rate over the regular vs the dedicated
> backup interface. I have now set up one node to use the regular prod
> interface and another node to use the dedicated vlan. I am not sure
> whether the two columns above are the right ones to compare, and I could
> not find details about them in any manual/book. The backups are not
> compressed.
>
>
> TIA
> Sanjay
>
> On Thursday, April 28, 2016 1:00 PM, Mladen Gogala
> <gogala.mladen_at_gmail.com> wrote:
>
>
>
>
>
> Hi Sanjay,
>
> My responses are in-line:
>
> On 04/27/2016 03:33 PM, Sanjay Mishra (Redacted sender smishra_97 for
> DMARC) wrote:
>
> > Hi
> >
> >
> > I have an 11gR2 RAC environment with 15 databases running on it. Some
> > are very big (double digit terabytes), so the RMAN backup to the TSM
> > tape library takes a long time. We worked with the TSM team and got a
> > dedicated vlan backup interface, but it is not behaving correctly:
> > sometimes a backup works well, but the next backup takes 5-6 times
> > longer. When using the regular interface the performance is constant.
> > OS level backups using the dedicated gigabit vlan backup interface
> > have no issue.
>
>
> A 1 Gbit LAN is simply too slow to handle double digit TB. What you need
> is 10 Gbit or better (bonded interfaces). With a 1 Gbit NIC, you can
> reasonably expect between 300 GB/hr and 350 GB/hr, so it will take you
> about 3 hours to back up 1 TB. With "double digit TB", it will take you
> triple digits in hours. That will give a new meaning to the notion of a
> weekly backup. However funny it may sound, you should probably do
> incremental level 1 backups into the FRA and then back up the FRA once
> per week.
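
A rough sketch of that incremental-into-FRA approach, just to illustrate
(assuming the FRA is sized for it and that sbt is your TDP for Oracle /
TSM channel):

    # daily: incremental level 1 into the FRA on local disk
    backup device type disk incremental level 1 database;

    # weekly: sweep the FRA contents out to TSM over the sbt channel
    run {
      allocate channel t1 device type sbt;   # TDP for Oracle parms as usual
      backup recovery area;
    }
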
>
>
>
> > Only the RMAN backup is the issue. Can someone share any experience on
> > how to monitor and trace the issue? A ticket was opened with Oracle
> > Support but they found nothing on either the OS (using OEL) or the
> > database (11.2.0.4) side. The TSM team says that the OS level backup
> > has no issue and that it is only RMAN which has the issue. Moreover,
> > RMAN works well sometimes but has intermittent issues.
> >
> >
> > So, please share how you think this can be handled, or any related
> > experience.
>
> First and foremost, a database of that size should not be run without a
> standby. The first layer of protection for such a large database should
> always be a surviving RAC instance. The second layer of protection
> should be a standby and the third layer should be backup. Now, with
> 20 Gbit/sec, Commvault can achieve 8 TB/hr with deduplication and
> without compression, and 9.2 TB/hr with compression only. The client has
> opted for deduplication, since the savings in storage are significant.
> 9 TB/hr should be enough to back up a very large 100 TB database in
> about 12 hours. Backups with compression are very CPU intensive, so they
> should be run on the standby db, not the primary one. However, the
> infrastructure you will need is rather significant. A 1 Gbit LAN will
> definitely not cut it. If you don't have such a wide pipe, you may
> consider putting a VTL on the DB server and connecting the disk library
> to the DB server with a 16 Gbit FC controller. The VTL will copy it to
> tape when the allocated disk library fills up. The Commvault software
> suite can do all of the above. However, you will still need good
> infrastructure.
>
>
>
>
>
> >
> >
> >
> > TIA
> > Sanjay
>
>
> Regards
>
> --
> Mladen Gogala
> Oracle DBA
> Tel: (347) 321-1217
>
>
>
>
>
>

--
http://www.freelists.org/webpage/oracle-l