Re: Question on RMAN restore from tape

From: Andrea Monti <>
Date: Thu, 19 Dec 2019 11:29:15 +0100
Message-ID: <>

Leng is right, but be careful.

In my experience, RMAN never tries to read the same "tape" in parallel. This is wise, but it can limit your restore parallelism: you should double-check how much restore parallelism you can actually achieve, either by testing a restore or by checking the "media ids" of your tapes.
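As a rough sketch of the knobs Leng mentions below (channel names and sizes here are made up for illustration, not taken from Keith's environment), the backup could be taken with smaller sets so that restoring one archivelog no longer forces a 500 GB download:

```
# Hypothetical example: cap set size persistently, and/or force one
# archivelog per backup set at backup time. With FILESPERSET 1 a
# single-file restore only has to pull one small backup piece from
# object storage instead of the whole multi-log set.
RMAN> CONFIGURE MAXSETSIZE TO 32G;

RMAN> RUN {
2>      ALLOCATE CHANNEL t1 DEVICE TYPE SBT;
3>      BACKUP ARCHIVELOG ALL FILESPERSET 1;
4>      RELEASE CHANNEL t1;
5>    }
```

The trade-off is more backup pieces (and more per-piece overhead) in exchange for much cheaper selective restores.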


Il giorno mer 18 dic 2019 alle ore 20:07 Leng <> ha scritto:

> Hi Keith,
> You’ll need to play with filesperset, maxpiecesize or maxsetsize to get
> a size that works for you. The default will most often not be useful if
> you only want to restore a single small file from a large backup piece.
> Cheers,
> Leng
> > On 19 Dec 2019, at 5:01 am, Keith Moore <> wrote:
> >
> > I am working for a client that has an Exadata Cloud at customer. We
> just migrated a large database and I am setting up backups. The backups go
> to the Object Storage that is part of the Cloud at Customer environment and
> backups and restores are done through a tape interface.
> >
> > As part of the testing, I tried to restore a single 5 GB archivelog and
> eventually killed it after around 12 hours.
> >
> > After tracing and much back and forth with Oracle support, it was found
> that the issue is related to filesperset. The archivelog was part of a
> backup set with 45 archive logs that was around 500 GB in size. To restore
> the one archive log, the entire 500 GB has to be downloaded, throwing away
> what is not needed.
> >
> > The obvious solution is to reduce filesperset to a low number.
> >
> > But, my question for people with knowledge of other backup systems
> (hello Mladen) is whether this is normal. It is horribly inefficient for
> situations like this. Since object storage is “dumb”, maybe there is no
> other option but it seems like this should be filtered on the storage end
> rather than transferring everything over what is already a slow interface.
> >
> > Keith --

Received on Thu Dec 19 2019 - 11:29:15 CET