Re: Scalable Performance - Inserts/Updates

From: Niall Litchfield <n-litchfield_at_audit-commission.gov.uk>
Date: Tue, 19 Dec 2000 12:53:28 -0000
Message-ID: <91nlof$6lk$1@soap.pipex.net>

Pedantically: not redo log group, redo log member. You will want at least three redo log groups. The point about hardware mirroring over software isn't changed, though.
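For example (file names and sizes here are hypothetical): software mirroring means two or more members per group, which hardware mirroring makes unnecessary, but you still want three or more groups so LGWR can switch into the next group while a full one is being archived.

  -- software mirroring: two members per group
  ALTER DATABASE ADD LOGFILE GROUP 3
    ('/u01/oradata/redo03a.log',
     '/u02/oradata/redo03b.log') SIZE 50M;

  -- or, with hardware mirroring, one member per group is plenty
  ALTER DATABASE ADD LOGFILE GROUP 3
    ('/u01/oradata/redo03.log') SIZE 50M;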

--
Niall Litchfield
Oracle DBA
Audit Commission UK
"Nuno Souto" <nsouto_at_nsw.bigpond.net.au.nospam> wrote in message
news:3a3f44db.7069533_at_news-server...

> On Mon, 18 Dec 2000 17:05:43 GMT, petergunn_at_hotmail.com wrote:
>
> >I have been involved on the edge of several real-time
> >projects in recent years which have had limited success
> >due to a requirement to store a transaction log in
> >Oracle (or Sybase). These systems would typically use
> >a single Solaris server with fast disks and lots of memory,
> >but would invariably grind to a halt at >200 individually
> >committed inserts/updates per second.
>
> Why are you doing a commit for every log entry? Just commit them in
> batches of, say, 200 and bingo: you can run thousands of them a second
> on a small system.
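>
> A minimal PL/SQL sketch of the batched-commit idea (the log table and
> its columns are made up for illustration):
>
>   -- assumes a table: log_entries (ts DATE, msg VARCHAR2(200))
>   DECLARE
>     batch_size CONSTANT PLS_INTEGER := 200;
>   BEGIN
>     FOR i IN 1 .. 10000 LOOP
>       INSERT INTO log_entries (ts, msg)
>       VALUES (SYSDATE, 'event ' || i);
>       -- Every commit forces a wait on the redo log sync, so
>       -- committing once per batch instead of once per row cuts
>       -- that wait by a factor of batch_size.
>       IF MOD(i, batch_size) = 0 THEN
>         COMMIT;
>       END IF;
>     END LOOP;
>     COMMIT;  -- flush the final partial batch
>   END;
>   /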
>
> >
> >An obvious and more successful approach would be to
> >decouple the DB from the messaging system and implement
> >queuing (or get middleware that already does that), but
> >it would be desirable to have a complete and up-to-date
> >DB rather than using kludgy workarounds like this.
>
> That works too, but I agree it's clumsy. If you can queue, then you
> can write more than one log entry per commit, so we're back to the
> above option, which solves your problem in one go.
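>
> If the queue is drained in application memory, PL/SQL bulk binding
> (FORALL, available since 8i) writes the whole batch in one round trip
> and one commit. A sketch, with hypothetical names:
>
>   DECLARE
>     TYPE t_msgs IS TABLE OF VARCHAR2(200);
>     msgs t_msgs := t_msgs();
>   BEGIN
>     -- stand-in for draining the real queue
>     FOR i IN 1 .. 500 LOOP
>       msgs.EXTEND;
>       msgs(msgs.COUNT) := 'queued event ' || i;
>     END LOOP;
>     -- one bulk insert and one commit for the whole batch
>     FORALL i IN 1 .. msgs.COUNT
>       INSERT INTO log_entries (ts, msg)
>       VALUES (SYSDATE, msgs(i));
>     COMMIT;
>   END;
>   /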
>
> >
> >If I wanted to implement an Oracle server that could
> >cope with 1000+ committed inserts/updates per second as
> >well as 1000+ simple indexed queries per second, what
> >sort of designs would be relevant?
> >
>
> No special design needed. 1000+ inserts per second can be done with
> just about any small system.
>
> 1000+ commits per second is a totally different animal!
> Just get the fastest disks you can get hold of, make your redo logs
> raw disk files or even cached memory files (using an EMC or similar)
> and fire away. Oh yeah: don't plan on making that server also run
> anything else. I know the word DEDICATED means something to you
> real-time people, so use it!
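>
> Placing redo logs on raw devices looks like this (device paths and
> sizes are hypothetical; use whatever your storage actually exposes):
>
>   ALTER DATABASE ADD LOGFILE GROUP 4
>     ('/dev/rdsk/c1t0d0s4') SIZE 100M;
>   ALTER DATABASE ADD LOGFILE GROUP 5
>     ('/dev/rdsk/c1t1d0s4') SIZE 100M;
>   ALTER DATABASE ADD LOGFILE GROUP 6
>     ('/dev/rdsk/c1t2d0s4') SIZE 100M;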
>
> Use more than one process to do this, preferably with one table per
> process and no indexes on the table. Or use INITRANS and FREELISTS to
> let one table cope efficiently with concurrent inserts.
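>
> A sketch of a log table set up for concurrent inserts (name, columns,
> and values are illustrative only):
>
>   CREATE TABLE log_entries (
>     ts   DATE,
>     msg  VARCHAR2(200)
>   )
>   INITRANS 8              -- slots for 8 concurrent transactions per block
>   STORAGE (FREELISTS 8);  -- separate free-block lists reduce contention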
>
> And if your DBA insists on having more than one redo log group, fire
> him/her! Mirror by hardware, not software!
>
>
> Cheers
> Nuno Souto
> nsouto_at_bigpond.net.au.nospam
> http://www.users.bigpond.net.au/the_Den/index.html
Received on Tue Dec 19 2000 - 06:53:28 CST
