Re: Oracle performance in high-volume data storage
Petri J. Riipinen wrote:
> Our requirements:
> - Constant database writing, rate being about 2.5k / second;
> this writing must not cause any blocking to the writer processes.
That should not be a problem, as long as you get the right hardware and settings.
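For a sustained load like that, the usual pattern is to reuse a single parsed INSERT with bind variables and to commit in batches rather than per row. A minimal sketch of the statement the writer program would issue, assuming the hypothetical measurements table outlined further down (the table and column names are illustrative, not from the original post):

    -- Issued repeatedly from the writer program (e.g. via OCI or Pro*C);
    -- bind variables avoid re-parsing each of the ~2,500 inserts per second.
    INSERT INTO measurements (sample_time, value01, value02 /* ... value20 */)
    VALUES (:sample_time, :value01, :value02 /* ... :value20 */);

    -- Commit once per batch of rows, not per row, so log-writer waits
    -- do not throttle the writer processes.
    COMMIT;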
> - Reading at the same time in bursts (when someone runs the analyzing tool).
That depends on your reading frequency and workload.
> - The data will be stored for about 2 weeks and then the data will
> be deleted every day. So the database will grow up to 3GB or so.
3 GB? No problem at all; a typical Oracle database runs to 100 GB, and some reach into the terabytes.
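Worth planning, though, is how the daily purge is done. A plain time-based DELETE works but generates rollback and redo in proportion to the rows removed; if the table is range-partitioned by day (available from Oracle 8 onward), dropping the oldest partition is much cheaper. A sketch against the same hypothetical measurements table (the partition name is illustrative):

    -- Simple purge: delete everything older than two weeks.
    DELETE FROM measurements
     WHERE sample_time < SYSDATE - 14;
    COMMIT;

    -- With one range partition per day (Oracle 8+), the purge becomes a
    -- quick dictionary operation instead:
    ALTER TABLE measurements DROP PARTITION p_19970331;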
> - The data contains a timestamp and about 20 numeric fields.
Why would that be a problem?
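A row holding one DATE and about 20 numeric columns is small, which is why neither the size nor the insert rate is alarming. A minimal sketch of such a table, with hypothetical names (Oracle's DATE type already stores time down to the second):

    CREATE TABLE measurements (
        sample_time  DATE    NOT NULL,  -- timestamp of the sample
        value01      NUMBER,            -- first of the ~20 numeric fields
        value02      NUMBER,
        -- ... value03 through value19 ...
        value20      NUMBER
    );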
> - The database is mission-critical and it must be on-line 24h.
24 hours a day, meaning you are targeting zero downtime? Then you need much more to consider: a backup and recovery strategy, redundancy, and so on.
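In practice, the first building block is running the instance in ARCHIVELOG mode, since that is what makes online ("hot") backups possible and so lets backups run without taking the database down. A sketch of the one-time switch, entered from Server Manager or SQL*Plus (it does require one brief restart; the exact procedure depends on your release):

    -- One-time change into ARCHIVELOG mode.
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    -- From then on, archived redo logs must be copied off and the archive
    -- destination watched, or a full destination will stall the writers.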
> - The queries will be made on the timestamp + a numeric field or two.
If the queries are made from a front-end tool, what is the problem?
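That said, to keep those timestamp-plus-column queries from scanning the whole table, the usual approach is a concatenated index with the timestamp leading; keep the number of indexes small, though, because every extra index adds work to each of those constant inserts. A sketch with the same hypothetical names (value01 stands in for whichever numeric field is usually filtered on):

    -- Leading on sample_time supports the date-range predicate; the second
    -- column covers the additional numeric filter.
    CREATE INDEX measurements_time_ix
        ON measurements (sample_time, value01);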
Received on Mon Apr 14 1997 - 00:00:00 CDT