Re: 1 TB _a day_ at CERN (was: 21 terabytes at NYNEX)
Date: 1996/05/25
Message-ID: <mjrDrynAI.Kr5_at_netcom.com>
In article <Drx6LM.985_at_unixhub.slac.stanford.edu>,
Ian A. MacGregor <ian_at_tethys.SLAC.Stanford.EDU> wrote:
>In article <mjrDrvvrB.EqG_at_netcom.com>, mjr_at_netcom.com (Mark Rosenbaum) writes:
>|> In article <Pine.BSI.3.91.960517164133.16987A-100000_at_cripp
>|> >On Sun, 12 May 1996, Mark Rosenbaum wrote:
>|>
>|> This sounds like it may be a tape library issue. I am assuming that CERN
>|> is NOT trying to put 150 PB online in the near future, or even 15 in the
>|> next 3 years. If that is the case then the data would reside on nearline
>|> tape robots and only the indexing info and currently used data would be
>|> on disk.
>|>
>|> So what is the disk requirement to go with the PB tape requirement?
>|>
>|>
>Unless things have changed over the past few months CERN expects to have 1 PB
>of data online; i.e., on disk.
1 PB is 10^15 bytes. Disk in that volume would cost at least $0.10 per 10^6 bytes, so 1 PB would run about $10^8, i.e. $100,000,000. Using 10 GB drives it would take 100K drives; with an MTBF of 200K hours you would see a drive failure every 2 hours. Maybe CERN is going to get 1 TB of disk, which is still a very large number for most organizations.
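The arithmetic above can be checked with a few lines (the $0.10/MB price and 200K-hour MTBF are the assumptions stated in the post, not measured figures):

```python
# Back-of-envelope check of the 1 PB disk estimate above.
PB = 10**15                  # 1 petabyte in bytes
cost_per_mb = 0.10           # assumed price: ~$0.10 per 10^6 bytes
total_cost = (PB // 10**6) * cost_per_mb
print(total_cost)            # 100000000.0 -> $100 million

drive_size = 10 * 10**9      # 10 GB drives
n_drives = PB // drive_size
print(n_drives)              # 100000 drives

mtbf_hours = 200_000         # assumed per-drive MTBF
hours_between_failures = mtbf_hours / n_drives
print(hours_between_failures)  # 2.0 -> one drive failure every 2 hours
```

With 100K identical drives, the fleet-wide failure rate is just the per-drive MTBF divided by the drive count, which is where the "failure every 2 hours" figure comes from.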
>The question still stands as to when Rdb,
>Oracle, or any of the multi-dimensional databases will be able to handle
>this much information. Hmmm... Perhaps Oracle has a patch :)
>
First off, Oracle is an RDBMS, not an MDDB. Second, it would be unusual to load raw experimental data into an RDBMS; what goes in is the important info, like the date and type of experiment, along with other useful metadata (sorry, I'm not a subatomic particle physicist, so I don't know all the relevant fields). Even managing just that important info may require a 64-bit OS and a 64-bit RDBMS. These are just now hitting the market (DEC has had a 64-bit OS for a while now, but the RDBMSs are only now coming out).
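A quick sketch of why 64-bit matters at these sizes: a signed 32-bit offset (the limit in most OSs and databases of the day) tops out around 2 GB, far short of even the 1 TB figure, let alone petabytes:

```python
# Addressable size with signed 32-bit vs 64-bit offsets.
max_32 = 2**31 - 1     # ~2.1 GB: ceiling for signed 32-bit file offsets
max_64 = 2**63 - 1     # ~9.2 EB: ceiling for signed 64-bit file offsets

TB = 10**12
PB = 10**15

print(max_32 < TB)         # True: even 1 TB blows past 32-bit offsets
print(max_64 > 150 * PB)   # True: 64-bit covers even 150 PB comfortably
```

So a database limited to 32-bit offsets can't even address a single terabyte-scale object, which is why the move to 64-bit OSs and RDBMSs is the prerequisite here.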