Re: Sorry, but...

From: Noons <wizofoz2k_at_gmail.com>
Date: Mon, 9 Jan 2012 19:28:17 -0800 (PST)
Message-ID: <8f4eb3b3-4179-4d6d-96af-617329d8c9e2_at_j9g2000vby.googlegroups.com>



I really shouldn't, but the amount of incorrectness, confusion and misinformation is staggering.
Are you sure you're not an "Ace"? You really sound like one...

On Jan 10, 1:45 pm, onedbguru <onedbg..._at_yahoo.com> wrote:

> "big". :) :) :) :)  Some of the big iron at the site in question have
> 3-4x that much.

Obviously a waste of resources and money.

> (Sun 6900 x 48 dual-core x192GB memory and more than 350TB of storage
> loading in excess of 1TB/day with thousands of mind-numbing decision
> support queries run daily. - and this was only one of 4500 database
> instances in that company.)

Funny. All that iron to do that? I do 8TB/day on a 30GB-SGA DW, on an IBM Power6 LPAR with 8 cores attached to it, and with plenty of capacity to spare. That's called V-A-L-U-E, as in bang-for-buck. As opposed to throwing hardware at a non-existent problem because the client can afford it.
That LPAR does, at the limit, 220,000 LIOPS per core. Did you measure that in any of yours? No? Ah well, what can I say... Any fool can claim infinite capacity when "infinite" has not been quantified anywhere...
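And for the record, by LIOPS I simply mean logical reads per second. A back-of-the-envelope way to measure it yourself (a rough sketch only; the cx_Oracle driver, the placeholder credentials and the 60-second sampling window are my assumptions, not anyone's exact method) is to sample 'session logical reads' from v$sysstat over an interval and divide by the elapsed seconds and cpu_count:

    # Sketch: estimate logical I/Os per second (LIOPS) per core on an Oracle
    # instance by sampling v$sysstat over an interval.
    # Assumes the cx_Oracle driver and a user with SELECT access to
    # v$sysstat / v$parameter; credentials and DSN below are placeholders.
    import time
    import cx_Oracle

    conn = cx_Oracle.connect("scott", "tiger", "dwhost/dwsvc")  # placeholders
    cur = conn.cursor()

    def logical_reads():
        # Cumulative count of logical reads since instance startup.
        cur.execute("SELECT value FROM v$sysstat "
                    "WHERE name = 'session logical reads'")
        return cur.fetchone()[0]

    cur.execute("SELECT value FROM v$parameter WHERE name = 'cpu_count'")
    cpu_count = int(cur.fetchone()[0])

    interval = 60  # seconds to sample
    start = logical_reads()
    time.sleep(interval)
    delta = logical_reads() - start

    print("LIOPS per core: %.0f" % (delta / float(interval) / cpu_count))

Run that under your peak workload and you have an actual number to argue with, instead of "big".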

> If shared servers actually functioned without having an instance in
> one "partition" affect all of the other "partitions", they might be
> more acceptable.

I was not talking about multi-instance shared server. There goes the total lack of understanding again, doesn't it? Ever tried to understand before jumping into "judgement day" mode? Could it be you have no clue what was being talked about in the first place?
Or would you rather we all addressed you as "bwana"?

> high workload capabilities at a time.  I think the DEC GS series came
> closest to achieving that in a hardware partitioned system.

Live and learn. IBM's Power6 walks all over anything ever done by DEC or Sun.
And they've got Power7s now...

> a joke - in all it's incarnations.  I would only trust VMware to house
> a database server if all they needed was something more than a
> spreadsheet accessed by only a few people at a time.

Funnily enough, we have all our MSSQL Server DBs (around 120 in production, across around 10 servers) on VMware, and they service around 3000 online users on a global HA SharePoint portal. I'm sure all they do is play with a single spreadsheet at a time...

> were doing and created the next "buzzword syndrome" stampede. And then
> those of us out in the trenches were left to try and "make it work"
> because the decision was made. Period. And the smart people in the
> room once again retorted, "Oh no, here we go again..."

Yup, pretty much what we saw with Oracle's VM stuff. That's why we dropped it and went with RHEV for Linux and VMware for Windoze and MSSQL, as well as IBM's own VIO for Power6.

> BTW, RAC is not new (not even in 2000-2001 when 9i RAC came out.)

It certainly isn't. My first Parallel Server installation dates back to Oracle release 6.2, in 1991. That was for an Oracle Financials installation at the Sydney Western Health area. Successful, as well. Worked like a charm. Back then. I wouldn't even contemplate doing that nowadays, RAC or non-RAC.

> DEC
> had "RAC" (shared/cluster database Rdb on VMS Clusters) as early as
> Rdb/Version 1 and VMS Version 4 (1984-ish) with clustering. In 1990,
> DEC's "cloud" computing capabilities were responsible for helping to
> factor Fermat's 9th Number (800+ vax servers on the DEC internal
> network where internal email showed > than 800 contributors, the white
> paper guestimated 700).  I know some MicroVAX II's (much slower than
> the MV3100's mentioned in the paper) that were used in the effort.
>

<yawn>

Another ex-Rdb "expert" telling us how good it was way back when. There is a reason DEC and Rdb have passed into history; deal with it. And it has nothing to do with the competence or otherwise of the folks using or not using it.

Received on Mon Jan 09 2012 - 21:28:17 CST
