Re: Database or store to handle 30 Mb/sec and 40,000 inserts/sec
Date: Sat, 11 Feb 2006 11:07:51 -0000
I read a lot of words but no substance there: you don't mention features, you don't go into detail, and you don't give links to back up your case.
For a fully documented reference implementation of how they did Barnes and Noble, go here: http://www.microsoft.com/sql/solutions/bi/projectreal.mspx. It contains a ton of white papers and architecture documents and is a very good source of material if you are starting to develop a big system. If you need more, just ask.
The design necessary to get SQL Server to handle TBs puts the onus on the database designer to get the database design and physical implementation right, and on your system engineer to get the hardware right; that is the same for ALL vendors. The days when you could brush SQL Server away as a developer tool are long gone.
DB2 was my foundation: 5 years of DB2 and the past 13 years Microsoft SQL Server. I have done a bit of Oracle, but nothing worth mentioning.
For ALL vendor databases, all it takes is one newbie writing a cross join between big tables and you suddenly have a load on your system.
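To see why one accidental cross join is enough to load any vendor's database, look at the cardinality arithmetic: with no join predicate the result is |A| x |B| rows. The table sizes below are illustrative assumptions, not figures from this thread.

```python
# Why a missing join predicate hurts: a cross join returns |A| * |B| rows,
# so even modestly sized tables explode into work the server must do.
# These row counts are made-up examples for illustration only.
orders = 2_000_000       # rows in a hypothetical fact table
customers = 500_000      # rows in a hypothetical dimension table

cross_join_rows = orders * customers   # no join predicate: Cartesian product
inner_join_rows = orders               # at most one matching customer per order

print(f"cross join:  {cross_join_rows:,} rows")   # 1,000,000,000,000
print(f"inner join: ~{inner_join_rows:,} rows")   # 2,000,000
```

A trillion intermediate rows from two tables that fit comfortably in memory is exactly the kind of "sudden load" a newbie query can create, on SQL Server, Oracle, or DB2 alike.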
I hear the MS bigotry often; my take is that you should use the right tool
for the right job, at the right cost, and not blindly lead yourselves down
alleys you can't get out of. I recall Oracle's pricing model and their
lock-you-into-their-software model.
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"Joel Garry" <joel-garry_at_home.com> wrote in message
Tony Rogerson wrote:
> Right then Joel, let's have a go then.
> What's your argument?
> Even a build-your-own box costing around £500 can deliver over 50 MBytes/
> second write and read speeds using Windows Server.
> Go for the 64 bit version and you can get quite a few GBytes of memory,
> entry level boards <£100 take 4GB of DDR.
> Remember, the poster said 30 megabits (which I read as MBytes) and 40,000
> per second; SQL Server will do that without problem.
> SQL Server will handle TB's too, if like with ALL vendor databases, you
> design it properly.
Well, here's where I disagree. The design necessary to get SQL Server to handle TB's of data along with random transactional queries puts the onus on programmers to do it right. All you need is one newbie and you are screwed. Unless you use the new feature that makes it work like Oracle.

So let's see: new unproven feature, or risk of manual error. New unproven feature, or risk of manual error. New unproven feature that probably has bugs (like with ALL vendor databases' new features), or near-100% chance of manual error. I'll pass.

Oracle handles MVCC right by default; Oracle environments have more problems with SQL Server people who haven't unlearned doing it wrong than with the actual native environment.

Then there's recovery. It may be a lot simpler your way, but simpler isn't necessarily better when you get to TB. Depends. There is no such thing as "without problem."

A few years ago I would have said Rdb, by the way, but Oracle has blown by it, even with some historical baggage.

Every couple of years I think to myself "self, MS has a new generation of stuff, let's give it a try." And every couple of years I discover all the things wrong, the hard way. And then I become an Oracle/unix bigot all over again. Perhaps it's because I started on similar hardware as Bill Gates and can't understand why he allows things to be so bad; he ought to know better. I'd think "Maybe he just forked off too early," but considering the previous experience of the NT team, that doesn't work.

jg

--
_at_home.com is bogus.
Culture Clash: http://www.businessweek.com/technology/content/feb2006/tc20060209_810527.htm