Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: native Oracle-port on Linux -- what would it take?

Re: native Oracle-port on Linux -- what would it take?

From: Eric Lee Green <e_l_green_at_hotmail.com>
Date: 1997/12/23
Message-ID: <D5864FBF358B6707.FC73AFE21FB317D6.ECC55C353273BE46@library-proxy.airnews.net>#1/1

On 21 Dec 1997 19:42:47 GMT, tip <tip_at_blahblah.com> wrote:
>there is NO WAY the pc hardware end of linux would be able to handle running
>oracle in any decent manner. plus its tweaking is rather limited compared to
>solaris and hpux. i have run oracle on both suns and hp boxes - there's just

Well, the latest SMP servers from Compaq and DEC look pretty good. RAID-5 SCSI, hot-swappable drives, multiply-redundant hot-swappable power supplies, what more could you want? The only thing that will down it is if the motherboard or CPU burns up (no recent OS, alas, is as reliable as the old Honeywell Multics, where a single processor going out just meant disabling that processor and not even having processes burp). I've seen one configured with 512mb of memory. Now, true, this isn't going to run a large business, but it will certainly run a small-to-midsize business without even breaking a sweat.

>no way linux is ready (yet) for oracle.

Well, I'd agree, to a certain extent. Oracle wants to do like they do with SCO -- put out a version and have it still running 5 years later without even a recompile. The Linux world is famous for shifting hither and yon. Heck, there's not even a stable libc yet.

But none of this has anything to do with whether Oracle can technically be run under Linux. The only current limitation is that it'd have to use raw partitions for its disk storage, since ext2fs caps file size at 2gb.
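Where that 2gb number comes from: the kernel tracks a file position as a signed 32-bit byte offset, so the largest addressable position is 2^31 - 1 bytes. A quick back-of-the-envelope check (illustrative arithmetic only, not Oracle or kernel code):

```python
# The ext2fs ceiling comes from a signed 32-bit file offset (off_t):
# the largest addressable byte position is 2**31 - 1, just under 2 GB.
# (Illustrative arithmetic only -- not Oracle or kernel code.)

MAX_SIGNED_32 = 2**31 - 1

print(MAX_SIGNED_32)             # 2147483647 bytes
print(MAX_SIGNED_32 / 2**30)     # just under 2.0 GB

# A raw partition sidesteps the limit entirely: the database addresses
# the device by block number instead of going through a file offset.
```

Which is why raw partitions work today even though ext2fs files top out at 2gb.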

>i currently run oracle 7.2.3 on hp9000/s800/k210's running hpux 10.10 with a
>gig of memory each. that's alright for running oracle.
>
>but imagine a pc with linux and a motherboard limitation of 128M, a 200Mhz
>processor... i don't think so...

Err, obviously you haven't checked out the "big iron" PC servers used for SCO Unix and Windows NT. Most of them don't SHIP with less than 128m of memory, much less have a LIMIT of 128m. In fact, SCO had to put out an OpenServer revision because they didn't properly support 1gb of memory (only 512mb) and somebody caught them on it. 200mhz processors? Well, the quad-Pentium Pro, certainly. But a quad Pentium Pro box has more CPU horsepower than the fancy hp9000/etc. boxes; it's the I/O that has traditionally been the bottleneck of PC-family computers. Still is, for that matter. But the I/O bottleneck has narrowed greatly with hardware RAID controllers with bus-mastering PCI.

>linux is good for being a workstation, or a small server - but the heavy duty
>shit - better leave it to superior hardware and os's.

Well, Linux also runs on that superior hardware :-). (See: Linux/SPARC, Linux/ALPHA, Linux/PowerPC). In fact, Linus Torvalds' only Intel-based PC is an old '386 he keeps around for nostalgic purposes -- he does all his own work on an Alpha running Red Hat Linux.

As for the "superior OS" bit, the biggest limitations in Linux are in the file system. The ext2fs is a good simple clean little file system, but not exactly featuresome. There's nothing stopping anybody from improving ext2 (create ext3?) or implementing a new file system -- Linux has supported multiple filesystems with ease since the Linux 1.1 days -- but nobody has done it. There's also various clunkiness at the OS level, but let's face it, all versions of Unix have accreted a bit of clunkiness here and there. Heck, there's still stuff in Unix that exists for no reason other than that K&R&P did it that way back in 1975.
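The "multiple filesystems with ease" bit works because each filesystem type registers itself behind one common interface (the VFS layer), so a new filesystem plugs in without touching the rest of the kernel. A toy sketch of the idea -- the names and structure here are invented for illustration, not actual kernel code:

```python
# Toy model of how one kernel supports many filesystems behind a single
# interface (the idea behind Linux's VFS layer). Everything here is
# invented for illustration -- not real kernel code.

filesystem_types = {}

def register_filesystem(name, mount_fn):
    """Each filesystem type registers itself under the common interface."""
    filesystem_types[name] = mount_fn

def mount(fstype, device):
    """The 'kernel' dispatches to whichever implementation registered."""
    if fstype not in filesystem_types:
        raise ValueError("unknown filesystem type: %s" % fstype)
    return filesystem_types[fstype](device)

# Adding a brand-new filesystem (say, a hypothetical improved ext3) is
# just another registration -- nothing else changes.
register_filesystem("ext2", lambda dev: "ext2 superblock read from %s" % dev)
register_filesystem("ext3", lambda dev: "ext3 superblock read from %s" % dev)

print(mount("ext3", "/dev/sda1"))
```

The point being: improving ext2 or writing an ext3 is purely an additive job, which is why "nobody has done it" is a manpower problem, not an architecture problem.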

Somebody brought up "well, it's not as tunable". Most of the tuning decisions of the past, such as how many buffers to allocate for i/o operations, have been taken over by the dynamic buffer cache. A dynamic buffer cache is self-tuning: it stays in sync with your usage patterns, while a statically-allocated buffer cache must be re-tuned every time you change your work load or usage patterns. The algorithms used for the dynamic buffer cache probably need a bit of tuning themselves, but there's nothing at all wrong with the idea of a dynamic buffer cache.
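The dynamic-vs-static distinction is easy to see with a textbook least-recently-used cache: whatever blocks the current workload touches are what stays cached, with no sizing knob to get wrong when the workload shifts. A minimal sketch (plain LRU for illustration; the real Linux buffer cache is more involved):

```python
from collections import OrderedDict

# Minimal LRU buffer cache. The "dynamic" part: the blocks the current
# workload touches are what stays resident -- no static allocation to
# revisit when the workload changes.
# (Textbook LRU for illustration; not the actual Linux buffer cache.)

class BufferCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # block number -> data

    def read(self, block_no):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)      # mark recently used
            return "hit"
        self.blocks[block_no] = "data-%d" % block_no
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)        # evict least recent
        return "miss"

cache = BufferCache(capacity=2)
print(cache.read(1))   # miss (cold cache)
print(cache.read(1))   # hit  (workload repeats, cache follows it)
print(cache.read(2))   # miss
print(cache.read(3))   # miss; block 1 evicted
print(cache.read(1))   # miss again -- workload shifted, cache adapted
```

Compare that to a statically-sized, statically-partitioned cache, where the eviction behavior after the workload shift is whatever the admin tuned in last quarter.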

(And do note that there's still a LOT you can tune about Linux -- it's just that, unlike with older Unixes, the default Linux values tend to be reasonable enough that nobody ever bothers).

-- 
Eric Lee Green   exec_at_softdisk.com          Executive Consultants
Systems Specialist                    Educational Administration Solutions
   You might be a redneck if you put on insect repellant prior to a date.
Received on Tue Dec 23 1997 - 00:00:00 CST

