Re: The Revenge of the Geeks

From: Arved Sandstrom <>
Date: Thu, 24 Jan 2013 19:31:21 -0400
Message-ID: <evjMs.58963$1l4.9593_at_newsfe29.iad>

On 01/24/2013 06:10 PM, BGB wrote:
[ SNIP ]
> errm, so you can't just copy all the files over to one's servers? and/or recompile the code for one's servers?...
>
> granted, dunno much about business systems, but I was under the impression that most were some combination of:
>
> rack mounts running Linux, typically with x86 CPUs, and with Gigabit Ethernet or 10GbE or similar linking them all together.
>
> one or more server computers in a desktop-like form factor, sometimes with multi-CPU boards, Xeon or Opteron chips, and craploads of RAM installed, and sometimes also in a LAN. AFAIK, Linux is also popular here (though I guess Windows XP, Windows Vista, and Windows Server also make an appearance).
>
> something more strange, like IBM mainframes or similar, where everyone uses them via funky multi-colored textual interfaces inside of a terminal emulator, ... pretty much everything I have read about them sounds strange.
>
> as for data sharing (between lots of networked servers), I am less sure; I would think maybe something like NFS or Samba, but then thinking of it, NFS or Samba might not scale well if the number of servers becomes sufficiently large (like, people would probably want to locally cache files, rather than always doing IO over the network, ...).
>
> I guess alternatively, an option could be a sort of centralized batch-push or batch-pull, where a daemon or similar is used to update all the servers, or something... (say, on a schedule, they pull from a Git or Hg repository or something...).
>
> but, in any case, people have probably figured out all this stuff already. otherwise, not entirely sure why developing for these would be all that much different than dealing with a normal PC or Linux box.
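The batch-pull scheme you describe is essentially cron-driven deployment, and yes, it's been figured out. A minimal sketch, assuming each server keeps a Git checkout of the deployable tree at a hypothetical /srv/app (path, script name, and schedule are all illustrative, not any particular shop's setup):

```python
# Minimal sketch of the batch-pull idea: each server periodically
# fast-forwards its local checkout from the central repository.
# A cron entry such as:
#   */15 * * * * /usr/bin/python3 /usr/local/bin/pull_latest.py
# would run it on a schedule.
import subprocess

DEPLOY_DIR = "/srv/app"  # hypothetical location of the local checkout

def pull_latest(deploy_dir: str = DEPLOY_DIR) -> int:
    """Fast-forward the checkout to the central repo; returns git's exit code."""
    result = subprocess.run(
        ["git", "-C", deploy_dir, "pull", "--ff-only"],  # refuse surprise merges
        capture_output=True, text=True,
    )
    return result.returncode
```

A push variant just inverts the direction: a central box loops over the server list and runs the same sort of command over ssh.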

"Server" - sometimes the actual computer/OS (real or virtual) running the "serving" application(s), sometimes the "serving" applications, sometimes the combination, sometimes a cluster of one or more of the above etc - is a role, not a technical specification. A "server" performs a function for client applications, that's basically all there is to it.

Which hardware/OS/application software configuration is the right one depends entirely on performance requirements, reliability, and the necessary quality of service. These days you can serve a pretty sizeable user base off a consumer tower or laptop, preferably running a "server"-variant OS, and this is often acceptable. A consumer-level computer today blows away the servers of not so long ago, so the kinds of things you mention above aren't what define "server": it's the function, not the form.

On the technical side, these days we're moving away from direct access to physical servers and storage; it's all VMs and private/public clouds. If you're administering VMs you'll certainly be aware of where your physical CPUs/cores are and which real devices hold your storage, but everything gets pooled and divvied up from there.

AHS
