Re: The Revenge of the Geeks
Date: Thu, 31 Jan 2013 02:22:18 -0600
Message-ID: <ked9nd$3sg$1_at_news.albasani.net>
On 1/30/2013 7:12 PM, Arne Vajhøj wrote:
> On 1/30/2013 4:22 AM, BGB wrote:
>> On 1/29/2013 9:05 PM, Arne Vajhøj wrote:
>>> On 1/27/2013 10:16 PM, BGB wrote:
>>>> On 1/27/2013 6:40 PM, Arne Vajhøj wrote:
>>>>> On 1/27/2013 1:47 PM, BGB wrote:
>>>>>> On 1/27/2013 5:46 AM, Arved Sandstrom wrote:
>>>>>>> Usually in the enterprise world you have little or no leeway as to
>>>>>>> how systems talk to each other. You may have a few options to choose
>>>>>>> from, but rolling your own is looked upon askance.
>>>>>>>
>>>>>>
>>>>>> well, this is where the whole "mandatory interop or orders from
>>>>>> above"
>>>>>> comes in. in such a case, people say what to do, and the
>>>>>> programmer is
>>>>>> expected to do so.
>>>>>>
>>>>>> but, I more meant for cases where a person has free say in the
>>>>>> matter.
>>>>>>
>>>>>> and, also, a person still may choose an existing option, even if bad,
>>>>>> because it is the least effort, or because it is still locally the
>>>>>> best
>>>>>> solution.
>>>>>>
>>>>>> like, rolling ones' own is not required, nor necessarily always the
>>>>>> best
>>>>>> option, but can't necessarily be summarily excluded simply for
>>>>>> sake of
>>>>>> "standards", as doing so may ultimately just make things worse
>>>>>> overall.
>>>>>
>>>>> It almost can.
>>>>>
>>>>> If you go non standard and problems arise, then you are in
>>>>> deep shit.
>>>>>
>>>>
>>>> depends on costs...
>>>>
>>>> if "liability" is involved, or the functioning of the software is
>>>> "mission critical" or something, then there is more reason for concern.
>>>>
>>>>
>>>> for many types of apps though, hardly anyone gives a crap how any of it
>>>> works internally anyways, and people can pretty much do whatever.
>>>>
>>>> (like, if it crashes or breaks violently, oh well, the user will start
>>>> it up again, and at worst probably the user will think less of the
>>>> product if it is too much of a buggy piece of crap, ...).
>>>
>>> Not everything is important.
>>>
>>> But best practices should be based on an assumption about it
>>> being important.
>>>
>>
>> could be, or it could be that the importance of the system is another
>> factor to be weighed in considering design choices (along with other
>> design-quality concerns, concerns over what other people will think, of
>> possible consequences, ...).
>>
>> like, if the importance is high, then choosing the most well proven
>> technologies is best, and if fairly low, then it may mostly boil down to
>> "whatever works".
>
> Not really.
>
> Unless there really is an advantage of going with the non-standard
> solution, then you would go for standard even for the less important
> stuff.
>
usually, the advantages show up in edge cases.
where a standard technology exists that does the job fairly well, it usually makes the most sense to use it.
for example, PNG and JPEG are pretty good, so there are few reasons not to use them (for most things image-storage related).
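like, a rough sketch from the Java side of how little friction the standard formats involve (just an illustration; the file names are placeholders):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class StdFormatsDemo {
    public static void main(String[] args) throws Exception {
        // build a trivial 2x2 image in memory
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFF0000);

        // writing/reading the standard formats is a one-liner each
        ImageIO.write(img, "png", new File("out.png"));
        ImageIO.write(img, "jpg", new File("out.jpg"));

        BufferedImage back = ImageIO.read(new File("out.png"));
        System.out.println(back.getWidth() + "x" + back.getHeight());
    }
}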
like, not being chained to standards does not mean making a standard of the non-standard, either.
where it makes more sense to ignore the standardized technologies is when they either aren't very good, or are notably bad (such that actually using them would likely leave the product worse off).
it is like trying to take VRML seriously as a 3D model format.
VRML is standardized, but its designers at the time seemingly managed to get nearly everything wrong. later they tried again with X3D, which sort of competes against COLLADA, which AFAICT is much more popular (despite X3D being shoved into a larger number of other standards, like HTML5 and MPEG-4...).
likewise for the OSI protocols: people were largely just like "whatever" and kept on using TCP/IP (the IETF largely won that battle).
never mind that, officially (as per the standards), JPEG was supposed to have been superseded by JPEG 2000 around 13 years ago, and more recently there is JPEG XR.
meanwhile, the original JPEG remains the better-supported format in most software (it is much easier to find apps which read/write JPEG images than JP2 or JXR images).
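as a quick illustration from Java (a sketch; the exact list depends on the JRE and whatever ImageIO plugins happen to be installed):

import java.util.Arrays;
import javax.imageio.ImageIO;

public class FormatSupportDemo {
    public static void main(String[] args) {
        // a stock JRE typically lists jpeg/png/gif/bmp here, but not
        // jpeg2000 or jxr; those need third-party ImageIO plugins
        System.out.println(Arrays.toString(ImageIO.getReaderFormatNames()));

        // probing for a specific format by name
        boolean hasJp2 = ImageIO.getImageReadersByFormatName("jpeg2000").hasNext();
        System.out.println("JPEG 2000 reader present: " + hasJp2);
    }
}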
...
sometimes, using a standard technology may actually make things worse in other ways.
for example, it is popular at present for people to make various file formats consisting of XML documents or similar packaged up into a ZIP-based container.
while this makes it easier to pass the format off as "open" or "standard", it comes with a drawback:
some applications are prone to detect the ZIP-related magic values and automatically change the file extension to .zip, which can prove rather annoying (whereas, if a non-ZIP container format were used, these tools would more often leave the file alone).
it also makes little sense if the intention is actually for the application to keep the data to itself, such as for a proprietary file format, where it may actually be to one's advantage if unaware parties (such as competitors, ...) have little idea what the file contains.
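for illustration, the detection these tools do often amounts to little more than sniffing the first few bytes (a sketch; the sample file name is made up):

import java.io.FileInputStream;
import java.io.IOException;

public class ZipSniffDemo {
    // a ZIP-based container (docx, odt, jar, ...) starts with "PK\x03\x04";
    // that is all a naive tool looks at before deciding it is "a ZIP file"
    static boolean looksLikeZip(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            byte[] magic = new byte[4];
            if (in.read(magic) != 4)
                return false;
            return magic[0] == 0x50 && magic[1] == 0x4B &&
                   magic[2] == 0x03 && magic[3] == 0x04;
        }
    }

    public static void main(String[] args) throws IOException {
        // "mydata.xyz" is a placeholder for some XML-in-ZIP style format
        System.out.println(looksLikeZip("mydata.xyz"));
    }
}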
>> so, a lot is about balancing relevant factors, because while being well
>> proven and standard technology is one thing, other factors, like
>> development time, and how well it holds up against the offerings by
>> ones' competitors, or how much end-user appeal there is, as well as
>> potentially the computational performance, ... may also be factors.
>
> They can.
>
> But the home-made solutions often promise to be better but rarely
> delivers.
>
I have generally been having good results with various custom-designed technologies.
but, this again comes back to cost/benefit tradeoffs: if the results of the choice don't pay off well, it means the person did not make a good choice, not that having had the option to make a choice was to blame.
like, having the freedom to make a choice does not mean freedom from the consequences of having made it.
having the freedom to make choices also means the freedom to shoot oneself in the foot.
sometimes, a simple direct solution can also be better than a bigger
"standard" solution, for example:
passing simple lists or arrays for internal messages, vs using DOM or
similar (a small sketch follows after this list);
using a HashMap or similar, vs using an RDBMS, to store key/value pairs
or similar;
passing plain data, rather than using RPC or similar;
...
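like, a minimal sketch of the plain-message idea (the message tags such as "moveTo" are made up for illustration):

import java.util.ArrayDeque;
import java.util.Queue;

public class PlainMessageDemo {
    // internal messages as plain arrays: element 0 is the tag, the rest are
    // arguments; no XML/DOM document, schema, or parser for an in-process message
    static Queue<Object[]> queue = new ArrayDeque<Object[]>();

    static void post(Object... msg) { queue.add(msg); }

    public static void main(String[] args) {
        post("moveTo", 3.0, 4.0);
        post("setName", "player1");

        Object[] m;
        while ((m = queue.poll()) != null)
            System.out.println("got message: " + m[0]);
    }
}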
like, a possible premise:
don't use a sledgehammer to do what can easily enough be done with a
tack-hammer.
like, even if the standard solution is to store data in a DBMS, a HashMap may be simpler, easier, and potentially significantly faster, ...
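for instance, a minimal sketch of the key/value case (purely illustrative, and in-memory only; persistence is often the real reason a DBMS gets pulled in):

import java.util.HashMap;
import java.util.Map;

public class KeyValueDemo {
    public static void main(String[] args) {
        // in-memory key/value store: no schema, no connection, no SQL
        Map<String, String> settings = new HashMap<String, String>();
        settings.put("screen.width", "1280");
        settings.put("screen.height", "720");

        // a lookup is a plain hash probe rather than a query round-trip
        String w = settings.get("screen.width");
        System.out.println("width=" + w);
    }
}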
BTW: I have been, mostly for the hell of it, in the process of porting my stuff to work on Native Client. most of the work thus far is in trying to migrate the stupid 3D renderer from full OpenGL to OpenGL ES.
to make this work, I am also having to go with a "hand-made solution": namely, migrating much of the code from using normal OpenGL calls to using a set of wrapper functions (which will then fake things and pass the results off to GLES). (too much of the code still relies on the existence of the "fixed-function pipeline", so for GLES it all needs to be faked...).
I guess the more standard solution here would be to rewrite all the code directly (rather than forcing it onto wrappers), but this would be more work.
but, then again, it is notably easier in this case to go with a non-standard technology (Native Client), even if largely tied to a single browser, than to go with a more standardized technology (IOW: trying to rewrite a 3D engine into HTML5+JS+WebGL to shove it into a browser).
granted, for targeting a browser up-front (writing a new engine ground-up or similar), the HTML5+JS+WebGL route could probably make more sense (at least assuming "general" things, like that the browsers are smart enough to know how to cache compiled code and similar...).