
See also: last week's Back page page.

Linux links of the week


Who's Who in Open Software is a new site which is putting together a complete database of contributors to free software projects. They're up to about 400 people now, and looking for more submissions. Have a look - are you listed?

Eric Raymond tells us he's been spending his vacation putting out a new version of the Jargon File. As he puts it: "A browse through the Jargon File is like a voyage of rediscovery. These are the Linux culture's roots."

Section Editor: Jon Corbet


April 15, 1999

Letters to the editor


Letters to the editor should be sent to editor@lwn.net. Preference will be given to letters which are short, to the point, and well written. If you want your email address "anti-spammed" in some way please be sure to let us know. We do not have a policy against anonymous letters, but we will be reluctant to include them.
 
   
Date: Thu, 08 Apr 1999 18:43:23 +0000
From: "Joshua M. Yelon" <jyelon@egenesis.com>
To: lwn@lwn.net
Subject: The so-called "failure" of Mozilla.

To those who feel that Mozilla's lateness constitutes failure, I would
like to present a different point of view, from the perspective of a
professional programmer.

Imagine taking a regular power drill, and trying to add an "electric
mixer" feature.  To do it, you'd have to build some sort of adapter that
adds a second chuck, for the second blade.  It might work, but the
adapter would probably stick out from the side of the drill.  It
would be lopsided and hard to handle.  Worse yet, you'd have to use a
chuck key to insert the mixer blades.  In short, it might work, but it
wouldn't be as good as using a mixer that was designed to be a mixer.

Software is in some ways similar: it's like an appliance, and then
adapters are added.  Consider Netscape.  It was originally designed to
display plain old HTML.  Then an adapter was added to handle forms, and
another adapter was added for tables, then one for secure http, and one
for frames, and one for reading news, and for streaming audio, and on,
and on.  It became like a power drill so covered with adapters that you
can't even see the power drill any more.

This process happens to all large software projects.  Users ask for new
features, and we have to tack on adapters to support those features. 
But eventually, there comes a time when the software is so covered with
adapters that you just have to overhaul the whole system.  Essentially,
you throw out the adapter-covered power drill and start over, inventing
a new super-appliance from scratch.  The result is a tool that does what
it was designed to do.  Tools that do what they were designed to do
always work better than adapters.

This is where managers come in.  The engineer says, "This drill with
adapters thing sucks.  Let's start over and make an appliance that does
what it's designed to do."  The manager says, "How long will it take?"
The engineer says, "One year."  The manager says, "No way.  We can't
afford a year of engineering and no new features.  Remember, our
competitors are going to release new features this month!"

This, I think, is what happened to Netscape.  Netscape 3.0 was
moderately buggy, and fairly large, which is a pretty good sign that it
contained too many adapters.  Netscape 4.0 was terribly buggy, and huge.
I suspect the problem is that the managers were unwilling to take the
time to do an overhaul.  They were under such pressure to compete with
Internet Explorer that they couldn't take a year off for maintenance.

Then, Netscape went open source, and the inevitable happened.  People
saw that it needed an overhaul, badly, and they did the overhaul.  As I
understand it, they completely rewrote the rendering code, and many
other parts of Netscape.  Yes, they knew it was going to take a long
time, but they also knew that in the long run, it was necessary.

What this shows is that the open source community has a longer-term
perspective than corporations do, and that the open-source community is
more motivated by quality than politics.  And for consumers, it means
that instead of getting a buggy browser now, they get a reliable browser
in one year.  To my mind, that's the right tradeoff.

- Josh
   
Date: Sat, 10 Apr 1999 12:45:17 +0800 (WST)
From: Greg Mildenhall <greg@networx.net.au>
To: derek@fortstar.demon.co.uk
Subject: The GNOME disaster.

Derek, I just read your letter to the LWN ed.

While I agree with most of your sentiments, I feel I should point out
that your suggestion that "steps need to be taken to push Red Hat
and the other distributors towards offering KDE by default" is a
little futile and counter-productive when you consider that it is
currently illegal to distribute KDE.

What is needed is for the GNOME developers to get their feet back on
the ground and offer a stripped-down core version of GNOME (yes, a
small, fast and stable one) by taking advantage of GNOME's beautifully
componentised architecture.

I feel they are being sidetracked by the bells and whistles and the
bold promises when they should be concentrating on getting the basics
right first. If you build a good infrastructure, every man and his dog
will want to do exterior decorating for you - there's no need at all
to develop vast amounts of add-ons until you've got a lean, mean
foundation on which to build them.

-Greg Mildenhall

   
Date: Tue, 13 Apr 1999 21:00:07 -0700
From: Tim Hanson <tjhanson@tscnet.com>
To: editor@lwn.net
Subject: Mindcraft

Naturally, any study which contradicts other speed comparisons
between Linux and Microsoft Windows NT bears some scrutiny, especially
when the "research" appears to have been commissioned by Microsoft.

The validity of any research is in question until someone else
duplicates the result given the same parameters.  Since Red Hat appears
to be the potentially most injured party, someone from that company
should offer to work with Mindcraft in an effort to duplicate the
results under pristine conditions: neutral participants (or at least
parties with opposing interests), off-the-shelf software, and identical
hardware, with the hardware traded halfway through the testing and the
software purchased from retail outlets unannounced.

We know from last month's DOJ fiasco that Microsoft is not above
falsifying tests, and we know they are not above using proxies to do
their FUDing for them.  Let's see if Mindcraft can put up, before
anyone here asks them to shut up.

Tim Hanson
   
Date: Wed, 14 Apr 1999 15:29:45 +0100
From: Phill Hugo <plh102@york.ac.uk>
To: lwn@lwn.net
Subject: An open offer

> Subject: An open offer
> Date: Wed, 14 Apr 1999 04:13:33 +0100 (BST)
> From: Phill Hugo <plh102@york.ac.uk>
> To: info@mindcraft.com, sales@mindcraft.com, jobs@mindcraft.com
> CC: malda@slashdot.org
> 
> Mindcraft,
> 
> I would like to offer my services. I am an adept Linux engineer and have
> deployed numerous servers based on Linux in past employment. While nothing
> as large as the systems in your benchmark tests, I do have an
> understanding of large systems ranging from theory to practice - we have
> many very large multiuser systems in the academic world.
> 
> I notice from your benchmarking report that none of your engineers seemed
> particularly adept at configuring Linux - the rather sad omission of the
> very well documented "mem=xxxxM" kernel parameter showed that quite
> clearly - something a great many new-user Linux FAQ sheets mention within
> the first 100 words.
> 
> So then, I would like to offer Mindcraft the use of my services. I offer
> to set up your server, the very same server you have reported on, using
> only the newsgroups and mailing lists you claim to have used yourselves as
> aid, and you can repeat the benchmarks - perhaps even publishing the
> results if Microsoft permit it.
> 
> I, and I am sure a great many others, would love to see a fair scientific
> test of the two systems pushed to their limits. I do not feel that this is
> what you have offered to date. I am sure you will view this chance to
> once again clarify your independent and honest position favourably.
> 
> I look forward to hearing from you.
> 
> Phill Hugo
> www.gnu.org/~phill
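
For readers who have not met it: the parameter in question is normally
written "mem=", and it tells the kernel how much RAM to use when the
BIOS under-reports it. A minimal sketch, assuming LILO and a
hypothetical machine with 1GB of RAM (the image name and the amount are
illustrative only):

    # /etc/lilo.conf fragment -- pass the memory size to the kernel
    image=/boot/vmlinuz
        label=linux
        append="mem=1024M"

After editing the file, run /sbin/lilo to rebuild the boot map; the same
string can also be typed by hand at the LILO boot prompt.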
   
Date: Tue, 13 Apr 1999 17:25:27 -0400
From: Lamar Owen <lamar.owen@wgcr.org>
Subject: Linux support -- kudos
To: editor@lwn.net

Having been a computer professional for over a decade now, I am
accustomed to customer support that tries to work around problems
instead of actually fixing them -- especially spurious, intermittent
problems that bother only my installation.

I was pleasantly surprised this week with the excellent support that the
Linux kernel developers (in particular, Alan Cox) can provide.

First, a backgrounder on my installation and my problem: I am an engineer
with WGCR radio in Pisgah Forest, North Carolina, USA.  On May 1, 1997,
we joined the ranks of many other radio stations and opened a web site
offering streaming audio of our live radio signal.  Bucking the
conventional wisdom of the time, I selected RedHat Linux 4.1 as the
operating system for the Pentium Pro server I was building.  The
RealAudio Server was (and is) available for Linux -- in fact, the
availability of the RealAudio server was a primary point for my OS
selection.

I selected the high-performance multithreaded AOLserver webserver in
lieu of Apache due to AOLserver's peerless database integration, and
selected PostgreSQL 6 for my RDBMS.

I immediately found Linux to be highly stable and uniquely reliable. 
Through several kernel upgrades (and even a break-in via the BIND
inverse query overrun), Linux continually proved itself worthy -- it was
even pleasant to see Linux finally getting the press it deserves.

Well, a month ago I did yet another needed kernel upgrade as part of a
major system update -- RedHat 5.0 to Mandrake 5.3.  The upgrade went
smoothly -- 30 minutes after downing the server, it was back up and
fully functional.  I was elated -- which was to be a short-lived
feeling.  A short 90 hours after bootup, the sound card driver died a
horrible screaming death -- DMA errors.  This on a machine that had
routinely stayed up 80-90 days before.  I was stumped -- I rebooted the
machine, and waited to see what would happen.

85 hours after reboot, horrible screaming death.  Hmmmm....  It was a
sound card issue, so, I contacted Mr. Sound Card -- Alan Cox --
directly.  He suggested building a plain 2.0.36 kernel, without the
RedHat modular sound drivers, and seeing what that did. 

So, I did.  In the process, I received a considerable education in
building kernels and moving them around -- anyone who has tried to get
two version 2.0.36 kernels to coexist on the same box knows the feeling
-- since I was doing the build on one machine and ftp'ing the kernel and
its modules over to the production machine, which has no compilers for
security reasons.
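
For those who have not done it, the 2.0-era build-and-transfer cycle
looks roughly like this; a sketch assuming the source tree lives in
/usr/src/linux on the build box (the paths and transfer details are
illustrative only):

    # On the build machine, in /usr/src/linux:
    make menuconfig    # select drivers; leave the suspect sound module out
    make dep clean     # rebuild dependencies after the config change
    make zImage        # kernel image lands in arch/i386/boot/zImage
    make modules modules_install  # modules land in /lib/modules/2.0.36/

    # Ship arch/i386/boot/zImage and a tarball of /lib/modules/2.0.36 to
    # the production box by ftp, update /etc/lilo.conf there, and run
    # /sbin/lilo before rebooting.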

To make a long story short, I got a vanilla 2.0.36 kernel built with the
proper configuration and got it booted.  95 hours later, the sound
driver is still humming.  So, I e-mailed Alan and let him know.  Only
then did I find out that, as far as he is aware, I am the only person on
the planet who has this issue on a repeatable basis -- and he's been
corresponding via e-mail with me with an average latency of less than
twenty minutes!

To say that I'm impressed would be an understatement of Biblical
proportions.  This is the best technical support I have experienced --
and, having administered a large LAN/WAN at a major US corporation, I
have experienced some shoddy tech support from tier-one vendors.

As busy as he is, he still took the time to help -- and to do so without
any "attitude" like some other vendors show. This, to me, is the true
spirit of the Free Software movement. Kudos!

Lamar Owen
WGCR Internet Radio
   
Date: Wed, 14 Apr 1999 09:23:47 -0500
From: Hajo Smulders <hajo@mindspring.com>
To: lwn@lwn.net
Subject: Some comments on NT vs. Linux

	First of all: I am NOT an NT hater. I use NT on a daily basis
for development. Some of my favorite tools (Delphi, C++ Builder) only
run on MS platforms. I get a Blue Screen Of Death about twice a week,
but that is usually because of my own stupid programming. Also, since
this is my development box, a BSOD is not that annoying; I just
reboot. NT tends to do a disk check and restart without hiccups 9 out
of 10 times. I have a lot of good software on Windows (Linux is
getting better, but Applixware is no MS Office, Blender is no
Lightwave, the Gimp is no Photoshop, and command-line compile tools are
kind of a pain if you are used to Borland's IDEs).
	My major gripe with NT is that whenever I change something
like an IP address, a routing configuration, a binding, etc., I have
to reboot.  This is ridiculous! However, it is not by itself a reason
to dismiss NT.  One more thing I am very grateful to MS for is a
standard. As a teenage programmer I remember writing a Joust-like game
and having to completely rewrite it for the Sinclair, VIC, Commodore,
Atari, and later the Atari ST, Amiga, etc.
	Now for my gripes with the testing. In our own tests of NT
vs. Linux on a mission-critical system for a financial start-up, NT
scored lower than Linux on all aspects of performance. We do not use
an intermediary such as Apache or do any file serving; we work purely
with passing objects through sockets, within a CORBA framework. Our
system consists of a back-end database server running Linux (Solaris
was considered, but rejected for financial reasons; we do hope to
acquire a Sun Starfire once we get some cash flow, and porting from
Linux should be trivial). Hooked into the back-end server we run three
groups of application servers. These are clustered, load-balanced, and
made redundant through Application Management servers.
	Running the back-end on an NT machine was never considered. We
foresee up to 5 million transactions a day within 2 years, generated
by up to 10,000 concurrent users, and we do not consider NT to have
the scalability of Unix systems. One other operating system we
considered was OS/400 from IBM. The reasons we chose against it were
that the initial cash outlay was greater than for a Unix-based system
and that it was harder to scale piece by piece. Also, development
experience with advanced technology (clustering, parallel
processing...) tends to be easier to find in the Unix community. The
reasons we settled on Linux instead of Solaris for x86 processors were
wider hardware support, faster development from the community
regarding tools, good availability of back-end databases (Oracle,
Informix, DB2...), and a very intelligent and talented user/developer
base. (Note: the fact that it is free was not an issue; on a project
such as this, the few thousand dollars you save on an OS is a pittance
compared with the costs arising from support, problems scaling up, and
so on.)
	For the mid-end we did evaluations of NT4 and NT5 (beta)
vs. Linux and Solaris. Our Linux kernel is a stripped-down 2.2.4
kernel with TCP/IP, CORBA libraries, SMP support, and terminal-only
operation. The loaded OS takes less than 2 megs of memory. On NT we
couldn't do that sort of thing...  Our application servers have 256MB
of RAM in them. (I do not believe that Linux currently supports more
than 1G of memory; maybe that flawed the comparison article...)
	In tests of prototype applications written in both Eiffel and
C++, Linux and Solaris were faster by an average of 162% (throughput
as well as raw calculation scores combined). NT did perform an order
of magnitude better running Java applications; we believe that the MS
JIT is responsible for that. Our main complaint against NT was that
both versions, when overloaded on purpose, crashed! The Unix-based
systems would slow to a crawl, but not crash.
	Since our application servers run as a clustered pool, Unix
suits us better than NT (clustering in NT5 is problematic, to state it
mildly). There is more knowledge of working in parallel with clusters
of workstations for Unix than there is for NT. (Beowulf, anyone?)
	Finally, and this is not a technical issue: the availability
of source is very important to us. It allowed us to strip unneeded
parts from the kernel (what's not in there can't bug you!). It allows
us an upgrade path incomparable to any closed-source OS, and it allows
us to build functions directly into the kernel. (Fast CORBA/network
I/O, highly optimized for our objects and packets, is being considered
at the moment.)
	I was highly surprised by the findings of this study. I have
heard a lot of legitimate complaints about Linux (I have some myself,
as stated earlier), but performance or stability has NEVER been one of
them. Is Linux still limited by the 1G memory barrier? If so, you
would have your answer right there.

	Hajo Smulders, CTO Instatrade
Eklektix, Inc. Linux powered! Copyright © 1999 Eklektix, Inc., all rights reserved
Linux ® is a registered trademark of Linus Torvalds