
See also: last week's Back page page.

Linux Links of the Week


Atheos is a free operating system for Intel boxes. It has its own kernel, written from scratch, and its own window system. It is POSIX enough to run bash, but the window system is not compatible with X. It's intended to be a desktop system; worth a look if Linux is getting old and boring.

[TUX] "The face of the world has changed in 2076. Go figure. The evil multi-national corporate conglomerate empire MegaSoft® reached out and nuked someone (just about everyone, as it turns out) in 2023 with a nuclear-capable e-mail virus, designed to stop anti-trust suits. It worked, really, really well. Evil wins." Into this grim situation steps TermUnitX (TUX) to save the day. It is, of course, an online comic book, and it's wild. Worth a look.

Section Editor: Jon Corbet


October 12, 2000

This week in history


Two years ago (October 15, 1998 LWN): The word went around that Oracle was about to launch its own Linux distribution. Two years later, one can probably say that the rumor has not stood the test of time.

If Microsoft could crush us, it would already have done so. It is now several months too late for them to succeed.

Their window began to close when the first of the enterprise database announcements hit the streets. With Oracle's announcement of a bundled, supported, Oracle-over-Linux combination on CD-ROM offering the 24/7 reliability unattainable with NT, it has effectively slammed shut. [...]

Not only can't they crush us, but it will take a reversal of present trends for them to avoid a collapse into irrelevance within eighteen months.

-- Eric Raymond, in LWN's "Letters to the Editor" column.

Well, it seemed that way at the time...

Larry Wall was the recipient of the first Free Software Foundation Award.

The development kernel was 2.1.125; Linus announced that the last of the showstopper bugs had been fixed, and that it was about time to move into the pre-2.2 series. Meanwhile, one kernel hacker decided to go looking for foul language in the kernel source, and was not pleased with the results. We posted the resulting linux-kernel posting with a warning that it wasn't for the easily offended; it was one of the most popular files we have ever put up.

And no, the kernel source has not gotten any cleaner, at least not in the comments. User-visible output is held to a "suitable for children" standard, but comments in the source itself are unregulated...

One year ago (October 14, 1999 LWN): TurboLinux racked up its first big round of equity financing. Longstanding retailer LinuxMall.com also pulled in a sizeable investment from SCO, which was clearly beginning to realize that it needed to take Linux more seriously. Both of these investments were announced at the Atlanta Linux Showcase, which was underway.

Mr. Miller says that about 40 investors have approached Turbolinux, offering a total of nearly $200 million in potential funding. A lot has changed since Mr. Miller and his wife founded Turbolinux seven years ago.
-- The Red Herring, October 11, 1999

OpenSSH 1.0 was released; it was the first free ssh release in a very long time.

VA Linux Systems, O'Reilly & Associates, and SGI announced plans to produce a commercial, boxed version of the Debian distribution.

VA also filed for its initial public offering of stock, setting in motion what was to be the most spectacular IPO of the year.

In making such a bold move (Solaris is their core product) Sun is embracing everything that has made the Open Source movement such a success. Everything, that is, except that bit about opening up their source code.
-- Feed Magazine was unimpressed by the Solaris code "release."

Letters to the editor


Letters to the editor should be sent to letters@lwn.net. Preference will be given to letters which are short, to the point, and well written. If you want your email address "anti-spammed" in some way please be sure to let us know. We do not have a policy against anonymous letters, but we will be reluctant to include them.
 
   
Date: Thu, 5 Oct 2000 13:24:37 -0400
From: Eric Kidd <eric.kidd@pobox.com>
To: letters@lwn.net
Subject: Source Forge concerns

I'm the lead developer of a SourceForge-hosted project, and I have two
concerns about the site.

* Silly ranking schemes

SourceForge now uses the Advogato trust metric to assign developer
rankings. Unfortunately, the Advogato trust metric is fairly broken, even
on Advogato. (I say this as someone who has undeserved "Master"
credentials, and would prefer to be a mere "Journeyer".)

Quite frankly, the SourceForge implementation of this idea is tacky and
juvenile. I'd prefer to opt out of the ranking systems, and just use
SourceForge as a development tool.

But if I can't do that, I'd like to take my projects elsewhere. Which
brings me to my next concern...

* Exporting

I can't export my project (at least not easily). I can export the CVS
repository, the web site, and some of the data. But there's no obvious way
to dump the bug tracker, forums, etc.

So even though the SourceForge code is available, and you can run it on
your own server, there's no obvious way to move a SourceForge project
elsewhere without losing data.

How to fix it: A nice, big "Export Project as XML" button on the project
administrator screen would make me sleep better at night. ;-)

It's not that I don't trust the SourceForge folks--they're remarkably
helpful--it's just that hosting a project on somebody else's servers
requires an extraordinary amount of trust.

Cheers,
Eric
   
Date: Tue, 10 Oct 2000 14:40:53 -0400
From: "Jay R. Ashworth" <jra@baylink.com>
To: letters@lwn.net
Subject: GCC/RedHat Imbroglio and Version Numbering

In this week's Daily News, an item was posted wherein the GCC steering
committee effectively says "you shouldn't have included 2.96 in any product
release; that's not a real production release."

I have no particular sympathy for their complaint, frankly; that's what they
get for using a "real" release number.  This is a topic on which I've
ranted before; had they called that GCC 3.0alpha1, everyone would have
known better, I think.  I hope...

Cheers,
-- jra
-- 
Jay R. Ashworth                                                jra@baylink.com
Member of the Technical Staff     Baylink
The Suncoast Freenet         The Things I Think
Tampa Bay, Florida     http://baylink.pitas.com                +1 727 804 5015

   
Date: Thu, 05 Oct 2000 06:46:24 -0400 (EDT)
From: Arlen Carlson <adcarlso@visinet.ca>
To: letters@lwn.net
Subject: Commercial "Debian"-style security

In this week's (Oct. 5) LWN, it seems that there is disappointment that the
commercial companies have not filled in the Debian security support gap...
However, it should be noted that the commercial companies point their security
updates to the Debian mirrors.  Thus the apparent lack of "security patches"
on their own servers.

There is no real lack of interest on the part of the commercial
companies...they've just chosen to go the "Debian way".

-----------------------------------
Arlen Carlson <adcarlson@iname.com>

"Love is an ideal thing, marriage a real thing; a confusion of the real with 
the ideal never goes unpunished."
                -- Goethe


This message was sent by XFmail (Linux)

                        -o)
                        /\\
                       _\_v

The penguins are coming...
         the penguins are coming...
-----------------------------------
   
To: Branden Robinson <branden@debian.org>
Subject: Re: Outrage at Debian dropping security for 2.1
Date: Thu, 05 Oct 2000 13:31:39 +0100
From: Tethys <tet@isengard.europe.dg.com>


Branden Robinson writes:

> Does Mr. Peacock expect Debian to provide security updates for Debian
> 2.0, 1.3, 1.2, or 1.1?  Does he expect, say, Red Hat, to provide security
> updates for 6.0?  How about 5.0?  4.2?  1.0?

I can't speak for Mr. Peacock, but yes, *I* expect security updates
for non-current versions of an OS. Fortunately, Red Hat does provide
them, and currently supports its 5.x, 6.x and 7.x releases.

As pointed out on LWN's front page this week, administrators are
reluctant to upgrade an OS that's working well, when a smaller
security update would do just as well. Until Debian realise this,
their distribution will never gain widespread acceptance in
commercial environments. Maybe that's not one of their goals,
but it's something they currently don't seem to be aware of.

Tet
   
Date: Thu, 5 Oct 2000 09:22:30 +0200
From: Marko Schulz <in6x059@public.uni-hamburg.de>
To: letters@lwn.net
Subject: Debian drops slink security updates

It makes me angry and sad when people are attacked because they speak
the truth.

When the previous version of Debian (2.1, aka slink) was released, the
very next security update didn't even mention the version before that
(2.0, aka hamm). Now the Debian folks announce a published deadline, and
they get slammed for it. Had they dropped updates for slink silently,
nobody would have cried.

If one wants a secure system, one has to stay mildly current. The worst
bugs still get eliminated from the 2.0.x kernels, but others remain that
just won't be removed, because fixing them would take too much effort. I
expect the same for old versions of distributions, whether they are
called SuSE, Red Hat, or Mandrake. I too don't follow the newest-version
craze and stay with older versions for quite some time, but I wouldn't
rely on them being particularly secure.

-- 
marko schulz

 "Sind Comics Kunst?"    "Ist doch scheißegal!"
                        Stefan Dinters Antwort auf eine Podiumsfrage,
                        Comicsalon 1997 in Hamburg
   
Date: Fri, 6 Oct 2000 15:26:38 -0400
To: letters@lwn.net
Subject: Upgrading Debian
From: Zygo Blaxell <zblaxell@feedme.hungrycats.org>

>apt-get update
>apt-get dist-upgrade
>apt-get clean

The Debian Mantra.  ;-)

I'm a Debian advocate.  I use Debian on all of my Linux systems at home,
and I've successfully introduced it as an upgrade path at work at two
companies--there will be no more new Red Hat systems, and the old
ones will be replaced rather than upgraded.  

Frankly, the reason why I advocate Debian (stable) as my first choice,
and Debian (unstable) as my second choice, is because of the nice
semi-automated update mechanism, and because of the half-legion of
developers behind it.  On my own desktop and laptop systems, where the
entire user population (i.e. me and my spouse) has eight years of Linux
development experience combined with root access and a bootable rescue
CD, Debian is close to perfect.  On mission-critical systems, Debian's
'stable' distribution with daily upgrades to keep current with security
patches is definitely the way to go.

That said, even I, a rabid Debian fan and developer wanna-be, who runs
dist-upgrade in parallel on dozens of machines at a time every day, do
NOT blindly run dist-upgrade on the day after a Debian release without
testing it on non-critical machines first!

The first problem with this idea is that the Debian FTP archive layout
changed in the last release, so you can't get past 'apt-get upgrade'
without manual intervention if you're using cryptographically-enhanced
packages from non-US (and who isn't, really?).

Changes to NSS in glibc alone are enough to cause serious, widespread,
and downright weird problems during and after the upgrade.  Any package
that relies on a lot of shared library components is going to be
confused for several minutes, and the ones that aren't confused are
likely to be simply unavailable during that time.

Daemons are problematic--some will stop at some arbitrary point during
the upgrade and restart at some arbitrary point after, others will
continue to function throughout the upgrade, and some will fail in
unpredictable ways depending on the exact timing of races between
external user access and dpkg's manipulation of the filesystem.  Some
packages rely on scripts to rewrite configuration files as the programs
that use them change syntax--we can only hope that those scripts
preserve exactly the semantics of the old configuration files.

dpkg itself has been known to crash during large upgrades--especially
upgrades that affect itself or its own dependencies--and dpkg or apt-get
sometimes crash while their own dependencies are not satisfied, which
means you can't use either of these fine tools to clean up the mess
afterwards.

Hopefully, you won't be forced to recover the system using 'ar', 'zcat',
and 'tar'.  If you're smart, you install 'sash', and avoid several
failure modes that might result in having to dig out the boot floppies.

Upgrading Debian is a whole lot easier than upgrading other distributions,
but it is by no means perfect.

And so far we're only talking about Debian packages, not any third-party
or local packages that might have been installed outside of the Debian
package management system.  apt-get does not consider at all the
possibility of breaking installed packages that it doesn't know about
(how can it?), and will happily break them.  Even third-party packages
packaged as .deb files sometimes have useless or missing Depends: fields,
which effectively makes them invisible to apt-get.

This kind of widespread system reconfiguration can't be bug-free.
It may, in fact, be less prone to failure than back-porting many security
patches at the source level, but in practice there are a tiny number
of security-related patches compared to the total number of revisions
between stable releases, so the trade-off doesn't pay off.

A single back-ported security patch is a single, localized change,
designed to fix a single, specific problem--as a rule of thumb, there's
roughly an 85% chance of doing it successfully.  A full dist-upgrade
rarely leaves any installed non-documentation packages untouched--the
probability of making hundreds of changes, each at 85% probability of
success, without making any mistakes, is left as an exercise to the reader.
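
A minimal C sketch of that exercise, assuming the individual changes are
independent, shows how quickly the odds collapse:

   #include <stdio.h>
   #include <math.h>

   /* Chance that N independent changes, each succeeding with
    * probability 0.85, all succeed: 0.85^N.  Illustrative only. */
   int main(void)
   {
           int n;
           for (n = 1; n <= 100; n *= 10)   /* 1, 10, 100 */
                   printf("%3d changes: %g chance of a flawless run\n",
                          n, pow(0.85, n));
           return 0;
   }

(One hundred such changes leave less than a one-in-ten-million chance of
a perfect upgrade.)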

   
Date: Sat, 7 Oct 2000 14:11:54 -0700 (PDT)
From: Patrick Ennis <DzuSwei@excite.com>
To: lwn@lwn.nwt, letters@lwn.net
Subject: For shame!

Dear Sirs,
Please do not malign the good folks at Libranet. They make it VERY CLEAR
that this is a DEBIAN distro, only compartmentalized to make it more
accessible to those of us who aren't 24-hour users. Please make it clear
that Libranet is simply making the fine Debian distro more user-friendly and
accessible; they aren't yet a true distro in their own right. And so any
user updates are through APT, just like Debian... because it IS Debian. To
portray the fine folks at Libranet as being unconscientious is simply a
complete fallacy on your part. Their support is both the best, and the
quickest, of any Linux outfit. Period. The folks at Libranet are, quite
simply, everything Linux should be! They love Linux, give it to anyone who
asks, and make it known to any who BOTHER TO ASK. And to be quite honest,
they are the only one of the four 'major' Canadian distros (Corel, Stormix,
MaxOS, and Libranet) that treats the user as a thinking human being, and
gives them the option of either mindlessly installing Linux or masterfully
guiding it onto their computer to the Nth degree. Who else lets you do this?
Like this? To this degree? NO ONE! So please, valued Sirs, try Libranet
before you malign it so easily. In my opinion, it is Linux as Linus meant it
to be! If there is a fault, which I question, it is with the folks at
Debian.org themselves. And even then, a simple run of APT will plug the gap.
In short, if any Libranet user is concerned about any of their update
needs, they need only run APT to get the 'latest and greatest'. Do not
malign the folks at Libranet. They do more, better, for free, than anyone
else in Linux.

Thank you,

Patrick Ennis

Dzuswei@excite.com

   
Date: Wed, 11 Oct 2000 14:26:55 -0500
From: Dub Dublin <dub@infowave.com>
To: letters@lwn.net
Subject: Electronic, not digital signatures - there's a difference

Your report last week of digital signatures becoming law is inaccurate.
As I understand it (not a lawyer and all that), what became law on
October 1st was electronic signatures, not digital signatures.  There's
a very important difference:  electronic signatures are used to make
electronic contracts enforceable, like click-through license agreements
and online purchasing or services agreements.  Unlike digital signatures
(which rely on some sort of cryptographic method of providing
authentication, non-repudiation, and content integrity), electronic
signatures are simply an entry in a database somewhere - but with this
law, that database now has the full force of a paper signature,
regardless of its own accuracy or security.  (This may well turn out to
have far larger implications for online rights than DMCA or UCITA ever
could.)

This is a crucial difference, and the reason that electronic signatures
were opposed by some consumer advocates and the handful of congressmen
who bothered to read and understand the bill.  It boggles the mind that
this legislation passed 426-4.  (Three Republicans and only a single
Democrat voted against it, about the typical ratio for privacy issues,
but a very poor turnout.)  To be fair, the bill does provide for
informing customers of paper alternatives (if any), and contains some
other notification provisions as well, but these are obviously far, far,
short of the protection that would be provided by a true digital
signature, even one based on questionable cryptographic methods.

In short, there's a big difference between electronic and digital
signatures, and we need to use the correct terminology in both
discussing the issue and in framing a response.

Dub Dublin



   
To: letters@lwn.net
Subject: ECN
From: Graham Murray <graham@webwayone.co.uk>
Date: 05 Oct 2000 07:00:30 +0000

ECN, while still new and experimental, has the potential to be a very
useful protocol: by greatly reducing the number of resent packets, it
could improve bandwidth utilisation, to everyone's advantage. However,
to do this it needs to be widely accepted and implemented. That will not
happen while high-profile sites reject connections which indicate that
they are willing to use ECN.

So, I think it is a pity that we are being forced to disable it in
order to communicate with certain sites. While I accept that this is,
at least in the short-term, necessary, I think that we should also be
informing the "offending" sites of the error of their ways. 
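
For those who do need to talk to such sites today, a minimal C sketch of
the usual workaround, assuming a kernel built with ECN support (the
sysctl lives at /proc/sys/net/ipv4/tcp_ecn):

   #include <stdio.h>

   /* Turn off TCP ECN negotiation on a Linux kernel that has it. */
   int main(void)
   {
           FILE *f = fopen("/proc/sys/net/ipv4/tcp_ecn", "w");
           if (f == NULL) {
                   perror("/proc/sys/net/ipv4/tcp_ecn");
                   return 1;
           }
           fputs("0\n", f);
           return fclose(f) ? 1 : 0;
   }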
   
Date: 10 Oct 2000 23:46:55 -0000
From: Eric Smith <eric@brouhaha.com>
To: letters@lwn.net
Subject: Synopsys on Linux

Gentlemen,

On October 10 you reported on Synopsys making more of their ASIC tools
available on Linux.  You referenced an EE Times article in which Gary
Smith, chief EDA analyst at Dataquest, said "64-bit Linux isn't yet
available".  Gary needs a wakeup call.  64-bit Linux has been available
for *years* on Alpha platforms, and is also available on Sparc and MIPS
platforms.  In fact, it's even available for Intel's upcoming IA-64
chips (e.g., Merced), despite the fact that the chips aren't even
available.

However, I hope that Mr. Smith is correct in his statement that "Linux
is knocking NT out of the design world".  I've seen numerous cases of
companies trying to do EDA (or any kind of engineering) on Windows NT
boxes, largely based on false Total Cost of Ownership claims by
Microsoft, and discovering the hard way that Windows NT is *abysmal* as
an engineering platform.

In the same article, Mike Glenn of Avanti Corp. says of Linux that "it
won't replace the Unix environment".  Mr. Glenn needs to wake up and
smell the coffee.  More and more companies are discovering that Linux
works just as well as (or, in many cases, better than) proprietary Unix
solutions, and yet has much lower costs (both purchase and support).

Despite some silly statements by analysts and vendors, it is now clear
that Linux-based EDA is an idea whose time has come.

Raul Camposano was quoted as expecting to find "Windows NT strong only
in FPGA design".  One of the leading FPGA chip vendors has stated that
they have no plans to support Linux.  Interestingly enough several of
their competitors are working on FPGA support.  As an engineer, given
the choice between otherwise comparable chips, I'll pick the ones which I
can develop for without using Windows NT.  The choice is a no-brainer.

Eric Smith
   
From: Mark_Wiley@marcam.com
To: letters@lwn.net
Date: Fri, 6 Oct 2000 12:09:11 -0400
Subject: TUX and beyond



With the recent release of the TUX 1.0 Kernel HTTP Server, I was thinking
about the future of such an offering. TUX is a Linux kernel space HTTP
server. Its primary function is to serve up static pages and images and
pass along more complex requests to user space programs, such as Apache.
Its current design minimizes the impact on existing web server
implementations: Apache doesn't know there is anything going on beyond
some configuration changes.

As I look at the growing importance of Web services and review the
equally important growth of the underlying TCP/IP protocol a few decades
ago, I must wonder about the directions that TUX or implementations like
it will take.

Consider. TCP/IP is only a communications protocol and protocol stack.
All that really needs to be in the kernel is the network card driver.
But we recognize the importance of TCP/IP, the complexity and wide use
of its services. To make dealing with it easier, we made several adaptations.

   1. The stack is in the kernel for speed and security.
   2. The TCP/IP stack is capable of dealing with multiple IP addresses on
each of multiple network adaptors.
   3. Security is handled through an administrator interface allowing
detailed settings for each adaptor/address.
   4. We give it a friendly usable API that allows any process to register
itself as being a service in the TCP/IP space (Sockets/Accept). User space
programs don't have to deal with raw network packets.

The result is that the stack does not need to be individually configured
for any new services. The services register themselves. The general
configuration and security of the stack can be controlled by an administrator.
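
Point 4 is the familiar sockets idiom; a minimal C sketch of a service
registering itself (error handling omitted):

   #include <netinet/in.h>
   #include <sys/socket.h>
   #include <unistd.h>

   /* The service claims TCP port 8080 on all local addresses;
    * the stack itself needs no per-service configuration. */
   int main(void)
   {
           struct sockaddr_in addr = { 0 };
           int s = socket(AF_INET, SOCK_STREAM, 0);

           addr.sin_family = AF_INET;
           addr.sin_addr.s_addr = htonl(INADDR_ANY);
           addr.sin_port = htons(8080);
           bind(s, (struct sockaddr *) &addr, sizeof(addr));
           listen(s, 16);
           for (;;) {
                   int c = accept(s, NULL, NULL); /* block for a request */
                   /* ... hand the connection to the service ... */
                   close(c);
           }
   }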

Let's take a similar approach to web services. Let's make a web service stack:

   1. The web stack should be capable of providing multiple service
hierarchies with different protocols (http, https, other...) configured
individually to IP-Range / TCP Port. Java has a model for plugable protocol
handlers that might be useful. When new protocols are needed, plug in a
protocol handler that understands it.
   2. Each service hierarchy has its own security interface for what
request source IP address it accepts, what authentication it expects,
where its WebRoot is, what programs/users may register as active content
providers.

   3. An API is provided for userspace programs to register themselves as
active content providers. This API would allow a program to enumerate what
service hierarchies are currently active. It would allow a program to select
one service hierarchy and register itself to it. The program would then specify
where in the hierarchy its address space would begin. Finally the API would
provide a blocking wait for service requests similar to the socket accept.
There are already models for server components out there with Request and
Response interfaces for service control (Java Servlet, ASP, ...). Make
something similar available from the kernel server through the API.

This whole interface requires the replacement of many current web apps
with versions that use the new API, but in the end it would make web
applications as portable and pluggable as TCP/IP applications are now.
It would also make writing web applications easier and more modular.
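
Purely as a sketch, with every name below invented for illustration (no
such calls exist anywhere today), the userspace side might look like:

   /* HYPOTHETICAL API -- these types and functions are inventions
    * meant only to illustrate the registration model described above. */
   struct ws_request;    /* parsed request, servlet-style  */
   struct ws_response;   /* headers plus body to send back */

   int ws_enumerate(char **names, int max);      /* list active hierarchies */
   int ws_register(const char *hierarchy,        /* claim a subtree in one  */
                   const char *path_prefix);     /* hierarchy; returns fd   */
   int ws_wait(int fd, struct ws_request **req); /* blocking, accept-style  */
   int ws_respond(struct ws_request *req,
                  const struct ws_response *resp);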

TUX is already a good starting point. It can be expanded to include more
features. But more importantly, it is in a good position to become the
implementation standard for such a model. Do it right in an Open Source
fashion before some company gets into the act and tries to poke a lot of
proprietary requirements into the mix. Make the standard Open first.
Then let it catch on elsewhere.

Mark Wiley.
mark.wiley@pcsinc.net


   
Date: Fri, 06 Oct 2000 07:53:39 +0800
From: Leon Brooks <leon@brooks.smileys.net>
To: letters@lwn.net
Subject: You don't see much Linux

Steve Ballmer is quoted as saying:
> "You don't see much Linux in (business) customers. You see some Linux
> in Web sites and application service providers, but it's less than
> the press hype." 

There are two points worthy of note here, either or both of which explain how
Steve can say this without too much crossing of the fingers behind the back.

Firstly, and this really has been done to death: where Linux is working and
where Linux is seen are two quite different concepts.

Many managers and CEOs are quite shocked to discover either that the backbone
of their IT shop has been Linux for the last three years, or that the reason
their [insert favourite service here] has recently sped up, stopped crashing,
or given up being taken out regularly by crackers (or all of the above) is
that it's no longer based on Windows. Bill Gates himself would not have been
pleased to discover that every computer in the campus' new Bill Gates building
was running Linux while he was touring it during the opening ceremony.

It is especially pleasing to see FreeBSD, OpenBSD et al springing up more and
more often in the ground plowed by ``media darling'' Linux. Repeat the mantra:
``choice is good.''

Secondly, I'm not sure how often Microsoft's boss would *expect* people to show
him Linux systems: he lives in a Microsoft-saturated environment, and would have
to go about anonymous and disguised to have a hope of seeing any reality.

The reality is that something like half of all web servers are Linux-based,
and something like a quarter are Microsoft-based.

Compounding the issue, he lives right at the heart of Microsoft country.
Internet head-counts show Microsoft more often in the USA and in corporate
culture than anywhere else. The further up the corporate ladder you look, the
more Microsoft you find. Microsoft has spent a lot of time and money making it
so. This is where Steve lives and gads about. Note the parable of the blind men
and the elephant (http://www.anointedlinks.com/elephant.html) for it applies
here.

In Germany, Microsoft servers are nearly a third less common than average.
Non-US domains also seem to have a less pronounced ``us and them''
Apache-against-IIS focus. Educational domains also seem more willing than
average to use something other than Apache, IIS or Netscape.

What ``we'' (Open Source oriented people) need to be aware of is that this
blindness is just as true for us as for Steve and Bill.

We use Linux (replace with your chosen open-source OS as appropriate) daily to solve
problems, automate drudgery and banish the three apocalyptic horsemen
(Bluescreen, Virus and Cracker) from our world. Often, all we see is Linux - so
all we know is Linux.

We lose touch with people who use Microsoft products daily, to whom three
crashes a day and living in fear of viruses is normal, to whom Word
spontaneously electing to no longer display a task-bar is a major issue, and for
whom the sight of an AfterSTEP or Enlightenment desktop is very disturbing.

-- 
#include <signal.h>
#include <time.h>
main(){srandom(time(0));for(;;){int pid=random()%30000;if(pid>1
&&pid!=getpid())kill(pid,random()&1?SIGSTOP:SIGBUS);sleep(10);}}
Copyright © 2000 Eklektix, Inc., all rights reserved.
Linux ® is a registered trademark of Linus Torvalds.