
Linux Links of the Week


The Open Source Developers Exchange is meant to be a rallying point for open source developers and their projects. Projects can be registered with an indication of the sort of help they would like to have, and developers can link up with those that are appealing.

Still lacking a decent Linux implementation is IMPS - the Infinite Monkey Protocol Suite. Certainly we could make good use of it over here at LWN...

Section Editor: Jon Corbet


December 7, 2000

This week in history


Two years ago (December 10, 1998 LWN): IBM released its "Jikes" Java compiler under an almost-open-source license. The first DB2 beta was also put up for free download. Sun announced its support for Linux on the Sparc; SGI announced support for Samba and joined Linux International as well.

Linus Torvalds was a guest of honor in the Finnish Presidential Palace for the Independence Day celebration. This photo of Linus and Tove is still available online.

ZDNet shows us that some things never change...

Linux is awesome in many ways, but no matter how you slice it, it's still basically an evolved port of a 20+ year-old operating system, and with that age comes a certain amount of baggage. Linux may be far more efficient than Windows, but it still carries the past on its shoulders, and (more importantly) lacks many of the futuristic technologies built into BeOS from the start.

One year ago (December 9, 1999 LWN): Andover.Net went public on December 8, at an initial price of $18 per share; it quickly rose to $63. When the December 9 LWN hit the press, the VA Linux Systems IPO was still looming; check this space next week if you don't know what happened.

Sometime later this week, another Linux-related company, called VA Linux Systems, will go public under the ticker symbol LNUX. It will probably double or triple in price while market pundits criticize it for being another overhyped IPO.
-- Business Week.

Both Cosource.com and SourceXchange officially launched. One year later, both are still at it.

Corel found itself at the center of a controversy again when it refused to allow minors to download its distribution.

Sun announced the release of Java 2 for Linux. The announcement contained no mention of the Blackdown Linux team, which actually did most of the work with this release.

KDevelop 1.0 was released. The XFree86 team announced that XFree86 4.0 would not be out before the end of the year.

Red Hat announced more deals with Dell, including one in which Dell systems would come with 90 days of Red Hat support - which replaced the Linuxcare support that Dell was offering before. O'Reilly, meanwhile, launched the O'Reilly Network.


Letters to the editor


Letters to the editor should be sent to letters@lwn.net. Preference will be given to letters which are short, to the point, and well written. If you want your email address "anti-spammed" in some way please be sure to let us know. We do not have a policy against anonymous letters, but we will be reluctant to include them.
 
   
To: letters@lwn.net
Subject: About FrameMaker
From: Damien WYART <wyart@iie.cnam.fr>
Date: 30 Nov 2000 11:04:10 +0100

Dear LWN editors,

First of all, thanks for your excellent job on Linux news.

Your column about FrameMaker for Linux is very interesting. But I think
you forgot to mention (La)TeX as an excellent tool for producing long
documents. Of course, there is no advertising for (La)TeX, so people who
use it have usually heard of it from a friend or colleague. Many users
are to be found in universities and research labs.

There are nice Emacs and vim modes available for editing (La)TeX
documents, and even the basic document styles give quite nice results.
Best of all, it is free software (except for some advanced third-party
packages). Ease of use and installation have also been much improved
with Sebastian Rahtz's TeX Live distribution. Support for (La)TeX is
really good: many documents float around on the web, many books are
available, and the newsgroups are very active and helpful.

(La)TeX is also very reliable: the TeX kernel is believed to have no
remaining bugs, and it has been around for more than 20 years!

There is a third way between the dead FrameMaker and traditional (often
proprietary) word processors, and I think it is worth mentioning. No,
using (La)TeX is not that hard! Yes, using it to write documents other
than scientific articles is common! Even letters are nicely formatted
by LaTeX.

Best regards, 
-- 
Damien WYART / wyart@iie.cnam.fr
   
Date: Sat, 02 Dec 2000 17:47:31 +0100
From: Fred Mobach <fred@mobach.nl>
To: lwn@lwn.net
Subject: Adobe and FrameMaker

Hello,

Surprising that this Linux Weekly News starts with a _not so_
interesting story about this closed-source application ;-). Typesetting
is not my forte, and so my interest in FrameMaker is just zero.

Just two general remarks on this subject.

First, it might be that interested GNU/Linux users recognise that there
exist more powerful tools for this kind of job than the expensive
FrameMaker - though it is not really expensive for those who work with
it professionally. They might also have noticed that their preferred
tools are Free Software.

Second, they might have noticed that their data is held hostage by the
closed format in which it is stored.

I do not object to commercial software on my preferred platforms, but I
do object to the storage of my data in an undocumented format.

Regards,

Fred
-- 
Fred Mobach - fred@mobach.nl - postmaster@mobach.nl
Systemhouse Mobach bv - The Netherlands - since 1976
/"\
\ /
 X  ASCII RIBBON CAMPAIGN
/ \ AGAINST HTML MAIL

   
Date: Tue, 05 Dec 2000 12:35:35 +0200
From: Michel Clasquin <clasqm@mweb.co.za>
Subject: Word processing formats
To: letters@lwn.net

on Sat, 25 Nov 2000, Julio Cesar Gazquez wrote:

"Unfortunately, the world never knew an open, well 
defined, free word processor ...file format."

If you have been around long enough, you might remember XyWrite and its 
offspring Nota Bene. The file format was in plain ASCII - you can hardly get 
more open than that - yet there was very little you couldn't do with it. The 
contemporary equivalent would probably be TeX/LaTeX or XML.

It's a pity that the XyWrite format didn't catch on: if necessary, you could write 
a fully formatted file with footnotes, indents, even graphics, with nothing more 
than DOS's EDIT. You could even have used COPY CON if you felt the need to 
show off your manhood, and for the true masochist there was, may the gods 
help us, EDLIN. Of course most of us just used the XyWrite program itself: 
even its macros were saved in plain ole ASCII and could be hand-edited outside 
the program.

Unfortunately XyWrite jumped on the Windows/WYSIWYG bandwagon too late, 
and its file format died with it. But even today a lot of Windows wp apps can 
still read and write the XyWrite format. A "Save as XYW" function might not be 
a bad option for Linux wp programmers looking for cross-platform 
usability. In particular, translating between TeX or XML and a subset of XYW 
should be a fairly trivial exercise.

   
Date: Thu, 30 Nov 2000 02:19:51 -0600 (CST)
From: Mike Coleman <mcoleman2@kc.rr.com>
To: letters@lwn.net
Subject: Re: Linux and Viruses

Linux may be relatively immune from viruses of the sort now common on the
Windows platform, but as you say, it doesn't necessarily follow that we should
smugly assume that Linux is immune in general.  If you're not at least a
little worried about this problem, you might want to take a look at the
SUBTERFUGUE project motivation document at

		    http://subterfugue.org/motivation.html

for some thoughts on why you *should* be.

Mike Coleman
   
Date: Thu, 30 Nov 2000 10:05:58 -0500
From: "Jay R. Ashworth" <jra@baylink.com>
To: letters@lwn.net
Subject: Universal RPMs.

In last week's LWN, you editorialized a bit on the topic of whether
RPM packages could be installed on multiple distributions, and what
distribution maintainers ought to be doing to make that easier.

Cart?  That thing behind you is a 'horse'.

The problem is that the installation of a package is a point in
system administration that is *very* dependent on an undocumented
system API called the "installation interface"; that is, packages
being installed often need to know *a lot* about the distribution
onto which they're being installed in order to put files in the
expected places, set up boot-time start scripts and cron entries, and
perform many other similar activities.

This is just another place in Unix where the API is loose, and this
particular one is where the Linux Standard Base people and the
Filesystem Standard project are trying to make things a bit easier.

The distro maintainers could make things a bit easier, too, by better
documenting what they *do* have, and there are other things which
could help, too; I'll pick one particular nail to hammer on.

The files in the /etc/rc.d/init.d directory on a Red Hat distribution
(among others) constitute a sort of "service manager interface"; in
conjunction with SysVinit, they're one of the few ideas stolen from NT
that I like.  But, while many Linux distributions provide the
"chkconfig" command for setting services in this directory to be
enabled or disabled in specific runlevels, that command doesn't provide
a user interface for turning something on or off, or restarting it,
*right now*.  I created my own, called svc:

#!/bin/sh
/etc/rc.d/init.d/"$1" "$2"

Real complicated, right?  People do that all the time, right?  So why
hasn't RH already added that to the distro?

In general, any time part of an installation involves "put this file in
the right place" or "change this parameter in this system control file
(inittab)", there should probably be a program that does the work; the
call to that program can be standardized across systems, while the
underlying actions remain specific to each distribution.

Unifying installation packages is a laudable goal indeed.

But let's put the *effort* in the right place, shall we?

Cheers,
-- jra
-- 
Jay R. Ashworth                                                jra@baylink.com
Member of the Technical Staff     Baylink
The Suncoast Freenet         The Things I Think
Tampa Bay, Florida     http://baylink.pitas.com                +1 727 804 5015
   
From: Richard Simpson <rsimpson@ewrcsdra.demon.co.uk>
To: letters@lwn.net
Subject: RPM compatibility across distributions
Date: Thu, 30 Nov 2000 12:06:19 +0000

In your last issue you discussed the problems of having multiple package
formats.  But it would be a good start if distributions could even agree on the
base names for packages.  IMHO SuSE is the main culprit here, although the
others could also get a grip.

As an example, the documentation for python:

RedHat 7.0	python-docs-1.5.2-27
Mandrake 7.2	python-docs-1.5.2-12mdk
OpenLinux	python-doc-1.5.2-1
SuSE 6.4	pyth_doc-1.5.2-118

Apart from 'docs' and 'doc', SuSE is seriously hampered by limiting RPM names
to 8 characters.  Why do they do this?  I tried SuSE for a while, but constant
rpm name conflicts finally drove me away.  They also don't append the version
number to the file name, so if I try to download several versions of a package
(e.g. to try and resolve a problem) they will all be called foo_bar.rpm (even
the Alpha ones!!).

I propose that distributions try to select a common base name (NOT limited to 8
characters, this isn't DOS) and append an abbreviation to the final release
number.  So we could have foo_bar-1.2.3-4mdk, foo_bar-1.2.3-4rh,
foo_bar-1.2.3-4suse etc.  There would be no danger of two different
distributions producing different files with the same name and package
dependency checks would work better.

I agree that this does not solve the rpm/deb/tgz/etc problem, but at least it
is an improvement on where we stand today.

Richard Simpson

-- 
----------------------------------------------------------------------------

Richard Simpson
Farnborough, Hants, Uk                 Fax: 01252 392976
rsimpson@ewrcsdra.demon.co.uk
   
Date: Fri, 1 Dec 2000 09:10:01 +0000
From: Philip Armstrong <phil@kantaka.co.uk>
To: letters@lwn.net
Subject: Re: Linux Package Management Needs a Wakeup Call, LWN 01Dec2000

You write:

"Meanwhile, the true winner will likely be the first distribution
 that can handle both rpm and .deb files."

This is already possible through the alien package under Debian.

However, I think your article (and the original that you reference)
misses the fundamental reason for the packaging incompatibilities, and
why they are never going to be resolved: what counts is not the
packaging format itself, but the policies each distribution decides on
for things like the placement of configuration files, dependency
handling and so on.

It is the establishment of consistent packaging policies that defines
the quality of a given distribution, and it is the mismatch between
the different policies decided on by each of the distributions that
makes it difficult to install packages from 'alien' distributions
cleanly. For instance a RedHat vim package is never going to be aware
of the need to update the /etc/alternatives directory on a Debian
system if necessary.

Things like the LSB allow minimal compatibility to be achieved with
regard to things like the location of files, but no more.

In the end, the existence of an .rpm or .deb suffix on a package is
irrelevant. What matters is who has packaged it and for which
distribution.

Yours,

Phil Armstrong

-- 
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt

   
From: Ronald Cole <ronald@forte-intl.com>
Date: Thu, 30 Nov 2000 14:40:55 -0800 (PST)
To: letters@lwn.net
Subject: static initializers

In my copy of K&R2, section 4.9 says: "In the absence of explicit
initialization, external and static variables are guaranteed to be
initialized to zero".

So how about a compromise?  Since the redundancy just amounts to
documentation, why not do this:

	       static int some_variable /* = 0 */;
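
A minimal, self-contained sketch of the guarantee Ronald cites (the
variable names here are purely illustrative) compiles cleanly and
prints zeros on any conforming C implementation:

    #include <stdio.h>

    /* Neither variable has an explicit initializer; both have static
     * storage duration, so both are guaranteed to start out as zero. */
    static int some_variable /* = 0 */;
    static int another_variable;

    int main(void)
    {
        printf("%d %d\n", some_variable, another_variable);
        return 0;
    }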

-- 
Forte International, P.O. Box 1412, Ridgecrest, CA  93556-1412
Ronald Cole <ronald@forte-intl.com>      Phone: (760) 499-9142
President, CEO                             Fax: (760) 499-9152
My GPG fingerprint: C3AF 4BE9 BEA6 F1C2 B084  4A88 8851 E6C8 69E3 B00B
   
Date: Fri, 01 Dec 2000 13:02:18 +0000
From: Thomas Sippel - Dau <t.sippel-dau@ic.ac.uk>
To: letters@lwn.net, t.sippel-dau@ic.ac.uk
Subject: Elevator algorithms

Hello,

I saw with interest your discussion of elevator algorithms, and the result
of your tests. My conclusion is that the notion of elevator algorithms 
deserves - well, a shot between the eyes, at least as far as operating 
system kernels are concerned.

There is nothing wrong with the original analysis, of course. It is no good
wasting time moving disk arms around and doing very little. What is not
appreciated, however, is that with sorting, only the last sort applied matters:

   suppose s1 .. sn are sorting algorithms, X is a set of objects,
   and S1 .. Sn is the sequencing of the objects in X after they have
   been sorted by algorithm si, i = 1..n. Thus, for all i,

       (X, Si) = si ( X )

   Now a property of sorting is that for any choice of sorts i1 to im

       si1 ( si2 ( ... ( sim ( X ) ) ... ) ) = si1 ( X ) = (X, Si1)

Thus there is no point wasting time on sorting if the data is sorted 
again later on. This is for idealised sorting algorithms, which can
always establish a definite sequencing between two objects - no need 
for secondary keys etc.

However, this is exactly what is happening: whether the kernel sorts I/O
requests by block number (or cylinder address) is mostly irrelevant,
because the disk has a cache and will sort them again, using - yes - an
elevator algorithm.
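
In code, the point is simply that the last sort applied determines the
final order. A small standalone sketch (illustrative only, with integer
keys standing in for block numbers) shows that a preliminary sort is
wasted work:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int ascending(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    static int descending(const void *a, const void *b)
    {
        return ascending(b, a);
    }

    int main(void)
    {
        int once[]  = { 7, 3, 9, 1, 5 };
        int twice[] = { 7, 3, 9, 1, 5 };
        size_t n = sizeof(once) / sizeof(once[0]);

        /* Sorting descending first, then ascending... */
        qsort(twice, n, sizeof(int), descending);
        qsort(twice, n, sizeof(int), ascending);

        /* ...gives exactly what a single ascending sort gives. */
        qsort(once, n, sizeof(int), ascending);

        printf("%s\n", memcmp(once, twice, sizeof(once)) ? "different"
                                                         : "identical");
        return 0;
    }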

Thus you could try to re-run your highly scientific tests with one of:

   o  disabling the cache on the disk (don't know how you would do 
      that, maybe using a 10 year old notebook drive with only 
      256 kb of cache approximates well)

   o  trying to overrun the cache on the disk drive by making every
      i/o request be 85% or so of the cache size, although with SCSI
      callbacks you would still have a problem

   o  scrambling the block numbers in the elevator algorithm before
      feeding them to the sorting algorithm, i.e. ensuring a random
      request sorting

My hunch is that scrambling them has little effect, and that the others
are very hard to get to show an effect.

I would say the best way to optimise the I/O would be to limit the I/O
request size to

              Size of embedded cache
       sqrt ( ---------------------- )
                Size of disk sector

i.e. if a disk has 8 Mb of embedded cache and the sector size is 4096 bits,
then limit the size of a single request to 45 sectors. That leaves the
elevator algorithm on the disk enough space to do its thing, and even 
very small caches (like the 256 kb mentioned above) would still allow
8 sector (or 4 kilobyte "page size") transfers.  

                                Thomas

*   Why not use metric units and get it right first time, every time ?
*
*   email: cmaae47 @ imperial.ac.uk
*   voice: +4420-7594-6912 (day)
*   fax:   +4420-7594-6958
*   snail: Thomas Sippel - Dau
*          Linux Services Manager
*          Imperial College of Science, Technology and Medicine
*          The Center for Computing Services
*          Exhibition Road
*          Kensington SW7 2BX
*          Great Britain
   
Date: Tue, 5 Dec 2000 17:34:50 +1100
To: letters@lwn.net
Subject: More problems with biometrics
From: David Gibson <dgibson@linuxcare.com.au>

In the letters for the 30th of November LWN edition, Rob Knop
<rknop@pobox.com> makes an excellent point against the use of
biometrics for authentication:

	However, they suffer from one really huge flaw in comparison to
	passwords.  If your password is stolen, you can change it.
	You can't change your fingerprint.

Unfortunately, biometric systems have another inherent flaw which makes
them essentially useless as a sole means of authentication.

Because measurements will vary slightly with conditions (and sensor
noise), a useful biometric system that accepts a certain input must
also accept input which is close to it. This means that the
mathematical model underlying the system must be smooth (sufficiently
similar input gives similar output). This is unlike a traditional
password system: 'password' and 'passwore' (1 bit difference) have
completely different md5 hashes.
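
As a small illustration of that contrast (a sketch only; it uses the
MD5() routine from OpenSSL, so build it with -lcrypto), note that 'd'
(0x64) and 'e' (0x65) differ in a single bit, yet the two digests share
nothing recognisable:

    #include <stdio.h>
    #include <string.h>
    #include <openssl/md5.h>

    static void print_md5(const char *s)
    {
        unsigned char digest[MD5_DIGEST_LENGTH];
        int i;

        MD5((const unsigned char *)s, strlen(s), digest);
        printf("%-10s ", s);
        for (i = 0; i < MD5_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        printf("\n");
    }

    int main(void)
    {
        print_md5("password");
        print_md5("passwore");
        return 0;
    }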

A standard calculus result (the implicit function theorem) implies
that a smooth function is (at least numerically) invertible. This
means that if the model is known (and we all know how well security
through obscurity works), it is relatively straightforward to
synthesise input to satisfy it.

Synthesised input might not even look like it came from a human at all
(to another human), but nonetheless will convince the categorisation
model used in the system. So although many biometric systems are quite
reliable at correctly categorising the input from different people,
without a reliable means to check if the input is really coming from a
physically present person (which no-one has come up with) a system for
which a biometric test is sufficient to gain access cannot be
considered secure.

Biometric tests can be useful where they are used in addition to
passwords or other conventional techniques (i.e. in order to gain
access you must both pass the biometric test and provide the correct
password).

-- 
David Gibson, Technical Support Engineer, Linuxcare, Inc.
+61 2 6262 8990
dgibson@linuxcare.com, http://www.linuxcare.com/ 
Linuxcare. Support for the revolution.