
See also: last week's Back page page.

Linux links of the week


The Linux Game Tome is back, after a long interruption. Have a look for the latest in Linux gaming news.

The Linux for Laptop Computers (or LiLaC) site is maintained by Werner Heuser, author of the Linux Laptop HOWTO. It is a comprehensive source of information on issues with laptop systems, and contains what must be the definitive list of working PCMCIA cards.

Section Editor: Jon Corbet


May 4, 2000

Letters to the editor


Letters to the editor should be sent to letters@lwn.net. Preference will be given to letters which are short, to the point, and well written. If you want your email address "anti-spammed" in some way please be sure to let us know. We do not have a policy against anonymous letters, but we will be reluctant to include them.
 
   
Date: 27 Apr 2000 20:35:24 -0000
From: Eric Smith <eric@brouhaha.com>
To: letters@lwn.net
Subject: Linus on kernel debuggers

Linux Weekly News of 27-Apr-2000 reports:

    Linus has long had a dislike for interactive kernel debuggers. His
    position is that they lead developers to fix symptoms; he would rather
    they stare at the source and come to an understanding of the real problem.

Yeah, user-space GDB is a real drag also.  Too many programmers use
debuggers as a crutch, when it is obvious that simply staring at the
source code is *so* much more likely to result in true enlightenment.

I always hate it when the hardware guys use logic analyzers,
oscilloscopes, and simulators to find bugs in their designs.  Obviously
the correct way to verify or fix hardware designs is to stare at the
Verilog code and schematics until your eyes bleed.

And when my car's engine is making a funny noise, I'm appalled when the
mechanic actually looks under the hood and finds the problem
immediately, rather than spending a few days thinking about what could
cause that sound.  It's hard to believe that using methods like that,
anyone could develop a true understanding of what's going on in there.

Engineers today rely way too much on fancy-shmancy tools to help with
their tasks.  Back in the old days, we only had printf(), and WE LIKED IT.
So what if it took weeks to fix obscure kernel bugs that with better tools
could be found in days; the important thing is the process, not the results.
Just like the "new math".

Of course, good engineers don't use time-saving tools like compilers,
either.  If the program is worth writing, it's worth writing in
assembler.  Or better yet, toggling in binary via the front-panel switches.
(It's not a proper computer unless it has front-panel switches.)


Seriously, though, I don't understand why Linus thinks that debuggers
"lead developers to fix symptoms".  I've used both kernel- and
user-space debuggers to find many hundreds of bugs, and in the vast
majority of cases have been able to fix the underlying problem, not just
symptoms.  It's not at all obvious how having an additional tool and
source of information can make it more difficult or less likely to
develop an understanding of a problem.

Eric
   
From: LucFrench@aol.com
Date: Thu, 27 Apr 2000 05:34:05 EDT
Subject: Re: soft tissue
To: allan@stokes.ca
CC: letters@lwn.net

Allan Stokes wrote:

> It's exactly the same when we look at the bone yard of proprietary encryption
> algorithms broken and ignore the "soft tissue" of proprietary encryption
> which hasn't been broken.  Or as most people assume "hasn't been broken
> yet".  Which is exactly what the dinosaur people assumed about the kinds of
> fossil fragments they had not yet found.
>
> Perhaps someday the mathematics of "provable security" will be invented and
> they will look back at some of the proprietary work done today and discover
> that some of it was actually warm blooded after all.

I think I can make a statement that any professional cryptographer will agree 
with:

If your encryption method (be it an algorithm, the underlying random number 
generator, or the program itself) must be kept secret in order to be secure, 
it is insecure.

The reason most cryptographers dislike proprietary algorithms boils down to 
this. (There are some wrinkles and qualifications to that statement, but not 
many; and there are some other problems, dealing with trust and generic 
attacks.)

I'd also like to point out that "Hasn't been broken yet" is the status of all 
computer based encryption algorithms, both open and closed. It's similar to 
all scientific theories' status of "Hasn't been disproven yet".

As to the possibility of a Unified Security Theory, such a thing is quite
frankly impossible. It's too complicated to go into in a letters column, but
it boils down to the fact that 'secure' and 'insecure' are qualitative, rather
than quantitative, terms. Quantifying either requires complete knowledge of
all possible attacks; and from here we get into circular logic.

BTW, the cold-blooded dinosaur theory came about because all known lizards 
are cold blooded, and dinosaurs were clearly big lizards; therefore, 
dinosaurs were probably cold blooded. The syllogism was flawed only because 
of an unknown assumption (lizards don't have to be cold blooded). You've used 
a flawed syllogism as well, that because Assumption A and Assumption B look 
alike, and Assumption A has been proven false, Assumption B must be false as 
well.

Just as a final note:

> Of course, anyone dumb enough to trust someone who spends too much time
> alone in a dark room deserves what they get.  But that doesn't mean they
> were wrong.  People spend too much time forming opinions about what is
> technically possible (we don't know) and then end up misplacing the emphasis
> which belongs entirely on the social issue of what kind of development
> processes we choose to trust.

Actually, the underlying assumption is closer to "if you are hiding 
something, you may have reason to be hiding something, and since we don't 
know what you're hiding, we are therefore unable to trust you". *THAT* is why 
it's so important that encryption methods be open, and known vulnerabilities 
be listed; else, how can I, Strawman Consumer In Need Of A More Secure
Computer Than What I Have Now, trust that you, Strawman Encryption Expert
Trying To Sell Me Something, are being honest?

This is what is driving people to open source for security in the first 
place; if you have the code, implementation flaws are easy to spot (assuming 
the code isn't some kind of hairball), and back doors are hard to put in 
(bogus stories involving fish of a South American persuasion notwithstanding).

Thanks
Luc "Jude the Secure" French
   
Date: Thu, 27 Apr 2000 12:46:24 -0500
From: "John J. Adelsberger III" <jja@wallace.lusArs.net>
To: letters@lwn.net
Subject: "Warm blooded security"

In his letter dated April 21, Allan Stokes points out (validly) that
lack of peer review and general understanding does not necessarily
imply insecurity of an encryption algorithm.

He fails to make the point that is crucial when you need a working
system on a deadline in the real world: there is no provable
security today, and at this track, the rules are a little strange:

1) Betting on the underdog does not increase the payout if you win; all
   winning bets pay the same.
2) The fact that my horse wins does not necessarily imply that yours
   will lose.
3) Despite 1 and 2, there are still underdogs and favorites.

Under those circumstances, betting on an underdog is all loss and no
gain.  Certainly, it would be wrong to disparage research into
unknown areas of math, whether crypto or otherwise - but deploying
their research prototypes as production systems is a most brazen
sort of foolishness, and trusting an algorithm that is unknown to
most of the research community over one that has been widely 
examined is even worse.

-- 
John J. Adelsberger III				ETAONRISHDLFCMUGPYWBVKXJQZ
jja@lusars.net
   
From: nride@us.ibm.com
To: ghaverla@freenet.edmonton.ab.ca, letters@lwn.net
Date: Thu, 27 Apr 2000 11:55:34 -0600
Subject: Printing in Linux



The UNIX printing model is dreadfully lacking. Certainly you can find out
that you have some printers out there by looking in the printcap file, but
what do you know about those printers? The whole printcap concept was
designed in the days when your printer could handle raw ASCII and not much
else.

If you look at the architecture of more "Modern" operating systems, there
is a well defined API for printing. A device manufacturer codes a driver to
that API and the applications use that API to render their jobs. Thus, all
applications use the same rendering API, and the same rendering API can
render to PostScript, PCL or whatever other printer language the printer
uses.

Contrast this to UNIX, where the application has to implement its own
rendering routines or use a third-party library. If the user wants to print
to a PCL printer and his application only does PostScript, he's out of luck
or has to use GhostScript. GhostScript effectively becomes the printer
driver for many people and PostScript becomes the rendering API. While this
state of affairs isn't too bad, it's not too good either. If you can cover
PCL and PostScript, I'd guesstimate that you'd be covering about 90 to 95
percent of the printers out there.
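
To make the GhostScript-as-driver arrangement concrete, here is a rough
sketch of how it is often wired up under a classic BSD lpd spooler. The
queue name, device, spool directory and filter path below are invented for
illustration; only the printcap capabilities and the gs invocation are the
standard ones. A hypothetical /etc/printcap entry might read:

    lj|LaserJet via GhostScript:\
            :lp=/dev/lp0:\
            :sd=/var/spool/lpd/lj:\
            :if=/usr/local/lib/ps2pcl:\
            :sh:mx#0:

with /usr/local/lib/ps2pcl as the input filter that converts the PostScript
produced by applications into PCL as each job passes through the queue:

    #!/bin/sh
    # Read PostScript on stdin, write PCL (LaserJet 4) on stdout.
    exec gs -q -dSAFER -dNOPAUSE -dBATCH -sDEVICE=ljet4 \
            -sOutputFile=- -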

As for finding out attributes of the printer (how to use those extra paper
trays, etc.), Adobe has a PostScript Printer Description (PPD) format for
PostScript printers and Microsoft has a Generic Printer Description (GPD)
format for PCL printers. You can code a "Driver" outside the application
to read these two
formats and add headers to your jobs and you'd be doing pretty well. The
Printer Working Group (www.pwg.org) is working on an XML-based printer
definition format, as well. There's room for a lot of cleverness in
building interfaces from files in these formats.

For approaches that involve actually building the correct architecture, I
know of a couple of projects underway. There is an open API called XPRT
which looks kind of interesting. You send X commands to a server and it
renders a printing language. XFree86 4.0 seems to have some XPRT support.
The Gnome people seem to also be working on an API which I assume you'd
have to use Gnome to take advantage of. Personally, I'd like to see an open
solution which doesn't lock you in to a particular environment while it
brings UNIX printing up to date.

--
Bruce Ide      nride@us.ibm.com

IBM PSC Driver Development


   
To: letters@lwn.net
Subject: Printing is definitely below-par
From: Alan Shutko <ats@acm.org>
Date: 27 Apr 2000 10:04:46 -0400

Gordon Haverland stated that printing on Linux is perfectly fine.
Unfortunately, he's missing a lot of fundamental points.

While it is true that you can simply pass stuff off to lpr and
something will come out of the printer, there's a lot of other stuff that
printing apps need to know that has never been a consideration of lpr.

First, the current Linux "printing system" gives no information to
apps about the fonts available.  While X can do this, you would need to
install fonts in two places.  X only provides bitmaps, so you can't
pass outlines to the printer unless you write your own font subsystem
and require that fonts be installed there.  (You'd also have to do
this to gain additional metric information that isn't available via
X.)

How do you ask lpr what sizes of paper are available, or tell it to
collate and staple output jobs?  You can't.  How do you select the
tray?  You can't.  While some of these problems are fixed in LPRng for
certain printers using certain filtering systems, other problems
remain.

-- 
Alan Shutko <ats@acm.org> - In a variety of flavors!


   
From: "Corfield, Richard" <RICHARDCO@POLK.CO.UK>
To: "'letters@lwn.net'" <letters@lwn.net>, 
Subject: Re: Printing and Re: WordPerfect "review"
Date: Thu, 27 Apr 2000 12:53:14 +0100


I'd agree that the existing print system under UNIX is very strong. Its
standardisation on PostScript as a printing language should make life a lot
easier for software developers, users and administrators alike.

	Software developers because they need only write drivers for
PostScript.

	Users because they can always preview output, and manipulate it with
tools such as GS, GV and mpage.

	Administrators because they can redirect output to any printer
knowing that it's in a language that the printer driver can understand. They
can also filter it on the way (mpage for example). The job is only
translated from PostScript at the last minute.

My experiences with some other printing systems seem to confirm these
advantages. It's a pain when you can't preview output, and it's a pain when
you can't print to a printer because you can't find the driver disk to set up
your word processor to be able to create jobs for it. It's also a pain when
one version of the OS has a printer driver with mpage-like functions and the
version I work on doesn't, so I can't print two pages on one sheet from my
desktop. Under Linux you can apply mpage to any print queue regardless of the
type of printer on the end of it.
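
For example (just a sketch - the queue name "lj" and the file name are
invented), putting two logical pages on each sheet of a PostScript document
is a one-liner, whatever printer happens to be behind the queue:

    mpage -2 report.ps | lpr -Plj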

The only problem with the system is that the word processor has no knowledge
of what the capabilities of the printer are. This causes me problems at
times because my printer cannot handle small margins, and some programs try
to print into them, so some edges can be lost.

What seems to be needed is a way for the word processor to find out what
paper sizes and margins the particular printer supports, whilst keeping the
advantage of a printer-agnostic printing language and without bringing in
all of the problems with a certain other system when you do this.
For example, I find it annoying when I load a document into a word processor
to find that it has been "Changed" because my word processor has a different
default printer to the original author's.

Perhaps an extension to lpd or lpr or lpstat could allow the word processor
to find things like paper size, margins and perhaps colour capability. Then
the only problem is what happens when you try to print a document on a less
capable printer than that intended by the original author, and you're not
using something like LaTeX where it doesn't really matter too much.

Certainly this must be solved correctly, rather than just copying the
solution from somewhere else, warts and all.

 - Richard.

<Standard Disclaimer: All opinions my own and not those of my employer, or
probably anyone else in my office for that matter>

   
Date: Tue, 2 May 2000 21:10:33 -0400
From: "Jay R. Ashworth" <jra@baylink.com>
To: Tammy_Cavadias@zdcommunity.com
CC: letters@lwn.net
Subject: http://www.zdnet.com/zdnn/stories/comment/0,5859,2555159,00.html

... the Taschek column on Open Source.

Was that op-ed?  If so, it should have been slugged.  If not, it's
shoddy journalism.

[Looks again]  Ok, it's commentary.

Is it ZDnet's intent to reduce readers' opinion of its worth by
running commentaries that are obviously based on factually incorrect
premises and incomplete research?  *I* could write opinion columns too
(and I do, my weblog address is listed below), but that doesn't mean
anyone would read them...

A major fraction of the Internet depends completely on open-source
software, written well before the term was coined. BIND (name
service), sendmail (mail transport), perl (CGI scripting, among other
things) and Apache (>55% of all publicly accessible webservers) are
just the most visible examples.

If this column wasn't a troll, then I must cast my lot with the people
who'll be ignoring ZDnet completely if it's going to publish writing
of this caliber.

Cheers,
-- jra
-- 
Jay R. Ashworth                                                jra@baylink.com
Member of the Technical Staff     
The Suncoast Freenet
Tampa Bay, Florida     http://baylink.pitas.com                +1 888 806 1654


   
Date: Sun, 30 Apr 2000 04:30:10 -0700
From: Jim Dennis <jimd@starshine.org>
To: Microsoft.atr@usdoj.gov
Subject: Protocols, APIs and File Format Libraries 

Hi,

 As a long-time observer in this industry (having done stints in
 tech support, quality assurance, development, and systems administration)
 and as a writer and industry journalist (having written one book on 
 Systems Administration, a few articles for various magazines, and an
 online technical support column) I'd like to comment on the 
 possible measures that you will take vis-à-vis the Microsoft case.

 (Duh!  That's what this e-mail address is for, isn't it?)

 Obviously we'd like to arrive at a solution which is remedial,
 punitive, and deterrent.  

 I don't believe that a breakup and ongoing regulation of Microsoft's
 business will achieve these goals.  Unfortunately any ongoing regulation
 of their company is likely to be mired in bureaucracy and hampered as
 it becomes increasingly politicized.  It will be interesting to see how
 future administrations will meddle in these affairs.

 The set of measures that I believe would offer the greatest remedy
 to consumers and industry participants while acting as a fitting 
 punishment to Microsoft and possibly even deter future monopolists
 would be to require that Microsoft fully document their protocols
 (networking), APIs (programming interfaces) and file formats.

 These elements of their software are necessary to achieve
 interoperability.

 They are also the very elements that Microsoft has obscured and changed
 in order to stifle competition and thwart innovation throughout the 
 rest of the industry.

 Furthermore, I'd recommend that the only form of acceptable documentation
 for these software standards would be a set of functional "reference
 implementations" of the necessary utilities and libraries to perform
 the core operations of each protocol, call each API function, or
 read, parse and manipulate each file format.  These reference
 implementations would have to be published in true "open source" form and
 subject to a FREE license --- one which specifically allows commercial
 and proprietary derivative works by third parties.  (I would not recommend
 the GPL --- a BSD-like license would be more appropriate).

 I'd also recommend that the court require that these implementations be
 delivered in a timely fashion, that they be in the ANSI standard C and/or
 C++ programming languages, and that they compile on non-MS operating
 system platforms (pick three --- Linux, FreeBSD, Solaris, etc.) and on
 hardware other than Intel x86.

 A reference implementation is the only adequate documentation which can
 be accepted in this case --- since any other form is subject to endless
 subjective legal wrangling and interpretation which would necessarily 
 cross the disciplines of computer science and jurisprudence.  

 A reference implementation would also include a compliance suite.  It
 can be unequivocally and objectively judged.  Either it compiles and
 demonstrates interoperation with Microsoft's own software, or it fails and
 Microsoft can be fined, enjoined from releasing new products, etc.

 Of course Microsoft would be required to deliver such a suite for their
 existing operating systems and applications suites.  This would be
 required to be done in a timely fashion.  

 More importantly: The court should hold that Microsoft must continue to
 publish such reference implementations for all future upgrades and
 products. For future upgrades and products it would be reasonable to
 require that these reference implementations be published prior to software
 release and tested immediately.  Microsoft could then be enjoined from
 distributing new products and new revenue generating upgrades of their
 existing products until their products are judged to be in compliance
 (interoperable) with their own published reference standards.

 I believe that this approach has a number of advantages:

	* Objective criteria
	* Respect of Microsoft's legitimate intellectual property
	* Freedom to innovate
	* Low cost to the Federal Government
	* Little requisite regulatory burden
	* Promotion of both the competing commercial software 
	  industry interests and the open source and free software
	  movements (and other consumer interests).
	* Low public perception of meddling and bureaucracy
	* Academic and educational benefits

 Note that this plan does respect Microsoft's rights to its intellectual
 property.  It doesn't require them to publish or provide any access
 to the source code of their current and future products.  These 
 reference implementations can be completely independent or they can be
 functional subsets of Microsoft's code.

 It also allows Microsoft to continue to modify their software
 (operating systems and applications) and to freely "integrate"
 and separate these products for their own business interests.  However,
 it does so in a way that ensures that other companies can make 
 competing products, enhancement utilities, etc.  It also ensures
 that the U.S. Federal Government, international interests, and other
 parties will be able to maintain their document, networking and 
 internal software infrastructures regardless of what Microsoft does
 with their future products and upgrades.  (Indeed with these reference
 standards you significantly reduce the risks associated with use of 
 proprietary software in general).

 Regardless of any breakup, and in addition to any regulatory 
 oversight and punitive fines that you impose on Microsoft I 
 recommend that you do the rest of the country some tangible 
 good by requiring reference standard implementations of all 
 networking protocols, application programming interfaces and 
 file formats that Microsoft uses to "integrate" their clients
 to their servers, and to interface their applications with their
 operating systems, and to store and interchange their documents,
 configuration data, account and management databases, etc.

 Thanks.

--
Jim Dennis                                            
jdennis@linuxcare.com
Linuxcare: Linux Corporate Support Team:           
http://www.linuxcare.com
   
Date: Wed, 03 May 2000 13:09:52 -0400
From: walt smith <waltech@bcpl.net>
To: letters@lwn.net
Subject: several fundamentals

Dear LWN readers,

Distributions are proliferating like mosquitos
in spring.  The basis for being in business and
obtaining funding for doing so has almost always
been that money is to be had from service, and not
sales of the basic (software) box.  I disagree.

However, service appears to be a necessary reason in order
to achieve funding for Linux companies to exist.  The
Microsoft empire is largely based on sales of (software) boxes
at a reasonable price.  I believe their prices, historically,
have been reasonable. Many Linux boxes are approaching
the same price basis.  There is the caveat that the
sink and entire kitchen are thrown in with the box.  In other
words, sales of the box are significant and important, and should
be recognized as such. I feel more notice of this is necessary.
--
I had an interesting experience recently. It should
prove valuable to marketing psychologists. After actually
spending some loot on a Linux box off-the-shelf, I had
a revelation. Like many,  I'd borrowed Linux CD's,
and also downloaded free Linux from the Internet.
When I went to "retailer", I knew I was purchasing "free" software.
Why?  It wasn't free!  But it was the feeling of getting
something free and paying a little for the tiny manual and
box.  The convenience of the CD!  Or was it?
Nope!  It was the feeling that, even free, the box wasn't a sole-source
product.  I knew the product would be upgraded "soon",
but so what?  It was cheap ($29).  I knew it was free if I
really wanted it to be free!  I knew if I didn't like it, I had a
choice of multiple different packages with the same underlying
function and commonality (/lilo/, /usr/, ext2fs, et al.)!  ...all
free or shelved at competitive prices!  It was a feeling of buying
into a community; that same feeling non-techies have buying
Nikes or Windows.  And I knew it was ALL there - no additional
$$ for a word processor or a compiler - Jeeze!  I even had multiple
selections.
--
I read on LWN that the new 2.4 kernel isn't due until possibly autumn.
This is a shame!!  One of the hallmarks of Linux is the *continuous*
improvement.  Waiting beyond some magical number isn't good for me
psychologically, nor for a significant part of the Linux community.
I understand the tradeoff for the stability that many (larger corporations)
require.  Perhaps there's too much concern?  PLEASE! this is not a swipe
at Alan or Linus!!!  I'm concerned about the market momentum, and I don't
have a solution.
--
One last prediction.  SGI has moved in the right direction to adopt Linux.
If they were a company with diversified products, it'd be even better.
I believe neither SGI nor Sun can continue as they have been: major changes
are coming.  This might mean the names SGI and Sun disappear, like DEC.

regards,

Walt Smith, Baltimore

Copyright © 2000 Eklektix, Inc., all rights reserved
Linux ® is a registered trademark of Linus Torvalds