

Linux links of the week


This week we feature a pair of Liz's favourite links. We've mentioned them before, and they have been around as long as LWN has, but they are still two sites she loves to go back and read on a regular basis. Note that they are best read as a pair:

Section Editor: Jon Corbet


February 24, 2000

Letters to the editor


Letters to the editor should be sent to letters@lwn.net. Preference will be given to letters which are short, to the point, and well written. If you want your email address "anti-spammed" in some way please be sure to let us know. We do not have a policy against anonymous letters, but we will be reluctant to include them.
 
   
From: Collins_Paul@emc.com
To: letters@lwn.net
Subject: capability bits
Date: Thu, 17 Feb 2000 08:31:10 -0500

Dear Editor:

From this week's LWN, Kernel Development section:

> Given the amount of trouble people (and distributors) 
> have with the existing permission bits, how will they 
> cope with dozens of capability bits that must be 
> correctly set on every file? 

If capability bits (CBs) in the filesystem were implemented like this, then
we would have a problem.

The Right Way is as follows:

You would set the standard system-wide capabilities on /; they would then
flow down to the rest of the filesystem.  If you needed to augment/restrict
capabilities, you would do so on the specific file or directory that needed
it, in a fashion similar to the Inherited Rights Filter in NetWare.

Both NetWare's filesystem and NDS operate like this; permissions are
dynamic, based on the location of an object/file/directory.  Active
Directory uses a static arrangement, where the rights are placed with each
object individually.
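
To make the idea concrete, here is a minimal user-space sketch of that kind
of top-down resolution.  The per-directory grant/filter data is entirely
hypothetical (nothing like it exists on disk in Linux today); the sketch
only shows how an effective set could be computed by walking from / down to
the target, in the inherited-rights style described above.

    /* Sketch of the "flow down from /" model described above.  The
     * per-entry data (a granted set plus an inheritance filter, in the
     * spirit of NetWare's Inherited Rights Filter) is hypothetical;
     * no such on-disk format exists in Linux today. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef uint32_t capset_t;              /* one bit per capability */

    struct cap_entry {
        const char *path;                   /* directory or file */
        capset_t    grant;                  /* capabilities added here */
        capset_t    filter;                 /* capabilities allowed to inherit */
    };

    /* Stand-in for per-inode capability data. */
    static const struct cap_entry table[] = {
        { "/",                  0x000000ff, 0xffffffff }, /* system-wide default */
        { "/usr/local",         0x00000000, 0x0000000f }, /* IRF-style restriction */
        { "/usr/local/bin/foo", 0x00000100, 0xffffffff }, /* per-file augmentation */
    };

    static const struct cap_entry *lookup(const char *path)
    {
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
            if (strcmp(table[i].path, path) == 0)
                return &table[i];
        return NULL;
    }

    /* Resolve the effective set by walking from / down to the target,
     * masking with each filter and OR-ing in each grant along the way. */
    static capset_t effective_caps(const char *target)
    {
        char prefix[4096];
        capset_t caps = 0;
        size_t len = strlen(target);

        for (size_t i = 0; i <= len; i++) {
            if (target[i] == '/' || target[i] == '\0') {
                size_t n = (i == 0) ? 1 : i;     /* "/" itself, then each prefix */
                memcpy(prefix, target, n);
                prefix[n] = '\0';
                const struct cap_entry *e = lookup(prefix);
                if (e) {
                    caps &= e->filter;           /* restrict what is inherited */
                    caps |= e->grant;            /* add what is granted here */
                }
            }
        }
        return caps;
    }

    int main(void)
    {
        printf("effective caps for /usr/local/bin/foo: %#x\n",
               effective_caps("/usr/local/bin/foo"));
        return 0;
    }

A real implementation would presumably cache the result of these walks,
which bears directly on the performance question below.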

Issues that would need to be resolved:

1) Do capabilities flow across mount points?  (Mount-time option?)

2) What impact will looking up capabilities have on filesystem performance,
given the dynamic configuration outlined above?  (Caching would absorb much
of the overhead.)

3) Would there be user/group/other capabilities?  (Extension: Access Control
Lists?)

Sincerely,

Paul Collins

--
Please note that I speak on behalf of no-one but myself.
   
From: Andrew Kenneth Milton <akm@mail.theinternet.com.au>
Subject: Re: [Zope] Press Release: WorldPilot 1.0 released
To: Jens Vagelpohl <Jens@digicool.com>
Date: Fri, 18 Feb 2000 10:22:31 +1000 (EST)
CC: "'chris@linuxdev.net'" <chris@linuxdev.net>,

+----[ Jens Vagelpohl ]---------------------------------------------
|
| WorldPilot will run on any platform supported by Zope, so far it has
| been tested on Linux, Windows and Solaris.

And FreeBSD! d8)

-- 
Totally Holistic Enterprises Internet|  P:+61 7 3870 0066   | Andrew Milton
The Internet (Aust) Pty Ltd          |  F:+61 7 3870 4477   | 
ACN: 082 081 472                     |  M:+61 416 022 411   | Carpe Daemon
PO Box 837 Indooroopilly QLD 4068    |akm@theinternet.com.au| 
   
Date: Thu, 17 Feb 2000 12:40:23 -0800
From: kenengel <kenengel@linuxstart.com>
To: letters@lwn.net
Subject: UCITA in its full glory

Let's assume the worst possible outcome becomes reality: UCITA passes in
all states, is ratified into law, perhaps is even added to the U.S.
Constitution as an amendment. We agree it is a soberingly radical and
cynical proposition. Then I propose we resort to a radical and cynical
counteraction. I'm advocating nothing less than widespread civil
disobedience.

I propose that everyone who opposes UCITA disregard it at every turn, but
only when it's appropriate. That is, if you discover you are being
manipulated by your vendor, whether through ludicrous licensing terms,
litigation, or just a poor quality product, then engage in counter-offensive
tactics:

  - Make as many copies of proprietary license-per-copy software, or other
    "protected" information like databases, as you need. Distribute freely.

  - Hack a work-around for software that expires or is remotely disabled.
    Hack the software so that it doesn't violate your privacy by, for
    example, collecting confidential information or scanning your hard
    drive. I suspect such methods will be published by L0pht, 2600, the
    Cult of the Dead Cow, et al. I hope they do, and I would encourage
    anyone to use them.

  - Publish candid reviews and critiques of such licensed software.

  - Reverse-engineer it to your heart's content, for whatever purpose,
    whether to fix interoperability or just to see how it works. Break it
    apart and use the useful bits in your own programs.

  - And simply ignore any other contractual terms you find unacceptable.

Sure, it's easier said than done. But it will be necessary when we face the
reality that there is no other recourse. Just as people have died to
establish and protect Freedom of Speech, some companies and individuals will
suffer violations of their freedom, and not only court injunctions and legal
costs, as the ugly consequences are brought to light. If UCITA truly becomes
the Law of the Land, it will likely take a judge, or a panel of judges, with
real integrity to strike it down, as one did with the CDA, in order to
restore fairness and sanity.

----------------------
Do you do Linux? :) 
Get your FREE @linuxstart.com email address at: http://www.linuxstart.com
   
From: "Stuart Herbert" <S.Herbert@sheffield.ac.uk>
To: <letters@lwn.net>
Cc: <esr@snark.thyrsus.com>
Subject: LWN Letter: VA's Acquisition of Slashdot
Date: Fri, 18 Feb 2000 01:02:02 -0000

Hi there,

I've been reading with interest all of the concern about VA's purchase of
Andover.net, and what it might mean for the future direction of Slashdot.

Why should we care?

Slashdot may be a community resource, but I'm not sure what particular
community that might be.  It certainly isn't the Open Source / Free Software
community many of us have worked to build over the years.  If you're not
involved with one of the "headline" packages, *you* try getting anything
published there.

These days, sites like Alan Cox's www.linux.org.uk provide a far better
range (and quality!) of articles (and in a more timely manner, too!) than
Slashdot ever has.

Slashdot is fast acquiring a reputation as a place for wannabes, not as a
place where the real work gets done.  If VA were to close it down tomorrow,
those of us at the coal face probably wouldn't even notice.  If ESR and VA
want to provide the community with real resources (as they have already done
with SourceForge), they should concentrate on making Freshmeat the premier
site instead.

I've never met those behind Slashdot, and I wish them well for the future.

Best regards,
Stu
--
Stuart Herbert                               S.Herbert@sheffield.ac.uk
Generic NQS Maintainer                            http://www.gnqs.org/
--

   
To: letters@lwn.net
Date: Fri, 18 Feb 2000 06:49:05 -0800
From: "Jonathan Day" <jd9812@my-Deja.com>
Subject: SGI's STP

Hi,

SGI's Scheduled Transfer Protocol looks like an
ingenious piece of work, and I look forward to
seeing what people actually do with it. (Code
never gets used just for the purpose for which
it was written. With suitable application of
hammers, saws and an editor, code can be applied
to problems the original coders never envisaged.)

As for Larry McVoy's suggestion, see the
Commodore PET, IEEE 488, and the 4040 floppy
disk drive. Back in the 1970s, Commodore had
this crazy idea of building computers into the
printers and disk drives, so that the main
processor could do useful work rather than be
tied up. If you were copying a file from one
drive to another, for example, using the main
computer as a buffer was considered daft. You
could even attach two drives to each other and
have them copy files, without having a main
computer connected at all.

I see this new protocol as an extension of that
'70s technology, and one that also meets Larry's
idea of a uniform network bus. Ethernet devices
are now faster than most PCs, and certainly
faster than most "traditional" parallel busses.
The benefits of parallel transmission cease to
be relevant if you can't keep up with a serial
stream. In fact, there is absolutely no need to
stick to hard disks with this. Printers can
already be networked, but this might make life
a lot easier for them.

Also, why stick to peripherals? ISA, EISA, MCA,
PCI, VME, etc, are all fancy ways of connecting
what are really "external" devices to the
computer. You've then got all sorts of
electronic wizardry to handle local busses, etc.
All this is very expensive, as you've got to lay
expensive, high-precision parallel tracks to
each expensive, high-precision connector. And
the faster the bus, the higher the precision and
the higher the cost of the bus. That's one of
the reasons the original PC had only 20 address
lines. The difference in cost was worth it.

If you put -everything- on one gigantic, very
high-speed serial ethernet connection, you don't
have the high-precision hardware to worry about.
A bunch of gigabit ethernet adaptors and some
ethernet cable are orders of magnitude cheaper
than a high-precision VME rack. (The rack alone
can cost $3000.) This would put every device and
every card on a completely uniform bus. Cards for
other devices would be reduced to simple 2-way
or 3-way hubs with maybe some processing.

Lastly, as this e-mail is getting long, if
everything's on ethernet, you can use all of the
tricks that have been developed for it. Want to
find the nearest idle disk drive? Do an anycast.
Want to do software RAID 5, without overhead?
Multicast, where the drives to be used are all
in a common group.
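
As a rough illustration of the discovery
idea, here is a user-space sketch that sends
a probe to a multicast group of hypothetical
network-attached drives and takes the first
reply as the "nearest" idle one. The group
address, port and one-line "IDLE?" exchange
are invented for the example; STP itself
defines nothing of the sort.

    /* Rough sketch of "find the nearest idle drive" over IP multicast.
     * The group address, port, and one-line "IDLE?" exchange are made
     * up for this example; STP defines nothing of the sort. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define PROBE_GROUP "239.255.77.77"   /* hypothetical discovery group */
    #define PROBE_PORT  7777              /* hypothetical discovery port  */

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in group;
        memset(&group, 0, sizeof(group));
        group.sin_family = AF_INET;
        group.sin_port = htons(PROBE_PORT);
        inet_pton(AF_INET, PROBE_GROUP, &group.sin_addr);

        unsigned char ttl = 1;            /* keep the probe on the local segment */
        setsockopt(s, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));

        struct timeval tv = { 1, 0 };     /* give up after one second */
        setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        const char probe[] = "IDLE?";
        sendto(s, probe, sizeof(probe) - 1, 0,
               (struct sockaddr *)&group, sizeof(group));

        /* First answer wins: the closest / least busy drive replies fastest. */
        char reply[128];
        struct sockaddr_in from;
        socklen_t fromlen = sizeof(from);
        ssize_t n = recvfrom(s, reply, sizeof(reply) - 1, 0,
                             (struct sockaddr *)&from, &fromlen);
        if (n < 0) {
            fprintf(stderr, "no idle drives answered\n");
        } else {
            reply[n] = '\0';
            char addr[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &from.sin_addr, addr, sizeof(addr));
            printf("nearest idle drive: %s (%s)\n", addr, reply);
        }
        close(s);
        return 0;
    }

The "first reply wins" trick is the same
thing that makes anycast attractive here:
proximity and idleness both show up as lower
latency.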

Jonathan Day



--== Sent via Deja.com http://www.deja.com/ ==--
Share what you know. Learn what you don't.
   
Date: Mon, 21 Feb 2000 18:10:19 -0500
From: Zygo Blaxell <zblaxell@genki.hungrycats.org>
To: metcalfe@infoworld.com, letters@lwn.net
Subject: "From the Ether", Friday Feb 11, 2000



I can't speak as a representative of the "open source community", but
it's statistically likely that other people coincidentally share the
following views:

CPUs have traditionally been closed-source.  Certainly, Transmeta is
doing nothing particularly new by releasing a chip without also releasing
the complete specifications required to manufacture a clone on that home
chip-foundry set you have in your basement.  They're also not doing
anything particularly new by not releasing the full source code for their
proprietary application software (for that is really all that it is)
that emulates an x86 virtual machine.

While this isn't a step forward, it's not a step backward either.
We already have proprietary code in Linux device drivers (they're
a constant pain to use, they're buggy, they can't be distributed
with the kernel...but they still exist anyway).  We already have
proprietary virtual x86 emulators for Linux (VMware for x86-on-x86 and
Digital's thingy for x86-on-Alpha).  Of course, if we actually had usable
open-source x86 virtual machines and CPUs whose chip mask specifications
could be copied off the Internet from Transmeta's competitors, then we
could fairly hold Transmeta up to higher standards.

I think Linux's single greatest feature is its ability to embrace
fragmentation.  I don't think there can be such a thing as bad
fragmentation.

What killed the Unix market wasn't fragmentation.  Fragmentation is
the scapegoat.  Fragmentation is the scary word that Microsoft says to
make you want to buy Windows.  What killed the Unix market was
the conflict of interest between Unix vendors and Unix customers.
This conflict will kill Microsoft too, eventually, although nothing is
going to kill a ~$300 billion company very quickly.

Most customers choose the vendor that does well the one thing that
they want done, and resist paying for anything else, so they search for
products that meet their requirements at the lowest cost.  Most vendors
want to maximize their profit per transaction and minimize their
engineering costs, so they figure out what 90% of the market wants,
prioritize the mutually exclusive goals, compromise until the cost is
low enough to be feasible, and build many copies of a single box that
tries to do all of those things at the same time.

Old-style Unix vendors succeeded as well as they did because they picked
customers who all wanted the same thing.  That means their boxes did one
thing well, and they formed a niche or vertical market around that thing.
IBM dominated the business data processing market.  SGI made frighteningly
cool I/O and graphics subsystems.  Sun got its start making really
cheap Unix hardware.  HP made computer systems that were built like
scientific instruments.  SCO targeted the vertical applications market.
Digital made (and Compaq is still making) CPUs with floating point
performance that is still unmatched today.  They all ran Unix (and usually
at least one other OS as well), and they collectively decimated those
vendors who were building complete software and hardware solutions that
didn't or couldn't run Unix.

During the 1980's, dozens of small-workstation (not to say "personal
computer") vendors tried to sell a lot of complete packages that would--by
themselves, using only a single vendor's components--attempt to be and
do everything one could want in a personal computer.  Few could really
understand what, exactly, everything one could want in a personal computer
was--and those who did understand knew that the cost was prohibitive
and compromised their designs in order to stay on budget.  As a result,
everyone who tried, failed.

In the absence of a real technological leader, the market drifted along
with the company that happened to have the cheapest, most accessible,
and most extensible hardware.  Small hardware vendors could focus on
doing their one thing well, then put it on a card that fits into a PC.
So if one wanted a cool PC, one would buy a box full of cards from
different vendors, each of which does its thing well, and collectively
it does a better job for the customer than the competition's box which
does everything but nothing very well.  Collectively, the PC vendors
decimated their competition as well, by focusing on keeping the one
component of a PC that they manufactured as small, fast, and cheap as
possible.

Apple boldly rushed into the marketplace as a total PC solution from a
single vendor.  Their major problem was that they refused to compromise
on the technical quality of their equipment or software, so every Mac was
saddled with a totally untested OS design, expensive SCSI disk drives and
Sony Trinitron monitors--components that few users needed and even fewer
were willing to pay for--while PC vendors were shipping a 20-year-old
OS design with crappy IDE disk drives with retrofitted 13" TV sets on
top for just under the same price.  We all know who was ultimately more
successful in the marketplace--very few users will actually notice even
a 50% difference in disk speed, since most "desktop" users use their
disks less than 1% of the time anyway.

We are seeing the same effects beginning to affect software.  Sun's
attempt to force Java, unfragmented, into the marketplace has almost
completely fallen apart.  Those who actually want or need Java are willing
to accept it--if "just a few" modifications are made here and there, and
if it worked with this or that system a little better, and if it supported
this set of application-specific extensions, and if any corporation other
than Sun Microsystems had stewardship over it.  Java by itself can only
deliver its value proposition if Java is the totally dominant run-time
environment out of all candidates in existence, so Java is only
successful in places where it has no viable alternatives to begin with.

Microsoft is beginning to realize the mistake it made in trying to make
Windows do everything, and is now launching a variety of slightly
different versions of Windows in an attempt to specialize its products
more, hoping that if one or two fail then the others may still be
viable.  So far, every attempt by Microsoft to expand Windows and
Windows-related products into areas outside of what they are good at has
failed miserably.  Microsoft would do better to spend its resources on
protecting what advantage it does have--consumer-level desktop operating
systems--rather than taking huge risks to try to force its products to do
additional things that dozens of others are already doing very well,
thank you.

Meanwhile, Linux is being tweaked by thousands of people to do thousands
of different things well.  Collectively, Linux will decimate the OS
software market because every vendor is focusing on making Linux do
their one thing well.

Fragmentation is what made the PC into the dominant desktop hardware
platform.  With only minimal central control over the system design--a PC
hardware vendor can get away with anything, as long as they don't violate
too many patents and the machine still does something--anything--well
enough for someone to want to buy it--the PC exploded into thousands of
different variations with designs optimized for everything from low cost
to low power to high performance to hostile environments.  If you want a
custom PC that just _looks_ cool, you can even find someone, somewhere,
who manufactures hardware with esthetically pleasing form factors.  If you
want a standard PC that works well with everyone else's stuff but doesn't
do anything spectacular on its own, you can get one of those too.  Every
customer can find a product that fits their needs.

Fragmentation is what will make Linux into the dominant operating
system platform.  With only minimal central control over the system
implementation--a Linux distribution vendor can get away with anything,
as long as they don't violate any licenses and the software still
does something--anything--well enough for someone to want to buy
it--Linux will explode into thousands of different variations with
implementations optimized for everything from low cost to low setup time
to high performance to high availability.  If you want a custom OS that
just _looks_ cool, you can even find someone, somewhere, who writes
esthetically pleasing KDE or GNOME themes and pre-installs them with
the OS.  If you want a standard Linux that works well with everyone's
software but doesn't do anything spectacular on its own, you can get one
of those too.  Every customer can find a product that fits their needs.

-- 
Opinions expressed are my own, I don't speak for my employer, and all that.
Encrypted email preferred.  Go ahead, you know you want to.  ;-)
OpenPGP at work: 3528 A66A A62D 7ACE 7258 E561 E665 AA6F 263D 2C3D

   
Date: Mon, 21 Feb 2000 22:12:50 -0800
From: Jim Dennis <jimd@starshine.org>
To: rick@linuxdevices.com
CC: editor@lwn.net
Subject: KU Real-Time Linux (KURT)

In your recent white paper I think you missed making a reference to the
Kansas University real-time Linux project:

    http://hegel.ittc.ukans.edu/projects/kurt/

Here's an excerpt from their old web pages (since
their new web pages present most of the docs as
large PDF and PostScript files, the link to the
"old" pages is preferable for white-paper and
executive audiences).

``A purely binary distinction between hard and
soft real-time is clearly not acceptable for all
applications. Many applications have requirements
spanning a continuum between the two. To service a
wider range of requirements, we have developed a
firm real-time Linux. We call this system "KURT"
Linux for KU Real-Time Linux.

KURT Linux allows for explicit scheduling of any
real-time events rather than just processes. This
provides a more generic framework onto which
normal real-time process scheduling is
mapped. Since event scheduling is handled by the
system, addition of new events such as periodic
sampling data acquisition cards (video, lab
equipment, etc.) is highly simplified.

KURT introduces two modes of operation - the
normal mode and the real-time mode. In normal
mode, the system acts as a generic Linux
system. When the kernel is running in real-time
mode, it only executes real-time processes. While
in real-time mode, the system can no longer be
used as a generic workstation, as all of its
resources are dedicated to executing its real-time
responsibilities as accurately as possible.

A simple system call allows the kernel to be
toggled between real-time and normal mode. During
the setup phase, the schedule of events to be
executed in real-time mode is specified and the
various processes that are to be executed in
real-time mode are marked. The kernel is then
switched to real-time mode. When all of the
real-time tasks finish execution, the kernel can
be switched back to the normal mode.

Presently, this system is available only for the
i386 architecture. Porting KURT to other
architectures requires only minimal additions. If
you are interested in porting KURT to other
architectures, please send mail to
kurt@ittc.ukans.edu.  ''
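
The two-mode workflow in that excerpt can be
pictured with a purely illustrative sketch.
Every name below (kurt_add_event,
kurt_mark_process, kurt_set_mode) is invented
for the illustration and is not KURT's actual
interface; the sketch only mirrors the
setup/mark/toggle sequence the excerpt
describes.

    /* Purely illustrative sketch of the setup/mark/toggle sequence the
     * excerpt describes.  Every name here (kurt_add_event,
     * kurt_mark_process, kurt_set_mode) is invented for the sketch and
     * is not KURT's actual interface. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    enum kurt_mode { KURT_MODE_NORMAL, KURT_MODE_REALTIME };

    /* Stand-ins for whatever calls the real patches provide. */
    static int kurt_add_event(long usec_offset, int event_id)
    {
        printf("schedule event %d at +%ld us\n", event_id, usec_offset);
        return 0;
    }

    static int kurt_mark_process(pid_t pid)
    {
        printf("mark pid %d as real-time\n", (int)pid);
        return 0;
    }

    static int kurt_set_mode(enum kurt_mode mode)
    {
        printf("kernel -> %s mode\n",
               mode == KURT_MODE_REALTIME ? "real-time" : "normal");
        return 0;
    }

    int main(void)
    {
        /* Setup phase: build the event schedule, mark the processes. */
        kurt_add_event(0,     1);    /* e.g. start data acquisition   */
        kurt_add_event(10000, 2);    /* e.g. sample again 10 ms later */
        kurt_mark_process(getpid());

        /* Run the schedule, then hand the machine back to normal use. */
        kurt_set_mode(KURT_MODE_REALTIME);
        /* ... real-time tasks run to completion here ... */
        kurt_set_mode(KURT_MODE_NORMAL);
        return 0;
    }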

You can find more of this sort of info at:

    http://smalllinux.netpedia.net/links/embedded.html

I found your white paper linked from Linux Weekly News: Daily.

I've heard that someone was working on unifying
the RTLinux, RTAI, and KURT work into one set of
patches which would allow users and kernel
developers to build kernels with soft and/or hard
real-time extensions enabled.  I don't know the
details, though the principals at Zentropix
probably do.

--
Jim Dennis                                             jdennis@linuxcare.com
Linuxcare: Linux Corporate Support Team:            http://www.linuxcare.com



Copyright © 2000 Eklektix, Inc., all rights reserved
Linux ® is a registered trademark of Linus Torvalds