
To:	Aman Singla <aman@cthulhu.engr.sgi.com>
Subject: Re: Scheduled Transfer Protocol on Linux 
Date:	Fri, 11 Feb 2000 11:24:12 -0800
From:	Larry McVoy <lm@bitmover.com>

: > Folks, I was at SGI when they did this stuff and I've used it personally
: > quite a bit.  It's very cool.  I think this is definitely worth a look
: > and I'll be playing around with it.
: > 
: > One thing that I've dreamed about for a while is getting the disk drive
: > vendors to put STP down in the drives.  Then we throw out the SCSI/IDE
: > cables and use RJ45 connectors to talk to both the network and the disks.
: > Think hot plug.  Cool, no?
: 
: SCSI over STP is already specified. In fact, SGI and Genroco did
: a prototype demo at CERN last October with SCSI over STP running out of
: Origin2000 with GSN connecting through a Genroco bridge to a fibre
: channel raid array running pure SCSI. That same bridge has blades
: for GbE also.

This is cool but what I want is both more cool and more mundane.
It's the mundane aspect that makes it cool.  Here's the picture:

Imagine that cheapo IDE drives could be bought soon with 100BaseT and
not so soon with GigE over copper connectors.  They'd cost about $100
more than your current IDE drives (which are essentially free :-)

Imagine Linux with STP in the kernel on _both_ ends of the connection.
On the disk drive, the embedded 680x0 is gone, replaced with an embedded
Celeron (or whatever is cheap, maybe Transmeta wants to play in this
space; hmm, that's a cool idea, maybe I should call Ditzel).  

Linux on the disk drive is not as far fetched as you might think - the
drives already have two processors out there, the analogue thing that 
runs the arm and the more normal thing that manages the cache, does
any protocol, whatever.  Lift the existing normal one, drop in a Celeron,
add a little memory and let Linux manage the cache.  And run STP, HTTP,
NFS, SMB, etc.  You now have an IP address problem, but that's what test
networks are for (or the 127.0.0.0 net, there are a bunch of spares).
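
(To make that concrete, here's a toy sketch of the shape of the thing - a
user-space "drive" that answers block reads over a plain TCP socket.  This
is emphatically not the STP wire protocol; the port number, the 512-byte
block size, and the one-word request format are all made up for
illustration.)

/*
 * Toy "network disk" sketch -- NOT the STP wire protocol, just the
 * shape of the idea: the drive answers block reads over a plain TCP
 * socket.  The port number, block size and request format are all
 * invented for this example.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define BLOCK_SIZE 512
#define PORT       5744            /* arbitrary */

/* read exactly n bytes from a stream socket (TCP may deliver less
 * per read()) */
static int readn(int fd, void *buf, size_t n)
{
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, (char *)buf + got, n - got);
        if (r <= 0)
            return -1;
        got += r;
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <backing-file>\n", argv[0]);
        return 1;
    }
    int disk = open(argv[1], O_RDONLY);
    if (disk < 0) { perror("open"); return 1; }

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(srv, 8);

    for (;;) {
        int c = accept(srv, NULL, NULL);
        if (c < 0)
            continue;

        uint32_t blockno;
        char buf[BLOCK_SIZE];

        /* request: one 32-bit block number, network byte order;
         * reply: the raw 512-byte block                        */
        while (readn(c, &blockno, sizeof(blockno)) == 0) {
            off_t off = (off_t)ntohl(blockno) * BLOCK_SIZE;
            ssize_t n = pread(disk, buf, BLOCK_SIZE, off);
            if (n < 0)
                n = 0;
            memset(buf + n, 0, BLOCK_SIZE - n);   /* pad short reads */
            if (write(c, buf, BLOCK_SIZE) != BLOCK_SIZE)
                break;
        }
        close(c);
    }
}

Point a client at port 5744, send it a block number, get 512 bytes back.
In the real picture the kernel speaks STP on both ends and the drive side
layers NFS/SMB/HTTP on top, but the plumbing is about this boring - which
is exactly the point.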

OK, now imagine a nice little industry springing up around this.  First you
have software-only STP.  People make money by putting all the pieces
together and selling 'em.  The Ethernet card people start noticing.
3Com comes out with an STP GigE card that does one channel of STP in
hardware.  Next they come out with 100BaseT cards that do one channel.
Next they have multi-channel GigE cards.  Next a multi-channel 10 GigE
card.

The disk drive people make deals with the enet card people and the
disk drives get one channel of STP in hardware.

Now you have a nice family: software-only STP, cheap but fast cards,
not so cheap but faster cards, really expensive and really fast cards,
and fairly cheap network-attached disk drives.

Think about it.  Wouldn't it be cool if PCs came with little bays in
the back that you slide disks into and they hot plugged?  Hot plugging
is no big deal to the networking crowd - we've been doing it forever -
but it is a huge deal to the disk drive crowd.  Think of all the people
who would buy more disk space if they could upgrade it themselves.
Most people can handle plugging in a networking cable but freak out at
the thought of ribbon cables...

So why is this mundane?  Because it is targeted at the low end - at the
single disk drive, at the cheapo card.  Why is that cool?  Because the 
low end brings volume sales and that drives the prices down and still
lets people make money.  The dream here is to have STP over ethernet
be ubiquitous - anybody with a couple of PCs and ethernet now has 
the makings of a real cluster.

<rant>

What I _DON'T_ want is for clustering interconnects to be esoteric.  I
want all the fancy, expensive, low volume interconnects to just die.
That includes SCI, Fibre Channel, SCSI, ATM, all that shit.  Make it
go away.  Just give me a world that has one kind of cable with 
one kind of packet and focus all the attention on making that cable
faster and the physical connectors cheaper.  That's a way better 
picture than all the low volume stuff.

There are lots of people who will scream about this idea.  I think
you'll find a lot of them at Fibre Channel, SCI, etc., companies.

When I was pushing 100Mbit Ethernet at Sun (this was before it existed),
all the same people were screaming that it couldn't be done, that FDDI
(remember that?) was better, etc.  What they didn't count on was the 
amazing success that comes with a volume answer.

</rant>
