[LWN Logo]

Date:	Thu, 5 Nov 1998 09:58:57 +0100 (MET)
From:	MOLNAR Ingo <mingo@valerie.inf.elte.hu>
To:	linux-raid@vger.rutgers.edu
Subject: RELEASE: RAID-0,1,4,5 and LVM version 0.90, 1998.11.05


this is an alpha release of the latest Linux RAID0145 drivers, against
kernels 2.1.125 and 2.0.35 (a 2.1.127 port will follow as soon as 2.1.127
gets out). This package also contains a prototype (kernel-space and
MD-based) LVM implementation.

WARNING: we are still not out of alpha status; some of the features are
not widely tested. It should be mostly OK, but a backup never hurts ...

you can find raid0145-19981105-2.1.125.gz, raid0145-19981105-2.0.35.gz
and raidtools-19981105-0.90.tar.gz in the usual alpha directory: 

	http://linux.kernel.org/pub/linux/daemons/raid/alpha

new RAID features/fixes in this release:
========================================

   = 'raid=noautodetect' boot time option added

   = /proc/sys/dev/md/speed-limit to runtime-configure reconstruction
     speed.

   = initrd and reboot fixes by Luca Berra <bluca@comedia.it>

   = RAID5.HOWTO by Jakob Ostergaard (jakob@ostenfeld.dk)

   = the 'negative counter' bugfix

   = and various smaller and bigger things i forgot ...
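the boot option and the reconstruction-speed knob above can be used
roughly like this (a sketch only; the kernel image label and the speed
value are made-up examples, and the units of speed-limit are whatever
the driver expects):

```shell
# skip RAID autodetection at boot time, e.g. at the LILO prompt
# (or via an append= line in lilo.conf):
#
#   linux raid=noautodetect

# tune reconstruction speed at runtime (the value 100 is a made-up
# example):
echo 100 > /proc/sys/dev/md/speed-limit
cat /proc/sys/dev/md/speed-limit
```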

experimental LVM support:
=========================

the biggest change is the rewrite to get LVM into MD. This meant
rewriting various MD pieces; e.g. md_arrays[] is gone and mddev is now
allocated at runtime, along with minor device numbers. This enables us to
utilize the rather scarce minor device number space efficiently, which is
a must for LVM.

the LVM is already tightly integrated with the MD code. The RAID
superblock layer still has to be cleaned up a bit more to serve as a
generic 'storage container identification layer', independently starting
up RAID devices, LVM devices, or stacked combinations of the two. But we
are almost there.

the LVM implementation lacks proper user-space support, but people who are
interested and want to comment on the design are welcome to take a look at
lvm_p.h, lvm.h and lvm.c. In raidtools there is an 'mkpv' utility, which
prepares partitions to be added to the LVM: 

	./mkpv -f /dev/sdc6
	/dev/sdc6's size: 51892 KB.
	/dev/sdc6's rounded size: 51000 KB.
	creating VG ...
	creating LV 1 ...
	initializing block groups on /dev/sdc6.

	[root@hell raidtools]# ./mkpv -f /dev/sdc7
	/dev/sdc7's size: 51892 KB.
	/dev/sdc7's rounded size: 51000 KB.
	creating VG ...
	creating LV 1 ...
	initializing block groups on /dev/sdc7.
	[root@hell raidtools]#

a sample raidtab entry to utilize the above PVs:

#
# Create an LVM Volume Group out of two Physical Volumes:
#

raiddev /dev/md0
        raid-level              lvm #-volume-group
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              16
        device                  /dev/sdc7
        raid-disk               0
        device                  /dev/sdc6
        raid-disk               1

and after 'mkraid -f /dev/md0', the VG will show up in /proc/mdstat:

	[root@hell /root]# cat /proc/mdstat
	Personalities : [linear] [raid0] [raid1] [raid5] [lvm]
	read_ahead 128 sectors
	md0 : active lvm sdc6[1] sdc7[0] 0 blocks<LV1 1/20000 blocks used>
	unused devices: <none>
	[root@hell /root]#

currently 'mkpv' creates a single hardcoded 80M LV, which is mapped to
/dev/md9. /dev/md9 can then be used to create a filesystem.

	[root@hell /root]# mke2fs -b 4096 /dev/md9
	[root@hell /root]# mount /dev/md9 /mnt
	[root@hell /root]# df /mnt
	Filesystem         1024-blocks  Used Available Capacity Mounted on
	/dev/md9               50140      52    47500      0%   /mnt
	[root@hell /root]#
	[root@hell /root]# cat /proc/mdstat
	Personalities : [linear] [raid0] [raid1] [raid5] [lvm]
	read_ahead 128 sectors
	md0 : active lvm sdc6[1] sdc7[0] 0 blocks<LV1 427/20000 blocks used>
	unused devices: <none>
	[root@hell /root]#

this Logical Volume can be stopped/started in normal raidtools fashion,
and can be autostarted/root mounted as well.
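a sketch of that stop/start cycle with the standard raidtools commands,
assuming the /dev/md0 Volume Group from the raidtab above:

```shell
# stop the Volume Group (and with it its Logical Volumes) ...
raidstop /dev/md0

# ... and start it again; the hardcoded LV should reappear as /dev/md9
raidstart /dev/md0
```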

the kernel side of the LVM support code is mostly finished; one major
component that is still lacking at the moment is proper integration with
the buffer cache. User-space needs some serious coding and properly
thought-out utilities. This release of the LVM code is meant to give
people an opportunity to comment on the design, before i build too many
things around it :) The physical layout will almost certainly change in
an incompatible way. 

this LVM implementation differs very much from 'typical' LVM
implementations (AIX, HP-UX, Veritas): it's a 'block-level LVM' (i'm not
sure whether this term exists at all), with an allocation granularity
(LVM blocksize) of 4K. This design is pretty 'daring', but it enables
advanced block-device features like filesystem-independent migration,
resizing, defragmentation, on-demand storage management, software-based
badblock handling, snapshotting and multiversioning. But i first want to
finalize (and discuss) the core design (which should already provide all
the 'legacy' LVM operations like spanning a filesystem over arbitrary
devices, and basic storage management) before adding 'applications' to
the core level.

the LVM implementation does not impact overall RAID stability; people
using only RAID should simply disable LVM in the kernel config.

enjoy. Reports, comments, flames, feature-requests welcome. Let me know if
i have missed/forgotten some patch sent to me.

-- mingo