Date: Wed, 1 Nov 2000 03:23:13 +0100
From: stefane fermigier <sf@fermigier.com>
To: lwn@lwn.net
Subject: Toronto Linux Expo, day 1.

Hi,

This is a summary of the tutorials at Linux Expo in Toronto (day 1).

This first edition of Linux Expo Toronto is more a conference than an expo,
with just 15 registered exhibitors and none of the big names like Red Hat,
MandrakeSoft, SuSE, Corel, Caldera, VA or Lineo. Fortunately, the conference
program is rich: 4 tutorials today, 4 keynotes tomorrow (by Bob Young - Red
Hat, Jacques Lemarois - MandrakeSoft, Steve Illington - Oracle and Tim Ney -
FSF), and 35 technical talks after that.

Today I attended two tutorials: Linux 2.4 and Linux Security. Here are my
raw notes.

	S. Fermigier, Toronto, October 30, 2000.

==============================================================================

Tutorial: Linux 2.4
===================

By Eric Dumas. Slides available at http://www.freenix.org/~dumas/Toronto2000/

Linux Kernel History
--------------------

First version in September/October 1991, announced on the Internet. It caught
the interest of students, researchers and hackers.

Linux *is* a Unix operating system (standard Unix commands, standard Unix
behavior).

1991: 0.01 in September, 73 kb compressed. Not portable, no press release.
You needed Minix to use it. 0.02 in October: bash and gcc.

1992: usable and stable; "Linux is Obsolete" thread with A. Tanenbaum.

1993: more mature release (0.99 -> SLS distribution).

1994: 1.0. The network layer is added. First stable release.

1995: 1.2. Supports Intel, Sparc and Alpha.

1996: 2.0: many drivers added. Linux leaves the hacker and student area.

1997/1998: very long development phase with huge core modifications.

1999: 2.2: multiple architectures, SMP is stable.

2000: 2.4 ???.

Who are the developers ? -> 371 registered developers (see the CREDITS file),
with a strong developer core team: Linus Torvalds, Alan Cox (takes care of 2.2
maintenance), David Miller, Stephen Tweedie, Richard Gooch...

Who owns Linux ? The name is trademarked by Linus Torvalds (because someone
tried to trademark it before him and screw the community). The source code is
under the GPL. Linux belongs to its developers. But there is just one Linux
kernel!

Kernel size: from 170,000 lines (1.0) to 3 million (2.4). Drivers and
filesystems take most of the space. But you can still use it on a 386 with
8 MB of memory (the kernel only loads what it needs).

Development process: stable series / development series.

What has changed? Before 1996/1997, it was mostly unknown. It was very
difficult to get specifications to write the drivers (cf. Matrox). None
of the developers was paid to work on the kernel. First Linux
conferences in Europe (Linux Kongress in Berlin). Now many kernel
developers are working for the Linux companies. Linux is known by everybody.

What hasn't changed: Linux is still free. Companies keep giving back
to the community. No commercial pressure on the kernel development.
Quality is still the important point.

What has changed: (try to) speed up the development cycles. More
and more drivers. More companies actively contributing to the kernel.
The Linux spirit of the first years has changed.

The key to Linux: interoperability. It deals with most of the existing
filesystems and network layers. It has emulators (Dosemu, Wine) and native
emulation (Solaris on Sparc, Tru64 on Alpha). It's easy to port C/C++ code
to Linux.

Present -> future: used to be a hobby, now a professional tool
(very stable, good for developers, but many applications for non computer
people too, like games).


Linux 2.4
---------

2.4 development was supposed to be much faster. 2.4 release was originally
planned for December 1999.

What's new: new architectures: Sun, from Sun3 to UltraSparc 10000; Motorola 403
and 405; IA-64 (Linux is completely ready for it); IBM S/390; Transmeta Crusoe;
Hitachi SuperH; HP PA-RISC (in progress).  One of the goals: Linux for big
servers: 64 GB of memory, IDE and SCSI layers rewritten, DMA layer rewritten.

More core features: SMP up to 64 processors on Alpha and Sparc, 
Plug N Play really working, USB = generic bus where you can plug anything, 
I2O, Firewire, I2C, ATM, DECNet, Devfs, Autofs4, NFSv3 (Linux reduces the
gap with FreeBSD), NFSv4 being ported by CMU with support from Sun, Bfs 
(SCO filesystem), flash memory devices, UDF (filesystem for DVD).

Also extended support for end users (game box), with support for video, radio
cards, sound, joystick... But it's still a little bit more complicated to use
than Windows.

Make the machine faster, to beat Windows NT on the benchmarks.
khttpd: an in-kernel accelerator to make web servers faster; enhanced
filesystems; virtual memory management rewritten; huge buffer cache
enhancements. Linux is now more reliable when the machine is loaded, and is
keeping up with Solaris under heavy load.

Make your system less sensitive to hardware failure:
journaling filesystems: Reiserfs, Ext3, XFS (the best current filesystem, from
SGI)... and extended RAID support (levels 0, 1, 5; more hardware supported;
LVM).

Easier to develop: clean and easy way to use PCI interfaces, USB, ISAPNP.
Most of the existing drivers have been updated to use this interface.

A self-documenting kernel. Good, because books about the kernel become
obsolete quickly. With 2.4, the kernel source contains a self-documentation
system. Run "gmake psdocs" and look also at kernel-api.dvi.

Why jump to 2.4? It is going to be reliable, scalable, with many drivers.
Some distributions are ready to upgrade as soon as 2.4 is available.

Ext3
----

Ext2fs was written by Rémy Card with contributions from other Linux
developers. It is good, quick and reliable.

Now: Ext3fs = Ext2fs + journaling code written by Stephen Tweedie.
Current status: version 0.0.2f, still experimental.

What is journaling? Without it, when your machine goes down uncleanly, the
whole disk has to be scanned before the filesystem is usable again. Journaling
records every write action in a journal first.

You have a buffer cache: data is written to memory, not directly to the disk.
Problem if the machine stops and the data are not synced: the filesystem is
then inconsistent. You have to use e2fsck. It will check the coherence of the
filesystem (inodes, bitmaps, directory structure, free blocks, timestamps) and
try to repair it.

Journaling the actions to be performed on the FS allows very quick recovery:
you only have to scan the journal, not the filesystem.  Why add journaling to
ext2 instead of writing a new filesystem? Because ext2fs works great, using
what already exists saves time, and it makes the jump from ext2 to ext3 easy.

In ext3, the journal contains transactions (atomic operations).  Each one is a
single update to the filesystem. They have a specific order and may have links
to other transactions.

Installing Ext3fs: patch the kernel, create an ext2fs partition, and create
the journal. Unmount and remount the filesystem, telling it the inode of the
journal. This is really easy.

Why Ext3fs? because it is simple and keeps all of the major features of
ext2fs.

[What about Reiserfs? Eric can't say much about it, because he didn't
try it himself.]

Writing kernel drivers
----------------------

Example: see drivers/char/busmouse.c

Remember that in Unix, everything is a file. To define a driver, you register
a structure of pointers to the functions that implement the specific file
operations. The device file is identified by two numbers (major and minor).
See struct file_operations in busmouse.c. For a simple driver, you don't need
to implement every function.

Usually, to write a driver you take an existing driver that does something
similar and modify it according to the actual specifications of your device.

Locking in the kernel
---------------------

Why locking? To prevent concurrent accesses and race conditions, and to allow
critical sections on SMP machines.

Spinlocks vs. semaphores: a spinlock is simple logic, you keep trying to get
the spinlock (spinlocks compile away on non-SMP machines). With semaphores, if
a task can't get the semaphore, it is put on a wait queue and woken up when
the semaphore becomes available.

Locking - read/write. It is valid for n users to read the same resource
at the same time, but you can have only one writer at a time.

The locking system in Linux 2.4 has been enhanced. 

Conclusion
----------

The Linux kernel is making progress: more drivers, more features. Performance
and reliability are still the main focus.

Linux is adapted to all machines, from the Intel 386 to the UltraSparc 10000.


==============================================================================

Linux and security
==================

By Hervé Schauer, Guillaume Malgras and Sylvain Gil.

(Check http://www.hsc.fr/ressources/presentations/index.html.en for the slides
for the talks, as well as many more, in French and English).

Hervé Schauer is a well-known computer security consultant in France. His
company (www.hsc.fr) has 12 consultants, was started in 1989 and specializes
in Unix and Internet security.

Some of his consultants are very good at Windows security, just to be
able to explain to their customers why they would be better off using Linux!

Security advantages of open source software
-------------------------------------------

To choose a proper security solution, users have to understand their own
security and service needs, and match them with appropriate resources
(financial and human).

Example: Microsoft has switched from Unix technology to its own technology,
and the result is that they have been hacked. HSC consultants are usually able
to hack IIS servers, but not a properly configured Apache on Linux.

The trouble with proprietary software in security is that it's impossible to
know the inner workings of the software, because it's usually treated as a
trade secret. There are examples from Cisco or Firewall-1 that claim they can
do things that in fact they don't do properly (like stateful inspection) ->
they are lying! So you have to do some reverse engineering, because you can't
rely on vendors that you can't trust. This can lead to huge surprises.

Open source software is usually better than proprietary software for
security. Its code is usually better than proprietary code (size, modularity,
readability, quality, documentation). Open source software is driven by the
developers' ideas, not by marketing (no deadline pressures...). Of course,
there is also bad open source software.

In particular, open source permits source code analysis, fast bug corrections
and support independence. Peer review does work, and no proprietary security
software vendor can afford the same amount of peer review. With respect to bug
corrections, you have the possibility to apply the correction yourself, or to
ask someone you trust to do it for you.  Security is not absolute; you have to
react in time when there is a threat. With open source, you don't have to fear
that support for your software will be discontinued after some change of
policy (like a buyout, a merger or some strategic decision).

Examples of open source security software:

  IP filtering:
  - NetFilter (Linux 2.4) and IP-Filter (OpenBSD, FreeBSD, Unix).

  IP filtering configurators:
  - PHP Firewall Generator, gfcc, easy firewall, Mason...

  Application-level proxies 
  - Squid, Solsoft NSM, Delegate (not very good), FWTK.

  Inbound IP filtering for a server:
  - tcp_wrapper, xinetd

  Vulnerability tests suites
  - Nessus (very good), Whisker, Nmap, Hping.

  Intrusion detection
  - Snort (very good), Shadow, Bro.

  Network level cryptography 
  -> IPsec implementations, like in OpenBSD or FreeS/Wan in Linux.

  Application level cryptography
  - PGP, GPG, SSH, OpenSSH, LSH.

  Authentication
  - Opie, S/Key, Radius, OpenLDAP, Tacacs.

  Secure Web servers
  - Apache + SSL.

  Integrity checking
  - Tripwire, AIDE.

  Logging
  - Swatch, logsurfer, XML-logs, new syslog, modular syslog, syslog NG.

  Name servers and mail transport
  - BIND (BIND 9 has DNS security), Postfix, Sendmail.

  File servers
  - Samba.

  High availability
  - LinuxHA...

Linux 2.2 security
------------------

Linux 2.2 has "capabilities", which are a way to give a process some, but not
all, of the rights usually associated with root. Example: ping only needs
CAP_NET_RAW to access raw sockets. Capabilities were first defined in a (now
obsolete) POSIX draft. There are also two projects under way to attach
capabilities to files.

You have to compile the kernel with the right option, and use the 'lcap'
program to update capabilities.

Setting capabilities for a process is done by using 'execcap' and 'sucap'.
'fcaps' and 'elfcaps' are two utilities currently under development.

Kernel modules are a threat to kernel security because they are executed in
kernel space and can intercept system calls.  A system administrator can
use modules to update drivers without rebooting the system, or to divert
certain system calls to increase security.  But a cracker can also take
advantage of modules, for example by hiding files or processes, filtering
logs or breaking out of a chrooted jail.

Another way to increase security in Linux 2.2 is process accounting. You turn
it on with the command 'accton' and you can tune it by modifying
/proc/sys/kernel/acct. The commands to use it are 'sa' and 'lastcomm'.

The last important tool is filesystem encryption. You can use many algorithms,
like CAST-128, Twofish, Blowfish, DES... You use 'losetup' and 'mount' to
set up the encryption properly.

There are many other security patches to the Linux kernel, like non-executable
user stack.

Some secure Linux distributions: nmrcOS (based on Slackware), Bastille-Linux
(based on Red Hat), Stack Guard, Nexus Linux and Slinux (development not yet
started).

See also Subterfuge.

PAM (Pluggable Authentication Modules) provides a way to upgrade security
functions without recompiling every application that must perform some kind
of authentication.
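
As an illustration, a PAM-ified service reads a stack of modules from its
configuration; a minimal /etc/pam.d/login might look like this (the module
choices are examples, assuming the standard Linux-PAM modules are installed):

```
# /etc/pam.d/login -- each line: type  control  module [arguments]
auth      requisite  pam_securetty.so
auth      required   pam_unix.so
account   required   pam_unix.so
password  required   pam_cracklib.so retry=3
password  required   pam_unix.so use_authtok
session   required   pam_unix.so
```

Swapping in, say, a one-time-password module is then a one-line change, with
no recompilation of login.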

Overview of the main applications of Linux for security
-------------------------------------------------------

As a security gateway, use IP filtering (ipchains and soon NetFilter).
It provides advanced filtering, session analysis and address translation. You
can use a proxy like Squid for HTTP, or more sophisticated proxies like NSM to
perform content analysis. Linux is also very good for Web service (remember to
always put your Web server in a DMZ, not in the same network as the other
servers). Use a split DNS, with a public DNS (only hosts that should be
visible from the outside) and a private DNS. For mail servers, you can use
Postfix and Cucipop (also in the DMZ). Put different services in different
DMZs (if possible). You can use Snort on an intrusion detection station (put
it between the router and the firewall). Use IPsec with FreeS/Wan to create
a VPN.

Linux is excellent for security consultants. It is not possible to do a
complete intrusion test from Windows! You have to use Linux.  Same for
corporate security departments (real-time logging, intrusion detection,
vulnerability tests... -> use Nessus). With Windows, you have to use one
server per application; you can't mix several applications on the same
server. For office networks, use Samba.  You can use Linux workstations even
if you're developing Windows software (because you can use CVS over SSH...).

But Linux doesn't protect against bad use. Use of Linux for security is
growing quickly. Do not forget the other open source Unices (OpenBSD has a
better IPsec stack, so it's better for some VPN uses; FreeBSD seems better
for Web servers).

Mail, DNS and Web under Linux
-----------------------------

[Sendmail: I came back too late after the break, having been speaking with
Christian Tibirna of KDE.]

Postfix: written by Wietse Venema (who also wrote tcp_wrapper and SATAN).  Its
goals were to benefit from the Sendmail experience, to be easy to administer
and thus easy to secure, and to have maximum Sendmail compatibility (for
instance, there is a fake 'sendmail' command).

Architecture: it is written the Unix way, with small programs, so that it is
easy to read, more difficult to break into, and easier to chroot. (There is a
patch to allow SMTP-TLS in Sendmail.) It has multiple waiting queues (good
for performance).

Configuration: just edit /etc/postfix/master.cf and /etc/postfix/main.cf

You have anti-spam control: blacklist (IP address, RBL, no reverse lookup in
DNS)...
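
As an illustration, a minimal main.cf with such a blacklist might look like
this (host names and addresses are examples, and the parameter names are
those of Postfix releases of this period, so check the documentation of your
version):

```
# /etc/postfix/main.cf -- illustrative fragment
myhostname    = mail.example.com
mydestination = $myhostname, example.com
mynetworks    = 192.168.1.0/24, 127.0.0.0/8

# Anti-spam: reject clients listed in a DNS blacklist (RBL),
# or with no reverse lookup in the DNS.
maps_rbl_domains          = rbl.example.org
smtpd_client_restrictions = reject_maps_rbl, reject_unknown_client
```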

Other systems: Qmail (reputed secure, but with less functionality) and Exim
(not widely used, and has some problems).

DNS is like a very large distributed database. BIND is the universal reference
implementation for DNS. It was written at Berkeley in the 80s (like sendmail).
BIND is robust and standard. BIND supports Linux. Configuration is easy in
simple cases but can become very complex. You configure it using
/etc/named.conf and zone files.  In each zone file you put RRs (resource
records). Two records are mandatory: SOA (don't forget the serial number) and
NS. Other resource records are A (IP address), PTR (reverse), MX (for mail
servers) and CNAME (for aliases).
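
As a sketch, a zone file with the mandatory records (SOA and NS) and the
other common ones might look like this (all names and addresses are
examples):

```
; zone file for example.com -- illustrative
$TTL 86400
@     IN  SOA ns1.example.com. hostmaster.example.com. (
              2000103001 ; serial -- don't forget to update it
              3600       ; refresh
              900        ; retry
              604800     ; expire
              86400 )    ; minimum
      IN  NS  ns1.example.com.
      IN  MX  10 mail.example.com.
ns1   IN  A   192.0.2.1
mail  IN  A   192.0.2.2
www   IN  CNAME mail.example.com.
```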

Security: use the latest version (8.2.2pl05). BIND must not run as root, and
run it chrooted.

BIND 9: next generation, fully rewritten from scratch. New security features
like DNSSEC and TSIG. There are no credible alternatives to BIND.

Apache is 'the' Web server. Even Microsoft uses it (for Hotmail). There are
many security-related modules, like ldap_auth, mod_rewrite and EAccess.
Apache is very secure (no vulnerability since 1.3.4). The configuration file
'httpd.conf' is simple and well documented. It has proxying capabilities.
Apache performs access control (.htaccess). SSL support is available
(use apache-ssl or mod_ssl). But be very careful with CGI programming.
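
For instance, password-protecting a directory with .htaccess uses the
standard Apache 1.3 directives (the path, realm and addresses are examples):

```
# .htaccess -- illustrative access control
AuthType Basic
AuthName "Restricted area"
AuthUserFile /etc/httpd/htpasswd
Require valid-user

# Also restrict by client address:
Order deny,allow
Deny from all
Allow from 192.168.1.
```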

Squid is an HTTP/HTTPS/FTP proxy/cache. It has ACLs and can perform several
authentication methods. You can also use it as a reverse proxy (to protect
IIS, for instance).

Network encryption under Linux
------------------------------

You have to encrypt network traffic to prevent eavesdropping on sensitive
information. Three needs: authentication, confidentiality and integrity.

IPsec was first designed to work with IPv6. It has been an IETF standard since
1995. It includes a key management protocol (IKE). IPsec encrypts traffic at
the network layer, so it can be used by all applications.  There are three
protocols: AH (authentication header), ESP (encapsulating security payload)
and IKE. IPsec uses different algorithms for encryption: 3DES, DES (not
secure anymore), CAST-128, Blowfish and AES. For data authentication:
HMAC-MD5 and HMAC-SHA1. For peer authentication: public keys and shared
secrets.

One can use IPsec for VPNs to encrypt traffic between gateways. Under Linux,
use FreeS/Wan (www.freeswan.org). Quite easy to install.  Uses only 3DES.
There is also a native implementation in OpenBSD and in KAME (by the WIDE
consortium, for *BSD / IPv6).

SSL was designed by Netscape, it is now an IETF standard under the name TLS.
It is used by HTTPS, NNTPS, LDAPS, FTPS, telnets, IMAPS, POP3S, SMTP-TLS.

SSL provides data confidentiality (RC4, DES, 3DES...), data integrity and
authentication (MD5, SHA-1). Server authentication is done with a
certificate (X509v3).  Application: Apache-SSL or mod_ssl. You first need
OpenSSL.

SSH is a secure shell, developed to replace telnet and rlogin, and to allow
file transfer (scp). It provides services like server and client
authentication (via public key cryptography), and encryption of data via
IDEA, DES, 3DES or ARC4. ssh (the program) was written by Tatu Ylonen in
Helsinki. It is now an Internet draft. In 1999, OpenBSD created the OpenSSH
project, which is a 100% free replacement for ssh (which had become
proprietary).

The SSH server listens on port 22. The client sends a session key, encrypted
with the public key of the server. There is mutual authentication between
client and server. SSH port redirection allows access to hosts or ports that
normally cannot be reached.
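
Port redirection can be configured on the ssh command line (with -L) or in
the client configuration file, as sketched here (host names and ports are
examples):

```
# ~/.ssh/config -- illustrative port redirection
Host gateway
    HostName gateway.example.com
    # Forward local port 8080 to port 80 on an internal host that
    # is only reachable from the gateway:
    LocalForward 8080 intranet.example.com:80
```

With this in place, "ssh gateway" opens the tunnel, and a browser pointed at
localhost:8080 reaches the internal server through it.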

NetFilter
---------

It's the new IP filtering system in kernel 2.4. Historically, filtering was
first introduced by Alan Cox in 1.1 (ipfw from BSD), then ipfwadm in Linux 2.0
by Jos Vos, then ipchains by Rusty Russell and Michael Neuling, and now
NetFilter by Rusty Russell. Rusty Russell works for WatchGuard, a company
that makes Linux-based firewalls.

A packet always goes through one of these chains: INPUT, OUTPUT or FORWARD.
The built-in targets are: ACCEPT, DROP, RETURN, REJECT, LOG and QUEUE
(a user-space program decides the packet's fate -> performance problem).

NetFilter can filter packets on input or output of an interface, on source or
destination address, service type, protocol, fragment or not, ICMP type and
code, port (for TCP / UDP), TCP option (SYN...).

One can do connection tracking of TCP, UDP and ICMP connections.
Note: stateful inspection is very memory-intensive.

One incredible feature: one can match packets created locally, by pid, uid,
gid or sid.

NetFilter can also do address translation, for instance source address
translation (always done last, in the POSTROUTING chain). It is used for IP
masquerading.
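
The filtering, connection tracking and source address translation described
above come together in the new iptables syntax; a few illustrative rules
(interfaces and addresses are examples, and this is a sketch rather than a
complete ruleset):

```
# Default policies: drop what is not explicitly accepted.
iptables -P INPUT DROP
iptables -P FORWARD DROP

# Accept SSH to the firewall, plus replies to our own traffic
# (the "state" match uses the connection tracking mentioned above).
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Source address translation for the internal LAN, done in the
# POSTROUTING chain of the "nat" table (IP masquerading).
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
```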

Conclusion: not yet ready for production, and a new and complicated syntax.
But NetFilter will be as good as IP-Filter on BSD, and both are much better
than any commercial firewall offering.

-- 
Stefane Fermigier, Tel: +33 6 63 04 12 77 (mobile).
Portalux.com: le portail Linux.
"Internet: Learn what you know. Share what you don't."