
A Japanese translation of this page is being prepared at ChangeLog.net.

Rik van Riel has made a Dutch version of this page.

A French translation of this page is also available.

See also: On Mindcraft's April 1999 Benchmark by Dan Kegel.

A look at the Mindcraft report

On Tuesday, April 13, Mindcraft released a report claiming to be a comparison of Windows NT and Linux in an enterprise server environment. The summary of this report reads:
Microsoft Windows NT Server 4.0 is 2.5 times faster than Linux as a File Server and 3.7 times faster as a Web Server.
Needless to say, this report has drawn some attention. Since their results differ so strongly from those of other, similar studies, it is normal to want to understand what was different this time around.

This document is a summary of the information that has been gleaned from their report. Thanks are due to the unbelievable number of people who have sent us mail on the topic. We have long since lost the ability to credit everybody individually; to list some names would do a disservice to those who were dropped. Suffice it to say that the following represents the work of a great many people; we have mostly just served as organizers of the information.

Some history

Mindcraft is a company which specializes in testing and benchmarking systems. Their testing is done for paying clients (as opposed to, say, Ziff-Davis, which tests for its own publications). Their services page sums things up very well:
With our custom performance testing service, we work with you to define test goals. Then we put together the necessary tools and do the testing. We report the results back to you in a form that satisfies the test goals.
We may never know what the stated "goals" of this test were, but the client was Microsoft. Microsoft has also paid for similar studies in the past; those tests, too, brought out charges of unfair practices. See, for example, Novell's response to the NetWare test (Mindcraft responded thusly).

It is also interesting to note that, while the other tests include price/performance comparisons, the Linux test omitted them.

Configuration of the Linux server

A number of problems have been found in the way the Linux server was configured in this test. These include:
  • The 2.2 kernel supports a number of tunable parameters in the file system and the buffer cache. Adjustments to the "bdflush" and file system cache size parameters have been known to double the performance of high-stress Samba servers. This tuning was not performed (a sketch of what it might have looked like follows this list).

  • Mindcraft used the 2.2.2 kernel for their tests, even though 2.2.3 was available at the time. 2.2.2 had some well-known and well-documented TCP problems, particularly relating to interoperability with Windows clients.

  • The server they tested was set up with both NT and Linux on the same disks. Bill Henning (of CPU Review fame) points out that whichever system ended up on the outer part of the disk (where there are more sectors per cylinder) would have a 1.5 to 2 times transfer-rate advantage over the other. Mindcraft does not specify which system was installed where, so it is impossible to know what effect placement had. (We have received a report that the test was actually performed with two separate OS disks, and that it was not truly a multiboot system. The question of the data disks remains, however.)

  • The test was performed using a RAID controller which is not well supported under Linux, with version 0.92 of the driver, which had a known problem on SMP systems. A different controller would likely have yielded better results.

  • Several people have pointed out that, in the list of "processes running immediately before" the tests, both Apache and Samba are absent. This is either an oversight on their part, or these servers were running out of inetd. If the latter case holds, it is amazing they got the performance they did. It actually seems unlikely that they were this badly off; an error in the report itself seems more probable.

  • They went to considerable effort to optimize the performance of the network cards under NT. The TCP window was increased to 65536 bytes, something which can speed transfers on an isolated test network but is not normally done in production. The same optimizations were not performed on Linux (the sketch below includes the Linux analogue).

  • They set up a 1012 MB swap file for NT, but say nothing about any swap arrangements for Linux.
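
To give an idea of what the tuning mentioned above might have looked like, here is a minimal sketch for a 2.2 kernel. All of the numbers are illustrative assumptions on our part, not recommendations; appropriate values depend on the hardware and the workload:

    # bdflush takes nine values on 2.2, controlling when dirty buffers
    # get flushed to disk (these particular values are examples only)
    echo "100 5000 640 2560 150 30000 5000 1884 2" > /proc/sys/vm/bdflush

    # Let the buffer cache claim a larger share of memory
    echo "80 10 60" > /proc/sys/vm/buffermem

    # Raise the default socket buffer sizes - the closest Linux
    # analogue of the 64KB TCP window configured on the NT side
    echo 65536 > /proc/sys/net/core/rmem_default
    echo 65536 > /proc/sys/net/core/wmem_default

    # And a swap area comparable to NT's 1012MB swap file
    dd if=/dev/zero of=/swapfile bs=1024k count=1024
    mkswap /swapfile
    swapon /swapfile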

Configuration of Apache

The Apache setup used by Mindcraft does not well match what a real-world web server would use.
  • The Apache configuration is not suited to large loads: it initially starts only 10 servers, and MinSpareServers is set to 1. Quick response to sudden, heavy loads will suffer under this configuration; it is not an "enterprise" setup (see the httpd.conf sketch after this list).

  • Some questions have been raised about how the Apache logging was set up. It appears that Apache was logging to the same drive the OS was on, which could have hurt performance (NT/IIS was logging to the RAID array). IIS also appears to have been configured so that no logging actually happened until after the test completed, while Apache was logging every hit.

  • Their Apache configuration disables KeepAlive, an important real-world optimization. (It has been pointed out, however, that the tests do not use KeepAlive in any case.)
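
As a point of comparison, an Apache 1.3 configuration aimed at heavy loads might look more like the following sketch. The numbers are illustrative assumptions rather than tuned values, and the log path is hypothetical:

    # httpd.conf - process-pool settings sized for load spikes
    StartServers         50
    MinSpareServers      25
    MaxSpareServers      100
    MaxClients           250
    # Keep connections open for multiple requests
    KeepAlive            On
    MaxKeepAliveRequests 100
    # Log somewhere other than the OS disk (path is hypothetical)
    TransferLog          /raid/logs/access_log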

Configuration of Samba

  • Their Samba configuration sets the "wide links" parameter to "no". This setting considerably increases the system call overhead of file name lookups; the penalty is especially severe on SMP systems. (See the smb.conf sketch after this list.)

  • All 144 clients used in the tests were Windows 95 and 98 systems. For reasons best known to the Samba folks, Samba performs better with NT clients than with Windows 95 and 98 clients.

  • It does not appear that Samba was set up to use all of the (multiple) Ethernet controllers on the system.
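
A minimal smb.conf sketch addressing the first and third points might look like this; the interface addresses are made up for the example, and "socket options" is a common tweak of the era rather than something taken from the report:

    [global]
       # Re-enable wide links to avoid extra symlink checks on
       # every path lookup (fine when shares hold no hostile symlinks)
       wide links = yes
       # Make Samba aware of all of the server's network interfaces
       # (these addresses are hypothetical)
       interfaces = 192.168.1.1 192.168.2.1 192.168.3.1 192.168.4.1
       # A common performance tweak
       socket options = TCP_NODELAY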

Non-issues

A few complaints that have been sent to us probably do not figure into the test results. We list them here in the hope of slowing their propagation and improving the quality of information out there.
  • Some complaints have been raised about the test being run on a 4GB server, even though the Linux kernel, in its default form, can only use 960MB of that. Patches can be applied fairly easily to make 2GB available. But, in any case, Mindcraft claims that NT was limited (with the maxmem parameter) to 1GB of memory, so this aspect of the test was fair. It would have been more straightforward of them, however, to have simply removed the other 3GB from the system.

  • It has been noted that the test system was running kerneld, portmap, and NFS, all of which should have been unnecessary in this situation. But their presence should not have caused problems either.

  • There have been speculations that the test system may have been running out of file handles, but further research suggests this is unlikely to have been the case. The 2.2 kernel has a default limit of 4096 file handles - rather higher than previous versions had - and that should have been sufficient here.
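
For anybody who wants to rule this out on their own systems, the limit is easy to inspect and raise on a 2.2 kernel:

    # The system-wide file handle limit (4096 by default on 2.2)
    cat /proc/sys/fs/file-max
    # Raise it on the fly if a busy server ever gets close
    echo 16384 > /proc/sys/fs/file-max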

Summary

It seems clear that the Linux system in this test was not performing up to its full capability. Giving Mindcraft the benefit of the doubt, it could be said that, while they clearly had an NT expert present, they were lacking in Linux expertise and failed to set up the system in an optimal way. A rerun of this test - with suitable Linux expertise at hand - would seem to be in order.

Copyright 1999 Eklektix, Inc. All rights reserved.
Linux® is a registered trademark of Linus Torvalds.