Linux Scalability Project

Status report for May and June 1999

Center for Information Technology Integration
School of Information at the University of Michigan

The primary goal of this research is to improve the scalability and robustness of the Linux operating system so that it can support greater network server workloads more reliably. We are specifically interested in the single-system scalability, performance, and reliability of network server infrastructure running on Linux, such as LDAP directory servers, IMAP electronic mail servers, and web servers.

Summary

During the last two months we focused on building our web server test harness. We have been joined by new staff and new sponsors, and we continue to reach out to potential sponsors. Work continues on our long-term projects.

Milestones

Challenges

Developer Co-ordination

The Linux kernel community, as currently structured, is an evolved system of geographically scattered developers tied together by an e-mail list, an anonymous CVS server for some non-Intel platforms, and a handful of ftp and web sites.

While there are vague assignments of areas of expertise, there appears to be no public co-ordination of who is doing what and when. This leads to some frustration in the greater kernel development community: effort is often duplicated, and some work appears to be favored over other work for no apparent reason. As one .sig states,

There is something frustrating about the quality and speed of Linux
development, ie., the quality is too high and the speed is too high,
in other words, I can implement this XXXX feature, but I bet someone
else has already done so and is just about to release their patch.

For instance, the Linux Scalability Project was founded to focus on improving SMP scalability, among other specific issues. During the past two months, many of the SMP scalability issues in the Linux kernel have been addressed directly by a few Linux developers. The areas addressed so far include threading the networking stack and the page cache, two areas that are critical to providing scalable, enterprise-class network service on SMP hardware.
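
To make concrete what "threading" a cache means here, consider a minimal sketch of the usual pattern, written as ordinary userspace C with invented names rather than the kernel's actual code: a lock per hash bucket replaces a single global lock, so lookups on different chains can run in parallel on SMP hardware.

    /* Sketch only: a hash-chained cache with per-bucket locking.
     * All names here are invented for illustration; this is not
     * the 2.3 page-cache or networking code.
     */
    #include <pthread.h>
    #include <stddef.h>

    #define HASH_SIZE 1024

    struct entry {
            unsigned long key;
            struct entry *next;
    };

    struct bucket {
            pthread_mutex_t lock;   /* protects this chain only */
            struct entry *chain;
    };

    static struct bucket table[HASH_SIZE];

    static void cache_init(void)
    {
            int i;
            for (i = 0; i < HASH_SIZE; i++)
                    pthread_mutex_init(&table[i].lock, NULL);
    }

    static struct entry *cache_lookup(unsigned long key)
    {
            struct bucket *b = &table[key % HASH_SIZE];
            struct entry *e;

            /* Contends only with lookups hashing to the same chain,
             * rather than serializing every CPU behind one lock. */
            pthread_mutex_lock(&b->lock);
            for (e = b->chain; e != NULL; e = e->next)
                    if (e->key == key)
                            break;
            pthread_mutex_unlock(&b->lock);
            return e;
    }

The win comes from reduced lock contention: four CPUs searching four different chains no longer serialize behind a single lock.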

Because of these issues, the Linux Scalability Project, which originally expected to concentrate on providing patches, is adapting to a new role within the Linux community. We measure carefully, then point to areas that may be improved. Sometimes we provide a fix ourselves, but often the developer community is better equipped to find and fix a performance or scalability problem once it has been identified.

Integrated kernel debugging tools

It is a stated philosophy of the Linux community that there shall be no integrated kernel debugging tools. The high reliability of Linux is often attributed to this lack of the traditional debugging tools commonly available to other software projects. The defect rate of Linux is indeed remarkably low considering how few debugging tools are available. But a more likely explanation is that the lack of tools prevents most people from getting involved and fixing problems, so that only the experts fix them. Preventing a flood of fixes and modifications keeps the change rate lower (ostensibly a good thing) and limits the amount of parallel work that can proceed on the kernel.

Moreover, attributing the high quality of fixes to a lack of debugging tools does a disservice to the expertise of those working on Linux, and to Linus' ability to recognize and apply a good fix when he sees one. It would be hard for anyone to isolate a single reason why Linux bug fixes are the way they are.

It should be stated that one of the main advantages of GPL-style open-source development projects is that there is never a bar to becoming involved. New developers can join open mailing lists and participate immediately alongside seasoned hackers. But because Linux is a volunteer effort, it must face the risks inherent in having developers with widely varied skill levels working on it. Limiting changes to the kernel thus has a stabilizing effect, and keeps the risk of damage done by donated code to a minimum. However, even if non-experts aren't allowed to submit code to the Linux kernel, they should still be allowed to hack on their own kernels, and they should have the tools to do so expediently.

One of Linux's success stories is the kernel source code review process. It's not perfect, but it contributes significantly to the overall consistency of the kernel, more so, I think, than the availability or style of debugging tools does. Not having debugging tools also means that some code is not as well tested, because the existing testing methodologies are intrusive and painful (that is, printk's scattered through the kernel). And even though Linux doesn't carry lots of unnecessary checks for NULL pointers, for instance, the problems and bugs that do exist seem to languish for a very long time, often cropping up later, after significant changes to the kernel.
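
To illustrate why that methodology is painful, here is a hedged sketch; the struct, fields, and helper below are invented, and only printk itself is real. Every new hypothesis about a bug means adding another printk, rebuilding the kernel, rebooting, and reproducing the problem.

    /* Sketch of the printk methodology. The struct and do_frob()
     * are hypothetical; each printk added here costs an edit,
     * rebuild, and reboot cycle before it yields any data.
     */
    #include <linux/kernel.h>
    #include <linux/errno.h>

    struct frob { int state; };
    extern int do_frob(struct frob *f);     /* hypothetical helper */

    int frob_it(struct frob *f)
    {
            printk(KERN_DEBUG "frob_it: f=%p state=%d\n", f, f->state);

            if (do_frob(f) < 0) {
                    printk(KERN_ERR "frob_it: do_frob failed\n");
                    return -EIO;    /* then: edit, rebuild, reboot, retry */
            }
            return 0;
    }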

It's been pointed out that there are several interesting interactive kernel debugging tools available to the Linux community, including kdb, developed by an SGI software engineer, and the IKD patch, maintained by Andrea Arcangeli. These are great tools. The problem is that before they can be useful, the developer community must wait for them to become available for the kernel revision being debugged. Neither IKD nor kdb has been ported to the 2.3 series kernels at the moment. I can envision one of these patches integrated into the mainstream kernel and enabled or disabled via a compile-time CONFIG option. To clarify: I recognize that interactive kernel debugging tools are available, just not "integrated" as well as they could be.
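
As a rough sketch of what that compile-time integration might look like (CONFIG_KDEBUG and kdebug_enter() are invented names, not the actual kdb or IKD interfaces):

    /* Sketch: a conditionally compiled debugger hook. Guarded
     * this way, the debugger costs nothing in kernels built
     * without the option.
     */
    #ifdef CONFIG_KDEBUG
    extern void kdebug_enter(const char *why);  /* drop into the debugger */
    #define DEBUG_TRAP(cond, why) \
            do { if (cond) kdebug_enter(why); } while (0)
    #else
    #define DEBUG_TRAP(cond, why) \
            do { } while (0)                    /* compiles away entirely */
    #endif

Code sprinkled with DEBUG_TRAP() calls would then behave identically in production builds, while developers who enable the option get an interactive debugger at exactly the points they care about.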

I'm really interested in two things: improving the quality of the information in bug reports from non-kernel developers, and helping people solve their own problems. Improving bug report quality can only help those who actually control what goes into the kernel distribution. And empowering individuals to solve their own problems is what open source software is all about, isn't it? Both Solaris and AIX have some good ideas to offer here: because their users and customers receive the product in binary-only form, those vendors have had to develop good diagnostic tools.

In conclusion, I believe that anything that controls the rate of change is a good idea. I'd like the change management to be direct and considered rather than artificial and arbitrary, though, and leaving out debugging tools seems like an artificial throttle on positive changes. More folks could provide high-quality fixes and new features if there were integrated debugging tools and a stronger, more consistent review process, although I do realize there is a practical limit on Linus' time to handle a stream of reviewed fixes and features.

In other words, if what you're trying to accomplish is to keep the rate of change low and to keep non-expert code out of the kernel, then you should state that overtly rather than sneak it in by leaving out the tools that help developers do their jobs. I basically agree with the ends, but not the means.

Performance graphs

These graphs measure the elapsed wall time required for an accept() operation in kernel 2.2.9, and the time required with our patched version of accept(). X-axis units are the number of threads waiting in accept(); Y-axis units are microseconds. This microbenchmark was run on a Dell PowerEdge 6300 with four 450MHz Xeon processors and 512MB of RAM.
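
For reference, here is a minimal sketch of how such a microbenchmark can be structured. It is not our actual test harness, and the port number and default thread count are arbitrary: all threads park in accept() on one listening socket, a single loopback connection arrives, and the thread that wins it reports the elapsed wakeup time.

    /* Sketch of an accept() wakeup microbenchmark (not our actual
     * harness). Build with: cc -o abench abench.c -lpthread
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    static int listen_fd;
    static struct timeval t0;           /* set just before the connect */

    static void *acceptor(void *unused)
    {
            struct timeval t1;
            int fd = accept(listen_fd, NULL, NULL);  /* all park here */

            gettimeofday(&t1, NULL);
            if (fd >= 0) {
                    printf("accept wakeup: %ld us\n",
                           (t1.tv_sec - t0.tv_sec) * 1000000L +
                           (t1.tv_usec - t0.tv_usec));
                    close(fd);
            }
            return NULL;
    }

    int main(int argc, char **argv)
    {
            int i, cfd, nthreads = argc > 1 ? atoi(argv[1]) : 64;
            struct sockaddr_in addr;
            pthread_t tid;

            listen_fd = socket(AF_INET, SOCK_STREAM, 0);
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            addr.sin_port = htons(9099);        /* arbitrary port */
            bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
            listen(listen_fd, 128);

            for (i = 0; i < nthreads; i++)
                    pthread_create(&tid, NULL, acceptor, NULL);
            sleep(1);                   /* crude: let threads park */

            gettimeofday(&t0, NULL);    /* start the clock, then connect */
            cfd = socket(AF_INET, SOCK_STREAM, 0);
            connect(cfd, (struct sockaddr *)&addr, sizeof(addr));

            sleep(1);                   /* let the winning thread print */
            close(cfd);
            return 0;
    }

A real harness would repeat the measurement many times and average; the single connect here only illustrates the shape of the test.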

If you have comments or suggestions, email linux-scalability@citi.umich.edu

Copyright 1999