
TECHNOLOGY REVIEW

Comparing Linux to System V Release 4

Two weeks ago, I asked readers for help finding documentation on early-1990s System V Release 4 (SVR4) Unix, to help me understand what can be done with Linux today that could not be done with AT&T’s SVR4.

The question was naive and I need more help (and aspirin) in struggling toward a better answer than I’ve reached so far. In fact, because my interest was originally sparked by Linus Torvalds’ comment that he has nothing to learn from Solaris, the comparison should have been to the SunOS 5.0 kernel from 1992. That kernel, of course, was the first to extensively address the two things that most differentiate the Linux kernel today from SVR4 then: threading and user space operations.

My request for help included a list of things you can do with Solaris but not with Linux, and more than 40 readers e-mailed to tell me that Linux (or, in several cases, Windows) can do all of them. I’m sorry to say that I got a little brusque in my responses after the first half dozen or so of these. So, as you will see below, I do need more help, but please don’t tell me that MPO is NUMA or that ZFS is an LVM.

Future Exploration

Those responses did, however, suggest a frightening thought for future exploration: that the knowledge gap between the Linux and Solaris communities might be much bigger than I think it is. If true, that has interesting implications for Sun’s OpenSolaris effort.

On the other hand, ignorance cuts both ways. Not only had I asked the wrong question to start with, but I found myself feeling less and less adequate to understand just what is going on with the Linux kernel as I dug deeper into it. The udev stuff seems pretty clear, but threading isn’t. Indeed, I’m starting to believe that threading doesn’t have a legitimate role in the Intel x86 hardware environment because the hardware defeats the goal of true concurrency between threads — and if you can’t achieve concurrency, what point is there in threading?
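To make the concurrency point concrete, here’s a deliberately trivial sketch of my own (pure busy-work, not code from either kernel): two POSIX threads spin through identical loops. On a single-CPU x86 box the elapsed time comes out at roughly double the single-thread case, because the kernel is time-slicing one execution unit rather than running the threads concurrently; only a second processor buys you real parallelism.

    /* spin.c -- toy concurrency probe, illustration only.
     * Build: cc spin.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define WORK 500000000UL

    static void *spin(void *arg)
    {
        volatile unsigned long n = 0;   /* volatile keeps the loop honest */
        (void)arg;
        while (n < WORK)
            n++;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        time_t start = time(NULL);

        pthread_create(&t1, NULL, spin, NULL);
        pthread_create(&t2, NULL, spin, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* roughly 2x the one-thread time on one CPU; near 1x on two */
        printf("elapsed: %ld seconds\n", (long)(time(NULL) - start));
        return 0;
    }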

Imagine, for example, trying to build a compiler able to produce an efficient executable in exactly one pass. Nobody does this now, for design reasons that follow from an underlying assumption of sequential processing, but it shouldn’t be impossible. “All” it would take is a complete reappraisal of everything we know about optimization and related issues in a truly concurrent, shared-everything, multi-threading environment with enough threads. That’s not in the cards for Linux on either x86 or Cell, but it might soon become possible on Sun’s throughput computing chipsets as these gain in both number of threads and cache size.

The original question can probably be rephrased more usefully as a comparison of the ideas and inventions made real in the two kernels: SVR4 versus generic Linux 2.6. In other words, if you had to teach an advanced course in operating systems theory, which kernel would better illustrate the implementation ideas you’d need to get across? Pending further research, my guess is that the right answer is SVR4, mainly because it’s almost equally portable but simpler and clearer.

Changes Appear Artificial

This variation on the main question raises new issues of its own, however, because many of the changes made to process and memory management between the 2.4 and 2.6 Linux kernels look a bit artificial, meaning that they don’t seem to be direct continuations of the code’s evolution up to 2.4 and thus raise the suspicion that the SCO/IBM lawsuit might be having some unexpected design consequences.

Change the focus, however, to imagine a course in kernel-level programming, and the Linux kernel is an easy winner because the intervening 12 years of widespread effort and peer review have produced some extremely high-quality code. Look, for example, at system calls and interfaces, and you see first that they’ve more than doubled in number since SVR4. More importantly, you will see that overheads have generally shrunk in response to performance and reliability benchmarks that reward better kernel integration, better algorithms and better coding.
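If you want a feel for that overhead yourself, a back-of-the-envelope probe like the following sketch of mine (a rough illustration, not a formal benchmark) times a million trivial kernel entries; it calls syscall(SYS_getpid) directly so the C library can’t cache the result.

    /* callcost.c -- rough syscall-overhead probe, illustration only.
     * Build: cc callcost.c */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        const long iterations = 1000000L;
        struct timeval start, end;
        long i;
        double usec;

        gettimeofday(&start, NULL);
        for (i = 0; i < iterations; i++)
            syscall(SYS_getpid);    /* about the cheapest kernel entry there is */
        gettimeofday(&end, NULL);

        usec = (end.tv_sec - start.tv_sec) * 1e6
             + (end.tv_usec - start.tv_usec);
        printf("%.0f ns per call\n", usec * 1000.0 / iterations);
        return 0;
    }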

Beyond the qualitative improvements, there seem to be two main differences, resulting respectively from the real emergence of Unix threading as a design technology and the somewhat imaginary emergence of the server/desktop distinction fomented by Microsoft marketing. In practice these mean that two things you can do easily with Linux cannot be done with comparable efficiency and reliability using SVR4: plug in a USB (or other bus) device like a camera, and run a memory-resident pseudo-server like the Java Virtual Machine efficiently on a dual processor. (Note that NCR MP-RAS on the Voyager architecture could have handled the USB device and run the JVM, just not as efficiently and not as reliably, especially with respect to service umounts.)

This isn’t to suggest that there aren’t any cool new technologies in Linux; there are quite a few, but most seem to be implementations of things done elsewhere (particularly BSD) first rather than new solutions. Indeed, I think that’s the general conclusion I’m tending toward with respect to the comparison between Linux today and SVR4 then: the things that are new in Linux generally reflect adaptations of existing ideas to external change, while the things that are better reflect incremental code improvement rather than structural innovation.

Opposite Effect

The other half of the comparison, Linux to BSD, suggests an almost opposite effect. Both groups benchmark their efforts against performance and reliability, but the history of Linux, at least up to 2.4, has been far more evolutionary than BSD’s. Compare Microport’s AT&T Unix port for x86 running OPEN LOOK on a PC with a Cornerstone card and monitor in 1992 to CDE running under Linux 2.4 for x86 today, and there are enormous practical differences but few conceptual ones. (Gnome and KDE, of course, are very different in both concept and execution, but they didn’t come from the Minix/SysV foundation.)

In contrast, the BSD people have had their technical focus fractured by more than a few explosive rebellions. As a result, then-and-now comparisons reveal significant conceptual differences in the underlying technologies but fundamentally the same operational look and feel. Take a good look at Darwin, for example, and you’ll find essentially nothing in the kernel that’s directly from either BSD 4.3 or SunOS 4, but the ancestral relationship is blindingly obvious to anyone who has used both.

That’s counter-intuitive, but lots of things in these comparisons are. Straying beyond technology, for example, it seems that, despite the best efforts of linuxbase.org and others, there are now more marginally incompatible Linuxes than BSDs, while many BSD people make a strong case that their periodically explosive development model is more cohesive than the one imposed on Linux by major commercial interests like Red Hat and IBM.

If there’s a real bottom line here, the one thing I’m clear on is that I haven’t found it yet; the questions raised have been more interesting than the answers, so more help would be welcome.


Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.


3 Comments

  • You wrote: "This variation on the main question raises new issues of its own, however, because many of the changes made to process and memory management between the 2.4 and 2.6 Linux kernels look a bit artificial, meaning that they don’t seem to be direct continuations of the code’s evolution up to 2.4 and thus raise the suspicion that the SCO/IBM lawsuit might be having some unexpected design consequences."
    Why would you prefer that explanation to more plausible ones? It seems you have not been browsing the relevant kernel development discussions (even without wading through the voluminous kernel mailing list, useful summaries are at lwn.net and kerneltraffic.org). In short, the memory management code of earlier Linux needed updating around version 2.4, and several improved implementations were tried, iterated and sometimes discarded (even during the stable 2.4 series), accompanied by healthy debate on the mailing list. That is how open source development works. Optimal memory management in the kernel is a VERY hard problem, which explains the intense activity surrounding it. And to show just how off the mark your speculation is, SCO’s lawsuits started AFTER the memory manager mayhem in 2.4, in fact after 2.6 was finally released, so they can hardly have affected the developers…

  • You must be paid by SCO. The difference between two OSs is not just a set of features; most of all, it is their implementation. It’s important HOW it’s done, not IF.

  • With the issue of threading on x86… Yes, it doesn’t take advantage of the hardware, since the hardware doesn’t exist on that platform. This, however, is not as big a problem as you think. First, Linux can be compiled for damn near anything in existence, so on systems that can take advantage of it, you can use it at its full potential. Otherwise, it ends up acting like virtual threads, which is important. In a non-threaded system, you run one application at a time, no more. Want to play music in the background? Tough luck: that can’t coexist on a single-threaded system unless it is the only thing running, aside from the OS itself.

    As a better example, as someone stuck with Windows for now, I use a telnet client (for MUDs) that supports active scripting, but MS script systems don’t support threading models. The result is that if I want to do something relatively simple, like checking the current top-ten status of the MUD, I have to use a web browser. On dial-up it takes 2-3 minutes to load the HTML, so even though I can use MS’ Inet API to load a page in scripting, doing so freezes the client for 2-3 minutes. The two processes can’t simultaneously coexist and do their jobs, even though they are not directly dependent on each other, simply because the same thread is used to execute both, so one has to stop and wait for the other to finish. Early Win 3.1 had a much simpler threading system with the same problem: it literally suspended applications not being used, instead of allowing parallel execution.

    So, if there is a question as to why threading is useful in Linux under x86, you have to ask why modern versions of Windows have it (if in a far more limited way that "can’t" use multiple processors when available). The answer is simply that it isn’t DOS, and we need to be able to have dozens of services and/or applications running at the same time, something that requires threading. (A rough sketch of the difference is at the end of this comment.)

    There are quite a few people looking forward to when 2+ processors are common, just so they can optimize things to take advantage of it too. 😉
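    To show what I mean, here’s a deliberately minimal sketch using POSIX threads on Linux; nothing in it is real networking code, and the 2-3 minute page load is faked with a sleep(). Put the slow fetch on its own thread and the client stays responsive instead of freezing:

    /* responsive.c -- toy illustration only: a worker thread does the
     * slow "fetch" while the main thread keeps servicing the user.
     * Build: cc responsive.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *slow_fetch(void *arg)
    {
        (void)arg;
        sleep(3);                   /* stand-in for the 2-3 minute page load */
        printf("fetch finished\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t fetcher;
        int i;

        pthread_create(&fetcher, NULL, slow_fetch, NULL);

        /* the "client" keeps running instead of waiting on the fetch */
        for (i = 0; i < 3; i++) {
            printf("client still responsive (tick %d)\n", i);
            sleep(1);
        }
        pthread_join(fetcher, NULL);
        return 0;
    }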
