
Multics Scheduler

davecb wrote to us with a short description of three of the Multics schedulers. The piece includes some background and history as well, which adds another dimension to the reading.
  • I've been surprised for many years that the Linux community hasn't stolen more of the many excellent design ideas in Multics.

    When you go to this site, remember Ashworth's Law: "there's usually something even more interesting, somewhere else on the site". My favorite is the list of still-running Multics implementations.

    Even today, ten or more years after there was any hope of reasonable support, there are _still_ three sites using Multics in production.

    As a cross-link, the Motorola HA Linux work ties right in here, conceptually: Multics was originally intended as a 'computing utility', where any part could be swapped out at any time, including CPUs and memory.

    Good show.

    Cheers,
    -- jra
    -----
  • forgot the link...
    Userfriendly [userfriendly.org]
  • When I saw the IBM System/38 announcement, I said to myself "a Multician worked on this." The nice thing about it was that it had a built-in database as its file-system abstraction. Many of the design choices in S/38 seemed to address problems we'd encountered in using Multics. Turned out I was right: one of the Multics designers had done a lot of consulting at IBM on FS, the "future system" IBM scrapped in 1975, and some of the FS features made it into S/38, the ancestor of AS/400.

    I have been told by IBM colleagues that deep inside AIX, there's a virtual-memory file-mapping OS also derived from FS.

    Does "Multics live on" here? No. The single-level store idea was used by several OS designs, and implemented in various ways. A better way to say it is that this feature "lives on."

  • Many people here seem to think Multics was too huge and complicated. What do you think about EROS [eros-os.org], then? A purpose-built, reliable system, with security and reliability features Unix or Windows people can only dream of, and a hell of a lot harder to conceive than Unix.

    To me, it seems like the only right way forward from Unix, because Unix won't be the ideal system forever. What do you think: is EROS too complex to survive?

  • The reports I hear indicate that AS/400 is more directly "rooted" in the Hydra [berkeley.edu] operating system.

    Mark Miller [caplet.com] (who has a really cool web page on advanced OS stuff) used to have a page on Hydra; seems to be gone now, unfortunately.

    Hydra was a capability-based system, and is likely more a parent of EROS than a child of Multics...

  • The "hot swap everything" feature is vastly more related to the hardware design than to the OS design.

    If you pull the CPU out of your PCI bus system while it's running, it is Rather Likely that you'll do severe damage to the hardware.

    The more we see things like USB, I2O, and serial IDE, where I/O controllers operate independently of the CPU, the more we will be able to attach and detach peripherals while the system is running. And if a motherboard provided a "protocol" for connecting and disconnecting CPUs and memory online, that would be a further extension of this.

    Mind you, supporting such a "protocol" would likely increase the cost of the motherboard substantially, and people are generally price-sensitive, particularly about features they rarely use, which discourages adding this capability to any computer system the rest of us can afford to buy...

  • This problem would never have occurred if Linux had been developed under a secure model.

  • Although Multics is quite esoteric, it is not overly complicated. In fact, it can be well understood with the mastery of a few fundamental concepts. What killed Multics was the degree to which it fully utilized the hardware capabilities of the machines it ran on, and the competency of the companies that owned it. It was one of many industries that General Electric abandoned in its lifetime, and Honeywell, who "adopted" it, never wanted to sell it. It was viewed as competition for its GCOS systems. It is amazing how many Multics systems were sold, since you had to beg the salesforce to sell you one.
  • Actually, we used to update software asynchronously, too: when I worked in Minneapolis, a different database layout was being exposure-tested during several weekends. I couldn't even tell they were experimenting, much less that they were rolling forward and back with something so low-level. (I was a dumb user in those days, you understand.)

    A smarter colleague taught us the algorithm, but that's a rant for another day...

    --dave
  • Until around the last decade (maybe decade and a half), 150KB for a kernel was freaking huge as all hell. The original Unix kernel was about 20KB, and around Version 7 (mid-70s), the kernel was about 60KB.

    --

  • Actually it's more the opposite: they're relatively simple, and intended for a system whose main purpose in life was to provide good interactive performance. The multi-queues trick was a way to keep the cost of scanning the process queue down.

    In modern systems, very large numbers of processes (or threads) can cause you to spend way too much time in the dispatcher: this was the subject of an IBM/Linux paper recently...

    --dave
  • You should add that the RedTape scheme never actually dispatches any useful work; only the internal administrative processes are ever eligible and they are not allowed to complete.
  • Just about everything that comes out of the FSF is a violation of the "KISS" principle. I mean, look at the command-line options for tar, and the confusing maze that is Emacs, and so on! Does anyone seriously expect a newbie to be able to use these programs? Come on, these are the exact opposite of simplicity.

    On the other hand, there are many good examples of programming on Linux. What comes to mind for me is the 'mutt' email program -- you only need to dig into the configuration if you want to -- and of course vim, tcsh, pine, etc. You only have to learn a few simple commands to use them. Complexity is not 'forced' on you.

  • Actually, the PDP-11 did have 3 rings of protection.

    More correctly, some PDP-11's with MMUs had three protection levels (Kernel, Supervisor, and User), but most operating systems, including but not limited to UNIX, used only two of them. (I think some Digital OSes, e.g. RSX-11M Plus?, may have eventually used Supervisor mode, on machines that had it, to provide an additional address space; I don't know whether they actually used it to provide finer-grained protection levels, however. Bell Labs's MERT used supervisor mode - it ran a low-level kernel in kernel mode, and ran a UNIX kernel-level environment atop it in supervisor mode, with that environment running user-mode code in user mode - i.e., it did use it to provide finer-grained protection levels. There may have been other OSes that did so as well - KSOS?)

    VAXes had four levels, and VMS, as far as I know, used all of them (kernel code in kernel mode, RMS in executive mode(?), command interpreter in supervisor mode, and userland code in user mode); UNIX, however, used only kernel and user.

  • It had tons of weird and wonderful features, like mapping memory to the file system.

    Memory-mapped files are now present in most if not all modern UNIX-compatible OSes, as well as Windows 9x and Windows NT/2000 - but

    1. at the time, it was a relatively uncommon feature (TENEX might have had it as well; did any other OSes have it, e.g. TSS/360?);
    2. Multics used it as the fundamental file access mechanism, i.e. the equivalents of read()/ReadFile() and write()/WriteFile() were implemented by copying from or to a mapped region of the file (some modern UNIXes, and NT, may, in effect, implement those calls in that fashion inside the kernel, but they still don't have the model of "file access is memory mapping").
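
    For the curious, here's a rough C sketch of a read() built as a copy out of
    a mapping, using POSIX mmap(). This is only loosely analogous: the Multics
    supervisor's segment machinery was nothing like this, and the file name in
    main() is just whatever readable file is handy.

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>

      /* A read()-alike built on a mapping rather than a copying syscall. */
      ssize_t mapped_read(int fd, off_t offset, void *buf, size_t len)
      {
          struct stat st;
          if (fstat(fd, &st) < 0 || offset >= st.st_size)
              return -1;
          if ((off_t)(offset + len) > st.st_size)
              len = st.st_size - offset;

          /* Map from 0 so we need not page-align the offset. */
          void *base = mmap(NULL, offset + len, PROT_READ, MAP_PRIVATE, fd, 0);
          if (base == MAP_FAILED)
              return -1;
          memcpy(buf, (char *)base + offset, len);  /* faults pull in pages */
          munmap(base, offset + len);
          return (ssize_t)len;
      }

      int main(void)
      {
          char buf[64];
          int fd = open("/etc/passwd", O_RDONLY);   /* any readable file */
          ssize_t n = mapped_read(fd, 0, buf, sizeof buf - 1);
          if (n > 0) {
              buf[n] = '\0';
              printf("%s", buf);
          }
          if (fd >= 0)
              close(fd);
          return 0;
      }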
  • Apparently Intel designed some Multics-like capabilities into
    some (not all!) members of the IA-32 series.

    The ring stuff, if that's what you're referring to, is, as far as I know, in all members of the IA-32 series, unless you count embedded versions that lack a full-blown IA-32 MMU, as those might not have it.

    It may not be in all members of the IA-16 series (8086/8088, 80186/80188, 80286), but that's another matter.

  • Multics (MULTiplexed Interactive Computer System) was a Bell Labs project that managed to acquire the unwelcome retrofitted acronym "Many Unnecessarily Large Tables In Core Simultaneously". UNICS (UNiplexed Interactive Computer System) was allegedly a "castrated MULTICS", and, as one of those odd quirks of history, the laboratory nickname stuck. Subsequently, they changed the spelling and capitalization and were then able to tell people that "Unix" was not an acronym. ;)
  • by ucblockhead ( 63650 ) on Tuesday March 07, 2000 @07:12AM (#1220428) Homepage Journal
    Ok, well, this has just got to be a troll, but I'm going to respond anyway.

    It should be obvious that one of the primary reasons for Linux's success is that when it appeared, there were hundreds of thousands of coders and millions of lines of code all immediately available.

    Infrastructure, infrastructure, infrastructure. You can't just tear it all down and make it perfect. You've got to work with the existing structure and move in little steps, so people aren't left holding the bag with nothing to show. Nobody wants to start from scratch. And really, nobody should have to start from scratch.

    If you were to attempt to build up an OS to the level of Linux, from scratch, it would likely take you a decade. For what purpose? To end up with something that is a moderate improvement? Why not just improve what we've got?

    Anyway, I am constantly amazed at the way people get all hyped out about a particular language or a particular OS. If you are a good coder, you can write good code in any language and on any OS. These are tools, people, and arguing about them is like two construction workers having a heated discussion over which hammer is better. Silly. Just pick the best tool available to you and go with it. If you don't like any of the tools, build your own. But don't whine because everyone else is too stupid to agree with you on what tool is the ultimate hammer. When I hear crap like that, I recall the old saying "it is a poor workman who blames his tools".

    Yes, modern Unices are all "hacked together". But then, so is Windows. You see, there are two types of OS: those that people spend a lot of time hacking on, and those that no one uses. The latter have the purity of an unlived-in house. Everything is perfect, as the designer intended. But you have to realize that this would not last long were someone to move in.

  • by pb ( 1020 )

    Wow, those are a lot of complicated algorithms. They weren't kidding when they said that MULTICS was huge and overly complicated...

    Some of these make more sense coming from a "batch-job scheduling" perspective. Of course, we still use credit-based scheduling algorithms today (because they rule!) and I can't picture using six queues(!).

    Also, about the name: If MULTICS is an OS designed by a lot of people, then Unix would be an OS designed by one person... :)


    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • The idea of having the six queues is a clever way to minimize time wasted in context switches by giving long-running, non-interactive processes large but less frequent time slices... Sounds cool to me...
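
    Not Multics code, but a toy C sketch of the general multilevel-queue idea,
    with invented numbers: queue i hands out a 2^i-tick slice, and a process
    that burns its whole slice gets demoted, so long-running jobs end up with
    large but infrequent quanta.

      #include <stdio.h>

      #define NQUEUES 6

      struct proc {
          int id;
          int remaining;              /* work left, in ticks */
          int queue;                  /* current queue; 0 is most interactive */
          struct proc *next;
      };

      static struct proc *queues[NQUEUES];

      static void enqueue(struct proc *p)
      {
          struct proc **q = &queues[p->queue];
          p->next = NULL;
          while (*q)
              q = &(*q)->next;
          *q = p;
      }

      static struct proc *pick_next(void) /* highest non-empty queue wins */
      {
          for (int i = 0; i < NQUEUES; i++)
              if (queues[i]) {
                  struct proc *p = queues[i];
                  queues[i] = p->next;
                  return p;
              }
          return NULL;
      }

      int main(void)
      {
          struct proc procs[] = {
              { 1,  3, 0, NULL },         /* short, interactive-ish job */
              { 2, 40, 0, NULL },         /* long batch job */
              { 3,  5, 0, NULL },
          };
          for (int i = 0; i < 3; i++)
              enqueue(&procs[i]);

          struct proc *p;
          while ((p = pick_next()) != NULL) {
              int quantum = 1 << p->queue;    /* queue i: 2^i-tick slice */
              int used = p->remaining < quantum ? p->remaining : quantum;
              p->remaining -= used;
              printf("proc %d ran %d ticks from queue %d\n",
                     p->id, used, p->queue);
              if (p->remaining > 0) {         /* used the full slice */
                  if (p->queue < NQUEUES - 1)
                      p->queue++;             /* demote: bigger, rarer slices */
                  enqueue(p);
              }
          }
          return 0;
      }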
  • Right now, I've got a working PL/I compiler going on my Pentium. If you're interested, drop me a line.

    Um, how does one go about contacting an AC?

    Seriously, though, I think it would be very interesting to see about a free Multics clone for PC hardware. But do you honestly think that Linus knew what he was starting when he began coding the Linux kernel? Even if you don't like Unix in general, the value of his kernel, showing up at that point in time, is inestimable. He doesn't deserve a punch in the face. Do you think the guy who's maintaining FreeDOS should be punched? What if it becomes popular? Sheesh.

  • (see subject line)
    Chris Wareham
  • Dimwit. The reason Torvalds chose Unix is that it worked practically and isn't some theoretical CS exercise. It also consumed fewer resources than the big bloated Multics kernel, and despite you calling it a hack, Unix has worked far better than any other OS, and for far longer, because it's the *right* way to do things.
  • by codemonkey_uk ( 105775 ) on Tuesday March 07, 2000 @07:33AM (#1220436) Homepage

    Zurk wrote:

    Unix has worked far better than any other OS, and for far longer, because it's the *right* way to do things

    This is a dangerous attitude, because not only does it assume that what is good for you is good for somebody else (a theory of absolute truth), it assumes that what is good now cannot be bettered.

    Unix *is* good, for you, for me, for many people, but not necessarily good for everyone. Not only that, but this assumption that *nix is *right* implies that everything else is *wrong*. Not only everything that has been, but everything that is to come. The simple truth is that at some point in the future, you, I, Linus, IBM, Apple, or, god forbid, Microsoft, might create something better, and we have to be able to accept that and go with it, or else hold back the state of the art out of bigotry.

    Thad
  • by Dhericean ( 158757 ) on Tuesday March 07, 2000 @07:36AM (#1220437)
    What we need are some schedulers for the new century.

    eBay: The Scheduler process posts information about available slots and processes bid for the time. After the auction period expires, the slot is allocated to the winning process (once the 'cheque' has cleared). The duration of an auction would be short. Reserve prices might be set, but this could lead to idle cycles.

    MiddleManagement: The Scheduler makes provisional assignments based upon its favourite strategy, but all of these have to be run past an administrative Daemon which has to authorise all slot allocations. The criteria would be very complex, but factors include how sexy the process was and when the admin daemon last ran a golf game.
    Such daemons only operate during core working hours and are easily distracted so the scheduler can sneak important but boring work through.
    The government uses a variant of this called the RedTape scheme.

    Boeing: The Scheduler runs a standard strategy, but every so often it accidentally allocates all the slots to the local refuse tip, where processes dig through the mounds of trash looking for them.

    OpenSource: The Scheduler insists on receiving a copy of the source code of any binary that applies for slots. It then allocates based upon the degree of innovation (determined by metrics). Library calls or other attempts to restrict access to parts of the program reduce the priority.

    SlashDot: The scheduler sets up pools of resource and processes submit requests for slots into this. Other processes enter their own bids or attach sub requests to other processes requests. Processes not engaged in bidding for resources on that pool rate the requests rewarding particularly appropriate requests and penalising irrelevant requests.
    Particularly successful processes receive a bonus to future requests.
  • Multics (MULTiplexed Interactive Computer System) was a Bell Labs project...

    That's not entirely accurate. From the Multicians [multicians.org] web site:

    Multics (Multiplexed Information and Computing Service) is a timesharing operating system begun in 1965 and still in use today. The system was started as a joint project by MIT's Project MAC, Bell Telephone Laboratories, and General Electric Company's Large Computer Products Division. Prof. Fernando J. Corbató of MIT led the project. Bell Labs withdrew from the development effort in 1969, and in 1970 GE sold its computer business to Honeywell, which offered Multics as a commercial product and sold a few dozen systems.
  • Perhaps my mind has been melted by the presence of so many single user systems that pretend at some aspects of multiuserhood very, very badly.

    But it sure looks like UNIX has its multiuserness at a reasonably deep level. Can you elaborate on the lack of UNIX multiuserness, perchance?

  • I read somewhere that Multics machines featured hot-swappable everything. Peripherals, cards, drives, even CPUs.

    That's why I want a Multics box. Imagine the looks on the faces of your friends as you say, "Check this out!" and immediately yank a CPU out of your machine without the computer skipping a beat. MWHAAHAHAHAHAHAHAHAHAHAHAHAHAHAH
  • oh, I thought you had to have been castrated [magna.com.au] to operate one. Whew.
  • He's talking about mapping memory to files, not files to memory.
  • Who said technical people have no artistic sensibilities? Who said there couldn't possibly be a poet on slashdot? God what a wonderful, superb troll! U R truelie the greatest!!!1! - Gratefully yours WDK
  • He's talking about mapping memory to files, not files to memory.

    To what is he referring when he says "mapping memory to files", and how is this different from "mapping files to memory" (except in the order of the two words relative to "mapping" and "to")? (Or did he just decide to use the words in the reverse order in which they tend to be used when talking about mmap(), in which case we're both talking about the same thing....)

    In Multics, you initiated a segment, which caused a file to appear in your address space; this is, to some extent, similar to opening a file and mmap()ing it, or doing the appropriate Win32 voodoo dance to get something to hand to MapViewOfFile() and then doing so.

    (See the definition of "initiate":

    Said of a segment. Also "to make a segment known" (to a process). To make the supervisor call (or as a side-effect of dynamic linking) that associates a segment number with a file in the file system by making an entry in a process's KST. From that point on, references to that segment number become references to that segment, the fundamental illusion of the Multics Virtual Memory. On Multics, instead of opening files, a user initiates segments, and all the rest is magic [BSG].

    in the "i" section of the Glossary [multicians.org] on the Multicians.org site.)

    I.e., they both involve taking a file name (pathname in UNIX or Win32; pathname, or segment name to be searched for in a path, in Multics) and, after making various calls to low-layer OS code, eventually getting a pointer that can be dereferenced, causing page faults that are satisfied by fetching data from the file, and possibly allowing stores through that pointer, with dirty pages pushed back to the file.

  • the Win2K/Linux situation, in that you have a large and complex system stuffed full of all the features anyone could dream up, stacked up against a small system implementing proven algorithms and techniques

    Which is the system full of all the features anyone could dream up and which is the small system?

    Windows 2000 actually has some pretty nice features. If the user interface were replaced with something sane (by which I mean, doesn't assume I'm an idiot and do something other than what I say), it could be a decent system. I think it's sort of wishful thinking to call Linux small. Sure, the kernel may be only 50 megs of source, but the system as a whole (take any distro you want) is far too big to be understood by anyone but an expert with time to burn. Not that Win2k is better, but it isn't clearly inferior to Linux in this particular case.
  • Um, this isn't as goofy as you might think. There are actually process-scheduling methods that work like this (well, the eBay one, anyway). It's called microeconomic scheduling: each resource has a price, and to get it you have to bid for it.

    Sorry, this is supposedly my research area, I'll try not to let too many facts ruin a good story again :)
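
    For flavor, a toy C sketch of the bidding idea. The process names, budgets,
    and the bidding strategy are all invented for illustration; real
    microeconomic schedulers are far subtler.

      #include <stdio.h>

      struct bidder {
          const char *name;
          int budget;       /* currency remaining */
          int need;         /* slots still wanted */
      };

      int main(void)
      {
          struct bidder b[3] = {
              { "editor",   10, 2 },
              { "compiler", 30, 5 },
              { "daemon",    5, 9 },
          };

          for (int slot = 0; slot < 6; slot++) {
              int winner = -1, best = 0;
              for (int i = 0; i < 3; i++) {
                  if (b[i].need == 0 || b[i].budget == 0)
                      continue;
                  /* naive strategy: spread budget over remaining need */
                  int bid = b[i].budget / b[i].need;
                  if (bid < 1)
                      bid = 1;
                  if (bid > best) {
                      best = bid;
                      winner = i;
                  }
              }
              if (winner < 0)
                  break;                    /* idle cycle: no bidders */
              b[winner].budget -= best;     /* winner pays its bid */
              b[winner].need--;
              printf("slot %d -> %s (paid %d)\n",
                     slot, b[winner].name, best);
          }
          return 0;
      }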
  • Get back to writing my damn MoD player...
  • Does Multics live on, conceptually, in the AS/400? I heard that it uses segments, and a "memory is disk is memory" permanent-storage scheme à la Multics. Does anyone know more about how OS/400 works?

    ...

    Amusingly, AskJeeves returned "Dementia -neurological diseases and disorders - more resources" when I asked, "how is os/400 related to multics [askjeeves.com]"
  • VOS, an OS which runs on Stratus computers, uses ">" as its directory separator character.
  • Aside from your blatant trolling, I'm sure that quite a few people would be interested in a PL/I compiler being freely available. I certainly would.

    Would you mind contacting me with details on how I might obtain it?

    Many thanks,
    -Dom2
  • Shows you how badly something can be designed when everyone wants to throw their two cents in, and when it is designed primarily by professors who have no market constraints. Software engineering has fortunately learned a few things since the 1970s. UNIX took the philosophy of simplicity and elegance, its most elegant version being CMU Mach (in NeXT and Apple) and its most widespread version being Linux. (Victim of Multics at MIT in the 1970s.)

  • by hey! ( 33014 ) on Tuesday March 07, 2000 @08:27AM (#1220455) Homepage Journal
    ... was that Multics was the only system whose directory separator makes sense. Forget "/" and "\" -- Multics used ">". When I switched to Unix, I found the arbitrariness of "/" to be annoying, the way people switching from Unix to Windows find the choice of backslash to be almost deliberately irritating.

    If you read the article, you get a good idea of what Multics was like. It had tons of weird and wonderful features, like mapping memory to the file system. I think it was pretty successful in meeting most of its design goals, but was so large and idiosyncratic it probably could never be ported to hardware not specifically designed to run it. Scalability runs two ways: Unix cherry-picked the successful ideas from Multics, leaving a small but sophisticated OS that could run on low-end hardware. The rest is history.

    I think it's kind of interesting to compare this to the Win2K/Linux situation, in that you have a large and complex system stuffed full of all the features anyone could dream up, stacked up against a small system implementing proven algorithms and techniques. Having been around a long time, I'm not scared by this brief episode of inordinate Microsoft market power: I know where I want to put my chips, at least in the server market.
  • Oh, does that take me back. I got assigned the review of a MULTICS bid for a big timesharing system. Spent two weeks going through all the system manuals. Rings within rings, system speed adjustment bits, priority handler...

    But ended up with a supercomputer instead...

  • Hot-swapping everything is not exactly uncommon. Solaris does it, and even Linux does in Motorola's new distro. For most I/O devices it is rather straightforward: just disable the device until a new one is in place. For RAM and CPUs, it means having spare capacity and the ability to disable the part before the swap.
    So with enough RAM and more than one CPU, no problem.

    So much for silly laugh...
  • If you were to attempt to build up an OS to the level of Linux, from scratch, it would likely take you a decade. For what purpose? To end up with something that is a moderate improvement? Why not just improve what we've got?

    There is a good reason: diversity. Sure, there is always room for improvement, and things are indeed constantly getting better, but nevertheless the changes made are always restricted to the original design. There is no way around it. Even though many people can make many different changes to the current solution, they are still bounded and will be somewhat like each other, especially taking into consideration that source-level compatibility needs to be maintained in the process. We can never expect drastic changes from small steps.

    I know that "if it ain't broke, don't fix it" is the standard engineering practice (and rightfully so), and I do not think that people should always do everything from scratch. But if you look into the big picture, maintaining high diversity is the utmost concern if any field is to phosperous in the long run. All the biologists in the world will tell you that the loss of DNA diversity (as now taking place in the rain forest) can be the greatest loss for the future generations, and they are hard to make up --- forest can be regrown, and greenhouse gases will clear itself from the atmosphere within decades (if we stop sending more of that up, that is). But diversity, once lost, will not recover within thousands of years.

    It is the same here in the software business. If you want to thrive *now*, patching up the current design is the best way to go. But if we want to survive and prosper for a long, long time, we must have a great level of diversity, and some of that will undoubtedly involve doing things from scratch. That is the way it works.

    True, these works will lag far behind the current state of the art. I agree with that. But you could have said the same thing about mammals some 65 million years ago. Diversity saved the day. Diversity is good for life. Diversity is good for us.

  • by Animats ( 122034 ) on Tuesday March 07, 2000 @09:20AM (#1220460) Homepage
    If servers today were as good as Multics, cracking would not be a problem. Multics had an NSA B2 rating. (After five years of trying, Windows NT 4.0 SP6 recently got a C2, which is the absolute minimum and doesn't mean much.) NSA's in-house public access machine, DOCKMASTER [stratus.com], ran Multics until 1998. The Pentagon's Multics systems stored classified information at different levels and kept them apart. This is an achievement in a whole different league from the swiss-cheese systems we see today.

    The rings of protection concept really did work. But it takes hardware support, which the PDP-11 didn't have, so UNIX, and its successors, don't work that way. They should. It would be easy to have ring hardware today; in fact, Pentium Pro/II/III machines have some of it, unused.

    As for scheduling, Multics had Corbato's algorithm from the beginning, while UNIX had a truly awful scheduler for decades.

  • Sorry. I have asked the ISP for temporary relief from its bandwidth limitation policy. They "strive to read all email within 24 hours." -- tom
  • There may have been other OSes that did so as well - KSOS?
    Yes, KSOS did that. I wrote the file system and all the device drivers for KSOS. Unfortunately, those had to run in kernel mode. The UNIX emulator ran in the supervisor ring. In better systems, such as Multics, the drivers and file system were outside the kernel, as they should be.
  • Two things. First of all, there is no way to find all aliasing bugs in a program unless we solve the halting problem (which is unsolvable).

    But regardless of that, code which has pointer aliasing bugs is not standards conformant. The compiler is completely within the spec if it miscompiles this code, or if it prints out 'You are an idiot' one million times.

    You can try to write code in the compiler to detect these bugs. It can't be completely accurate; the question is whether we can make it useful. If it catches every bug but also flags a lot of standards-conformant code as bad, that can be more useless than missing instances of this bug.

    Besides which, aliasing issues exist because compilers want to use the many registers that a modern RISC CPU has, and they don't want to be forced to reload potentially dirty data on every memory write. Choose another language, then. C/C++ wasn't, isn't, and can't be fully optimized on this type of modern CPU. Even with aliasing analysis, there are still avoidable inefficiencies.
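
    For readers wondering what such a bug looks like, here's the classic
    example, assuming sizeof(unsigned) == sizeof(float) == 4, which holds on
    mainstream machines:

      #include <stdio.h>
      #include <string.h>

      float broken_negate(float f)
      {
          unsigned *p = (unsigned *)&f; /* undefined: unsigned alias of float */
          *p ^= 0x80000000u;            /* flip the IEEE sign bit */
          return f;                     /* optimizer may return the stale f */
      }

      float portable_negate(float f)
      {
          unsigned bits;                /* well-defined: copy the bytes out */
          memcpy(&bits, &f, sizeof bits);
          bits ^= 0x80000000u;
          memcpy(&f, &bits, sizeof bits);
          return f;
      }

      int main(void)
      {
          /* under strict-aliasing optimization the first result may differ */
          printf("%f %f\n", broken_negate(1.0f), portable_negate(1.0f));
          return 0;
      }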

  • Many people here seem to think Multics was too huge and complicated.
    It was bigger than AT&T UNIX, but smaller than BSD. By modern OS standards, it was tiny. Remember, it came from an era when a megabyte cost a megabuck.

    What do you think about EROS then?
    Unfinished. As the authors put it, it's at the "hello world" stage. The primary author got his degree and graduated.

    I was looking seriously at EROS and L4 [tu-dresden.de] for some real-time work, but neither is really usable yet. QNX [qnx.com] is fast, elegant, reliable, and successful, but it's sold only on terms that make sense if you're building it into something; there's a big up-front cost. There's no clean, usable, secure open-source solution that doesn't have a bloated kernel. (RT/Linux isn't the answer; it's a hack that allows you to add your real-time application to the kernel.)

    One very real problem is that very few people know how to write for a message-passing system. You have to divide up your program into isolated communicating processes with different security and timing properties. These processes communicate only by message passing, but the message passing is much faster than UNIX programmers are used to. UNIX and Windows programmers aren't used to designing this way. It's as big a change as going to object-oriented programming.
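
    To give a crude taste of the style, here is a sketch using ordinary POSIX
    pipes as the message channel; a real message-passing kernel like QNX or
    EROS is of course far faster and richer than this, and the "open /foo"
    request is invented.

      #include <stdio.h>
      #include <string.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void)
      {
          int req[2], rep[2];               /* request and reply channels */
          if (pipe(req) < 0 || pipe(rep) < 0)
              return 1;

          if (fork() == 0) {                /* server: one isolated job */
              char msg[64];
              ssize_t n = read(req[0], msg, sizeof msg - 1);  /* blocks */
              if (n > 0) {
                  char reply[80];
                  msg[n] = '\0';
                  snprintf(reply, sizeof reply, "served: %s", msg);
                  write(rep[1], reply, strlen(reply));
              }
              _exit(0);
          }

          /* client: send a request message, block awaiting the reply */
          write(req[1], "open /foo", 9);
          char reply[80];
          ssize_t n = read(rep[0], reply, sizeof reply - 1);
          if (n > 0) {
              reply[n] = '\0';
              printf("%s\n", reply);
          }
          wait(NULL);
          return 0;
      }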

    EROS has the additional problem that the capability crowd tends to be incomprehensible. Norm Hardy, the creator of KeyKOS, was known for his impenetrable talks. (He used to have an "explainer", Susan Rajunas.) What's needed is something like a well-documented secure web server using a capability-based OS. Then people might pay attention. But EROS isn't finished enough to even start that project.

    UNIX/Linux isn't getting more secure. The CERT advisories look about the same as they did ten years ago. Holes in Sendmail. Holes in BIND. Holes in "rsh". Set-UID-to-root problems. No amount of hacking on the UNIX architecture will help. You've got to have a system with far less trusted code. It's possible. Multics did it.

  • In better systems, such as Multics, the drivers and file system were outside the kernel, as they should be.

    By "outside the kernel", in the Multics context, do you mean "outside of the master mode code" or "outside ring 0"?

    (As I remember, only a very tiny bit of code, in the form of small routines to execute privileged instructions, ran in master mode; for the benefit of most in the audience, the GE 645 and its successors, as I remember, could designate a segment as containing master mode code, and all calls into it would switch you to the equivalent of kernel mode - returns to a segment not so designated would leave master mode - and presumably that segment was accessible only from ring 0, ring 0 not being the same as master mode - most code in the OS, including ring 0 code, ran in slave mode.)

  • Windows 2000 actually has some pretty nice features.

    I don't doubt it for a moment. The MS people are not stupid or insane; they're playing the hand they were dealt. Multics had a lot of really cool stuff, just way more than was necessary.

    Sure, the kernel may be only 50 megs of source, but the system as a whole (take any disto you want) is far too big to be understood by anyone but an expert with time to burn.

    Well, using my Catholic school education and borrowing an idea from Aquinas, there's essential complexity and accidental complexity. Call it architectural vs. implementation if you will. Essentially, Unix is very easy to administer, but it has lots of accidental complexities. The way to deal with it is to RTFM and use the source. The old joke was that the Unix manual was written for a Unix guru with a faulty memory. There's more than a little truth in this, in that you can be a Unix guru without having to remember everything.

    by which I mean, doesn't assume I'm an idiot and do something other than what I say

    I think this is sometimes symptomatic of complex behavior with a simplistic facade thrown over it.
  • I'd have to agree that Linux isn't clearly superior, or even smaller. Code size figures for Windows include all sorts of things which aren't in the Linux kernel.

    But I have to take issue with your comment about the Windows 2000 GUI. I agree it's still not brilliant, but it's noticeably nicer to use than 98. There's still a few silly little things I'm not quite comfortable with, but it's not all that bad a GUI. Really.

    Greg, posting under IE5 as Netscape's playing up. Oh, it's nice to be reminded of just why I hate IE...
  • I did work experience 3-4 years ago with AS/400s and RPG. Weird until you get used to it, but great fun!

    Anyway, my technical knowledge of OS/400 is decidedly limited but the lovely guy who explained it all to me definitely said it didn't care about WHAT storage is used, it just used storage. The boundary between memory and disc wasn't visible.

    Hopefully someone who has worked with them for more than a week and a little more recently can help out here, but that's what I recall.

    Greg
  • Sure, but that wasn't there at the start. Unix started with a smaller and cleaner set of abstractions.

    My point wasn't about memory mapping, but the overall feature set. Multics was a place into which the best comp-sci minds put every idea they could think of. As a user (not an admin), I thought it worked really well, except when weird things like dynamic linking came up and bit you on the ass. Yes, I know that Unix has dynamic linking too now, but it isn't quite the same. Even familiar things like dynamic linking and virtual memory seem to have been unusually recherché in Multics.

    Nonetheless, I thought that Multics was a pleasure to use. It was just not well situated to survive where the greatest growth would be.

    You sound like you remember Multics better than I do.
  • Thanks for your post! It makes basically most, or all, of the points I, and a few others, made on the GCC list last year when this issue was raised (yet again).

    Yes, catching all such bugs is believed impossible, but I used the phrase "high-quality" to avoid a longish explanation.

    GCC currently tries to catch 'em, but fails (from what I understand) for the reasons you cite.

    More specifically, what I objected to last year was the assumption that, first, GCC should accommodate such broken code by defaulting to assuming all code is broken (therefore defaulting to lower performance for correct code). I lost that argument; GCC now (as of 2.95.2, perhaps .1) defaults to assuming all code has the aliasing bug.

    Second, I objected to the proposal (by RMS, IIRC) that the internals of GCC be (partially) redesigned to make the detection of these bugs (the warnings) of higher quality.

    Note, it wasn't the "higher quality" part of that to which I objected -- it was the "change GCC" part.

    Even then, it wasn't changing GCC to make it do its job -- generating correct code -- better to which I objected.

    So I made the point that, if you really want to provide a tool that does a high-quality job of catching such errors, go write it -- as a tool separate and distinct from GCC. That way, users of GCC wouldn't have to wait for some distant upcoming release; they could quickly install and use the tool (this works for users who don't want to download development snapshots of GCC, but would try a smallish standalone tool).

    More importantly, GCC won't be destabilized just to provide a new warning. Warnings printed "casually" in the normal course of a program's business are, for the most part, fine and dandy, but when you start talking about reworking the internals of a large program like GCC just to accommodate a new warning, you're losing sight of what that program is really for.

    Of course, I lost that argument too, since it basically boiled down to saying "people should run lint on questionable code, and leave the compiler to do the compiling" -- which I believe now, much moreso than I did 10 years ago.

    That thinking goes in the opposite direction of much of today's high-profile software development, from Microsoft's macroapps to (as another poster correctly pointed out) the FSF's macrowares.

    As to your point that C/C++ might not be the best language, I -- and one or two other prominent GCC developers -- made that very point to Linus Torvalds himself, in response to his complaints that GCC's huge set of extensions to standard C still wasn't quite exactly what Linux needed.

    Needless to say, I got a few pieces of hate mail for that suggestion (especially to my inappropriate claim that he didn't know anything about good language design in the same post/thread). (There's more I could say about this, but it's best for readers to review the threads at gcc.gnu.org.)

    The symptoms here are all related to the same desire to take something that's good because it's simple and make it ubiquitous by making it complicated (by which I don't necessarily mean making the code more complicated -- deciding that GCC assumes bug-laden code makes the spec for code that conforms to GCC more complicated, compared to, e.g., "ISO C"). I.e. it's easier to say "GCC is your one source for all things compiler-related" and get stuck in the mindset that implies a single executable to make it happen, than to realize that a bunch of separate, but cooperatively and thoughtfully designed, applications would be better. Call the collection something like "Hurd"...well, oops.... ;-)

    Anyway, the pointer-aliasing episode culminated, after tons of heated discussion, in a compiler that assumes code is broken, that doesn't do a good job pointing out how and where it's broken (and, unless they don't think it's important enough to put news of it "coming" on the top-level GCC web page that I occasionally check out, won't for some time to come), and in at least myself leaving the GCC project.

    After all, what that discussion made clear to me should have been obvious by reading the mission statement for the project: generating correct code is not the #1 goal, and it's unclear from the wording whether it's even the #2, #3, or #4 goal. (One could argue that the word "work" means generating correct code, but, in practice, it doesn't -- it means, to GCC developers, generate correct enough code to accommodate the bulk of the code being targeted by GCC at the time. And testing, which is also mentioned, does not mean generating correct code, merely detecting certain forms of incorrect code.)

    Personally, I've decided it's more interesting to explore how to write software that is correct, than software that is feature-rich, since, in my experience, doing the former has always paid off in all sorts of ways (e.g. few-to-no callbacks after writing code that others deploy).

    That being said, I'm certainly grateful for GCC, Linux, and all those other products being out there, being useful, and working in so many more circumstances than much of their so-called "competition" (how much Microsoft software runs on SPARC chips, for example?).

    So it's not a matter of rejecting products outright because they don't embody KISS. It's a matter of encouraging everyone -- not just the developers (like Linus developing Linux) but the users (like Linus using GCC) to respect that principle more, assuming they really believe it had something significant to do with why Unix won out over Multics, VMS, MVS, and so on. That's why I jumped in here: I saw people celebrating the Unix "win" as a "simplicity is better" victory, and wanted to serve notice that many of the people currently prominent in directing the future of Unix haven't gotten the "simplicity is better" memo, or don't believe it, or simply forget to put it into practice too often. Either they're wrong that Unix won because it embodied KISS, or they're going to be wrong that Unix "won" down the road, because what knocks it off is likely to be built out of simpler, not more complicated, components.

    (BTW, I happen to think Open Source distribution embodies the KISS principle as well. The alternative includes many activities, such as working hard to obscure your IP, adding license-checking code or dongle support to your software, etc. that contribute complexity to the product that is not needed by users, except to the extent it lowers their up-front acquisition costs.)

  • Do not post links to Amazon; you may be violating a patent ;-)
  • F.Y.I., that "big bloated Multics kernel" was about 150 kbytes. And it had a sexier name (hardcore, not kernel). The only bloat is in the imagination of people who just repeat what they've heard without checking the facts...
  • Let's start coding a PC-MULTICS operating system *g

  • The system features described were not the result of a committee, or of professors: they were done by one engineer, supported by the tools and process described in other papers at the website, in order to meet commercial requirements.

    The "percentage" feature was required by university customers who funded their multi-million dollar machines by combining funds from multiple departments. The departments wanted to be sure they got what they paid for: "darn CS students are using more than their share of the machine."

    The realtime scheduler was invented to win benchmarks, and it did. A relatively underpowered CPU was able to run specified workloads more efficiently than its competition, because of the tuning control available.

    Both of these problems have gone away, now that each of us has more power on our desktops than the multi-million-dollar machines that a whole university used to have to share.

    I am a fan of Unix and Mach. They solve different problems than the one Multics was solving, and do so in a different economic and technical world.

  • The "create a free Multics clone" idea comes up periodically on the Usenet newsgroup alt.os.multics. [alt.os.multics]

    It runs up against several problems:

    • According to Multicians, an important aspect to Multics was the use of PL/I as the programming language.

      There's not a "free" PL/I compiler.

      Probably the nearest alternative would be to implement using the nearest thing to MacLisp, namely Common Lisp. But down that road lies the rather different path of creating a Lisp OS. [hex.net]

    • Another crucial aspect was the hardware support for memory management and rings.

      Apparently Intel designed some Multics-like capabilities into some (not all!) members of the IA-32 series.

    • One alternative is to create a hardware emulator that emulates the Honeywell hardware. That would allow using existing Multics software, rather than merely creating analogous software.

      Unfortunately, that means having to get access to the Multics software base, which is not likely terribly available these days.

      The only system known to still be in production use is at DND: Maritime Command in Halifax, Nova Scotia, and its decommissioning is planned this year. Remember, military system. They're not likely to be willing to give out copies, independent of any rights Honeywell, Bull, and others may have...

    • Another option would be to create a "Multics shell" to parallel Ksh, Zsh, ... and create a vaguely Multics-like environment atop Linux.

      Of course, that doesn't get you segment/ring control, which hurts the emulation.

    • I've had Organick's book on Multics on order from Spamazon for a couple years, and have heard nothing. Some documentation may be hard to come by.
    In effect, the problem with creating a Multics clone is twofold:
    • Determining a criterion for "Multicsness."

      Does it have to run the same software? Does it have to use PL/I? Can it be upgraded with other modern notions?

    • Getting suitable data to specify the system.
  • by cburley ( 105664 ) on Tuesday March 07, 2000 @10:09AM (#1220476) Homepage Journal
    It's interesting to see several comments here (can't read the article right now, it's slashdotted) suggesting that Unix is successful today while Multics isn't due to the former's complexity.

    I think there's a fair amount of legitimacy to that argument. But, having come from an environment (Pr1me) that is to Multics sorta like Linux is to Unix, I'd like to make some observations.

    First, surely some of Multics' complexity was overblown, perhaps because of attempts to have it do more than strictly necessary. I.e. it wasn't sufficiently simplified. We're committing the same "sins" throughout the GNU/Linux universe ourselves, e.g. piling feature after feature onto what should be simple, robust components (such as GCC) rather than providing separate tools. (Compare qmail's design to sendmail's, and then try and figure out why, after tons of discussion on the GCC mailing list last year, there's still no high-quality Open Source way to find pointer-alias bugs in C code.)

    Second, much of what we take for granted in Unices of today was inspired, and to some extent proven in the field, by Multics and its descendants, like PRIMOS. Though not an expert on Linux-style dynamic linking, I've found it, and other things like signaling/exceptions, to be, in at least some crucial ways, poor second cousins compared to the technologies (in which I had substantial expertise) available in PRIMOS -- essentially a cheap sorta-clone of Multics -- back around 1984.

    Third, hardware and networking improvements have probably significantly changed the "balance" of factors affecting design decisions that went into Multics. Back in those days, processors weren't cheap, therefore processes weren't cheap, nor were the process-creation and process-exchange activities we take for granted today. (Compare how many processes get exchanged and/or created just to handle an email or, heck, even a mouse click on a typical modern Unix system to contemporary equivalent activities back in the early 1970s, for example.)

    Yet it seems today's "cutting-edge" Unix applications are designed with a similarly limited mind-set. Applications that truly and (IMO) properly take advantage of the "balance" of a modern Unix system, such as qmail, are few and far between. How many of today's new applications have what amounts to a scheduler inside of a single process, rather than relying on the Unix kernel (Linux, *BSD, whatever), for example? How many could change from using asynch or non-blocking I/O to the more easily understandable, maintainable, and verifiable (security-wise) traditional blocking I/O by breaking out the various functions into distinct processes/executables?
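
    As a crude sketch of that restructuring, with invented file names: one
    process per data source, each free to use plain blocking reads, instead of
    one process multiplexing non-blocking I/O.

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/wait.h>
      #include <unistd.h>

      static void serve(const char *path)   /* one source, one process */
      {
          char buf[256];
          ssize_t n;
          int fd = open(path, O_RDONLY);
          if (fd < 0)
              _exit(1);
          while ((n = read(fd, buf, sizeof buf)) > 0)  /* blocking is fine */
              write(STDOUT_FILENO, buf, n);
          close(fd);
          _exit(0);
      }

      int main(void)
      {
          /* hypothetical data sources */
          const char *sources[] = { "/tmp/feed1", "/tmp/feed2" };

          for (int i = 0; i < 2; i++)
              if (fork() == 0)
                  serve(sources[i]);        /* child never returns */
          while (wait(NULL) > 0)            /* parent just reaps */
              ;
          return 0;
      }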

    The "Keep It Simple, Stupid" (KISS) principle indeed may have been violated by the early Multics design sufficiently to doom it in the long run vs. Unix, but that doesn't mean we've entirely honored it in Unix-land either. Especially now that we've "won", in the sense that we're competing primarily against an OS that makes Multics look comparatively simple and straightforward (Windows 2000), it's tempting to just "pile on the code" without further regard to KISS, thinking the goal is to further the position of (some variant of) Unix, rather than the widespread deployment of KISS-based software.

    Let's strive to avoid that temptation, assuming it's not already too late.

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...