Science

Surreptitious Communication via Page Faults

Martin Pomije writes "This is a really interesting story illustrating how a shared resource can also be a communication channel. If you have any interest in software design or operating systems, the Multics web site is worth your time. Tom Van Vleck has obviously put a lot of time into this and deserves credit for making all of this information available." It's like 10 years old, but it's really interesting.
  • First, because AFAIK, it was never actually used as an exploit: it was used to describe (in academic papers) a possible security hole. We admire the cleverness of seeing the useful features, the alternate designs, and the right and wrong paths that were taken. "Script kiddie l00zerz," in contrast, take well-known security holes and repeat them for no intellectual or social gain.
  • by Anonymous Coward
    That's brilliant! How simple. When I was an undergrad, my roommate got a guy's password (using the "trust me" method.. ha) and changed it, then made him play hangman for the new password (he would delete all the files if the man got hanged). It was funny as shit because the guy won the game, which pissed my roommate off so much that he decided to delete the files anyway (he was unethical enough to screw with the password, so why expect him to play by the rules of the game). So they both go running off to the nearest computer terminal, and when they both reached it at the same time they got into a fist fight for the keyboard while everyone wondered WTF was going on! Damn, those were great days. One of those guys is now an MBA scum... can you guess which one?
  • What are you talking about? Linux is not socialist or fascist, but rather embodies the ideal of the perfect communism, where everyone contributes what they can, and hopefully gets what they need. But since the barrier to entry is so much lower, information is essentially free to all, and distribution problems go to nil!

    ...And do you really think the moderators will fall for something so simplistic?

    [...]

    Damn, I wish I had thought of this one first! :)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • You don't need to be notified when a page fault occurs. Timing the check should be sufficient to get at least statistically interesting data.

  • Now that's an interesting covert channel. Make a page somewhere and have the receiver look at it every minute. If the sender wants to send a 17, he makes sure that the time limit is exceeded exactly 17 minutes past the hour (by some hacked DDoS tool). Low bandwidth, sure, but a password is only a couple of hundred bits.
  • I'd say that this article does interest the Comp Sci student portion of Slashdotters. I am taking an operating systems class, and message passing between processes is one of the topics. We have discussed in class how any shared resource is a potential messaging channel, so this Slashdot reader found the story very interesting, as this demonstrates a real-life ramification.

    --

  • s/biff/mesg/

    Other than that, thumbs up. Never used Multics, so I can't say whether
    that part's right, but it sounds plausible :)


    #define X(x,y) x##y
  • So maybe the wave of Denial of Service attacks some weeks ago was transmitting some kind of message...
    --
  • Under Windows, man exploits man. Under Linux, it is exactly the opposite.
  • This makes me wonder if any operating systems currently have signed executables and automatic verification of an executable's signature before execution. This could require the system administrator to sign an executable with his secret key passphrase before it would execute at a privileged level (i.e. root), for example.

    fnord.
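    Something like this minimal sketch of the check-before-exec idea (Python; a detached SHA-256 digest file written by the admin stands in for a real public-key signature, and the ".sha256" path convention is made up):

      import hashlib
      import os
      import sys

      def digest_ok(path: str) -> bool:
          """Compare the executable's SHA-256 against a detached digest
          file (path + '.sha256') that only the administrator can write."""
          with open(path, "rb") as f:
              actual = hashlib.sha256(f.read()).hexdigest()
          try:
              with open(path + ".sha256") as f:
                  expected = f.read().strip()
          except FileNotFoundError:
              return False          # unsigned binaries never run
          return actual == expected

      if __name__ == "__main__":
          prog = sys.argv[1]
          if not digest_ok(prog):
              sys.exit("refusing to run unverified executable: " + prog)
          os.execv(prog, sys.argv[1:])   # replace this process with it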


  • Bummer!

    I came in late and the page is already "closed". Looks like the ISP "Best" or whatever isn't that good - they close down page access just when it starts to get interesting.

    Can anyone please post the original content of that page? I'd love to know the details.

    Thanks in advance!!

  • isn't this why they use TEMPEST?
  • Yeah, the VMS thing is a common textbook story. But what used to be real fun in VMS was to malloc a ton of memory and write it to a file. You could catch people's email and files, since the system didn't zero the pages at that time.

  • No, it's not. Well, yeah, but that's the point.

    See, the article was submitted to /., and the acknowledgement of it being read was a page fault of Internet proportions. This was a signal which, once received, made the existence of the referenced site irrelevant.

    Slashdotting can be thought of as a form of communication. It's also the sincerest form of flattery on the Internet.
  • It's a question of only allowing programs that are supposed to access the info to access the info, and so forth. You can't just let any process read the page that another process is working with - what if the page contains sensitive data? (Well, you can if you don't care about data security.) So in high security environments, you have to take this into account and prevent this. Security is more than just control of the machine (i.e., not getting rooted); it's also maintaining control of data and not losing data. So things like this are a security risk in the sense that someone could access something that they aren't supposed to - they might not be able to change it, but they can still read it.

    itachi

  • yes, Foogle should clearly be stopped. Can someone talk to Malda about this, because all this Foogle is getting out of hand.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • Had a look at a mirror of the site and while I was scanning through the glossary of MULTICS terms I came across the following entry:

    LINUS :

    Logical Inquiry and Update System.

    ...I wonder if there is an entry for TORVOLDS...

  • Isn't this pretty much what Java does, but with less effort required on the part of the code producer than proof-carrying code? So JavaOS is an OS which has the capabilities you're inquiring about.

    The jvm takes the bytecodes when it loads them and runs them through a theorem-prover to verify that the code doesn't do anything illegal, and when the code gets run, the Java security model works so that any attempt by the code to access system resources is checked by the security manager before being permitted.

    When you combine this with the fact that, as of jdk1.2, developers can easily create custom privileges for their apps and then can require mobile code which requires those privileges to be signed by a trusted party, you get a system that seems to me to be much more powerful and easy to use than the one in the paper on proof-carrying code.

  • The ultimate covert channel exploit seems to be watching the pizza delivery dudes in Washington DC.

  • > A more practical alternative would be to have a supply of frozen pizzas and an oven, or even an in-house pizzeria. Sounds like a good excuse to me.

    But TEMPEST shielding does not cover pizza smells. This means somewhere someone is spending lots of my tax money on TEMPER (Transient EMitted Pizza Essence Remover).
  • Even a checksum would be nice.

    OS-9 [microware.com] from Microware used to do a CRC16 on code before it would load or run a module on the 6809 (an 8-bit CPU).

    This concept could be extended today, but it would require a bit more work because modern programs don't load the whole bit of code before running. How would this work? You could add a crypto key to the file header that had a signing key and then MD5 each block, but how would the VM system deal with extra data in the blocks? For code blocks you can tell GCC to waste the last 4 bytes of a VM block (as done on the 32016), but it involves a bunch of jumps that new CPUs don't do so well. The other choice is to simply make the VM system load the needed block plus the next bytes off the disk, do an MD5-like thing with the block and the signing key, and then throw away the block that has the key in it, but that blows away any one-to-one relationship between disk images and memory.
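    A rough sketch of that per-block check (Python; an HMAC with SHA-256 stands in for the MD5-plus-signing-key scheme, and the block size and key are made up):

      import hashlib
      import hmac

      BLOCK = 4096                          # one VM page per check
      KEY = b"hypothetical signing key"     # would come from the file header

      def sign_blocks(data: bytes) -> list[bytes]:
          """One MAC per block, so a demand-paging loader could verify each
          block as it faults it in instead of hashing the whole file first."""
          return [hmac.new(KEY, data[i:i + BLOCK], hashlib.sha256).digest()
                  for i in range(0, len(data), BLOCK)]

      def verify_block(data: bytes, index: int, macs: list[bytes]) -> bool:
          block = data[index * BLOCK:(index + 1) * BLOCK]
          want = hmac.new(KEY, block, hashlib.sha256).digest()
          return hmac.compare_digest(want, macs[index])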

  • In the book, there is some narration that refers to strange messages being transmitted over the public TV channel. It isn't covert, but it's impossible to break without the codebook (code keys).

    The message about slashdot posting is more like this. Post a message in public that only the receiver will know what to do with.
  • Several things need to be noted:

    First, the classic book on Multics - The Multics System: An Examination of Its Structure by the late Elliott Irving Organick - is a snapshot of a work in progress. If you've read the book, you'll have a good feeling for what the group was doing, but not particularly for what Multics finally turned out to be.

    Second, there is recurring discussion in the USENET news group alt.os.multics about a re-implementation of the Multics hardware environment (including tagged memory) on something like a PCI card, then using the PC as the I/O system. This, of course, is possible because ASICs and DRAM are perpetually getting less and less expensive. The actual ownership of Multics will have to be worked out before that can happen. Someone should encourage the present owners to GPL the source and binaries.

    It always strikes me as sad that Multics is old, but its O/S ideas are, in large part, state of the art. It says something pretty vile about contemporary O/S and architecture research, experimentation and design. Von Neumann CPUs and Unix are not horrible designs (thus they've been successful), but if we want orders of magnitude improvements now, then it is time to re-evaluate the basics. Seymour Cray came up with the majority of the improvements (hacks if you will) to the von Neumann architecture in the 1960s and 70s. Intel (et al) has been living off those ideas for two decades. It's time for new ideas (yes, perhaps like the late Intel iAPX 432).

    Finally, I wish Linus had read about Multics before he re-implemented Unix. We'd all be in better shape if that were the case. Then Linux could genuinely be said to be state of the art, not just state of the practice.

  • Seems like the Slashdot effect has taken down another web site. Multics' ISP has shut them down because they have exceeded their bandwidth/hits limit. Luckily there are mirrors in the US and England. [city.ac.uk]

    I can't currently get to the US mirror either, so I am not including the link here.

    The difference is that communists want to achieve their goal with a revolution whereas socialists believe in a more gradual change.

    That's simply not true. Just take the Socialist Worker's Party or the Socialist Party, both in Britain. Both "revolutionary", although the first believes in violence, the latter in getting elected.

    Unfortunately, both words have become extremely corrupted and debased this century by Cuba, the Soviet Union and China, who were and are not truly socialist (my definition of socialism includes participatory democracy). Even more unfortunately, there isn't a better word for the political view that I support than socialism.

  • Yes. That was a system (VMS?) which had privileged shared libraries. There was a privileged method in one of those libraries that verified the password. Its implementation read the passed-in password one character at a time until it got to the end or found a mismatch (passwords were stored in the clear)... So by placing the "trial" password buffer over a page boundary, you could determine when you had all characters before the boundary correct.

    This particular exploit actually used the "page faults per process" metric provided by the OS, rather than the timing information, IIRC.

  • An acquaintance of mine told me that he had broken into a Multics system by writing a driver which a) did the driver's job and b) read the password file and communicated it one bit at a time to a user-space program via toggling rwx bits in some file known to both driver and user program.

    He was smart enough that it was probably true.
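    The user-space half of that channel is easy to sketch (Python; the file name and bit timing are made up, a real driver-side sender would do its toggling from kernel context, and the two sides are assumed to start on an agreed clock tick):

      import os
      import stat
      import time

      CHANNEL_FILE = "/tmp/agreed_upon_file"  # hypothetical, visible to both
      INTERVAL_S = 0.5                        # one bit per interval

      def send_bits(bits):
          """Sender: toggle the world-execute bit once per interval."""
          for bit in bits:
              os.chmod(CHANNEL_FILE, 0o644 | (stat.S_IXOTH if bit else 0))
              time.sleep(INTERVAL_S)

      def receive_bits(n):
          """Receiver: sample the same bit with stat(); no read access to
          the file's contents is ever needed."""
          out = []
          for _ in range(n):
              out.append(1 if os.stat(CHANNEL_FILE).st_mode & stat.S_IXOTH else 0)
              time.sleep(INTERVAL_S)
          return out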
  • Yeah, does anyone know what his ISP has set for the limit? This seems kinda dumb to me... is this a common thing to see for many sites or is this site just trying to do the Internet in a cheap way?
  • Don't you hate it when they make you do the dishes, AND clean that cat box?
  • Not VMS, but TENEX. VMS ran on the VAX, TENEX on the PDP-10 (the 36-bit line of PDPs).

    Actually, this can also be found in AST's Minix book :)

  • How is this a security risk in a UNIX-ish environment? Can't processes talk to each other as much as they want anyway??

    Yup. IPC and all that (not to mention just leaving a file with any kind of data you want in /tmp). If you have a program that reads important data, and it is set up so that it passes data around in some covert way, you're screwed anyway. It doesn't matter if it's one of these things or a /bin/login that mails passwords to a remote host.
  • I doubt it, for two reasons...

    1) If his 'driver' had access to the password file, why did it waste time by sending it one bit at a time?

    2) Did the people that ran that system really trust him SO greatly that they didn't check his driver code?

    Anyways, that wasn't a very incredible feat, imho. It's two system calls (i.e., read and chmod on the driver side and an lstat? on the client side).

  • I'm fairly certain there was also a good write-up of this in Operating Systems: Design and Implementation by Andrew Tanenbaum. It should be reasonably easy to find.

    Steff

  • Back when I was in college (three months ago), I remember a course on OSes that mentioned this "security problem" of using resources as data channels. Any enlightened souls care to explain what the big deal is? How is this a security risk in a UNIX-ish environment? Can't processes talk to each other as much as they want anyway?? dave.
  • That's the same as lying on a nude beach, and running into the water every time a good-looking chick walks past, because you don't want to let on that you have a boner.

    To be completely fair/let no one know you get aroused by beautiful women/not insult anyone, you also have to run into the water with the ugly women...

    Just an analogy...

    Never thought my running into the water was actually covering up a covert comm-channel!
  • i beg to differ... i found the article very interesting.

    sincerely,
    most slashdotters

  • Or just click here [slashdot.org] to get to it.
  • tell me why feds should care if you're cracked
    same reason they should care if i got shot

    Yeah, they get paid for it.
    Welcome to the bottom line, folks.

  • Create a file and only allow it to be read by applications (and users) defined in an ACL (access control list); you can always add more apps (and/or users) to ACLs. This is another way to implement distributed file system security. There is a problem with spies: they will use all possible means to justify the end results, so focused electromagnetic readers can always read impulses from your keyboard, from your parallel/serial port, and from your electromagnetic carriers. (Fiber optics will help in this case, but even the fiber optics used today still has to convert to electrical impulses and back at repeaters. Good news: Lucent has a new way to use fiber optics without repeaters, only with mirror switches.)

    File encryption always helps. You can encrypt your data file into something that looks like a text document (even a document that has some textual meaning), then take the document apart into small pieces and send those pieces in no particular order. By themselves the pieces will not have any special meaning, and they will look benign to file filters and detectors.

    You must always think ahead
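    The scatter-and-reorder idea is simple enough to sketch (Python; encryption is omitted, and in a real system the sequence index would ride inside the ciphertext rather than in the clear):

      import random

      def scatter(chunks):
          """Tag each piece with its index, then emit in random order so no
          single intercepted piece reveals much on its own."""
          pieces = list(enumerate(chunks))
          random.shuffle(pieces)
          return pieces

      def gather(pieces):
          """Receiver reassembles by sorting on the index."""
          return [chunk for _, chunk in sorted(pieces)]

      assert gather(scatter(["attack", "at", "dawn"])) == ["attack", "at", "dawn"]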

  • Something like this would be an interesting way of passing information deemed "inappropriate" by powerful people. Perhaps there is a way to code up some program to pass the DeCSS code, or the Cyber Patrol hacks, or whatever without actually passing the information.

    With the way things are going, how can we know what information will still be "legal" next month? At least for those of us in the US, it might be a good idea to start forming some way of maintaining freedom in spite of the courts and big business.

    If we have to choose which book to become, I'll become a fanfic long-form story set in the DBZ universe (I love the author; well, at least what I've read so far).

    -Nathaniel
  • They actually taught us this in intelligence school and gave it the name "PIZZA-INT", by analogy with SIGINT (signals intelligence), IMINT (imagery intelligence), HUMINT (human intelligence), etc.

    Countries who care what the US is up to always know when something important is about to happen, because the number of late-night pizzas delivered to offices at the Pentagon and State Department skyrockets.

    It doesn't seem too covert, though. After all, the spy school instructors knew all about it and it was a common joke throughout the US intelligence community.

    Chris

  • There is not much difference between socialism and communism; they both have the same ideals. A hundred years ago they even meant the same thing; the split only came later. The difference is that communists want to achieve their goal with a revolution whereas socialists believe in a more gradual change.

    So is Linux socialistic or communistic? I think more socialistic (unless you count the 'nuke redmond' attitude most slashdotters have).

    Grtz, Jeroen

  • you are right, it isn't as simple as it used to be...

    A lot of communists (the real kind) today call themselves socialists because communism stands for a totalitarian regime in the minds of most people.

    Grtz, Jeroen

    p.s. I regard myself a socialist too (the non-violent, pro-democracy kind)

  • tell me why feds should care if you're cracked

    same reason they should care if i got shot

  • Wasn't able to actually read the article 'cause the site went over its hit limit, but I know SGI uses page faults to serialize access to the 3D acceleration hardware. Essentially it amounts to IPC among some progs that don't know about each other, with the assistance of IRIX. I think they might even use something similar in the 3dfx DRI driver. At least that's what I heard.
  • I KNEW MS Windows was smarter than it seems! I'll bet that it's been using BSOD's to send secret messages for years.
  • Actually, it seems that any program which could pass the information would have to pass the same information no matter what level of abstraction it uses to do it and would, in theory, still be illegal. But there's a question of detection and enforcement... It seems doable in principle: A process monitoring some chosen shared resource acts as listener, another process acts to perturb shared resources and send information, and with the signal path established run whatever protocol you like over it.

    -- Ben

  • What is described in this article are "covert channels" which are ancient and very well researched.
  • How's any of that trolling?
  • First off, I for one found it very interesting.

    Secondly, any time two people have access to the same area, be it a physical area or shared computing resource, there is the risk of covert communications. The only way to prevent it is to remove access to all but one person.

    Multics was designed primarily to share expensive computing resources. Computing resources are much cheaper now, so it is more feasible to prevent covert communications by just plain not sharing. Each person in a hypersecure installation would have their own computer, without even a conventional LAN, with communications between the systems being tightly controlled and monitored.

    ----
  • I found this story http://www.multicians.org/security.html [multicians.org] amusing, particularly how the password "encryption" in an early version of MULTICS turned out to depend on a compiler bug.
  • Not to mention that Multics really didn't have a "kernel space". It had certain instructions that would only run in the lowest ring, and of course the kernel ran in the lowest ring, but the kernel wasn't the only thing running in the lowest ring, some of the kernel functionality was actually implemented in the processor's hardware and microcode, and a lot of the I/O processing was done by the front end processors (which is what made implementing Emacs on Multics such a ball of hair -- the FNP's weren't originally designed to do character-at-a-time I/O).

    The easiest way to get a password in Multics was the old "trojan terminal" trick. Multics was usually accessed via crowded terminal rooms, and there was often a waiting list to get on (remote access back then was very expensive and few Multics installations had any remote access). Log in, run your trojan (which put up a fake login prompt), walk away. Someone else jumps on your "open" terminal, tries their personid, projectid, and password, gets a 'login incorrect' message (at which point your program surreptitiously exits and logs out, having saved the data to a file somewhere), and then they get the "real" login prompt. Classic, simple, almost undetectable!

    Of course, that wasn't the holy grail. The holy grail was getting one of the operators' passwords, or somehow executing commands as an operator to create a ring0 gate for you (basically same as creating a suid root /bin/sh). I'm trying to remember whether it was Multics that someone tried the "smart terminal" attack on, or whether we'd already moved to Unix by that time. The TVI912 terminal would transmit the contents of its screen to the remote system when you sent ESC-7. So send a clear screen, the commands to be executed, and then ESC-7 as the payload of a Unix "write" command (or Multics equivalent -- still trying to remember whether Multics had an equivalent, darn it!), and guess what happens? (Oh, make sure to clear the screen again after you've sent the payload!). Of course, after the first time, the operators remembered to turn off messages ("biff n" on Unix) :-).

    -E

  • Still in MVS/OS 390: 3 bits for key

    As I remember, System/360 storage keys were associated with large (4K?) regions of memory; they weren't associated with individual words, as tags are on the Burroughs machines (and their successors) and on the PowerPC AS/400's. I tend to think of "tags" as applying to individual words; do storage keys protect down to the word level on S/390?

  • Say, whatever happened to tagged storage?

    It's in the PowerPC AS/400's [ibm.com], at least according to Frank Soltis' Inside the AS/400 book - there are tag bits in the extended PowerPC architecture used in the AS/400's.

    It's also presumably still in the Unisys ClearPath LX [unisys.com] and ClearPath NX [unisys.com] series, which include processors that are presumably the latest generation of the line of machines that started with the Burroughs mainframes.

  • Ummm, XML existed in 1967 (about) when Multics was deployed?

    No, it didn't.

    The poster to whom you're responding was mistaken; they didn't use XML, they used SGML.

    (Or, as the banner on this site should perhaps read:

    Slashdot
    YHBT. YHL. HAND.
    )

  • Covert communication channels are a security risk in a system that has restricted data to which it is trying to limit access. For instance, this would be an issue if you have a multi-user system which contains classified information. This is the kind of thing that is covered by a B2 security rating.

    Covert channels are not a security risk if your concern is keeping the wrong person from getting control of your computer.

    Cheers,
    Ben
  • Here's a real-life example of using a shared resource (a 5 1/4" floppy drive) as a communication channel that has worked well for years. In the IS office that this is in (and I'm not naming names 'cause I know one of their managers reads /.), the main server room is on the far side of a cube farm where the grunts in the IS staff sit. Located in the main server room is a print server with an old 5 1/4" drive that rattles really loudly when you try to access it without a floppy; almost every member of the IS dept has that drive mapped on their desktop machines. It makes the perfect alarm bell: any time management comes into the cube farm and walks down the side aisle, someone will see them and hit refresh on that mapped drive, which rings the alarm in the server room and warns the occupants of management's impending approach... we won't go into why they might need warning... use your imagination. ;)
  • Back in 1980, my high school had a PDP 11/03 for student use. The operating system was an early version of RT11. We had a binary license, and received the operating system on a set of 8 inch floppies. I spent a lot of time on this machine.

    One time I did a disk dump from the raw disk device to the console, and after a few pages of binary gibberish, out came several pages of operating system kernel source code! Apparently, Digital, when it made up the master disk for the operating system binary distribution, reused a floppy that had previously contained operating system source code, and never bothered to zero out the disk. Interesting stuff!
  • isn't this why they use TEMPEST?

    Different problem. TEMPEST is concerned with information leakage via electro-magnetic radiation.

    Covert channels are a concern in multi-level secure systems where information of different classifications and compartments are processed on a single system. A process running in a Top Secret context should not be able to send classified information to a process running in a Confidential context.

  • As a more modern example of the same, I've read all sorts of interesting things opening MS Word documents in a text editor. Usually I was just trying to get the gist of something someone emailed me, but Word's 'fast save' feature often leaves deleted text in the file. This is particularly bad when someone's used a letter to Alice as a template for a letter to Bob. :)

    Doing a fresh 'Save As' was a fine work-around, but most people were shocked when I pointed this out to them.
  • Actually it is explained really well in Tanenbaum's Operating Systems book, on page 437 in the second edition.
  • His site is hosted by best.com, probably on a shell machine with a lot of other sites. The typical account has a 200MB/day, 25k hits/day limit.

    The total amount is compared against the last 24 hours, computed on the hour. Check just after the hour and you may get thru, before he's back over the limit again.

    This is just like any other quota system to keep one user from hogging an unfair share of the available resources, in this case bandwidth, hits or cgi cpu cycles.
  • I don't quite see how you figure that. The means by which something is stored is largely irrelevant to legal proceedings, provided that means can be understood (otherwise, where's the case?). What you're talking about is *hiding* the DeCSS code. While hiding the code may make the MPAA's job harder, it doesn't make the distribution of such code any more legal. I could just as easily take the DeCSS code and tar it, then gzip it, then encrypt it with GPG, and finally name it "wordpad.exe". That would make it pretty hard for anybody (other than the intended recipient) to use the file, but that still doesn't make it "not DeCSS".

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • Wasn't this one way to break passwords on some old systems? You would cycle through the letters one by one, and with a page fault you could know which was the right letter at a given position. This reduced the work greatly.
  • The B2 systems I've played with won't even allow you to cut and paste from one window to another that doesn't have the proper level of privs. Keep in mind this isn't as simple as layers, either.

    Most B2 systems have several user groups, and each is in a different Security Realm. You can be in several Realms, and you may be able to copy info from one area to another, but generally you can't go the other way without downgrading the data, which of course requires other levels of access. Real access control is not easy to do and is very, very complicated.

    When I ran a news server for the AF, I used to feed news to a B2 certified system. I never knew if it got the news or not because it never responded. It had a dedicated hunk of hardware that did TCP/IP and simply acked everything. It was like sending bits off into the ether.
  • While hiding the code may make the MPAA's job harder, it doesn't make the distribution of such code any more legal.
    When I'm engaged in one or more of my multitudinous crimes of sex, drugs, rock and roll, and thoughtcrime, I'm not concerned with whether it's legal; I'm concerned with whether or not armed agents of the state are going to catch and cage me. A crime, after all, is just an act that the state doesn't like, and "legal" has no relationship to "right". (With apologies to Frank Z: Force is not law. Law is not justice. Justice is not ethics. Ethics is not compassion. Compassion is the best.)

    So the question becomes, how can I not get caught? Spreading information via covert channels (the technical term for this sort of communication) makes it very hard for said agents to even suspect my actions, and to gather evidence for a conviction.

    (Note to my fans in domestic surveillance: only kidding, everyone knows I'm a good law-abiding American.)

  • How's that for slashdotting...


    Sorry, temporarily closed

    My internet service provider, Best Internet Communications, limits the number of hits and the bandwidth each web site can consume, and my site is past its limit.

    http://www.multicians.org:80/limit.html
  • If you had a file that was write-only, and you had like a LOT of copies, and you wrote some data to the file, could you verify that you had written different data by timing the page fault? I can't see any real use for this, but it's a new way of thinking about stuff.
  • I have requested temporary relief from my hit quota. Best.com was pretty quick last time I was slashdotted, hope it's quick this time. They put in tiered pricing a few years back when people put up porn sites on Best's servers: rather than censor, they charge for bandwidth. I've been well served at the lowest tier.
  • This article probably doesn't interest most slashdotters, because the OSes that we use aren't designed to protect against these kinds of things. This, of course, stems from the fact that the situations in which we use our systems do not require us to segment our users and prevent them from communicating.

    Actually, this doesn't affect most Slashdot users because the operating systems they use have such lousy security that there are far easier attacks than using a covert channel. This applies to all the UNIX variants and all the Microsoft products. (Yes, it does. If it didn't, there wouldn't be new CERT advisories every week.)

    There are solutions to the covert channel problem, but people hate them. Applications have to be denied access to an accurate timebase, which means no clock with resolution finer than a second, deliberate jitter introduced into the CPU dispatching, and some random delay added to I/O operation responses. Forget playing games or multimedia. (There's a toy sketch of the clock part at the end of this comment.)

    The big win is that in a system with mandatory security and covert-channel protection, even if a virus or trojan horse gets access to secured data, it can't get the data out of the box.

    I used to develop secure operating system kernels for a DoD contractor. The current state of operating system security is worse than it was twenty years ago. Users will choose animated penguins over security. What passes for "computer security" today is mostly aimed at keeping amateurs from blatantly interfering with servers. And it can't even do that successfully. A serious attack is much more subtle.

    In the DoD world, one worries about two main threats. The intelligence community worries about the attacker obtaining some information they have without an alarm being raised. The military worries about the attacker interfering with their systems at a time chosen by the attacker, probably at a key moment of a military operation. The attacker is assumed to have enough resources to duplicate the systems being attacked and to practice on them. There's also the assumption that physical and computer intrusions might be used together.

    Enough rant for now. Go read some NSA evaluations of security products. [ncsc.mil]
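    For concreteness, that fuzzed clock might look something like this (Python; the one-second resolution matches the suggestion above, the jitter budget is invented):

      import random
      import time

      RESOLUTION_S = 1.0     # no resolution finer than a second
      MAX_JITTER_S = 0.25    # hypothetical jitter budget

      def fuzzed_time() -> float:
          """What a covert-channel-hardened kernel might hand untrusted
          code: the real clock, quantized and perturbed, so fine-grained
          measurements (cached vs. uncached page reads) become noise."""
          real = time.time()
          quantized = round(real / RESOLUTION_S) * RESOLUTION_S
          return quantized + random.uniform(-MAX_JITTER_S, MAX_JITTER_S)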

  • Countries who care what the US is up to always know when something important is about to happen, because the number of late-night pizzas delivered to offices at the Pentagon and State Department skyrockets.

    It doesn't seem too covert, though. After all, the spy school instructors knew all about it and it was a common joke throughout the US intelligence community.

    This actually sounds like just a special case of traffic analysis. Traffic analysis is one thing that you can do if you can intercept but not decrypt your enemy's transmissions. When his signal traffic increases suddenly, you know something is up.

    If you really want to be paranoid about signal security, you have to send out a background of boring messages at least as big as your expected peak traffic. You could, for instance, send out messages that said "Ignore me" padded with whitespace (and then encrypted) at periodic intervals. When you actually want to send a real message, you'd substitute your real message for one of the fake ones. Of course this is something that computers are very good for; they can mimic the length patterns of real messages and automatically ignore the bogus received messages.

    By analogy, the State and Defense departments could constantly order late night pizzas even when nothing interesting was going on. A more practical alternative would be to have a supply of frozen pizzas and an oven, or even an in-house pizzeria. Sounds like a good excuse to me.
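    The padding loop is simple to sketch (Python; the interval, message size, and send callback are all invented, and in practice everything would be encrypted so filler and payload are indistinguishable on the wire):

      import queue
      import time

      INTERVAL_S = 60.0    # fixed cadence, hypothetical
      MSG_SIZE = 256       # fixed length hides message size too

      outbox = queue.Queue()   # real messages get dropped in here

      def padded_sender(send):
          """Emit exactly one fixed-size message per interval: a queued
          real message if there is one, otherwise filler. An eavesdropper
          sees the same traffic pattern either way."""
          while True:
              time.sleep(INTERVAL_S)   # never let real traffic drive timing
              try:
                  msg = outbox.get_nowait()
              except queue.Empty:
                  msg = b"Ignore me"   # the bogus message described above
              send(msg.ljust(MSG_SIZE)[:MSG_SIZE])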

  • until the sysop gets his bandwidth straightened out, you can get the referenced doc at:

    ftp://ftp.stratus.com/pub/vos/multics/tvv/timing-chn.html

    bah. why repost an article when a URL will do?

  • by Anonymous Coward on Wednesday March 22, 2000 @05:36PM (#1181397)
    Proof-Carrying Code is a much better solution than signed binaries. Signed binaries still require you to trust the developer, which in many cases is still not a good idea. See here for more information:

    http://www.cs.cmu.edu/~petel/papers/pcc/pcc.html [cmu.edu]
  • by Stan Chesnutt ( 2253 ) on Wednesday March 22, 2000 @04:48PM (#1181398) Homepage
    I'm really heartened to see that folks are willing to read about discoveries made in the past, learn from mistakes that have been committed before, and stand upon the shoulders of earlier generations, rather than standing in their footprints.

    I was able to use Multics for four years in college, and that experience gave me more insight and wisdom than any four years spent with Unix, Windows, or MacOS.

    (In the areas of system design, that is! The only graphics expertise I gained from Multics was how to do simple bitmap images using the lineprinter!)

    An unapologetic former Multician,
    Stan Chesnutt
  • It had certain instructions that would only run in the lowest ring

    ...and, at least in the older processors only in a segment tagged as a segment whose code should run in "master mode"; even most of the ring 0 code couldn't execute privileged instructions - it could only call small routines in one of those segments (said segments were, presumably, inaccessible outside of ring 0). The entry for "privileged mode" in the Multics site (well, mirror site at Stratus, anyway) glossary [stratus.com] says:

    The GCOS versions of the Multics processors implemented a traditional one-bit hardware user/supervisor distinction: the CPU privileged mode ("master mode" on the 645) enabled system-control instructions, which were otherwise illegal. In Multics, privileged became a bit (see REWPUG) in the [Segment Descriptor Word], effectively ANDed with ring of execution being ring 0. Very few supervisor segments had this bit. Today, modern processors only have rings.
    some of the kernel functionality was actually implemented in the processor's hardware and microcode

    Hardware and/or microcode - the GE 645 and 6180 had no microcode; the Multics site glossary entry for the 6180 [stratus.com] says:

    The 6180 processor was among the last of the great non-microcoded engines. Entirely asynchronous, its hundred-odd boards would send out requests, earmark the results for somebody else, swipe somebody else's signals or data, and backstab each other in all sorts of amusing ways which occasionally failed (the "op not complete" timer would go off and cause a fault). Unlike the PDP-10, there was no hint of an organized synchronization strategy: various "it's ready now", "ok, go", "take a cycle" pulses merely surged through the vast backpanel ANDed with appropriate state and goosed the next guy down. Not without its charms, this seemingly ad-hoc technology facilitated a substantial degree of overlap (see Operations Unit) as well as the "appending" (in the dictionary sense) of the Multics address mechanism to the extant 6000 architecture in an ingenious, modular, and surprising way (see Appending Unit). Modification and debugging of the processor, though, were no fun.
  • by rrogers ( 48345 ) on Wednesday March 22, 2000 @06:16PM (#1181401)
    What the hell, I'll respond to myself. Seeing how the site is probably permanently down (at least till his next billing cycle) I'll cut and paste here. There were 2 diagrams on the site, but you should be able to understand without them.


    Tom Van Vleck

    About 1976, I was explaining Multics security to a colleague at Honeywell. (This was before it got the B2 rating, but the mandatory access control mechanism was present.) I explained the difference between storage channels and timing channels, and said that timing channels weren't important because they were so slow, noisy, and easily clogged that you couldn't send a signal effectively.

    My friend, Bob Mullen, astounded me a few days later by showing me two processes in the terminal room. You could type into one and the other would type out the same phrase a few seconds later. The processes were communicating at teletype speed by causing page faults and observing how long they took. The sender process read or didn't read certain pages of a public library file. The receiver process read the pages and determined whether they were already in core by seeing if the read took a long or short real time. The processes were running on a large Multics utility installation at MIT under average load; occasionally the primitive coding scheme used would lose sync and the receiver would type an X instead of the letter sent. If you typed in "trojan horse," the other process would type "trojan horse" a little later, with occasional X's sprinkled in the output.

    Bob pointed out that noise in the channel was irrelevant to the bandwidth (Shannon's Law). Suitable coding schemes could get comparable signalling rates over many different resource types, despite countermeasures by the OS: besides page faulting, one could signal via CPU demand, segment activation, disk cache loading, or other means.

    When I thought about this I realized that any dynamically shared resource is a channel. If a process sees any different result due to another process's operation, there is a channel between them. If a resource is shared between two processes, such that one process might wait or not depending on the other's action, then the wait can be observed and there is a timing channel.

    Closing the channel Bob demonstrated would be difficult. The virtual memory machinery would have to do extra bookkeeping and extra page reads. To close the CPU demand channel, the scheduler would have to preallocate time slots to higher levels, and idle instead of making unneeded CPU time available to lower levels.

    Bob and I did not compute the bandwidth of the page fault channel. It was about as fast as electronic mail for short messages. A spy could create a Trojan Horse program that communicated with his receiver process, and if executed by a user of another classification, the program could send messages via timing channels despite the star-property controls on data.

    The rules against writing down were designed to keep Trojan Horse programs from sending data to lower levels, by ensuring that illegal actions would fault. In this case, the sender is doing something legal, looking at data he's allowed to, consuming resources or not, and so on. The receiver is doing something legal too, for example looking at pages and reading the clock. Forbidding repeated clock reading or returning fake clock values would restrict program semantics and introduce complexity.

    I believe we need a mechanism complementary to the PERMISSION mechanism, called a CERTIFICATION mechanism. This controls the right to Execute software, while permission controls the Read and Write flags. A user classified Secret is not allowed to execute, at Secret level, a program certified only to Unclassified. Programs would only be certified to a given level after examination to ensure they contained no Trojan Horses. This kind of permission bit handling is easy to implement, and if the site only certifies benign programs, all works perfectly.

    How do we examine programs to make sure they have no Trojan Horses? The same way we examine kernels to make sure they don't. Examining application code to ensure it doesn't signal information illegally should be easier than the work we have to do on a kernel.

    Do we need to forbid write down for storage, if we allow write down via timing channels? It's inconsistent to forbid some write down paths and allow others. The main reason for preventing write down is not to stop deliberate declassification (a spy can always memorize a secret and type it into an unclassified file); it's to protect users from Trojan Horse programs leaking information. The certification approach solves this problem a different way, by providing a way to prevent execution of Trojan Horse programs.

    Who should be allowed to certify a program to a given level? I think anyone who can create a program to execute at level X should be able to take a lower level program and certify it to level X. We can't stop a malicious user from creating a transmitter program, unless we forbid all program creation at level X. Other policies are possible; a site could centralize all program certification as a security officer responsibility. In any case, the OS should audit every certification and store the identity of the certifier as a file attribute.

    [1] Honeywell, Multics Security Administration manual ####.

    [2] Bell, D. E. and L. J. LaPadula, Computer Security Model: Unified Exposition and Multics Interpretation, ESD-TR-75-306, Hanscom AFB, MA, 1975 (also available as DTIC AD-A023588).

    [3] Bell, D. E., and L. J. LaPadula, Secure Computer Systems: Unified Exposition and Multics Interpretation, Mitre Technical Report MTR-2997, rev 2, March 1976.

    [4] Biba, K. J., S. R. Ames, Jr., E. L. Burke, P. A. Karger, W. R. Price, R. R. Schell, W. L. Schiller, The top level specification of a Multics security kernel, WP-20377, MITRE Corp, Bedford MA, August 1975.
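    To make the probe concrete, here's a toy version of the receiver trick on a modern Unix rather than Multics (Python; the shared file, slot timing, and latency threshold are all invented, and it sidesteps the resync problem Bob's demo had to handle by burning a fresh page per bit, since the receiver's own probe pulls each page into core):

      import os
      import time

      LIBRARY = "/tmp/public_library"  # hypothetical world-readable file
      PAGE = 4096
      SLOT_S = 2.0          # one bit per slot; both sides watch the clock
      THRESHOLD_S = 0.001   # guess: in-core read vs. read that hits disk

      def send(bits):
          """Sender: reading page i pulls it into core (a 1); doing
          nothing leaves it out of the cache (a 0)."""
          fd = os.open(LIBRARY, os.O_RDONLY)
          for i, bit in enumerate(bits):
              if bit:
                  os.pread(fd, PAGE, i * PAGE)
              time.sleep(SLOT_S)
          os.close(fd)

      def receive(n):
          """Receiver: time its own read of each page; a fast read means
          the sender just touched it."""
          fd = os.open(LIBRARY, os.O_RDONLY)
          bits = []
          for i in range(n):
              time.sleep(SLOT_S)   # let the sender go first in each slot
              t0 = time.perf_counter()
              os.pread(fd, PAGE, i * PAGE)
              bits.append(1 if time.perf_counter() - t0 < THRESHOLD_S else 0)
          os.close(fd)
          return bits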
  • by cranq ( 61540 ) on Wednesday March 22, 2000 @07:06PM (#1181402)
    Youbetcha. I never used Multics, but I read a book on its design. Those folks thought things THROUGH. They had plans for hot-swapping CPUs (back when they were big as houses), capability-based security (I think, it's been a while) and good stuff like that.

    In many ways we of the Present have NIH syndrome. It can't be good unless it's new. But you know what they say...

    A month in the lab can often save an afternoon in the Library.

    Say, whatever happened to tagged storage?



    Regards, your friendly neighbourhood cranq
  • by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Wednesday March 22, 2000 @05:50PM (#1181403)
    A B2 secure rated system requires that a program authorized to read sensitive data must not be allowed to *give* that data to a process with lower clearance. In theory, a trojan which infiltrated the secure levels of a system would possibly be able to read/modify/destroy that data (though other mechanisms exist to prevent that), but not communicate it to an untrusted party. However, with a covert channel such as this, that protection doesn't work.
  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Wednesday March 22, 2000 @05:27PM (#1181404) Homepage
    If anyone's curious, more info on confinement and covert channels is here:

    at erights.org [erights.org]

    The computer security fact forum [caplet.com]
  • by Morgaine ( 4316 ) on Wednesday March 22, 2000 @05:42PM (#1181405)
    They're called "covert channels" in secure systems work / Orange Book terminology, and it's extraordinarily difficult (actually, impossible) to eliminate them totally --- a case of asymptotic approximation, law of diminishing returns, etc..

    The classic tradeoff between probability of detection and bandwidth of the covert channel operates here, but if the amount of data which needs to be communicated is low then the trojan spook always wins against the guardian.

    On the other hand, detection of the covert communication is an interesting problem in its own right, because it boils down to the problem of telling the remote party exactly where to find the significant data among gigabits of obfuscation --- effectively the same problem as key exchange in crypto!
  • by jabber ( 13196 ) on Wednesday March 22, 2000 @07:14PM (#1181406) Homepage
    The original site is closed due to bandwidth quota restriction.

    See alternate location [stratus.com].
  • by Anonymous Coward on Wednesday March 22, 2000 @04:56PM (#1181407)
    Rather than shared resources, how about programs posting to Slashdot then reading it:

    Program A writes to a random thread [slashdot.org].

    Program A-2 loads the thread, and moderates up the post, using karma from its auto-"Linux is not socialist" post feature (such posts are always informative).

    Program B (the receiver) loads the thread likewise, using Highest Scores First, and reads the posts marked up to two. It then leaves a "Linux is socialist" post. Invariably, a moderator will find this, take offence, and moderate it down.

    Program A will, by continuously re-loading the thread, notice the new (Score: -1, Troll) post, and will be assured that the communication was successful. Program execution will then continue as normal.

    Hmmm...maybe I should patent it.
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Wednesday March 22, 2000 @05:11PM (#1181408)

    Yes, there was a paper by Butler Lampson on system design (it was in the proceedings of some conference, I think, published in an issue of some ACM SIG's journal; alas, I no longer have the journal issue in question) about that.

    As I remember, this was on TENEX, and was an unintended consequence of a feature allowing an application to catch (or, at least, be notified of) page faults - or it may have been protection faults (e.g., SIGSEGV or SIGBUS on UNIX) - from userland.

    Files, if I remember the paper correctly, could be protected with a password in TENEX, and you specified the password in the call to open the file.

    Normally, you'd have to try every combination of characters - a full brute-force password search.

    However, the code that did the password checking would, if I remember the paper correctly, fetch a character of the password, check it, and either fail immediately or continue on fetching the next character and checking it.

    You could then try opening the file with an N-character password, with the second character placed so that the fault in question would occur if the system tried to fetch it (i.e., either on the next page, or not in the address space if the fault wasn't just a "page me in" fault). You'd start with the first legal character value for all characters.

    If the attempt to open the file succeeded, you win.

    If it fails without a fault, you know that it didn't bother trying to fetch the next character, and therefore you know that the first character wasn't the right one; you'd then replace it with the next character value, and try again.

    If it fails with a fault, you know that it did try to fetch the next character, and therefore you know that the first character is the right one. You then set up a password with that character as the first character, and with the second character being the first legal character value, and with the third character being on the next page, and repeat the process with the second character.
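    The search logic is easy to simulate (Python; open_with_probe fakes the TENEX check, with the byte just past the guess sitting on an unmapped page, and SECRET stands in for the stored password):

      SECRET = "s3cret"
      ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

      def open_with_probe(guess):
          """A mismatch fails before the check reaches the page boundary;
          a guess matching a proper prefix makes the check fetch the next
          character and FAULT."""
          for i, ch in enumerate(guess):
              if i >= len(SECRET) or SECRET[i] != ch:
                  return "FAIL"
          return "SUCCESS" if guess == SECRET else "FAULT"

      def crack(max_len=39):
          known = ""
          for _ in range(max_len):
              for ch in ALPHABET:
                  result = open_with_probe(known + ch)
                  if result == "SUCCESS":
                      return known + ch
                  if result == "FAULT":   # check read past ch: ch is right
                      known += ch
                      break
              else:
                  return None   # password uses characters outside ALPHABET
          return None

      print(crack())  # ~length * alphabet tries instead of alphabet ** length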

  • by bug ( 8519 ) on Wednesday March 22, 2000 @06:18PM (#1181409)
    This article probably doesn't interest most slashdotters, because the OSes that we use aren't designed to protect against these kinds of things. This, of course, stems from the fact that the situations in which we use our systems do not require us to segment our users and prevent them from communicating.

    In the DoD, for instance, there are situations where you would want to do this. You don't want to allow someone viewing Top Secret data to have their information leaked to someone who isn't cleared for it.

    This is why Mandatory Access Controls were invented. A lot of slashdotters have probably heard of the Orange Book, in which the DoD laid out a method of classifying the security model of computer systems. Unix variants are roughly C-1'ish, which DoD doesn't even certify anymore. OSes with ACLs and such (like NT) are roughly C-2'ish (now whether NT actually gets the job done or not is up for debate). Once you get to the B and A levels, you have Mandatory Access Controls. They are designed to prevent one user from leaking classified data to another person. In a MAC system, you should not be able to "chmod" or "chown" a file to allow someone else to view it.

    It goes a little bit deeper, though. You also need to protect against more subtle methods of communication, called covert channels. In a covert storage channel, a user would fill up a disk and another user would be able to tell, or one would write to a file that another had access to read, or one would twiddle a file lock. In a covert timing channel, one user would perform a CPU intensive process and the other user would be able to tell that the responsiveness of the system changed, or the user would perform an I/O intensive application and the other user would be able to tell that his own disk accesses were more or less responsive. In this way, users can communicate via manipulating shared resources.

    It's not very sexy, but it is something that DoD and the intelligence community in particular care very much about. As you can guess, these aren't easy problems to solve (if they are solvable at all).
  • by Deven ( 13090 ) <deven@ties.org> on Wednesday March 22, 2000 @07:11PM (#1181410) Homepage
    Since the site linked from the story appears to have been slashdotted, here is an alternate link to the same story. [city.ac.uk]
