A common security pattern, found in secure and insecure systems alike (ObjectCapabilityModel vs AccessControlList), and one that must be built in at a low level for OrthogonalSecurity.

Permission flags refers to using a generic set of permissions to control access to meta-operations. Extensible permission flags checked by user-defined operations give you a general system. Trivalent flags (set, unset and absent) give you a completely uniform system. Set and unset are familiar; absent means "the permission flag collected by the message so far gets to stay the same", as defined in GrandUnifiedCapabilities.

In any object graph there are operations whose meaning can be defined with respect to the object graph alone: adding a node, removing a node, adding or removing a link, traversing a link, changing the permissions on a link, watching for operations on a link, and so on. It should be obvious that an abstraction which exists at the level of the object graph must be implemented ''at that level'', i.e. that you must BuildSecurityAbstractionsIntoCapabilities.
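A rough sketch of one plausible reading of the above, in Python. The names (Flag, collect, flag_on), the deny-by-default starting value, and the exact folding rule are illustrative guesses for this page, not anything defined by GrandUnifiedCapabilities:

 from enum import Enum

 class Flag(Enum):
     SET = "set"
     UNSET = "unset"
     ABSENT = "absent"   # "the flag collected by the message so far stays the same"

 def collect(path_flags, initial=Flag.UNSET):
     """Fold one permission's trivalent flags along a chain of caps.
     SET and UNSET replace the collected value; ABSENT leaves it alone.
     Starting from UNSET (deny by default) is an assumption of this sketch."""
     collected = initial
     for f in path_flags:
         if f is not Flag.ABSENT:
             collected = f
     return collected

 # A cap carries an open-ended dictionary of named flags, so user-defined
 # operations can check permissions nobody anticipated when the cap was
 # made: a name the cap doesn't mention simply reads as ABSENT.
 def flag_on(cap, name):
     return cap.get(name, Flag.ABSENT)

 # Does a message arriving over these caps get to traverse the last link?
 path = [{"traverse": Flag.SET}, {}, {"traverse": Flag.UNSET}]
 print(collect(flag_on(cap, "traverse") for cap in path))   # Flag.UNSET
 path = [{"traverse": Flag.SET}, {}, {}]
 print(collect(flag_on(cap, "traverse") for cap in path))   # Flag.SET

Because a name a cap doesn't mention simply reads as absent, a brand-new user-defined permission can be checked without touching any existing caps - which is the sense in which the flags are extensible.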
----
'''Confinement'''

I've never been interested in confinement since it doesn't admit of a universal theory, as near as I can tell.

* ''It's quite feasible to formally model the reachability constraints imposed by [authority] confinement. Shapiro & Weber's paper "Verifying the EROS Confinement Mechanism" (http://www.eros-os.org/devel/00Devel.html) does that, for example.''

* See, that's exactly the kind of crap that having an override permission eliminates. The weaken attribute is completely illegitimate crap that was introduced for the sole purpose of removing duplication. Apparently the authors couldn't tolerate the duplication that not having override caused, but they were too stupid to invent override itself. So what they did was invent an inferior and broken concept that prevents ''perfectly legitimate behaviour'', and then congratulated themselves on how safe that makes their system. Bleh! Besides, you can't prove confinement. To do so, you'd need to prove that all covert communication channels (e.g., causing page faults or any other visible resource slowdown) are eliminated. And that simply has nothing to do with caps.

* ''The paper is about authority confinement, not information confinement. This ''is'' useful in capability systems.''

I think I understand the point being made, though: you can demonstrate that an agent without any ties to the outside world is contained, and you can demonstrate that an agent can't pass an independent cap to an outside agent. Covert channels simply aren't relevant to this. What you absolutely can't do is prove that the "contained" agent isn't acting as a proxy for an outside agent, communicating via a covert channel. http://nokia.cwillu.com/Graph.png

"In this paper, we will ignore issues of covert channels. While important, reducing such channels is of interest only when it has been shown that overt channels have been closed." ("Verifying the EROS Confinement Mechanism", http://www.eros-os.org/devel/00Devel.html, footnote on page 2)

The point is that authority confinement through normal channels isn't enough to rely on. Any time you lack ''information confinement'' and there is the possibility of a proxy (untrusted agent, contained or otherwise), you no longer have authority confinement. Their point is that covert channels are the least of your worries when you can't prove confinement through normal channels.

Now, it's quite possible to analyse the covert channels of a given system and close them. What you basically can't do (and I'm oversimplifying) is close them while maintaining efficiency in a system with many distinct, mutually suspicious agents. A hole that allows only one bit per week to be communicated is enough to completely break security in some applications. A pre-emptive OS allowing contention for such things as drive spindles, the PCI bus, network bandwidth, display hardware, crypto access, etc., has enough covert channels that this is a legitimate concern.

----
'''Unix's permission flags'''

Naturally, Unix's permissions are horribly useless. For example, execute is meaningless for almost every file.

[You're of course correct that Unix permissions have problems, but it's not true that execute is meaningless in general. In fact, it's as meaningful as it gets: it indicates (usually to the shell) that a file can be profitably executed. It might be a shell script or something, and a "#!" marker in the first two characters is not always a reliable flag, because a kernel module might know how to execute some random thing like a Java jar file. Furthermore, even a regular a.out executable, which definitely contains executable code, is still just a file, and must be marked X to indicate that it can be executed. Or not, if e.g. the compiler decides that the compilation process isn't complete for some reason. So although at first glance X doesn't seem to make sense to a user for random text files, it actually has many, many uses, and can be adapted to new uses (Apache uses it to indicate which HTML files contain executable template code).]

''The Unix permission flag seems to function more as type metadata ("is this file executable?") than as security metadata ("is the user/process authorized to run this file?").''

[That's not quite it. It's that Unix blurs the two things together. The X permission can be viewed as functioning as both security '''and''' type metadata.]

''Ignoring SetUid programs, programs in Unix are executed with the privileges accorded to the user - blocking execution of programs isn't, in general, a good way of restricting a user's activities on the system.''

[That's a value judgement, not a technical judgement. Perhaps you mean "there are better ways of restricting a user's activities, such as XYZ" - and then specify the XYZ.]

''Most technically skilled users can compile their own copy of any given utility (and grant themselves execute privileges) should they need to.''

[I don't follow the intent of this comment at all. This is untrue if you're talking about security privileges, since users can't grant themselves root; and if you're not talking about security, but rather about how X helps the system function nicely, then I don't see what's relevant about this comment. Technically skilled users who are not intent on breaking security can... do everything that technically skilled users can do. Yes, so?]

First, execute is meaningless in general. The fact that it means something only to the shell, the kernel, or whatever is ''precisely'' why it is meaningless in general: it has no meaning with respect to intrinsic graph semantics. Worse than that, execute is nothing more than a specialized form of read, which is exactly what the above author was getting at with "technically skilled users can compile their own version of a utility". If you can read the source, you can execute the program. If you can read even just the executable, you can make your own copy of it. So having read but not execute doesn't change the semantics at all; it just creates more hoops for users to jump through, which is the distinctive bad smell of a broken system.
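To make that concrete: anyone who can read a file can copy its bytes somewhere they control, mark the copy executable, and run it. A small sketch in Python (the paths and the function name are made up for the example):

 import os, shutil, stat, subprocess

 def run_without_x(readable_path, scratch_path="/tmp/mycopy"):
     """The original file is readable but not executable.  Copy the bytes
     (needs only read access), mark our copy executable, and run that."""
     shutil.copyfile(readable_path, scratch_path)
     os.chmod(scratch_path, os.stat(scratch_path).st_mode | stat.S_IXUSR)
     return subprocess.run([scratch_path])

 # run_without_x("/some/readable/but/not/executable/program")

For interpreted files the copy isn't even necessary: a readable script can be handed straight to its interpreter (sh script, python script), execute bit or not.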
Second, a permission flag system must be extensible, allowing the creation, setting and propagation of arbitrary permissions. At best, execute is one of those extensible permissions and not a standard permission at all. In actuality, within a well-designed OS, execute doesn't exist, period. The concept of a shell or GUI "executing" an object at the user's command is a holdover from an imperative-language mentality, which is at least 20 years obsolete in our OO world. From the user's perspective, there is absolutely no difference between "executing" a Word document by getting it interpreted by MS Word and "executing" an a.out file by getting it interpreted by the shell / linker / scheduler / whatever. In the OO worldview, instead of commanding the execution of a search script, you just clone a new search object from your toolbox of blank objects. As a consequence, searches and search results must be first-class objects. And certainly, users never command the proper representation of an object graph; when they get near it, it should be properly represented, and that's all there is to it!

Third, static types are evil. If you want type information, it's contained in the name of the link to the object. Why? Because the name of the object and its type information are ''one and the same thing'' in a dynamic environment. Ask yourself "who can legitimately read, suppress, change, or add name information?", then ask yourself the same thing about type information, and you get identical answers. For instance, an object can have multiple different types depending on where you access it from, just as it can have multiple different names. So in a dynamically typed system, names and types are the same thing; they're just a tiny hint to the system that the user wants it to TRY to interpret the subgraph as something specific. And of course, a statically typed OS is an absurdity beyond imagining; something only a C-freak would even consider.

Fourth, everyone here is completely clueless about anything even remotely connected to security. The previous confused rambling proves it. -- RK

----

I'm going to make one stab at getting Richard to share some of his ideas, because I have this uneasy suspicion that some of them may be interesting.

* My similar question to him and his answer was long ago moved to GeneralCapabilityModel; take a look. -- dm

Richard, if you want to do this Q/A form, let me toss one back at you: "how do you use extensible generic permissions to implement encapsulation of objects?" Let's try to start this at a high level and bring it down to individual flags, for some specific case, so we can all see how this works. -- AdamBerger

Uhhh, it's pretty difficult to describe a specific case without a lot of diagrams and junk, and the abstract idea I've mostly described in GrandUnifiedCapabilities, maybe TwoKindsOfCapabilities too. The basic idea is that while you keep caps with override absent (or set) to the object (the top node of the subgraph), you only hand out caps with override unset. What that means is that the permissions you put on any caps within the object don't matter to you, because they're overridden. But they do matter to everyone you've handed a limited cap (override unset) to. So you use that. You unset read on any caps leading to parts of the object you don't want anyone to even go to, you unset create, delete and modify everywhere since you don't want anyone else to modify the object, and so on.

Now here's the fun part. Since you don't want to duplicate yourself everywhere you go, you set all the downward (override absent) perms in the subgraph to absent and all the upward-going (override unset) perms to unset. Then you just set the perms once at the top node of the subgraph (the object's subpart) to whatever you want, and the caps lower down in that subgraph will seem to inherit them.
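A rough sketch of one way this could be read, with hypothetical names (Cap, Flag, effective). It glosses over the upward/downward distinction and simply treats a cap with override set or absent as unconstrained by interior flags:

 from enum import Enum

 class Flag(Enum):
     SET = "set"
     UNSET = "unset"
     ABSENT = "absent"

 class Cap:
     """One link in the object graph: an extensible dict of trivalent
     permission flags, plus the override flag, which is just another flag."""
     def __init__(self, flags=None, override=Flag.ABSENT):
         self.flags = dict(flags or {})
         self.override = override

 def effective(perm, path):
     """Effective value of 'perm' for a message that enters on path[0]
     (the cap you were handed) and follows the rest of 'path' down into
     the object's subgraph."""
     entry = path[0]
     if entry.override is not Flag.UNSET:
         # The owner's cap: override set (or absent) means the flags written
         # on interior caps are overridden.  Simplified here to "allow all".
         return Flag.SET
     collected = Flag.UNSET                       # deny by default (assumption)
     for cap in path:
         f = cap.flags.get(perm, Flag.ABSENT)
         if f is not Flag.ABSENT:                 # absent = inherit what's collected
             collected = f
     return collected

 # The owner keeps 'keep' and hands out 'handout'.  The perms are decided
 # once, on the cap at the top of the object's subpart; deeper caps leave
 # everything absent and so appear to inherit them.
 keep    = Cap(override=Flag.SET)
 handout = Cap(override=Flag.UNSET)
 top     = Cap({"read": Flag.SET, "modify": Flag.UNSET})
 deeper  = Cap()

 print(effective("read",   [handout, top, deeper]))   # Flag.SET
 print(effective("modify", [handout, top, deeper]))   # Flag.UNSET
 print(effective("modify", [keep,    top, deeper]))   # Flag.SET

With this reading, changing a flag once on the top cap of the subpart changes what every holder of a limited cap can do anywhere below it, since the deeper caps leave the flag absent and inherit whatever was collected above them.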
See GeneralCapabilityModel

CategorySecurityPatterns