Fact of the week
Since the 1950s the prevalent security model for living organisms has been that of an immune system which distinguishes between self and non-self, i.e. between what is part of the body and what is an invading organism. Although its meaning is still controversial, this model is clearly insufficient to secure the body from life-threatening illness. Cancers and liver/kidney failure all arise from self. Likewise, the intake of food (which becomes self) can kill (e.g. through heart disease) unless it is monitored.
The ability to abstract and associate ideas is the key to our understanding. When we simply collect facts and observations, like stamp collectors, then we have nothing more than a dull list of cases for our troubles. It is only when we begin to step back from a problem and think about how the pieces fit into a larger puzzle that we derive meaning from individual facts and observations. That is our purpose in this week's lecture.
What could we possibly mean by a model for security? In science, we use models to idealize the real world, to simplify it and to systematize it. A model is a kind of coarse interpretation of the world as we see it. The point about model making is that, by grouping together similar ideas and cases into general qualities (i.e. by making qualitative generalizations), we reduce a wide range of apparently different things into special cases of a single general thing.
In security, we should perhaps be wary of this procedure. After all, computer security breaches are nothing if not a litany of exploitations of weaknesses caused by a lack of attention to detail. Here we are arguing for less detail and more generality. Which view is right?
Clearly both generality and detail are important. Think of every model as a computer program and we see that we need a high level part, formed from general subroutines in order to understand the structure of the problem. Then we also need to flesh out the low-level details to make those details precise and to cover all of the special cases. But the wisdom behind structured programming is precisely the wisdom in model building: we should not mix up these two complementary viewpoints, nor sacrifice one for the other. (We shall be returning to structured programming next week to consider it as a model for software security.)
In order to capture all of the levels of a discussion, we need abstractions but we also need a more formal language. If we pay attention to formal language (like mathematics) then we actually have the possibility of proving that a security model is secure. Here we shall only have time to illustrate the idea.
We should distinguish between a model for security and a security policy.
A security model needs to cover the basic points, which we have already discussed on a number of occasions:
- Model: a security model is an appraisal of what security means, what it should cover, what methods we should provide/use to achieve security. A model is a framework for understanding and solving the problem of security for a particular purpose.
- Policy: A security policy is an attitude to security. Regardless of what methods are available for securing a system, we also need to define how much security is required, i.e. how and when we should apply a model for security.
Security is impossible without attention to detail. We need to look after computer systems with:
- Privacy (Access control)
- Integrity (Corruption)
- Authentication (hence identity)
- Trust (the most subtle issue)
There is a general tendency in any ordered system to become disordered, or succumb to entropy. This is because the probability of a random change destroying something is much higher than the probability that a random change will do something good. Countering this tendency requires continual, active maintenance, for example:
- Host configuration (e.g. cfengine)
- Intrusion detection (e.g. Network Flight Recorder, log files)
Finite state machines

There is a concept in (computer) science which is used repeatedly to describe systems which change over time. This view is primarily a geometrical picture of the state of a system. Suppose we have a set of variables (a,b,c,d,...) which describe what a system is doing. This set of variables labels a point in a vector space. Usually the variables have discrete values: yes/no, read/write/execute, ready/waiting etc. Thus each variable has only a certain number of values. We can represent the complete set of variables as a vector which points to a position in a lattice. We call that position the state of the system.

[Figure: a cube in the state lattice, with one corner at the origin (0,0,0) and the opposite corner at (x,y,z) = the state of the system.]

The state of a system changes with time. If the states include information about access rights, the contents of important files etc., then clearly some states are secure and others are insecure. This tells us that basic management of attributes (e.g. cfengine) is the first step to a secure system.
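The state-vector picture above can be sketched in a few lines of code. This is a minimal illustration, not part of the formal theory: the variables and the security predicate below are invented for the example.

```python
# A sketch: the state of a system as a point in a lattice of discrete values.
from itertools import product

# Each variable takes only a small set of discrete values.
VARIABLES = {
    "mode": ("read", "write", "execute"),
    "logged_in": (True, False),
    "world_writable": (True, False),
}

def all_states():
    """Enumerate every point (state vector) in the lattice."""
    names = list(VARIABLES)
    for values in product(*(VARIABLES[n] for n in names)):
        yield dict(zip(names, values))

def is_secure(state):
    """An example policy: no world-writable objects while nobody is logged in."""
    return not (state["world_writable"] and not state["logged_in"])

insecure = [s for s in all_states() if not is_secure(s)]
```

Some points in the lattice satisfy the policy and some do not; managing which state the system sits in is exactly the attribute management mentioned above.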
Access matrices

Suppose we have n subjects (S) (i.e. users or clients) and m objects (O) (files, processes etc.). An access matrix M(S,O) is a description of the access permissions of each subject to each object:

                          Objects
                 | m(1,1)  m(1,2)  m(1,3) .. |
        M(S,O) = | m(2,1)  m(2,2)  ..        |  Subjects
                 |   ..                      |

e.g. m(s,o): s=mark, o=/etc/passwd, m(s,o)=read.
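The matrix is sparse in practice, so a nested dictionary is a natural sketch of it. The subject and object names below come from the example above; everything else is illustrative.

```python
# A sketch of the access matrix M(S, O) as a nested dictionary.
access_matrix = {
    "mark": {"/etc/passwd": {"read"}},
    "root": {"/etc/passwd": {"read", "write"}},
}

def allowed(subject, obj, operation):
    """m(s, o) is the set of operations subject s may perform on object o."""
    return operation in access_matrix.get(subject, {}).get(obj, set())
```

With this default-deny layout, any (subject, object) pair not listed in the matrix is simply refused.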
Enemies of security

There are a few things which hinder our ability to create secure systems. These are real-world considerations:
- Complexity: users will become impatient and work around security (e.g. ACLs).
- The need for backward compatibility in software (e.g. DOS filesystems in NT).
- Backups: redundancy means more opportunities for theft, so backups need security equal to that of the originals.
Gollmann describes several models in formal terms. Here is a brief summary of some of the important points. You should read chapter 4 for more details.
Bell-LaPadula model (BLP)

This is a formal description of a system with static access control, i.e. privacy. It tells us nothing about integrity or trust. In order to describe this security model as a state space, we need to specify the variables which characterize the system. The system consists of:
The current set of states will be described in terms of these; Gollmann writes the state space as a direct product of sets.
- A set of subjects S = (s1,s2,s3..) or users, to whom we grant or deny access.
- A set of objects O = (o1,o2,o3...), e.g. files, to which access is either granted or denied.
- A set of access operations A = (a1,a2...) = (read,write,append,execute)
- A set of security levels L = (l1,l2,...) = (access, no-access, partial-access, ...), which determine which subjects get access to which objects at a given level. Notice that in this model access is not just yes/no, but can have several refinements, e.g. read the name of a file, read the entire document, etc. The maximum security level is sometimes called the subject's security clearance. Notice also that no contingency is made in the model for changing permissions: everything is static.

The current state is written as the direct product

    current state = B x M x F

where:
- B is a subset of S x O x A: the set of all current access operations in the system. An element of B is a triple (s,o,a).
- M is the sum over O of M(S x O), i.e. the collection of the access matrices for all objects.
- F is the set of all security level assignments, for users/subjects, for files/objects etc. In this model, each element of F is a triplet (fs,fc,fo), where:
- fs is the maximum permission level a subject can have at any time.
- fc is the current permission level of a subject.
- fo is the permission level required to access an object.
Security policies within BLP

The simple security property: a state of the system (b,m,f) satisfies the ss-property if read access is only granted when fo <= fs, i.e. when a user has clearance equal to or greater than the clearance required by the object. This is just our common understanding of what file permissions mean, for instance. It is not sufficient to prevent a low-level user from accessing high-level objects, however. If we only had the ss-property, then a user could create a new file with high-level clearance containing a program which could read the high-level objects and copy them to low-level ones. The star property tries to fix this:

A state of the system (b,m,f) satisfies the *-property if append/write access is granted only when fc <= fo, i.e. a user whose current clearance is above the object's level cannot write to it (no write-down). Moreover, if a subject can read an object at level fo and write/append to another object at level fo', then fo' >= fo must hold, i.e. it must not be possible for information to flow from a file into a file at a lower security level.

This tells us that a high-level subject cannot send information to a low-level subject! That is why we need the current classification fc: so that a user's security level can be temporarily downgraded. Another approach would be to define a set of trusted users who are allowed to violate these rules.
The discretionary security property is a trivial definition: a state (b,m,f) satisfies the ds-property if each triple (s,o,a) has a in M(s,o), i.e. each file's permissions are determined by a user-definable matrix. The permissions are not fixed; they are at the discretion of the users who control the file.
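The three properties can be sketched as simple predicates. Here security levels are illustrative integers (higher = more privileged), and the subject, object and matrix are invented for the example; this is not the formal BLP machinery, only its shape.

```python
# A sketch of the BLP checks with integer security levels.

def ss_property(f_s, f_o):
    """Simple security: read only if the subject's maximum clearance
    dominates the object's level (no read-up)."""
    return f_o <= f_s

def star_property(f_c, f_o):
    """*-property: write/append only if the object's level dominates
    the subject's *current* clearance (no write-down)."""
    return f_c <= f_o

def ds_property(matrix, s, o, a):
    """Discretionary security: the operation must also appear in the
    user-controlled access matrix M(s, o)."""
    return a in matrix.get(s, {}).get(o, set())

# A request must pass the mandatory check *and* the discretionary one.
M = {"mark": {"report": {"read", "write"}}}
ok_read = ss_property(f_s=3, f_o=2) and ds_property(M, "mark", "report", "read")
```

Note how the mandatory checks (ss, *) and the discretionary check (ds) are independent: an operation must satisfy all of them.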
The basic security theorem: true or false?

Any change in a system from one state to another is called a transition. The basic security theorem is a property of the finite state machine picture. It says: if we start off in a secure state, and every transition is secure, then we can never end up in an insecure state.

This is useful to know, and its truth is not in question. However, just because we have defined a state to be secure does not mean that it really is! Maybe our definition was just silly. The BLP model was criticized for just this failing. From what we have said so far, there is no reason why we could not do the following:
- Place every subject in the lowest security level.
- Place every object in the lowest security level.
- Give everyone access rights to everything.

This would be like MS-DOS/MacOS, with no restrictions on anything. According to the definitions, this state is secure. Of course it isn't. It is important to realize that, with any set of rules, it is likely that there will be unforeseen loopholes which can be exploited in ways which were not intended.
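The theorem itself is just induction over transitions, which a short sketch makes plain. The security predicate below is deliberately simplistic, echoing the point above that a silly definition yields a system that is "secure" by definition but worthless in practice.

```python
# The basic security theorem as induction over transitions: start secure,
# refuse any transition that would leave the secure set, and no reachable
# state is ever insecure. The predicate and states are illustrative.

def is_secure(state):
    return state["clearance_ok"]

def run(initial, transitions):
    state = initial
    assert is_secure(state), "must start in a secure state"
    for t in transitions:
        candidate = t(state)
        if is_secure(candidate):   # secure transitions are taken...
            state = candidate      # ...insecure ones are refused
    return state

final = run({"clearance_ok": True, "files": 0},
            [lambda s: {**s, "files": s["files"] + 1},   # harmless change
             lambda s: {**s, "clearance_ok": False}])    # refused
```

The invariant holds at every step, so the final state is secure no matter which transitions were attempted.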
Some other remarks about BLP

A covert channel is a private information flow which is not controlled by a security mechanism. Filenames (object names) are an example of a covert channel, if everyone on the system can see them. Even telling a user that a certain operation is not permitted provides a bit of information which could be useful to them. Sometimes it is necessary not only to conceal the contents of objects, but also their existence!
How shall we interpret the BLP model in terms of Unix?
    BLP                    Unix
    Subjects (S)           UID/Username
    Objects (O)            Files
    Access rights (M)      Read
    Security levels (L)    Allowed
Harrison-Ruzzo-Ullman model

This model refines BLP by including policies for changing access rights to objects, as well as for the creation and deletion of objects. It uses primitive operations for create and delete in each of the spaces (S,O,A): i.e. create/delete a user, create/delete a file, create/delete an entry in the permissions matrix.
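A sketch of those primitive operations, assuming a simple in-memory system (the class and method names are illustrative, not taken from the model's formal notation):

```python
# HRU-style primitives over subjects, objects and the access matrix.

class HRUSystem:
    def __init__(self):
        self.subjects = set()
        self.objects = set()
        self.M = {}  # (subject, object) -> set of rights

    def create_subject(self, s):
        self.subjects.add(s)

    def create_object(self, o):
        self.objects.add(o)

    def enter_right(self, r, s, o):
        """Create or extend an entry in the permissions matrix."""
        if s in self.subjects and o in self.objects:
            self.M.setdefault((s, o), set()).add(r)

    def delete_right(self, r, s, o):
        self.M.get((s, o), set()).discard(r)

    def destroy_object(self, o):
        """Deleting an object also removes its column of the matrix."""
        self.objects.discard(o)
        self.M = {k: v for k, v in self.M.items() if k[1] != o}
```

The key refinement over static BLP is visible here: the matrix M is itself mutated by the primitives, rather than being fixed once and for all.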
Chinese Wall model

This model's aim is to avoid conflicts of interest between different groups within a whole. Conflicts arise, for instance, when two groups are competitors. The model attempts to exclude information flows which can lead to conflicts of interest.
Again BLP has to be modified, to allow classes of subjects/users which have conflicts of interest. Every transition of the system to a new state is now context dependent, and access rights need to be re-examined to determine whether the transition is allowed. This is a potentially non-commutative lattice.
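The context dependence can be sketched as follows: whether an access is allowed depends on the subject's access history, not just on static labels. The company names and conflict classes below are invented purely for illustration.

```python
# A Chinese Wall access decision: objects belong to companies, companies
# to conflict-of-interest classes, and access depends on history.

conflict_class = {"BankA": "banks", "BankB": "banks", "OilCo": "oil"}

def may_access(history, company):
    """Deny access if the subject has already accessed a *different*
    company in the same conflict-of-interest class."""
    return all(seen == company or
               conflict_class[seen] != conflict_class[company]
               for seen in history)
```

After accessing "BankA", a subject may still access "OilCo" (a different class) but no longer "BankB" (a competitor in the same class); the order of accesses matters, which is why the lattice is non-commutative.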
The Biba model

The Biba model addresses the issue of integrity, i.e. whether information can become corrupted. A new label is used to gauge integrity. If a high-security object comes into contact with low-level information, or is handled by a low-level program, its integrity level can be downgraded. For instance, if one used an insecure program to view a secure document, the program might corrupt the document, append to it, truncate it, or even covertly communicate it to another part of the system.
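The downgrading rule is essentially a low-water mark. A sketch, assuming integrity levels are integers (higher = more trustworthy); the levels and scenario are illustrative:

```python
# Low-water-mark integrity: contact with a lower-integrity program drags
# an object's integrity label down.

def handle(object_integrity, program_integrity):
    """After contact, the object's integrity cannot exceed that of the
    program which touched it."""
    return min(object_integrity, program_integrity)

doc = 5               # a high-integrity document
doc = handle(doc, 2)  # viewed with an untrusted viewer
```

Integrity only ever decreases under this rule, which is the mirror image of BLP's confidentiality ordering.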
The Clark-Wilson model

Clark and Wilson have also created a model which includes an attention to data integrity. This model is a stepping stone to next week's lecture, since it introduces concepts which parallel object-oriented language technologies.
The Clark-Wilson model also tries to address the relationship between the system and its acceptance of information from the outside world, by insisting on the auditing of transactions. Auditing does not prevent breaches of security/integrity, but it can detect them. In summary:
- Data objects can only be manipulated by a certain set of programs. Users have access to the programs rather than to the data. (e.g. this is like the WWW or a database). Think of the discussion last week about restricting access based on "role".
- Separation of duties: assigning different roles to different users. Users might have to collaborate in order to achieve some secure operation. For instance, think of the dual-key approach to arming nuclear warheads. Or in Star Trek, the authorization and command sequence to self-destruct the Enterprise by three ranking officers with voice-print identification combined with a simple, easy to remember sequence (sufficiently complicated to avoid accidents).
- Subjects/users are identified and authenticated.
- Objects/data can only be accessed by authorized programs (ensures integrity).
- Subjects/users only have access to certain programs.
- An audit log is maintained over external transactions.
- The system must be certified in order for it to work.
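The first and last points above can be sketched together: data is reachable only through a certified program, and every transaction is logged. The "balance" scenario and all names below are invented purely for illustration.

```python
# Clark-Wilson in miniature: a constrained data item is manipulated only
# through a certified program, which audits each call.

audit_log = []
cdi = {"balance": 100}  # a constrained data item

def certified_tp(user, amount):
    """The only program permitted to touch the balance; it logs each call."""
    audit_log.append((user, "transfer", amount))
    cdi["balance"] -= amount

# Users invoke the certified program; they never touch the data directly.
certified_tp("alice", 30)
```

This is the "role" idea from last week again: access is granted to programs, and users are granted access to programs, never to the raw data.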
Thought of the week
A virus is a piece of RNA or DNA (i.e. genetic programming) which is capable of reprogramming a cell's replication system. It is like a programming bug which forces the cell's program into an infinite loop, making copies of the virus and ending in the cell's destruction. It is believed that viruses occur through random errors in cell reproduction. The complexity of the genetic program makes such an attack statistically likely (the bigger the program, the more there is to go wrong). Viruses invade cells, in spite of an elaborate security mechanism based on molecular recognition.