* 7 *

Security models

Fact of the week

Since the 1950s the prevalent security model for living organisms has been that of an immune system which distinguishes between self and non-self, i.e. between parts of the body and invading organisms. Although its meaning is still controversial, this model is clearly insufficient to secure the body from life-threatening illness. Cancers and liver/kidney failure are all part of self. Also the intake of food (which becomes self) can kill (e.g. through heart disease) unless it is monitored.

Chapter 4 Gollmann

Abstractions and models

The ability to abstract and associate ideas is the key to our understanding. When we simply collect facts and observations, like stamp collectors, then we have nothing more than a dull list of cases for our troubles. It is only when we begin to step back from a problem and think about how the pieces fit into a larger puzzle that we derive meaning from individual facts and observations. That is our purpose in this week's lecture.

What could we possibly mean by a model for security? In science, we use models to idealize the real world, to simplify it and to systematize it. A model is a kind of coarse interpretation of the world as we see it. The point about model making is that, by grouping together similar ideas and cases into general qualities (i.e. by making qualitative generalizations), we reduce a wide range of apparently different things into special cases of a single general thing.

In security, we should perhaps be wary of this procedure. After all, computer security breaches are nothing if not a litany of exploitations of weaknesses caused by a lack of attention to detail. Here we are arguing for less detail and more generality. Which view is right?

Clearly both generality and detail are important. If we think of every model as a computer program, we see that we need a high-level part, formed from general subroutines, in order to understand the structure of the problem. Then we also need to flesh out the low-level details, to make them precise and to cover all of the special cases. But the wisdom behind structured programming is precisely the wisdom in model building: we should not mix up these two complementary viewpoints, nor sacrifice one for the other. (We shall be returning to structured programming next week to consider it as a model for software security.)

In order to capture all of the levels of a discussion, we need abstractions but we also need a more formal language. If we pay attention to formal language (like mathematics) then we actually have the possibility of proving that a security model is secure. Here we shall only have time to illustrate the idea.

What should a model cover?

We should distinguish between a model for security and a security policy. A security model needs to cover the basic points which we have already discussed on a number of occasions: security is impossible without attention to detail, and computer systems need continual looking after. There is a general tendency in any ordered system to become disordered, or succumb to entropy, because the probability of a random change destroying something is much higher than the probability that a random change will do something good.

Finite state machines

There is a concept in (computer) science which is used repeatedly to describe systems which change over time. This view is primarily a geometrical picture of the state of a system. Suppose we have a set of variables (a,b,c,d,...) which describe what a system is doing. Usually the variables have discrete values: yes/no, read/write/execute, ready/waiting, etc. Thus each variable has only a certain number of values, and we can represent the complete set of variables as a vector which points to a position in a lattice. We call that position the state of the system.
                          ---------   (x,y,z) = state of system
                        / |        /|
                       /  |       / |
                       ----------   |
                      |   |      |  |
                      |  / ----- |  /
                      | /        | /
                      |/         |/
                       ----------
                 (0,0,0)
The state of a system changes with time. If the states include information about access rights, the contents of important files, etc., then clearly some states are secure and others are insecure. This tells us that basic management of attributes (e.g. cfengine) is the first step to a secure system.
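
To make this picture concrete, here is a minimal Python sketch of a state as a vector of discrete variables, with a policy that classifies states and a transition that moves between them. The variable values, the secure() predicate and the events are invented for illustration:

    # A state is a tuple of discrete variables, e.g. access mode,
    # write permission and scheduling status (illustrative values).
    state = ("read", "no-write", "ready")

    # A policy classifies each point of the lattice as secure or not.
    def secure(state):
        # For illustration: any state granting write access is insecure.
        return "write" not in state

    # A transition maps one state to another.
    def transition(state, event):
        variables = list(state)
        if event == "grant-write":
            variables[1] = "write"
        elif event == "revoke-write":
            variables[1] = "no-write"
        return tuple(variables)

    print(secure(state))                             # True
    print(secure(transition(state, "grant-write")))  # False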

Access matrices

Suppose we have n subjects (S) (i.e. users or clients) and m objects (O) (files, processes etc). An access matrix M(S,O) is a description of the access permissions of each subject to each object:
                                    Objects

                       |  m(1,1)  m(1,2)  m(1,3) ..  .. |
   M(S,O) =            |                                |
             Subjects  |  m(2,1)  m(2,2)  ...    ..  .. |
                       |  ..                            |
                       |                                |
e.g. for s=mark and o=/etc/passwd, we might have m(s,o)=read.
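
In code, the access matrix is naturally a two-dimensional lookup table. A minimal Python sketch, with invented subjects, objects and rights:

    # The access matrix M(S,O) as a nested dictionary (illustrative data).
    M = {
        "mark":  {"/etc/passwd": {"read"},
                  "/home/mark/notes": {"read", "write"}},
        "guest": {"/etc/passwd": set()},
    }

    def allowed(subject, obj, operation):
        # True if the matrix grants 'operation' on obj to subject.
        return operation in M.get(subject, {}).get(obj, set())

    print(allowed("mark", "/etc/passwd", "read"))   # True
    print(allowed("guest", "/etc/passwd", "read"))  # False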

Enemies of security

There are a few things, all real-world considerations, which hinder our ability to create secure systems.

Some security models

Gollmann describes several models in formal terms. Here is a brief summary of some of the important points. You should read chapter 4 for more details.

Bell-LaPadula model (BLP)

This is a formal description of a system with static access control, i.e. privacy. It tells us nothing about integrity or trust. In order to describe this security model as a state space, we need to specify the variables which characterize the system. The system consists of sets of subjects, objects, access operations and security levels, and the current state will be described in terms of these. Gollmann writes this as a direct product of sets
current state = B x M x F
where B is the set of current accesses (s,o,a), M is the access permission matrix and F assigns the security levels: the maximum level fs of each subject, its current level fc, and the classification fo of each object. The maximum security level is sometimes called the subject's security clearance. Notice that no contingency is made in the model for changing permissions. Everything is static.

Security policies within BLP

The simple security property:
A state of the system (b,m,f) satisfies the ss property if read/write access is only possible when fo <= fs, i.e. when a user has equal or greater clearance than the classification of the object.
This is just our common understanding of what file permissions mean, for instance. This is not sufficient to prevent a low-level user from accessing high-level objects, however. If we only had the ss-property, then a program running with high-level clearance could read the high-level objects and copy them into low-level ones, where a low-level user could read them. The star property tries to fix this:
A state of the system (b,m,f) satisfies the * property if append/write access is denied whenever fo < fc, i.e. when an object's classification is below the subject's current clearance, no writing or appending is possible (no writing down). Moreover, if a subject can observe an object o' while writing or appending to an object o, we must have fo' <= fo, i.e. it must not be possible to copy information into a file at a lower security level than the file it came from.
This tells us that a high-level subject cannot send information to a low-level subject! That is why we need the current clearance fc, so that we can temporarily downgrade a user's security level. Another approach would be to define a set of trusted users who are allowed to violate these rules.

The discretionary security property is a trivial definition:

A state (b,m,f) satisfies the ds property if each current access triplet (s,o,a) has a in M(s,o), i.e. each file's permissions are determined by a user-definable matrix. The permissions are not fixed; they are at the discretion of the users who control the file.
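
Putting the three properties together, a state (b,m,f) can be checked mechanically. The following Python sketch is a toy rendering of the definitions above, not Gollmann's formalism verbatim; the integer levels and example data are invented:

    # Toy BLP state: b = current accesses, M = permission matrix,
    # fs/fc/fo = maximum clearance, current clearance, classification.
    b  = {("mark", "report", "read")}             # current accesses (s,o,a)
    M  = {("mark", "report"): {"read", "write"}}  # discretionary matrix
    fs = {"mark": 2}
    fc = {"mark": 1}
    fo = {"report": 1}

    def ss_property(b, fs, fo):
        # No read up: observing an object requires fs >= fo.
        return all(fs[s] >= fo[o] for (s, o, a) in b
                   if a in ("read", "write"))

    def star_property(b, fc, fo):
        # No write down: altering an object requires fo >= fc.
        return all(fo[o] >= fc[s] for (s, o, a) in b
                   if a in ("write", "append"))

    def ds_property(b, M):
        # Every current access must be granted by the matrix.
        return all(a in M.get((s, o), set()) for (s, o, a) in b)

    print(ss_property(b, fs, fo) and star_property(b, fc, fo)
          and ds_property(b, M))                  # True: a secure state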

The basic security theorem: true or false?

Any change in a system from one state to another is called a transition. The basic security theorem is a property of the finite state machine picture. It says:
If we start off in a secure state, and every transition is secure, then we can never end up in an insecure state.
This is useful to know, and its truth is not in question. However, just because we have defined a state to be secure does not mean that it really is! Maybe our definition was just silly. The BLP model was criticized for just this failing. From what we have said so far, there is no reason why we could not define a system which, whenever access is requested, simply downgrades every subject and object to the lowest security level so that the request is always granted. This would be like MS-DOS/MacOS, with no restrictions on anything. According to the definitions, this state is secure. Of course it isn't. It is important to realize that:

With any set of rules, it is likely that there will be
unforeseen loopholes which can be exploited in ways
which were not intended.
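
The inductive character of the theorem is easy to see in code. In this hedged Python sketch (the secure() definition and the transitions are invented), the machine simply refuses any transition that would leave the secure set, so by induction it can never reach an insecure state:

    def secure(state):
        # Stand-in for whatever definition of 'secure' the policy chooses.
        return "violation" not in state

    def run(initial, transitions):
        state = initial
        assert secure(state)              # base case: a secure start
        for t in transitions:
            candidate = t(state)
            if secure(candidate):         # inductive step: secure moves only
                state = candidate
            # insecure transitions are refused, not performed
        return state                      # hence the result is still secure

    final = run(frozenset(), [lambda s: s | {"violation"},  # refused
                              lambda s: s | {"harmless"}])  # accepted
    print(secure(final))                  # True

Of course, the sketch also shows where the weakness lies: the conclusion is only as good as the secure() predicate we plugged in.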

Some other remarks about BLP

A covert channel is a private information flow which is not controlled by a security mechanism. Filenames (object names) are an example of a covert channel, if everyone on the system can see them. Even telling a user that a certain operation is not permitted provides a bit of information which could be useful to them. Sometimes it is necessary not only to conceal the contents of objects, but also their existence!
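
For instance, object names alone are enough to carry a message. Here is a purely illustrative Python sketch in which a high-level process signals one bit per filename through a world-readable spool directory, without sharing any file contents:

    import os, tempfile

    spool = tempfile.mkdtemp()   # stands in for a world-readable directory

    def send(bits):
        # The sender encodes each bit in a filename it is allowed to create.
        for i, bit in enumerate(bits):
            open(os.path.join(spool, "job-%d-%s" % (i, bit)), "w").close()

    def receive():
        # The receiver only lists the directory; it never opens a file.
        return [name.split("-")[2] for name in sorted(os.listdir(spool))]

    send("1011")
    print(receive())             # ['1', '0', '1', '1']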

How shall we interpret the BLP model in terms of Unix?

   BLP                      Unix
   -------------------      ----------------------------------
   Subjects (S)             UID/Username, GID/Groups
   Objects (O)              Files, processes, memory segments
   Access rights (M)        Read, Write, Execute
   Security levels (L)      Allowed/Disallowed, Setuid/Setgid

Harrison-Ruzzo-Ullman model

This model refines the BLP by including policies for changing access rights to objects, as well as for the creation and deletion of objects. It uses primitive operations for create and delete in each of the spaces (S,O,A): create/delete user, create/delete file, and create/delete entries in the permissions matrix.
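
Here is a hedged Python sketch of these primitive operations and of an HRU-style command built from them (the representation and the example command are my own; Gollmann gives the formal versions):

    # The protection state: subjects S, objects O and the matrix M.
    S, O = set(), set()
    M = {}   # (subject, object) -> set of access rights

    def create_subject(s):     S.add(s); O.add(s)  # subjects are objects too
    def create_object(o):      O.add(o)
    def destroy_object(o):     O.discard(o)
    def enter_right(r, s, o):  M.setdefault((s, o), set()).add(r)
    def delete_right(r, s, o): M.get((s, o), set()).discard(r)

    # A command is a guarded sequence of primitives, e.g. granting read:
    def grant_read(owner, friend, f):
        if "own" in M.get((owner, f), set()):      # condition
            enter_right("read", friend, f)         # body

    create_subject("mark"); create_subject("anna"); create_object("file1")
    enter_right("own", "mark", "file1")
    grant_read("mark", "anna", "file1")
    print(M[("anna", "file1")])                    # {'read'}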

Chinese Wall model

The aim of this model is to avoid conflicts of interest between different groups within a larger whole. Conflicts arise, for instance, when two groups are competitors. The model attempts to exclude information flows which can lead to conflicts of interest.

Again the BLP model has to be modified, to allow classes of subjects/users which have conflicts of interest. Every transition of the system to a new state is now context dependent: access rights must be re-examined on each request to determine whether the transition is allowed. This is a potentially non-commutative lattice.
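
A hedged Python sketch of such a context-dependent decision (the companies and conflict classes are invented): a subject may access an object only if it has not already accessed a competitor in the same conflict-of-interest class:

    # Chinese Wall access check (illustrative data).
    conflict_class = {"BankA": "banks", "BankB": "banks", "OilCo": "oil"}
    history = {}   # subject -> set of companies already accessed

    def access(subject, company):
        seen = history.setdefault(subject, set())
        for prior in seen:
            if (conflict_class[prior] == conflict_class[company]
                    and prior != company):
                return False      # would cross the wall
        seen.add(company)         # each access changes future rights
        return True

    print(access("mark", "BankA"))  # True
    print(access("mark", "OilCo"))  # True  (a different class)
    print(access("mark", "BankB"))  # False (competitor of BankA)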

The Biba Model

The Biba model addresses the issue of integrity, i.e. whether information can become corrupted. A new label is used to gauge integrity. If a high-integrity object comes into contact with low-level information, or is handled by a low-level program, its integrity level can be downgraded. For instance, if one used an insecure program to view a secure document, the program might corrupt the document, append to it, truncate it, or even covertly communicate it to another part of the system.
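
One way to realize this downgrading is the low-water-mark policy: a subject's integrity level sinks to that of anything it observes, and it may then no longer modify higher-integrity objects. A hedged Python sketch with invented levels:

    # Biba low-water-mark (illustrative levels: higher = more trusted).
    integrity = {"viewer": 3, "document": 3, "download": 1}

    def observe(subject, obj):
        # After observing, the subject is no more trustworthy than the object.
        integrity[subject] = min(integrity[subject], integrity[obj])

    def modify_allowed(subject, obj):
        # No corrupting upwards: writing requires at least the object's level.
        return integrity[subject] >= integrity[obj]

    observe("viewer", "download")                # viewer sinks to level 1
    print(modify_allowed("viewer", "document"))  # False: cannot corrupt it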

The Clark-Wilson model

Clark and Wilson have also created a model which pays attention to data integrity. This model is a stepping stone to next week's lecture, since it introduces concepts which parallel object-oriented language technologies. The Clark-Wilson model also tries to address the relationship between the system and information accepted from the outside world, by insisting on the auditing of transactions. Auditing does not prevent breaches of security/integrity, but it can detect them.
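
A hedged Python sketch of the flavour of the model (all names invented): a constrained data item may only be changed through registered, well-formed transactions, and every transaction is written to an audit log:

    # Clark-Wilson in miniature (illustrative).
    balance = {"account": 100}   # a constrained data item
    audit_log = []

    def transaction(name):
        # Register a function as a well-formed transformation procedure.
        def wrap(fn):
            def run(*args):
                audit_log.append((name, args))   # audit every transaction
                return fn(*args)
            return run
        return wrap

    @transaction("deposit")
    def deposit(amount):
        if amount <= 0:
            raise ValueError("ill-formed transaction")
        balance["account"] += amount

    deposit(50)
    print(balance["account"], audit_log)   # 150 [('deposit', (50,))]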

Thought of the week

A virus is a piece of RNA or DNA (i.e. genetic programming) which is capable of reprogramming a cell's replication system. It is like a programming bug which forces the cell's program into an infinite loop, making copies of the virus and ending in the cell's destruction. It is believed that viruses occur through random errors in cell reproduction. The complexity of the genetic program makes such an attack statistically likely (the bigger the program, the more there is to go wrong). Viruses invade cells, in spite of an elaborate security mechanism based on molecular recognition.
