Fact of the week
This course is worth 4 "vektall" (credits) and will be based on continual assessment of your work. The course is an MSc-level course, so it involves a good measure of work. On the other hand, higher-level courses reward with higher grades than lower-level courses, so the incentive is there for you to do the problems. There is no written exam. Each week you must complete a set of problems which culminate in a result that must be submitted. You must deliver all work by the relevant deadlines.
Imagine an island, connected to the world by a number of bridges. Behind the bridges is everything that we care about and all our wealth and assets. On the other side is everyone who could profit by stealing from us, harming us, or discrediting us. Perhaps we want to travel somewhere, carrying some of these assets. Venturing out into the world could be dangerous. We could be attacked, robbed, even killed.
This scenario is a general one, whether it be a military operation on a real island, or whether the island is a computer system and the bridges are network connections. The journey could be a network transmission. The principles of security are largely the same, regardless of whether we use computers or not. To maintain our safety, our security and indeed our assets, we have to sacrifice the convenience of free travel to and from the island, since free travel would also allow others free access to it. If we come into conflict, we have to face the possibility of loss or damage. One of our aims is to minimize the risk of loss. Note that security is not necessarily about secrecy or keeping something for ourselves: one of our assets or interests could be the freedom to distribute information to customers, or provide a service. If someone tries to prevent us from doing that, it is a destruction of our assets. Freedom to act is also an asset.
Security is about well-being (integrity) and about protecting property or interests from intrusions, stealing or wire-tapping (privacy - the right to keep a secret can also be stolen). In order to do that, in a hostile environment, we need to restrict access to our assets. To grant access to a few, we need to know whom we can trust and we need to verify the credentials (authenticate) of those we allow to come near us.
Security is thus based on the following independent issues:

- Privacy - the ability to keep things private/confidential.
- Trust - do we trust data from an individual or a host? Could they be used against us?
- Authenticity - are security credentials in order? Are we talking to whom we think we are talking to, privately or not?
- Integrity - has the system been compromised/altered already?

Environments can be hostile because of:

- Physical threats - weather, natural disaster, bombs, power failures, etc.
- Human threats - stealing, trickery, bribery, spying, sabotage, accidents.
- Software threats - viruses, Trojan horses, logic bombs, denial of service.

What are we afraid of?

- Losing the ability to use the system.
- Losing important data or files.
- Losing face/reputation.
- Losing money.
- Spreading private information about people.

In the system administration course, we reviewed these issues from the perspective of a system manager, and discussed possible defenses. There we stated the fundamental requirement for security to exist: in order to secure a system, we require the ability to restrict access or privilege to the system.

If we are going to understand all the issues related to security, we need to see the problems from a greater number of perspectives. We need to open up some of the black boxes which we referred to in the system administration course and look inside. This is a journey that will touch upon topics from almost every course you have ever had.
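One of the issues above, integrity, has a simple practical counterpart: detecting whether data has changed. A minimal sketch using Python's standard hashlib module (the choice of SHA-256 here is an illustrative assumption, not a course requirement):

```python
import hashlib

def fingerprint(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files need not fit in memory.
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

Recording the digest while the file is known to be good, and comparing it later, reveals tampering; it does not prevent it, and an attacker who can change the file may also be able to change the recorded digest. Where you store the reference digest is, once again, a question of trust.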
In this course we shall cover many aspects of security in connection with computer systems. Whereas in system administration we looked at some of the practical aspects of computer security, here we look more carefully at the theory of the subject. This will allow us to bring in several new issues which we did not consider earlier. For instance, many computers are used in mission-critical systems, such as aircraft controls and machinery, where human lives are at stake. Thus reliability and safety are also concerns here (actually, these can be defined under the heading of integrity, if we think of an unreliable system as one whose behaviour loses integrity). Real-time systems are computer systems which are guaranteed to respond in real time to every request which is made of them. That means that a real-time system must always be fast enough to cope with any demand which is made of it. Real-time systems are required in cases where human lives and huge sums of money are involved. For instance, in a flight control system it would be unacceptable to give a command "Oh my goodness, we're going to crash, flaps NOW!" and have the computer reply with "Processing, please wait...".
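An ordinary operating system and language cannot give hard real-time guarantees, but the notion of a deadline can at least be illustrated. A minimal sketch, where the 10 ms budget is an arbitrary assumption:

```python
import time

DEADLINE_SECONDS = 0.010  # hypothetical 10 ms budget per request

def meets_deadline(work) -> bool:
    """Run a handler and report whether it finished within its budget.

    A true real-time system guarantees this in advance by design;
    here we can only measure it after the fact.
    """
    start = time.monotonic()
    work()
    return (time.monotonic() - start) <= DEADLINE_SECONDS
```

The gap between "usually fast enough" and "guaranteed fast enough" is precisely what separates ordinary systems from real-time ones.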
Thus we need to use our imaginations, and ask the question: what is important to us? What do we need to protect?
A final point: when we are restricting access, in an environment where there is a thin line between trust and mistrust, accountability is often important. We can keep records of who does what, and hold people responsible for their actions. This is often important to organizations, since it means that they can blame someone else for their loss. This seems to be important to financial and political organizations.
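In software, accountability usually takes the form of an audit trail. A minimal sketch with Python's standard logging module (the file name audit.log and the record format are assumptions for illustration):

```python
import logging

# Append-only audit trail: who did what, and when.
# The file name "audit.log" is a hypothetical choice.
audit = logging.getLogger("audit")
handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def record(user: str, action: str) -> None:
    """Log an action so that it can later be attributed to a user."""
    audit.info("user=%s action=%s", user, action)
```

Of course, a log is only as trustworthy as the protection on the log file itself: an intruder who can edit the trail can erase their own tracks.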
The dilemma of security
The problem that we cannot get away from in computer security is that we can only have good security if everyone understands what security means, and agrees with the need for security.
If we make things difficult for users by imposing too many restrictions, they will tend to work around them, because people are essentially lazy.
Security is a social problem, because it has no meaning until a person defines what it means to them.
The harsh truth is this: in practice, most users have little or no understanding of security. This is our biggest security hole.
The meaning of security lies in trust
One thing that I am going to repeat many times in this course is the following mantra: every security problem boils down to a question of trust in the end. Whom or what do we trust?

We introduce the idea of security for protecting ourselves against parties whom we do not trust. But how do we solve this problem? Usually, we introduce some kind of technology to move trust from a risky place to a safer place. For example, if we do not trust our neighbours not to steal our possessions, we put a lock on our door. We no longer have to trust our neighbours, but we have to trust that the lock will do its job in the way we expect. If we don't entirely trust the lock, we could install an alarm system which rings the police if someone breaks in. Now we are trusting the lock a little, the alarm system and the police. After all, who says that the police will not be the ones who steal your possessions? In some parts of the world, this idea is not so absurd.
Every day, we go about our lives placing our trust in banks, cash terminals (ATM/minibanks), course examiners, police, government, restaurants (will they poison us today?) and a hundred other things. We do not question this trust, because it is seldom broken. But that is not always the case.
When you learn to drive a dangerous piece of machinery, like a car, you are placing lives at risk, and most governments require you to pass an exam to show that you can use the equipment safely. Computer systems are just as capable of causing great damage, perhaps not to individuals so much as to society. We are so reliant on them that things fall apart quickly when they fail to work. Still, we do not demand that users take a driving test for computers. Nor do we demand that the computers themselves be safe to drive. Our trust in computers and their users is often quite misplaced. And this is where the problems lie.
Minimum requirements: Orange Book
The chief risk to computers is the people who come into contact with them: users, local or networked. To minimize the effects of users on the system, we introduce security mechanisms. The Trusted Computer System Evaluation Criteria (TCSEC), or Orange Book, published in the US in 1983, was the first attempt to specify a standard for security management. Although it concentrated on national security issues, its recommendations were also of general applicability. Windows NT made a song and dance about having a C2 security classification when it was released (actually this was only for a machine decoupled from the network, and with no floppy drive). Unix systems have always been roughly C2 compliant (see Gollmann chapter 9), though this actually means very little. C2 security means having the ability to set file permissions, or Discretionary Access Controls (DAC). This requires users to have names and passwords, files to have permission bits, and processes to have owners. C2 also requires the system to be able to log activity so that users have accountability. Few systems bother to log every little detail in this way, so C2 security is not very useful in practice, but it does cover the basics: login identity and access control.
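The DAC data that C2 talks about, owners and permission bits, is visible on any Unix system. A minimal sketch that reads it back using Python's standard os and stat modules:

```python
import os
import stat

def describe_access(path: str) -> str:
    """Summarise a file's owner and permission bits (the DAC data C2 requires)."""
    st = os.stat(path)
    mode = stat.filemode(st.st_mode)   # e.g. '-rw-r--r--'
    return f"{path}: owner uid={st.st_uid}, mode {mode}"

# Tightening access is a one-line discretionary decision by the owner,
# e.g. os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600: owner only
```

The word "discretionary" is the important one: protection is at the owner's discretion, as opposed to mandatory access controls imposed by the system regardless of the owner's wishes.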
Implementing security: scales
Whenever we are faced with understanding something complicated, we need to do it level by level, or scale by scale. It is remarkable how often this simple principle is forgotten and then rediscovered. If we look at a computer system, for instance, there are lots of levels at which it works:
- The interface between program and user
- The functionality provided by the program
- The algorithms which implement that functionality
- The act of communication with other systems
- Subsystems (e.g. system calls) which the program relies on
- The hardware the software runs on
- Any dependencies or trust relationships implicit in the system

At the top of this list, we have high-level issues which are usually under our control, either by choice or by design. At the bottom are low-level issues which we normally cannot do anything about. Security can be weak or strong at any of these levels, with different consequences in each case.
For example, consider a program which has a series of buttons and menus. Suppose the program controls a crucial computer system, handling hundreds of transactions per second for the stock market. If this program stops, real money will be lost. Periodically, it is necessary to buy or sell stocks. Suppose the buttons are arranged like this:

    |--------------------------------------------------|
    |   Newsflash      Buy        Sell        Quit     |
    |--------------------------------------------------|
    |                                                  |
    |                                                  |
    |                     DISPLAY                      |
    |                                                  |
    |                                                  |
    |--------------------------------------------------|

This is an insecure user interface, because a simple slip of the mouse could result in selecting Quit instead of Sell. We have a lot to lose by such a mistake.

The Newsflash function broadcasts news bulletins about buying/selling strategies and the day's secret password information to everyone on a list of names within the company. This information is not encrypted, so it is susceptible to sniffing or spoofing attacks. This is an intrinsically insecure function of the program.

The algorithms which the program uses to calculate its prognoses use temporary files in a public filespace, which could be modified by other users. Another user could therefore trick the program into calculating incorrectly. This is a security fault in the coding.
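The temporary-file fault described above is a classic race condition: a predictable name in a public directory lets another user pre-create or substitute the file. A minimal sketch of the standard remedy, using Python's tempfile module (the "prognosis" prefix is just an illustrative name):

```python
import os
import tempfile

def write_intermediate(data: bytes) -> str:
    """Write intermediate results to a private temporary file.

    tempfile.mkstemp() creates the file atomically, with an
    unpredictable name and owner-only (0600) permissions, so
    another user cannot guess the name or swap in their own file.
    """
    fd, path = tempfile.mkstemp(prefix="prognosis-")
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    return path
```

Contrast this with opening a fixed, guessable name in a world-writable directory such as /tmp, where nothing stops another user from creating that file first.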
We could continue this deconstruction through all the levels of the program in a litany of condemnation. Suffice it to say that, at every level, there are potentially damaging flaws. We begin to see a pattern of thoughtless design faults which we could do something about...
Design considerations
There are many design considerations which go into ensuring security, and we shall be returning to these in the coming weeks:
- Special protocols
- Limiting functionality
- User interface standards

Then there are a number of questions we have to ask ourselves:

- Should protection mechanisms focus on data, functionality or users?
- In which layer should security mechanisms be placed?
- Do we prefer a simple (easy to secure) or feature-rich (difficult) system?
- Should security be handled by a central manager, or left to individual components in a system?
- How do we prevent an attacker from accessing a level below our security mechanisms and thereby circumventing them?

Here is a summary of the principles from the course in Network and System Administration which relate to security. You should think about these in the light of this introduction:
- The principle of communities: What one member of a cooperative community does affects every other member, and vice versa. Each member of a community therefore has a responsibility to consider the well-being of other members of the community.
- Policy: a clear expression of goals and responses prepares a site for future trouble, and documents intent and procedure.
- Simplest is best: simple rules make system behaviour easy to understand. Users tolerate rules if they understand them.
- Security: the fundamental requirement for security is the ability to restrict access and privilege to data.
- Data invulnerability (redundancy): The purpose of a backup copy is to provide an image of data which is unlikely to be destroyed by the same act that destroys the original.
In the coming weeks, we shall take some of the issues above and consider them in detail. We shall also try to gauge some of the political developments and current concerns about security which dominate the computer news.
Thought of the week
Do you trust the information in this course? What makes you trust it? How could you verify if the information were true or false? Do you trust the identity and the authenticity of the source? Can you verify that I am who I say I am? How much proof do you need?