Fact of the week
Much of bank security relies on the existence of "tamper-proof" technologies: either the physical isolation of systems, or systems which self-destruct if tampered with. Tamper resistance is almost impossible to achieve in a public arena.
Last week we talked about possible forms of attack against different kinds of system. This week, we need to examine a method for analyzing systems, in order to find their weaknesses and detail our own assumptions about their security.
If we want to talk about security in a more serious way (as more than a video game), we have to say what we mean by it, in a technical sense. Saying that "we want security" is not good enough, because it is too vague. We need to say precisely what is to be protected, from whom, and by what means.
One important lesson that we shall learn here is that "computer security" is not just about computers. It should really be called "security including the use of computers". Security is a property of whole systems (rules and procedures), not of individual parts.
Policy (sometimes formalized as "law") is a principle of society. Society is a system (a set of rules and procedures). The first countermeasure to the breakdown of this discipline is a deterrent: the threat of retaliation. "If you do this, we will not like you, and we may punish you!" Society needs mechanisms, bureaucracies, police forces and sometimes military personnel to enforce its rules, because there are always a few individuals who do not understand the discipline.
In most cases, especially with computer crime, organizations have few possibilities for reprimanding those who break policy, except to report them to law enforcement agencies. Each country has its own national laws which override local policy, i.e. local security policy has to obey the law of the land. This sometimes causes problems for either side. For instance, in some countries, encryption is forbidden by the government, i.e. citizens do not have the right to privacy; in others, system administrators are not allowed to investigate users suspected of having committed a crime, since it would be a violation of their privacy. These are the opposite ends of the spectrum.
Nowadays, law-enforcement agencies (police forces) take computer crime more seriously, but computer crime has counterparts to all forms of crime: major crime, organized crime, and petty crime. Because the idea of lawful behaviour in a virtual world is still new, computer crime (ignoring local policy rules) is dominated by petty crime, perpetrated by ignorant or selfish users who do not see their behaviour as criminal. Recall the principle of societies from last lecture.
The principle of communities: What one member of a cooperative community does affects every other member, and vice versa. Each member of a community therefore has a responsibility to consider the well-being of other members of the community. This same rule generalizes to any system (=society) of components (=members).
Example 1: we have become used to using ATM mini-bank terminals for withdrawing money. These are now everywhere, in all shapes and sizes. We trust these terminals to give us money when we enter private codes, because they usually do. One attack used by criminals is to install their own fake ATM which collects PIN codes and card details and then reports "An error has occurred", so that users do not get money. The criminals have then stolen the card details. This kind of "scam" is common. It aims to exploit your trust. The same can, in fact, be said about any other kind of crime. Here is another example of misplaced trust:
Example 2: airport staff do not trust that passengers will not carry weapons, so they use metal detectors, because most weapons are metallic. They trust their metal detectors to find any weapons, yet a stone knife could easily be smuggled on board a plane. Closer to home:
Example 3: a computer user downloads large files of pornographic material, filling up the disk. This violates policy, but the system manager does not enforce the policy very carefully, so users can ignore it. Here the trust goes both ways: the system administrator trusts that most users will not break this rule, and the users trust that the system administrator will not enforce it. There are many reasons why people violate policy.
Not all security violations are intentional, of course. Lost luggage is generally caused by human error. Programming errors are always caused by human error. Human error is thus, either directly or indirectly, responsible for a very large part of security problems. A few examples of human errors which can result in problems: choosing weak passwords, mistyping commands, misconfiguring software, and forgetting to make backups.
Our increasing use of systems (computer systems, security systems, bureaucratic systems, quality control systems, electrical systems, plumbing systems) is an embrace of presumed rigour. In other words, systems expect the players to follow rules and exhibit discipline. Without "systems" we have only efficiency as a gauge of success. With systems, we also need a bureaucratic attention to detail in order to make the system work. Thus systems are inherently more vulnerable to failure, because they require precision (which is something most humans are not good at).
Systems are characterized by components and procedures which fit together to perform a job. Usually the components are designed as modules, which are analyzed and tested one by one. The analysis of whole systems is more difficult, and is less well implemented. This means that there are two kinds of systemic fault: the failure of an individual component, and the failure of the way the components are put together (the design of the whole).
Why don't we have a backup? Sometimes the reason is a design fault, and other times it is a calculated risk.
[Diagram: two circuit sketches. Serial coupling: components chained in a line, so the whole fails if any one component fails (single point of failure, OR). Parallel coupling: redundant components side by side, so the whole fails only if all branches fail (redundancy, AND).]

You might remember these diagrams from electronics classes: Kirchhoff's laws for electric current. We can think of failure rate as being like electrical resistance: something which impedes the flow of work/current.
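To make the analogy concrete, here is a small Python sketch (our own illustration, not part of the original notes) computing the failure probability of serial and parallel couplings, assuming each component fails independently:

    def serial_failure(probs):
        # A serial chain fails if ANY component fails:
        # P(fail) = 1 - product(1 - p_i)
        survive = 1.0
        for p in probs:
            survive *= 1.0 - p
        return 1.0 - survive

    def parallel_failure(probs):
        # A parallel (redundant) coupling fails only if ALL branches fail:
        # P(fail) = product(p_i)
        fail = 1.0
        for p in probs:
            fail *= p
        return fail

    # Two components, each with a 10% failure probability:
    print(serial_failure([0.1, 0.1]))    # 0.19 -- worse than one component alone
    print(parallel_failure([0.1, 0.1]))  # 0.01 -- redundancy helps

Note how adding components in series makes failure more likely, while adding them in parallel makes it less likely.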
Fault trees are made of the following symbols:
(a) AND gate: P(out) = P(A)P(B) (independent inputs)
(b) OR gate: P(out) = P(A) + P(B) - P(A)P(B) (independent inputs)
(c) XOR gate: P(out) = P(A) + P(B) (mutually exclusive inputs)
These can also be generalized for more than two inputs.
The standard gate symbols give us ways of combining the effects of dependency. The OR gate represents a serial dependency (failure if either one or the other component fails). The OR gate assumes that events are independent, i.e. the number of possibilities does not change as a result of a measurement on one of the inputs; the XOR gate represents dependent events, since a non-zero value on one input implies a zero value on the rest. The AND gate requires the failure of parallel branches; it could be either dependent or independent. For the sake of simplicity, we shall consider only examples using independent probabilities. This week's exercises are about using these gates.
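The gate formulae above translate directly into a few lines of Python (again, a sketch of ours for illustration; the function names are arbitrary):

    def p_and(p_a, p_b):
        # AND gate (independent inputs): P(out) = P(A)P(B)
        return p_a * p_b

    def p_or(p_a, p_b):
        # OR gate (independent inputs): P(out) = P(A) + P(B) - P(A)P(B)
        return p_a + p_b - p_a * p_b

    def p_xor(p_a, p_b):
        # XOR gate (mutually exclusive inputs): P(out) = P(A) + P(B)
        return p_a + p_b

These can be composed to evaluate a whole fault tree, as in the break-in example below.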
We split the tree into two main branches: try to guess the root password of the system (event A), OR try to attack any services which might contain bugs (event B), which only succeeds if the host is also misconfigured (event C).
P(break in) = P(A OR (NOT A AND (B AND C))) = P(A) + (1 - P(A)) x P(B)P(C)

Suppose we have, from experience, that:
Chance of guessing the root password: P(A) = 5/1000 = 0.005
Chance of finding a service exploit: P(B) = 50/1000 = 0.05
Chance that the host is misconfigured: P(C) = 10% = 0.1

P(T) = 0.005 + 0.995 x 0.05 x 0.1 = 0.005 + 0.0049 = 0.01 = 1%

Notice how, even though the chance of guessing the root password is small, it becomes an equally likely avenue of attack, because the service-exploit branch only succeeds if the host is also misconfigured (for example, not upgraded with the latest fixes). Thus we see that the chance of break-in is a competition between an attacker and a defender.
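As a check on the arithmetic, the same fault tree can be evaluated in a short Python sketch (ours, using the numbers above):

    p_A = 0.005  # chance of guessing the root password
    p_B = 0.05   # chance of finding a service exploit
    p_C = 0.1    # chance that the host is misconfigured

    # T = A OR (NOT A AND B AND C), with independent events
    p_T = p_A + (1 - p_A) * p_B * p_C
    print(round(p_T, 4))  # 0.01, i.e. about a 1% chance of break-in

Changing any one of the three inputs shows how the defender's choices (password quality, patching, configuration) shift the balance of the competition.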
The problems this week are about taking this idea further.
Thought of the week
It has been said that the only difference between commerce and warfare is politics.