Created December 23, 2013 10:09
The Complexity Trap
*This is a section from 'A Cryptographic Evaluation of IPsec' by N. Ferguson and B. Schneier. I think it applies to programming in general.*

*Original can be found at: https://www.schneier.com/paper-ipsec.html*
Security’s worst enemy is complexity.
This might seem an odd statement, especially in the light of the many simple systems that exhibit critical security failures. It is true nonetheless. Simple failures are simple to avoid, and often simple to fix. The problem in these cases is not a lack of knowledge of how to do it right, but a refusal (or inability) to apply this knowledge. Complexity, however, is a different beast; we do not really know how to handle it. Complex systems exhibit more failures as well as more complex failures. These failures are harder to fix because the systems are more complex, and before you know it the system has become unmanageable.
Designing any software system is always a matter of weighing and reconciling different requirements: functionality, efficiency, political acceptability, security, backward compatibility, deadlines, flexibility, ease of use, and many more. The unspoken requirement is often simplicity. If the system gets too complex, it becomes too difficult and too expensive to make and maintain. Because fulfilling more of the other requirements usually involves a more complex design, many systems end up with a design that is as complex as the designers and implementers can reasonably handle. (Other systems end up with a design that is too complex to handle, and the project fails accordingly.)
Virtually all software is developed using a try-and-fix methodology. Small pieces are implemented, tested, fixed, and tested again. Several of these small pieces are combined into a larger module, and this module is tested, fixed, and tested again. The end result is software that more or less functions as expected, although we are all familiar with the high frequency of functional failures of software systems.
This process of making fairly complex systems and implementing them with a try-and-fix methodology has a devastating effect on security. The central reason is that you cannot easily test for security; security is not a functional aspect of the system. Therefore, security bugs are not detected and fixed during the development process in the same way that functional bugs are. Suppose a reasonable-sized program is developed without any testing at all during development and quality control. We feel confident in stating that the result will be a completely useless program; most likely it will not perform any of the desired functions correctly. Yet this is exactly what we get from the try-and-fix methodology with respect to security.
The only reasonable way to “test” the security of a system is to perform security reviews on it. A security review is a manual process; it is very expensive in terms of time and effort. And just as functional testing cannot prove the absence of bugs, a security review cannot show that the product is in fact secure. The more complex the system is, the harder a security evaluation becomes. A more complex system will have more security-related errors in the specification, design, and implementation. We claim that the number of errors and difficulty of the evaluation are not linear functions of the complexity, but in fact grow much faster.
For the sake of simplicity, let us assume the system has n different options, each with two possible choices. (We use n as the measure of the complexity; this seems reasonable, as the length of the system specification and the implementation are proportional to n.) Then there are n(n − 1)/2 = O(n²) different pairs of options that could interact in unexpected ways, and 2ⁿ different configurations altogether. Each possible interaction can lead to a security weakness, and the number of possible complex interactions that involve several options is huge. We therefore expect that the number of actual security weaknesses grows very rapidly with increasing complexity.
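The growth argument above can be made concrete with a small sketch (my illustration, not from the paper): for n binary options, count the n(n − 1)/2 option pairs and the 2ⁿ full configurations for a few values of n.

```python
from math import comb

def interaction_counts(n: int) -> tuple[int, int]:
    """Return (number of option pairs, number of configurations)
    for a system with n independent two-way options."""
    return comb(n, 2), 2 ** n

for n in (10, 20, 40):
    pairs, configs = interaction_counts(n)
    print(f"n={n:2d}: {pairs:4d} pairs, {configs:,} configurations")
```

The pair count (what a reviewer must check) grows quadratically, but the configuration count (what would need exhaustive testing) doubles with every added option, which is why checking every configuration is dismissed as effectively impossible below.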
The increased number of possible interactions creates more work during the security evaluation. For a system with a moderate number of options, checking all the two-option interactions becomes a huge amount of work. Checking every possible configuration is effectively impossible. Thus the difficulty of performing security evaluations also grows very rapidly with increasing complexity. The combination of additional (potential) weaknesses and a more difficult security analysis unavoidably results in insecure systems.
In actual systems, the situation is not quite so bad; there are often options that are “orthogonal” in that they have no relation or interaction with each other. This occurs, for example, if the options are on different layers in the communication system, and the layers are separated by a well-defined interface that does not “show” the options on either side. For this very reason, such a separation of a system into relatively independent modules with clearly defined interfaces is a hallmark of good design. Good modularization can dramatically reduce the effective complexity of a system without the need to eliminate important features. Options within a single module can of course still have interactions that need to be analyzed, so the number of options per module should be minimized. Modularization works well when used properly, but most actual systems still include cross-dependencies where options in different modules do affect each other.
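The payoff of modularization can likewise be sketched (hypothetical numbers, and an idealized assumption that the options split evenly into fully independent modules): only pairs within a module can interact, so the pair count drops from C(n, 2) to k·C(n/k, 2).

```python
from math import comb

def pairs_monolithic(n: int) -> int:
    # Every pair of options in one big module may interact.
    return comb(n, 2)

def pairs_modular(n: int, k: int) -> int:
    # Idealized: n options split evenly into k independent modules,
    # so only pairs inside the same module can interact.
    assert n % k == 0, "assume options divide evenly across modules"
    return k * comb(n // k, 2)

n = 40
print(pairs_monolithic(n))    # 780 pairs to analyze in one module
print(pairs_modular(n, 8))    # 8 modules of 5 options -> 80 pairs
```

Real systems fall between these extremes, as the text notes: cross-module dependencies put some of the eliminated pairs back on the analyst's plate.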
A more complex system loses on all fronts. It contains more weaknesses to start with, it is much harder to analyze, and it is much harder to implement without introducing security-critical errors in the implementation.

This increase in the number of security weaknesses interacts destructively with the weakest-link property of security: the security of the overall system is limited by the security of its weakest link. Any single weakness can destroy the security of the entire system.
Complexity not only makes it virtually impossible to create a secure system, it also makes the system extremely hard to manage. The people running the actual system typically do not have a thorough understanding of the system and the security issues involved. Configuration options should therefore be kept to a minimum, and the options should provide a very simple model to the user. Complex combinations of options are very likely to be configured erroneously, which results in a loss of security. The stories in [Kah67, Wri87, And93b] illustrate how management of complex systems is often the weakest link.
We therefore repeat: security’s worst enemy is complexity. Security systems should be cut to the bone and made as simple as possible. There is no substitute for simplicity.
*Imagine the AES process in committee form. RC6 is the most elegant cipher, so we start with that. It already uses multiplications and data-dependent rotations. We add four decorrelation modules from DFC to get provable security, add an outer mixing layer (like MARS) made from Rijndael-like rounds, add key-dependent S-boxes (Twofish), increase the number of rounds to 32 (Serpent), and include one of the CAST-256 S-boxes somewhere. We then make a large number of arbitrary changes until everybody is equally unhappy. The end result is a hideous cipher that combines clever ideas from many of the original proposals and which most likely contains serious weaknesses because nobody has taken the trouble to really analyze the final result.*