How to Develop Secure Systems: 10 Design Principles

To design and implement systems securely, Saltzer and Schroeder defined 10 guiding design principles in their famous article “The Protection of Information in Computer Systems”, one of the most cited works in computer security. Our article reviews these principles with key takeaways.

Saltzer and Schroeder’s 1975 article “The Protection of Information in Computer Systems” (One of the most cited works in Computer Security history) outlines 10 fundamental design principles for developing secure systems, whether hardware or software.

Though published at a time when only mainframe computers were in use with no interconnectivity among them, these principles still apply in today’s modern computing world where personal computers and ubiquitous smart devices communicate with each other on the Internet.

At the heart of these principles lie two basic tenets: simplicity and access control. Simplicity favors easy-to-understand designs that result in systems with fewer inconsistencies. Access control, on the other hand, mediates each transaction to allow only authorized parties to access the resources.

In this article, we review these principles with key takeaways (the bulleted list items are excerpts from the article) to raise awareness of secure system development.

Principle 1. Economy of Mechanism

This principle favors simplicity over complexity, as systems that are complex in their design and implementation are more likely to have security vulnerabilities. Embracing and striving for simplicity yields systems that are easier to test, validate, and maintain. For this reason, apply the well-known mantra Keep It Simple, Silly (KISS) for enhanced security. Key takeaways for this principle are:

  • Keep the design as simple and small as possible.
  • Design and implementation errors that result in unwanted access paths will not be noticed during normal use.
  • As a result, techniques such as line-by-line inspection of software and physical examination of hardware are necessary.
  • For such techniques to be successful, a small and simple design is essential.

Principle 2. Fail-Safe Defaults

In computing systems, the default access right should be “no access”. In other words, access rights should be granted individually with “allow” rules (whitelisting), leaving the default at “deny”. This is both easier to manage and leaves the system in a secure state if the security mechanism fails.

The opposite of this principle, allowing by default and denying individual cases (blacklisting), is a dangerous security malpractice and should be avoided. Firewall and file access configurations are two classical examples where this principle should be applied. Key takeaways for this principle are:

  • Base access decisions on permission rather than exclusion.
  • The default is lack of access, and the protection scheme identifies conditions under which access is permitted.
  • The alternative mechanism, which identifies conditions under which access should be refused, presents a wrong psychological base for secure system design.
  • A conservative design must be based on arguments why objects should be accessible, rather than why they should not.
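The default-deny rule above can be sketched in a few lines. This is a minimal, hypothetical example (the users, files, and `is_allowed` function are invented for illustration, not a real API): access is granted only when an explicit rule permits it, so a missing or mistaken entry fails safe.

```python
# Sketch of fail-safe defaults: access is denied unless a rule explicitly
# allows it. The ACL entries below are hypothetical.
ALLOWED = {
    ("alice", "report.txt"): {"read"},
    ("bob", "report.txt"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # Default deny: a missing entry yields an empty permission set.
    return action in ALLOWED.get((user, resource), set())

print(is_allowed("alice", "report.txt", "read"))   # True
print(is_allowed("alice", "report.txt", "write"))  # False
print(is_allowed("eve", "report.txt", "read"))     # False: no rule, so denied
```

Note that an unknown user such as “eve” is denied without any explicit deny rule, which is exactly the conservative behavior the principle asks for.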

Principle 3. Complete Mediation

This principle mandates that access rights are fully validated every time an access occurs. A common malpractice that violates this tenet is checking access rights once and relying on that cached decision for subsequent access requests. For instance, in some operating systems, file permissions are checked only when a file is opened, and the resulting file handle is used to authorize subsequent accesses. If the access rules change while the file is open, the updated rules will not apply to the current user. Key takeaways for this principle are:

  • Every access to every object must be checked for authority.
  • This principle, when systematically applied, is the primary underpinning of the protection system.
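The file-handle problem described above can be illustrated with a small sketch (the classes and permission table are hypothetical, not an OS API). The cached variant checks permission once at open time; the mediated variant re-checks on every read, so a revocation takes effect immediately.

```python
# Contrast cached authorization (incomplete mediation) with re-checking
# on every access (complete mediation). All names are illustrative.
permissions = {"alice": {"secret.txt"}}

class CachedFile:
    """Checks permission once at open time; later revocations are missed."""
    def __init__(self, user: str, name: str):
        self.authorized = name in permissions.get(user, set())
        self.user, self.name = user, name
    def read(self) -> str:
        if not self.authorized:
            raise PermissionError
        return f"contents of {self.name}"

class MediatedFile:
    """Checks permission on every read, so revocation applies at once."""
    def __init__(self, user: str, name: str):
        self.user, self.name = user, name
    def read(self) -> str:
        if self.name not in permissions.get(self.user, set()):
            raise PermissionError
        return f"contents of {self.name}"

cached = CachedFile("alice", "secret.txt")
mediated = MediatedFile("alice", "secret.txt")
permissions["alice"].discard("secret.txt")  # revoke access after opening
print(cached.read())       # still succeeds: stale cached decision
# mediated.read() now raises PermissionError: the revocation is enforced
```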

Principle 4. Open Design

This principle reflects the earlier tenets, advising that security should not depend on the secrecy of the design or the implementation. Kerckhoffs’ Principle (1883) as well as Shannon’s Maxim suggest sharing the design publicly to increase the chances of detecting security flaws with more eyes, while keeping the keys secret in cryptographic systems.


One ought to design systems under the assumption that the enemy will immediately gain full familiarity with them.

Claude Shannon

Read more educational and inspirational cyber quotes at our page 100+ Best Cyber Security & Hacker Quotes.

The opposite of this principle is known as Security Through Obscurity and should be avoided. Key takeaways for this principle are:

  • The design should not be secret.
  • The mechanisms should not depend on the ignorance of potential attackers, but rather on the possession of specific keys or passwords.
  • This decoupling of protection mechanism from protection keys permits the mechanisms to be examined by many reviewers without the concern that review may itself compromise the safeguards.
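The decoupling of mechanism and key is exactly how modern cryptography works. As a sketch, the example below uses HMAC-SHA-256 from Python’s standard library: the algorithm is publicly specified and open to review, and only the key (here a freshly generated random value) must remain secret. The message content is invented for illustration.

```python
# Open design in practice: the algorithm (HMAC-SHA-256) is public;
# only the key is secret.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)          # the only secret in the system
message = b"transfer $100 to bob"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

print(verify(key, message, tag))                  # True
print(verify(key, b"transfer $999 to eve", tag))  # False: message altered
```

Anyone may inspect this mechanism without weakening it; an attacker with full knowledge of the code still cannot forge a valid tag without the key.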

Principle 5. Separation of Privilege

A protection mechanism is more secure and robust if it requires two (or more) separate controls before granting a privilege or performing a task. Classical examples include the dual keys used for cryptographic key controls or for safety deposit boxes. Another technical example: on Debian Linux, a user must both know the required password and be a member of the sudo group (or the wheel group on BSD systems) to execute the sudo command successfully. Key takeaways for this principle are:

  • Where feasible, a protection mechanism that requires two keys to unlock it is more robust and secure.
  • Once the mechanism is locked, the two keys can be physically separated and distinct programs, organizations, or individuals made responsible for them.
  • From then on, no single accident, deception, or breach of trust is sufficient to compromise the protected information.
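The sudo-style example in the text, where two independent controls must both succeed, can be sketched as follows. The group list, user names, and password are all hypothetical; the point is only that neither control alone is sufficient.

```python
# Separation of privilege: the caller must both supply the correct
# password AND belong to the sudo group. Data is hypothetical.
import hashlib

SUDO_GROUP = {"alice"}
PASSWORD_HASHES = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

def may_run_sudo(user: str, password: str) -> bool:
    in_group = user in SUDO_GROUP                        # first control
    knows_pw = PASSWORD_HASHES.get(user) == hashlib.sha256(
        password.encode()).hexdigest()                   # second control
    return in_group and knows_pw                         # both required

print(may_run_sudo("alice", "s3cret"))  # True: both checks pass
print(may_run_sudo("alice", "wrong"))   # False: password check fails
print(may_run_sudo("bob", "s3cret"))    # False: not in the sudo group
```

Compromising either control alone (stealing the password, or joining the group) does not grant the privilege, which is the robustness this principle promises.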

Principle 6. Least Privilege

This principle dictates that every program and user should operate with as few privileges as possible. In other words, subjects should be given only those privileges necessary to complete their tasks, and no more. Additional rights should be granted as needed and removed after use. As an underlying tenet, privileges should be based on the need-to-know principle. Key takeaways for this principle are:

  • Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
  • The military security rule of “need-to-know” is an example of this principle.
  • Primarily, this principle limits the damage that can result from an accident or error.
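The “grant as needed, remove after use” pattern can be sketched with a context manager. This is a hypothetical illustration (the `backup_job` subject and privilege table are invented): the extra right exists only for the duration of the task that needs it.

```python
# Least privilege: grant an extra right only for the duration of a task,
# then revoke it. Subjects and rights are hypothetical.
from contextlib import contextmanager

privileges = {"backup_job": {"read"}}

@contextmanager
def elevated(subject: str, right: str):
    privileges[subject].add(right)          # grant just before it is needed
    try:
        yield
    finally:
        privileges[subject].discard(right)  # revoke immediately after use

with elevated("backup_job", "write"):
    # The right is held only inside this block.
    assert "write" in privileges["backup_job"]

print(privileges["backup_job"])  # {'read'}: the extra right was removed
```

Because the revocation sits in a `finally` clause, the right is removed even if the task fails, limiting the damage an error inside the block can cause.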

Principle 7. Least Common Mechanism

The least common mechanism principle suggests not sharing system mechanisms among users or programs except when absolutely necessary, because shared mechanisms can lead to unintended and uncontrolled information flows among different parties. Moreover, malicious actors can gain unauthorized access and exfiltrate information through these shared mechanisms, which are also known as covert channels. Key takeaways for this principle are:

  • Minimize the amount of mechanism common to more than one user and depended on by all users.
  • Every shared mechanism (esp. shared variables) represents a potential information path between users and must be designed with great care to be sure it does not unintentionally compromise security.
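The shared-variable hazard from the takeaways can be sketched as follows. The cache names and the “session_token” entry are invented for illustration: a single shared cache lets one user observe data another user stored, while per-user instances keep the flows separated.

```python
# Least common mechanism: a shared mutable cache creates an information
# path between users; per-user caches isolate them. Hypothetical data.
shared_cache: dict = {}      # one mechanism depended on by all users: risky

def lookup_shared(user: str, key: str):
    # Any user can observe entries another user populated.
    return shared_cache.get(key)

per_user_caches: dict = {}   # one mechanism instance per user: isolated

def lookup_isolated(user: str, key: str):
    return per_user_caches.setdefault(user, {}).get(key)

shared_cache["session_token"] = "alice-token"   # written on alice's behalf
print(lookup_shared("bob", "session_token"))    # leaked across users
print(lookup_isolated("bob", "session_token"))  # None: no cross-user flow
```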

Principle 8. Psychological Acceptability

Originally, this principle stated that security mechanisms should not add to the difficulty of accessing a resource, so that they will be adopted naturally and exercised correctly by the users. Later, this principle was restated as the “Principle of Least Astonishment” to reflect the fact that security mechanisms inevitably add some difficulty, but it should be kept as small as possible for increased usability.


Humans are incapable of securely storing high-quality cryptographic keys, and they have unacceptable speed and accuracy when performing cryptographic operations… But they are sufficiently pervasive that we must design our protocols around their limitations.

C. Kaufman, R. Perlman, and M. Speciner. In “Network Security” 2nd Ed.


This principle also states that security mechanisms must match the users’ mental models so that users can specify and use protection mechanisms correctly. Stated differently, security mechanisms should be designed so that users can understand why the mechanisms work the way they do. Key takeaways for this principle are:

  • It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.
  • To the extent that the user’s mental image of the protection goals matches the mechanisms he must use, mistakes will be minimized.
  • If a user must translate his image of his protection needs into a radically different specification language, he will make errors.

Principle 9. Work Factor

The resources required to compromise a system by brute-force or trial-and-error attacks (the work factor) can be used as an indicator of the security of a system. However, Saltzer and Schroeder note that this principle applies only imperfectly to computer systems, since attackers can use indirect mechanisms, such as system failures or other logical weaknesses, to compromise the security of a system. Key takeaways for this principle are:

  • Compare the cost of circumventing the mechanism with the resources of a potential attacker.
  • Many protection mechanisms are not susceptible to direct work factor calculation, since defeating them by systematic attack may be logically impossible.
  • Defeat can be accomplished by indirect strategies such as waiting for an accidental hardware failure or searching for an error in implementation.
  • Reliable estimates of the length of such a wait or search are very difficult to make.
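A direct work-factor calculation, of the kind the takeaways say is only sometimes possible, can be sketched for password brute-forcing. The guess rate below is an assumed figure chosen for illustration; real attack hardware varies widely.

```python
# Rough work factor for brute-forcing a random password:
# keyspace = alphabet_size ** length; time = keyspace / guess rate.

def brute_force_years(alphabet_size: int, length: int,
                      guesses_per_second: float = 1e10) -> float:
    keyspace = alphabet_size ** length
    seconds = keyspace / guesses_per_second   # exhaustive-search upper bound
    return seconds / (365 * 24 * 3600)

# Lowercase-only, 8 characters: cracked in well under a day.
print(f"{brute_force_years(26, 8):.8f} years")
# Full printable ASCII (94 symbols), 12 characters: over a million years.
print(f"{brute_force_years(94, 12):.1f} years")
```

As the article notes, such arithmetic bounds only direct attacks; an implementation flaw can let an attacker bypass the search entirely, which is why the estimate is an upper bound on attacker effort, not a guarantee.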

Principle 10. Compromise Recording

Another principle that applies imperfectly to computer systems (as noted by Saltzer and Schroeder) is the practice of keeping records of attacks. Though compromise logs should be kept as a best practice, this approach is imperfect for two reasons. First, compromise detection cannot be guaranteed. Second, attackers can change or tamper with the compromise logs. The key takeaways for this principle are:

  • It is suggested that mechanisms that reliably record that a compromise of information has occurred can be used in place of more elaborate mechanisms.
  • Render the compromised version worthless and issue a new one.
  • Rarely used, since it is difficult to guarantee discovery once security is broken.
  • Physical damage usually is not involved, and logical damage can be undone by a clever attacker.
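The second weakness, attackers tampering with the logs, can be partially mitigated with a hash-chained log, sketched below. This is a minimal illustration (the events are invented, and a real audit log would also need to protect the chain head): each entry includes a hash of the previous entry, so editing an earlier record breaks every digest after it.

```python
# Compromise recording with tamper evidence: a hash-chained log.
# Editing a past entry invalidates the chain from that point on.
import hashlib

GENESIS = "0" * 64  # placeholder digest before the first entry

def append(log: list, event: str) -> None:
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256((prev + event).encode()).hexdigest()
    log.append((event, digest))

def verify(log: list) -> bool:
    prev = GENESIS
    for event, digest in log:
        if hashlib.sha256((prev + event).encode()).hexdigest() != digest:
            return False            # chain broken: tampering detected
        prev = digest
    return True

log: list = []
append(log, "login alice")
append(log, "read secret.txt")
print(verify(log))                  # True: chain intact
log[0] = ("login eve", log[0][1])   # attacker rewrites an early entry
print(verify(log))                  # False: tampering detected
```

This does not make detection certain, as the article cautions, but it raises the bar: a clever attacker must now rewrite every later digest as well, not just one record.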

Though it is one of the most cited works in computer security, this article is also one of the least read. If you prefer, you can read the original paper from the link: “The Protection of Information in Computer Systems“.

You could also read our popular articles What is a Security Vulnerability? or What is Vulnerability Scanning?