Notes on high-assurance security methods by Nick P from Schneier on Security blog
OpenSSL leaks, Target hacks, NSA surveillance... We need a way to evaluate system security that works. What would it look like?
I finally put my proprietary development framework on Schneier's blog for free in 2013, in a reply to another commenter. There was hardly any demand for the effective, ground-up security I've specialized in, so why not be altruistic? Cleaned-up version at the link below:
http://pastebin.com/y3PufJ0V
Then the Snowden leaks happened, and I was glad to see my framework addressed about every NSA attack vector, including those in the TAO catalog. The exception was physical implants, although I'm always clear that devices enemies have had possession of can't be trusted. As for the source of my framework: what a TLA does becomes obvious if a person looks at previous failures, successful approaches, the nature of systems, each layer/component in a system, and the risky interactions between them. Thinking along all these lines finds many more flaws and makes beating the likes of the NSA more realistic.
In any case, security certification isn't like most others. There are numerous standards, best practices, policies, guides, etc. There's some stuff common to each, but many differences as well. You could say it's fragmented, inconsistent, redundant, and sometimes political. Too many standards, too: Common Criteria, DOD C&A, the CIA's own scheme, FISMA, and so on. Most people making them have never made a secure system either. If you doubt that, look at the technical components of *any* security evaluation and compare them to my requirements. You'll see that they're missing plenty, despite real attacks or flaws happening due to every item on my list. You might also find it strange that the NSA is promoting weak stuff like NetTop when plenty of my list is in their official evaluation criteria.
So, what can we do? Well, let's start with what the Common Criteria (the main standard) is doing. Current CC practice is to create Protection Profiles (PPs) for a given type of system (e.g. firewall), device (e.g. printer), or software (e.g. OS). A PP covers the security threats, the protections needed, and the minimal level of assurance. CC uses so-called Evaluation Assurance Levels (EALs) 1-7 to rate the strength of the evaluated security on a scale from lowest to highest. EAL1-3 is garbage, EAL4-EAL4+ is commercial best practice, EAL5-EAL5+ is medium, and EAL6-7 is the highly secure stuff.
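For concreteness, here's a rough Python sketch of that scale; the band labels are just my shorthand from above, and the "+" augmentations are ignored:

    from enum import IntEnum

    class EAL(IntEnum):
        EAL1 = 1
        EAL2 = 2
        EAL3 = 3
        EAL4 = 4
        EAL5 = 5
        EAL6 = 6
        EAL7 = 7

    def informal_band(eal: EAL) -> str:
        # Rough bands as described above; "+" augmentations are ignored here.
        if eal <= EAL.EAL3:
            return "garbage"
        if eal == EAL.EAL4:
            return "commercial best practice"
        if eal == EAL.EAL5:
            return "medium"
        return "highly secure"

    print(informal_band(EAL.EAL6))  # -> highly secure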
An independent, private lab evaluates the product against the requirements with lots of paperwork. The evaluations are ridiculously expensive. If it's EAL5-7, the local intelligence service gets involved, often gets the code itself, and pentests the product plenty before allowing a certification. The vast majority of products are under EAL4, much of the big-name stuff maxes out at EAL4 (while getting hacked plenty), smartcards dominate the EAL5-6 evaluations, and a few exceptional general-purpose systems are certified at EAL5-7. So, by the government's own standards, almost everything out there is as insecure as can be, but at least it's certified to be. ;)
I'll briefly mention the Central Intelligence Agency's classification. They improve on the other scheme by simplifying it with an additional three-part rating: confidentiality, integrity, and availability. Each one gets a rating from Low to High. So, a prospective buyer who knows the CIA evaluated it can look at the C.I.A. ratings (pun intended) to get a quick idea of how good the protections are for each attribute. However, CC is nice in that it lists each threat and countermeasure in the system. That gives more technical people a thorough idea of its strengths and weaknesses. I like having both types of measurement.
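A minimal sketch of what such a three-part rating could look like as data; the Low/Medium/High scale and the example product are assumptions for illustration, not the agency's exact levels:

    from dataclasses import dataclass
    from enum import IntEnum

    class Level(IntEnum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    @dataclass(frozen=True)
    class CIARating:
        confidentiality: Level
        integrity: Level
        availability: Level

    # Hypothetical product: strong confidentiality/integrity, weak availability.
    example = CIARating(Level.HIGH, Level.HIGH, Level.LOW)
    print(example)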
Where to go from here is pretty open to discussion. My proposal is to do evaluation and security description the same way I do my security framework: layer by layer, component by component, interaction by interaction, and with environmental assumptions stated. Mainly because security works that way, and most other approaches didn't. ;) An EAL6+ OS like INTEGRITY-178B (an old favorite) running on a regular board won't be secure: attackers may smash the firmware, peripherals, etc. A more honest evaluation for something like a Dell SCC would list (for example): processor (low), BIOS (low), peripheral code (low), kernel (high), key drivers (high), disk/networking middleware (unevaluated), virtualization system (unevaluated), and so on.
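A rough sketch of that per-component rating sheet as data; the component names and ratings just echo the illustrative example above, not any real evaluation:

    from enum import Enum

    class Assurance(Enum):
        UNEVALUATED = "unevaluated"
        LOW = "low"
        HIGH = "high"

    # Illustrative only, mirroring the hypothetical listing above.
    component_ratings = {
        "processor": Assurance.LOW,
        "BIOS": Assurance.LOW,
        "peripheral code": Assurance.LOW,
        "kernel": Assurance.HIGH,
        "key drivers": Assurance.HIGH,
        "disk/networking middleware": Assurance.UNEVALUATED,
        "virtualization system": Assurance.UNEVALUATED,
    }

    for component, rating in component_ratings.items():
        print(f"{component}: {rating.value}")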
The features of each would also be specified. Optionally, the level of rigor is stated on a per-feature basis. This allows the vendor to incrementally develop both the product and the assurance of each component, plus clearly communicate the strengths and risks to the user. The person interpreting it would be a technical professional, who would translate it into layperson's terms for the decision-makers. The features, threats, and so on would also be adjusted within a reasonable period of time when new threats appear. Protection Profiles would still be the norm, to provide decent defaults without mentally overwhelming buyers; they'd just get adjusted periodically.
This means almost everything on the market, proprietary or open, is insecure against a motivated attacker. I'm almost for going further and creating two categories at the system level: Low and High. If it's not proven High security, then it's Low by default. Combining components means the overall security is rated at the level of the weakest component. This should force vendors wanting to claim High security to invest heavily in bottom-up secure architectures for their products. It gives me hope to know some already did. As threats are added to my framework, the number of attack vectors covered and the level of rigor of such bottom-up approaches would only increase.
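A minimal sketch of that weakest-link rule, assuming a simple ordered scale where anything not proven High collapses to Low:

    from enum import IntEnum

    class Assurance(IntEnum):
        UNEVALUATED = 0
        LOW = 1
        HIGH = 2

    def system_level(component_levels):
        # The composed system is only as strong as its weakest component.
        return min(component_levels)

    def two_category_label(level):
        # Anything not proven High is Low by default.
        return "High" if level == Assurance.HIGH else "Low"

    components = [Assurance.HIGH, Assurance.HIGH, Assurance.LOW, Assurance.UNEVALUATED]
    print(two_category_label(system_level(components)))  # -> Low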
Bottom line: Our evaluation process needs to be streamlined and the incentives changed to force anything declaring itself to have strong security to actually have strong security. It also needs an abstract rating for a quick-glance evaluation, along with a thorough treatment of strengths and weaknesses for a technical evaluation. I'd also recommend eliminating unnecessary costs and any incentives to loophole through requirements. Having mutually suspicious parties evaluate the security claims, while signing a hash of the resulting system image, might also reduce subversion concerns.
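A sketch of that hash-signing step, using Python's standard hashlib; the sign() helper, the party names, and the "system.img" filename are hypothetical stand-ins for each party's own keys and signing tooling:

    import hashlib

    def image_digest(path):
        # Every evaluator hashes the exact same released image bytes.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def sign(digest, party):
        # Placeholder: each mutually suspicious party would sign the digest with
        # its own key and a real signature scheme, not this stand-in.
        return f"{party} signs {digest}"

    digest = image_digest("system.img")  # hypothetical image filename
    signatures = [sign(digest, p) for p in ("vendor", "evaluation lab", "customer auditor")]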
Evaluators should also work hand in hand with developers so they continuously have a clear understanding of the system, with feedback going both ways when it's needed most. The mechanisms employed for High security should benefit the commercial sector as easily as government to maximize the potential ROI for those developing robust solutions. Finally, the governments involved in CC should each make a rule that only High security products are purchased for anything security-critical. That rule might give us half a dozen in each category from as many vendors within a year or two*. ;)
* Note: This is exactly what happened when they made this mandate in Orange Book days. Putting a profit motive on High security is a proven way to make it happen. The solutions developed were diverse and extendable, too.
Nick P
Security Engineer/Researcher
(High assurance focus)