Building Secure Systems -- lvh
- High level
- Bugs --> security bugs --> particularly bad
- "Tools" don't work for security
- Unit tests don't catch it
- Using some software...read the docs?
- Google/SO don't always have best answers
- Good practice ~= bad practice
- Process is different in security:
- Install it
- Try to break it
- Learn and repeat
- Usually try to "make it work"; security is trying to "make it fail"
- Not considered a priority -- features > security bugs
- "Good practice looks just like bad practice"
- Sony as an example -- people don't care about security; it's not economically motivating
- Not something that's taught
- Principle of "least authority" -- don't give power that's not needed
- How do you make systems secure? Adversarial! red/blue team
- Some automated testing can catch security bugs
- "Attack surfaces" -- where can it be attacked?
- "Threat model" -- what are you trying to prevent?
- Example smartphone threat model
- Contains everything
- Protected by...a lock screen (passcode, fingerprint)
- Fingerprints are on everything you touch!
- Passcode can just be watched
- So...which is a better route? Use a threat model. Depends who you're worried about.
- Random people? touch ID
- Government? passcode might be better.
HTTPS: A Comedy of Errors -- Ashwini Oruganti
- Passive (just need the ability to listen) vs. active (misdirecting traffic/changing data -- MITM) attacks
- SSL (old and busted) vs. TLS (probably what you mean)
- Authentication via certificates
- Certificate validation -- can we trust a site?
- Server: certificate signed by CA (intermediary)
- Not just authentication -- want to make sure the site is who you think it is
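A minimal sketch of what certificate validation looks like from the client side with requests; `example.com` is just a placeholder host, and `verify=True` is already the default:

```python
# Client-side certificate validation with requests (verify=True is the default).
# A certificate that can't be chained to a trusted CA raises SSLError
# instead of silently connecting.
import requests

try:
    resp = requests.get("https://example.com", verify=True)
    print(resp.status_code)
except requests.exceptions.SSLError as exc:
    print("Certificate validation failed:", exc)
```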
- TLS session:
- Handshake (more detailed than this)
- Have to agree on a bunch of details
- Now traffic can be encrypted
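A rough sketch of that sequence using the standard library's `ssl` module, assuming a reachable host; `create_default_context()` turns on certificate verification and hostname checking:

```python
# Rough sketch: performing a TLS handshake with the stdlib ssl module.
import socket
import ssl

context = ssl.create_default_context()  # verifies certs + checks hostname

with socket.create_connection(("example.com", 443)) as sock:
    # wrap_socket runs the handshake: agree on protocol version and cipher
    # suite, exchange and verify certificates.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.2'
        print(tls.cipher())   # negotiated cipher suite
        # everything sent over `tls` from here on is encrypted
```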
- Implementations: OpenSSL, BoringSSL, Secure Transport. They all have bugs/flaws.
- API design sucks
- Protocol flaws (downgrade attacks when negotiating connection details)
- Cookie stealing -- cookies always sent; secure flag on cookies?
- Cookie injection
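A sketch of setting the secure flag (plus HttpOnly) on a cookie; Flask is used here only as an example framework, and the cookie name/value are placeholders:

```python
# Sketch: marking a cookie Secure (HTTPS-only) and HttpOnly (no JS access).
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    resp = make_response("ok")
    # secure=True  -> browser only sends the cookie over HTTPS
    # httponly=True -> cookie not readable from JavaScript
    resp.set_cookie("session", "opaque-token", secure=True, httponly=True)
    return resp
```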
- Users will click through certificate warnings
- https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
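In the spirit of that article, a sketch of restricting a server-side `SSLContext` to a stricter cipher list; the cipher string and the cert/key paths are placeholders, so follow a maintained recommendation rather than copying this verbatim:

```python
# Sketch: hardening a server-side SSLContext (cipher string is illustrative only).
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
context.set_ciphers("ECDHE+AESGCM:ECDHE+AES256:!aNULL:!MD5:!RC4")
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder paths
```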
- Tangled Web
requests core-contributor -- lukasa
- Early implementer of HTTP/2
- HTTP/1.1 is old -- first a standard in 1996
- Our internet is different now -- 600 bytes of HTML no longer what we use
- Inefficient use of TCP
- Mostly doesn't reuse connections
- Requires lots of concurrent connections
- Too many roundtrips!
- Not designed to be "real time"
- Lots of hacking/misuse to make it fast
- How'd we get to HTTP/2? Political discussion, not technical.
- Differences?
- HTTP/1.1 -- Text easy to debug, but complex to parse
- Binary protocol: faster, easier to parse
- Just a different wire format
- Multiplexing
- One TCP connection can carry many concurrent requests
- "stream" -- single request/response (part of connection..?)
- Priority/flow control of streams (does some resource depend on another one?)
- Header compression (headers are big, let's fix that. Why not use gzip? Could, but structured data would be better. HPACK designed specifically for HTTP headers -- see the sketch after this list)
- Early stream termination
- Server push (the "headline" feature) -- can send responses to requests in advance. For priming caches.
- Why not use server push? Server sends the "request" that it anticipates responding to
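A minimal HPACK sketch using the standalone `hpack` package from the python-hyper project; the header values are made up and exact API details may differ between versions:

```python
# Sketch: HPACK round-trip with the `hpack` package (API may vary by version).
from hpack import Encoder, Decoder

headers = [
    (":method", "GET"),
    (":path", "/index.html"),
    ("user-agent", "notes-example/0.1"),
    ("accept-encoding", "gzip, deflate"),
]

wire = Encoder().encode(headers)   # compact binary form, much smaller than ASCII
print(len(wire))
print(Decoder().decode(wire))      # back to (name, value) pairs
```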
- Poul-Henning Kamp -- HTTP/2.0 sucks. lol.
- What's wrong with HTTP/2?
- Difficult to reason about; hard to debug.
- MAINTAINS STATE!
- Interrupt testing for debugging
- Awkward/terrible edge cases for backwards compatibility with HTTP/1.1
- Inherently concurrent -- makes it quite a bit trickier (asyncio <--> HTTP/2)
- Lots of implementations of HTTP/2
- Twitter, Chrome, Firefox all using it
- Python -- hyper client library: an httplib equivalent. Lower level.
- Ideally, will put hyper under urllib3 under requests
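A sketch of hyper's multiplexing based on its documented `HTTP20Connection` API; the host is just a public HTTP/2 test endpoint and exact signatures may vary by version:

```python
# Sketch: several requests multiplexed over one TCP connection with hyper.
from hyper import HTTP20Connection

conn = HTTP20Connection("http2bin.org", port=443)

# each request() opens its own stream on the same connection
first = conn.request("GET", "/get")
second = conn.request("GET", "/headers")

# responses can be collected in any order
print(conn.get_response(second).status)
print(conn.get_response(first).status)
```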
- More widely used than IPv6. Poor IPv6.
- Runscope -- (hosted) tests for APIs. Fancy pingdom.
- Traffic inspector (middleware...?)
- 60 services with only 8 engineers
- But...why?
- Scaling infrastructure
- Scaling team (...how do microservices help this?)
- Independent
- Different codebases
- Different deploys
- Different owners
- Microservices = SOA + DevOps
- How should they communicate?
- How do you "find a service?" "Smart client"
- A wrapper around requests
- Accepts "service URLs" (service://identity/...)
- Retries certain requests
- Can run requests asynchronously
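A hypothetical sketch of what such a smart client could look like; the class name, registry lookup, and retry policy are all made up for illustration (Runscope's actual client wasn't shown):

```python
# Hypothetical smart client: resolves service:// URLs and retries via requests.
import requests
from urllib.parse import urlparse


def resolve(service_name):
    """Stand-in for a real registry lookup (e.g. service metadata in Atlas)."""
    registry = {"identity": "identity.internal:8000"}
    return registry[service_name]


class ServiceClient:
    def __init__(self, retries=3):
        self.session = requests.Session()
        self.retries = retries

    def get(self, service_url):
        parsed = urlparse(service_url)          # service://identity/users/42
        url = "http://{}{}".format(resolve(parsed.netloc), parsed.path)
        last_error = None
        for _ in range(self.retries):
            try:
                return self.session.get(url, timeout=2)
            except requests.RequestException as exc:
                last_error = exc
        raise last_error
```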
- "Atlas" -- register service metadata in Zookeeper & Dynamo
- Built on top of Flask
- Where to get API keys etc.? SmartConfig -- gets keys from Atlas
- Common logging config
- Service skeletons with default values/configurations
- Puppet -- "invest in it"
- They also use Go; services can be language agnostic (well, anything that talks HTTP).
- No shared databases between services -- shared databases make services dependent on one another.
- One-click deploys
- 50 deploys/day (seems...excessive) -- to production or in general?
- Some sort of "integration testing", but it happens in production