Notes from NIST Container Security Standards Talk: OSCON 2018
Elsie Philips and Paul Burt, CoreOS/Red Hat
NIST container security standards
=================================
Elsie & Paul
in 2017 NIST issued a report on containers (SP 800-190)
wanted to explain security concerns and make recommendations
63 pages long
yeah, it's long, but it's totally worth reading
goo.gl/
Audience: anyone running container tech
does not cover any other cloud tech
term "container" is a bit fuzzy
just application containers for server apps
defined: virtualization, immutability
containers are stateless: if you make a change, destroy and replace
a lot of legacy security tools aren't designed to deal with statelessness or rapid deployment
container runtime
bridge between image and OS
eliminates need to create complex configurations
went through deployment tiers, from development with CI/CD, testing & accreditation, registry,
orchestration, and then deployment
Now, here are the risks: everything that can go wrong.
Image Risks/Vulns
- if a component has a vuln, then every single image with that component has it and needs to be redeployed
- poor configuration can be vuln even if all components are up to date
- embedded malware in images. example: usage of base layer from unknown party
- embedded clear text secrets. oops, included the password in the image build
- use of untrusted images. portability & reuse is great, but still untrusted parties
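The clear-text-secrets risk above is easiest to see in a build file. A hedged sketch (image, variable, and file names are hypothetical): any value baked in at build time becomes part of an image layer, readable by anyone who can pull the image or run `docker history`.

```dockerfile
# ANTI-PATTERN (hypothetical sketch): the secret is stored in an image layer
FROM alpine:3.18
ENV DB_PASSWORD=hunter2      # visible via image inspection to anyone with pull access
COPY app /app
CMD ["/app"]
```

The fix, per the recommendations below in these notes, is to keep the secret out of the image entirely and inject it at runtime (e.g. an environment variable or a secret store supplied by the orchestrator).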
Registry Risks
- insecure registry connections: a MITM (man-in-the-middle) can read image contents or substitute images
Orchestrator Risks
- unrestricted admin access for some orchestrators, no privilege levels
- Jessie Frazelle's blog post about "hard multitenancy in kubernetes"
- poorly segregated inter-container network traffic
- everything uses the same virtual network
Node vulns
- app vulns: apps in the container can be holes, which can then be used to attack other containers
- rogue containers: when a container runs where it's not supposed to be, like a debug container running in production, and exposed.
- you can forget about them.
- Shared kernel: large attack surface
- Improper user access rights, such as users bypassing the orchestrator via shelling in.
- Host OS file system tampering. E.g., can the container mount important filesystems on the host?
Recommendations
- use vuln mgmt tools designed for containers. Older tools don't work with rapid replacement etc.
- adopt tools & processes that validate compliance with best practices, like using base layers only from verified sources
- example: "distroless" containers project from google
- embedded malware monitoring on all images, e.g. Clair
- store secrets OUTSIDE the container
- set of trusted images for your environment, don't allow random images
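As a sketch of the "distroless" recommendation (a multi-stage build; the Go project layout and image tags here are assumptions, check the distroless project for current tags): build in a full-featured image, then copy only the binary into a minimal base with no shell or package manager, shrinking the attack surface.

```dockerfile
# Build stage: full toolchain, never shipped
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Run stage: distroless base from Google's project; no shell, no package manager
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```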
Registry vuln countermeasures
- only connect to registries over encrypted connections (TLS), e.g. Quay over HTTPS
- purge old images
- mandatory authentication for sensitive images and for writing to the registry
Orchestrator vuln measures
- Restrict admin access
- create a different network for each sensitivity level
- isolate deployment to hosts by sensitivity level
- configure orchestrator to create a secure env for the runtime
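The "different network per sensitivity level" point can be sketched in Kubernetes terms with a NetworkPolicy (namespace and policy names here are hypothetical): deny ingress to a high-sensitivity namespace by default, then allow traffic only from pods in that same namespace.

```yaml
# Hypothetical sketch: restrict ingress within a "high-sensitivity" namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-to-namespace
  namespace: high-sensitivity
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector: {}        # only pods in this same namespace may connect
```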
Container Vulns measures
- Talked about CRI-O as an alternative runtime: smaller, with a smaller vuln surface. IBM also has something
- monitor, automate compliance of container config standards
- containers running with root filesystems in read-only mode are harder to compromise
- unprivileged containers: can use Buildah
- have separate environments for dev, test, and prod. And RBAC profiles for each
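The read-only and unprivileged recommendations above can be sketched as a Kubernetes pod securityContext (pod and image names are hypothetical):

```yaml
# Hedged sketch: read-only root filesystem, non-root user, no privilege escalation
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      allowPrivilegeEscalation: false
```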
Host vulns measures:
- use a container-specific OS (like CoreOS), and keep it updated
- also have orchestration that supports rollout, so that you can constantly update nodes
- audit all host OS authentication and log it
- run containers with minimal FS permissions