Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
The Cloud Native Computing Foundation seeks to drive adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects. We democratize state-of-the-art patterns to make these innovations accessible for everyone.
Free Guide (February 2, 2022): Our 15 Principles for Designing and Deploying Scalable Applications on Kubernetes
Principle 1: A standalone Pod is almost never what you want to run
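A bare Pod is not rescheduled if its node fails, so wrap workloads in a controller such as a Deployment, which replaces failed Pods and maintains a replica count. A minimal sketch (the name `web`, label `app: web`, and image `nginx:1.25` are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # the controller keeps three Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```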
Principle 2: Clearly separate stateful and stateless components
Principle 3: Separate secret from non-secret configuration for clarity and security
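In practice this means keeping plain settings in a ConfigMap and sensitive values in a Secret, so access to the two can be controlled independently. A minimal sketch (the names, keys, and values here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info            # non-secret configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
stringData:
  DB_PASSWORD: change-me     # secret configuration, stored and governed separately
```

A container can then pull both in with `envFrom`, referencing `configMapRef` and `secretRef` entries.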
Principle 4: Enable automatic scaling to ensure capacity management
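A HorizontalPodAutoscaler adjusts replica counts to match load instead of relying on manual capacity planning. A sketch targeting the hypothetical `web` Deployment, scaling on CPU utilization (the thresholds are examples, not recommendations):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70% of requests
```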
Principle 5: Enhance and enable automation by hooking into the container lifecycle management
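Container lifecycle hooks let the platform drive graceful startup and shutdown automatically. One common pattern is a `preStop` hook that delays SIGTERM long enough for the Pod to be removed from Service endpoints and drain in-flight requests. A container-spec fragment (the sleep duration is an assumption; tune it to your traffic):

```yaml
      containers:
        - name: web
          image: nginx:1.25
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 10"]   # give load balancers time to stop sending traffic
```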
Principle 6: Use probes correctly to detect and automatically recover from failures
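Liveness probes restart a container that has hung, while readiness probes keep traffic away from a container that cannot serve yet. A container-spec fragment, assuming the application exposes hypothetical `/healthz` and `/ready` endpoints on port 8080:

```yaml
          livenessProbe:
            httpGet:
              path: /healthz       # assumed health endpoint
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready         # assumed readiness endpoint
              port: 8080
            periodSeconds: 5
```

Using distinct endpoints matters: readiness should fail while dependencies warm up, but liveness should only fail when a restart would actually help.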
Principle 7: Let components fail hard, fast, and loudly
Principle 8: Prepare your application for observability
Principle 9: Set Pod resource requests and limits appropriately
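Requests drive scheduling decisions and limits cap runtime consumption; setting both keeps Pods from starving neighbors or being evicted unexpectedly. A container-spec fragment (the figures are placeholders to be replaced with measured values):

```yaml
          resources:
            requests:
              cpu: 250m        # guaranteed for scheduling
              memory: 256Mi
            limits:
              cpu: 500m        # hard ceiling at runtime
              memory: 512Mi    # exceeding this gets the container OOM-killed
```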
Principle 10: Reserve capacity and prioritize Pods
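A PriorityClass tells the scheduler which Pods to keep, and if necessary which lower-priority Pods to preempt, when capacity runs short. A minimal sketch (the name and value are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-service
value: 100000                  # higher values preempt lower ones under pressure
globalDefault: false
description: "For workloads that must keep running during capacity shortages"
```

Pods opt in by setting `priorityClassName: critical-service` in their spec.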
Principle 11: Co-locate Pods or spread them across nodes as needed
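Pod anti-affinity spreads replicas so one node failure cannot take them all down; pod affinity does the opposite for chatty components. A Pod-template fragment spreading the hypothetical `app: web` replicas across nodes:

```yaml
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web
              topologyKey: kubernetes.io/hostname   # no two replicas on the same node
```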
Principle 12: Ensure Pod availability during planned operations that can cause downtime
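A PodDisruptionBudget limits how many replicas voluntary operations, such as node drains during upgrades, may evict at once. A minimal sketch for the hypothetical `app: web` workload:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web
spec:
  minAvailable: 2          # drains pause rather than drop below two ready replicas
  selector:
    matchLabels:
      app: web
```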
Principle 13: Choose blue/green or canary deployments over stop-the-world deployments
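At minimum, configure the Deployment's rolling update so capacity is never removed before its replacement is ready, avoiding the stop-the-world behavior of the `Recreate` strategy. Full blue/green or canary flows typically add Service-level traffic switching or external tooling on top. A Deployment-spec fragment:

```yaml
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0    # never reduce serving capacity during a rollout
      maxSurge: 1          # bring up one extra Pod at a time
```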
Principle 14: Avoid giving Pods permissions they do not need
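Most workloads never call the Kubernetes API and never need root, so those defaults can be switched off per Pod. A Pod-template fragment sketching a least-privilege baseline (the image is a placeholder and must itself support running as a non-root user):

```yaml
    spec:
      automountServiceAccountToken: false   # no API credentials if the app never uses them
      securityContext:
        runAsNonRoot: true
      containers:
        - name: web
          image: nginx:1.25
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]                 # shed all Linux capabilities
```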
Principle 15: Limit what Pods can do within your cluster
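For Pods that do talk to the API server, bind their ServiceAccount to a narrowly scoped Role instead of a broad cluster role. A sketch granting read-only access to ConfigMaps in one namespace (the names and namespace are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-configmaps
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]        # no write or delete verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: web-read-configmaps
  namespace: default
subjects:
  - kind: ServiceAccount
    name: web
    namespace: default
roleRef:
  kind: Role
  name: read-configmaps
  apiGroup: rbac.authorization.k8s.io
```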