cilium
---
# Source: cilium/templates/cilium-agent/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: "cilium"
namespace: kube-system
---
# Source: cilium/templates/cilium-operator/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: "cilium-operator"
namespace: kube-system
---
# Source: cilium/templates/cilium-ca-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: cilium-ca
  namespace: kube-system
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGRENDQWZ5Z0F3SUJBZ0lSQUpvcis1T2ovTGVYSVh2dE5ocGI2WlF3RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXhNSlEybHNhWFZ0SUVOQk1CNFhEVEkwTURJeU1URTJNVFF6TlZvWERUSTNNREl5TURFMgpNVFF6TlZvd0ZERVNNQkFHQTFVRUF4TUpRMmxzYVhWdElFTkJNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DCkFROEFNSUlCQ2dLQ0FRRUFzaEFVUVV5UERJUHl3dEx0SysySThKZGQvdFlkbm1nd295dTNxbWtTOUNNaVRQQ0wKa2VTazFtdGZCUEpDclZhck5IdlZVUFJYcDBaQk81Mm85S1B0QTRUc2Q0UFFMRWJUV0ZXclgweU5aRTlMM0xGMAowL21ub2JNMHdBT0d0QkpTeFNGODQ5MUNMUWhBcnRpclFhbG1IeThKY1FYQ0ZxcWtveEVvaFllTU5ReDdVQkpUCmJzS0RYUDZ0eWV4bDBucDdBSTNuWDFVQTlaM3dGcmhaaVJlR0VacUlUQ3Q0WGpTdzRtdlJlRHZYaHVidGlGUGcKcGdLUFRVaVZjYloyTUZoYlVjSVVINjJmdGc0ajNteXczenEzSFZzeFNDdTJVcm5sUnpXVVh6bmJ5QU5KbDhYNQpvdlB4VHhtc3ZzNDZwcXl0dUVjZ241Z3VJTVBnL1FEUDNLa1RHUUlEQVFBQm8yRXdYekFPQmdOVkhROEJBZjhFCkJBTUNBcVF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUYKTUFNQkFmOHdIUVlEVlIwT0JCWUVGQ1YxNXJ1OG92S2tYTXlEUGhLaXp3T01ZMkdVTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQ2NCbzUzckpJSmxUV3lwUFF5bWVVU05LZFRrMmsrSi8yckhKY3dreHlpbEZzMm5yVXppdnd0CjBaOWJZcW5MaUVvYTQrTUphNWUybHlzMDNONWtVaFVaaWVFV09HZzRMOExiUy93S0RmTW5RTnJnVUM0aDlqbnYKQURZVE5uL0dqbnV5YVdBZ2ZMVlRVcld3RjJnUG9vekt5RVJzZlZsQVQrTmRWd2k2bXdXMWZYU1B6a2lETVpoQgo3T0EyeVNnV2VaQVhCakJmK2tVZnlpbEtvK2F2N3VsTjhCOHZNQVdTenVESHVuUW14MS9iK1VHSHMvYzA0OUtFCjJlcEY2UzdRa2FVZlpYOXZGT1pYeEhWZ3p5OStOb3J4b29vdGc3VVVsZmZ4SmpsTm5ac2JEWEF3WnlIbzFqTWsKREJoRU1BdzRTWHo3UU9Za01QSHFsZVJDNzB0alJVS3EKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  ca.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc2hBVVFVeVBESVB5d3RMdEsrMkk4SmRkL3RZZG5tZ3dveXUzcW1rUzlDTWlUUENMCmtlU2sxbXRmQlBKQ3JWYXJOSHZWVVBSWHAwWkJPNTJvOUtQdEE0VHNkNFBRTEViVFdGV3JYMHlOWkU5TDNMRjAKMC9tbm9iTTB3QU9HdEJKU3hTRjg0OTFDTFFoQXJ0aXJRYWxtSHk4SmNRWENGcXFrb3hFb2hZZU1OUXg3VUJKVApic0tEWFA2dHlleGwwbnA3QUkzblgxVUE5WjN3RnJoWmlSZUdFWnFJVEN0NFhqU3c0bXZSZUR2WGh1YnRpRlBnCnBnS1BUVWlWY2JaMk1GaGJVY0lVSDYyZnRnNGozbXl3M3pxM0hWc3hTQ3UyVXJubFJ6V1VYem5ieUFOSmw4WDUKb3ZQeFR4bXN2czQ2cHF5dHVFY2duNWd1SU1QZy9RRFAzS2tUR1FJREFRQUJBb0lCQVFDbERjeUl2dnUyb1RUTgpMUkhWNzBoSnBEWG4rL2ZXbDBQR2JNYkNPc1hyOGdsZ2duVU5sb0RKbFJ1dURSYUxjTlFnUVUxNXpoVFdKSVJoClM0S0t5c3p2dnk0bWx0UEh4eHN2UGJJdUUxclpDYndMWlo4aXdyK0ZYd1ZkbTZjb2tmZVJiYnBEeWh3R2ZDamgKS2t5TkFBWitqMjVVQ3Y5ZlhXeGhENDJkUVFsUlBhWGFZYXVBMkxPVEphc2NwY3h6ZVVXV1Z1ZDVlc3lENkpiMwpTeTF6aXNuMWNLeFh5akc4SVlMYmtaYXJpeFRnaFROSVh0blVqQXZPZUYyQkVocVg0WStNbVloMjhML1ZKM0tSCmtEbHVQQzlNT2ZzclhaNzFJWHJCUDJRY2RJMGJrd3k1dGFSa0tsNklUMzFnQjhIZW5mSUREK3IxdzNhL0VCZXYKRGZnZ1g1OFZBb0dCQU5aUDlBWVNPc0YvUGdNMlMwYytqOVYxOGs1LzBkYmt0cWtUWXJhUlQ0QzIzUFN2dGNvbQphZzdzazZDbHUrSG50Q2dSY1RZSWtXQkdZeU9iUlo5WWs2V0JkT0k4QVRWSzNmRDloWjIwZzhFNUMyMWh6WHhhCmFwUEFNMWVORUdiSHMrbTJBK1NoZlJQNFRxdTk5V3cxanNaYzVtMHJBeml0TGNLM3ErTGlnOTFUQW9HQkFOU3oKQlJrdldzM2JHZGJHMFdVek9lTTVhdW16amVrQ3RjUlV6ZXBYdHFMRCtjSXUycG5USVJSMyt6TTdibGExQTZJNQpXTStnSERKK01QK3FEUy9BVExWeldnV3ZvaGFHaklycUdDa0lIRVhSZGY1WDNjTitJQ0ZBa3dYVkVqRnJkdXdYCjF3QU85WEI5dmxNNyszV0U5aXZCWGVMTEZRV2ZjNFNGS3ZVRUlCUmpBb0dBUExUQkpzY2JKWnhwY0hkOHMxMmgKV0pIa1pTQUh6SnRVc21mdldrK20rWXJTNCt5eHplVTd2YVo5MnMrWGZOSXBVZ0ErMVZOditwbDFrNnh6K0VNYQo3NUxRRFJWNk1pSlc0K0NzYkpPcGpwNGVBb25scndmZGtLU3M1bXZxN1hJOElFT1NycnlmdFh4c3JIRk9oNnhVCkdSUlBvVFRCNE5nTlVrNjh2YlAwTGtrQ2dZRUFzUmNBQnJFRHRHTll4eGF2M3NkZ3lndlROUkwyODJyN05hUzUKOFFQb251bjJON1BVODcveVNkMS9lMjllOWJndWQxR3gzT1JjdGJtVlNEZ29WSHFTSTMwUUZhM2VrVXlqRlVIRQpyZHovMVMySlJTT1pFeHdlMmpDdWVHdW5neGdMWXBTU3dJeXowMTRPS2JUR0wxbHRzSTZGZ2I4K0dIbGlyNUpFCmFzMXRmQ3NDZ1lFQXVQOE9WYTM5TWh2Q2dOa015MTJVcjhUZUxIdmZLK0V1TDZOK3BvMDNwZkVXTERtYStBTnEKbS8xejk2MFppdjFMY3g2UnBUeEJDQzRxOEVpSEVGQmc1b1drWTVYZVJ6UDJZdWprNTQ5eWtEanJROERZSGdJTgpJZ3U4QWJ3N0J3VzRteTJwTG1KSEoyTDhiWjJtRVNFQ3dMOHpvWWJxWGVqK2FqRFlVT0FNYW5ZPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
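# The CA above signs the Hubble server certificate below; it can be inspected
# from a live cluster, e.g. (illustrative):
#   kubectl -n kube-system get secret cilium-ca -o jsonpath='{.data.ca\.crt}' \
#     | base64 -d | openssl x509 -noout -subject -dates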
---
# Source: cilium/templates/hubble/tls-helm/server-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: hubble-server-certs
  namespace: kube-system
type: kubernetes.io/tls
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGRENDQWZ5Z0F3SUJBZ0lSQUpvcis1T2ovTGVYSVh2dE5ocGI2WlF3RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXhNSlEybHNhWFZ0SUVOQk1CNFhEVEkwTURJeU1URTJNVFF6TlZvWERUSTNNREl5TURFMgpNVFF6TlZvd0ZERVNNQkFHQTFVRUF4TUpRMmxzYVhWdElFTkJNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DCkFROEFNSUlCQ2dLQ0FRRUFzaEFVUVV5UERJUHl3dEx0SysySThKZGQvdFlkbm1nd295dTNxbWtTOUNNaVRQQ0wKa2VTazFtdGZCUEpDclZhck5IdlZVUFJYcDBaQk81Mm85S1B0QTRUc2Q0UFFMRWJUV0ZXclgweU5aRTlMM0xGMAowL21ub2JNMHdBT0d0QkpTeFNGODQ5MUNMUWhBcnRpclFhbG1IeThKY1FYQ0ZxcWtveEVvaFllTU5ReDdVQkpUCmJzS0RYUDZ0eWV4bDBucDdBSTNuWDFVQTlaM3dGcmhaaVJlR0VacUlUQ3Q0WGpTdzRtdlJlRHZYaHVidGlGUGcKcGdLUFRVaVZjYloyTUZoYlVjSVVINjJmdGc0ajNteXczenEzSFZzeFNDdTJVcm5sUnpXVVh6bmJ5QU5KbDhYNQpvdlB4VHhtc3ZzNDZwcXl0dUVjZ241Z3VJTVBnL1FEUDNLa1RHUUlEQVFBQm8yRXdYekFPQmdOVkhROEJBZjhFCkJBTUNBcVF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUYKTUFNQkFmOHdIUVlEVlIwT0JCWUVGQ1YxNXJ1OG92S2tYTXlEUGhLaXp3T01ZMkdVTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQ2NCbzUzckpJSmxUV3lwUFF5bWVVU05LZFRrMmsrSi8yckhKY3dreHlpbEZzMm5yVXppdnd0CjBaOWJZcW5MaUVvYTQrTUphNWUybHlzMDNONWtVaFVaaWVFV09HZzRMOExiUy93S0RmTW5RTnJnVUM0aDlqbnYKQURZVE5uL0dqbnV5YVdBZ2ZMVlRVcld3RjJnUG9vekt5RVJzZlZsQVQrTmRWd2k2bXdXMWZYU1B6a2lETVpoQgo3T0EyeVNnV2VaQVhCakJmK2tVZnlpbEtvK2F2N3VsTjhCOHZNQVdTenVESHVuUW14MS9iK1VHSHMvYzA0OUtFCjJlcEY2UzdRa2FVZlpYOXZGT1pYeEhWZ3p5OStOb3J4b29vdGc3VVVsZmZ4SmpsTm5ac2JEWEF3WnlIbzFqTWsKREJoRU1BdzRTWHo3UU9Za01QSHFsZVJDNzB0alJVS3EKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURWekNDQWorZ0F3SUJBZ0lSQUlRbHRxdFpwTVlvUmExdjFtUVRsck13RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXhNSlEybHNhWFZ0SUVOQk1CNFhEVEkwTURJeU1URTJNVFF6TlZvWERUSTNNREl5TURFMgpNVFF6TlZvd0tqRW9NQ1lHQTFVRUF3d2ZLaTVrWldaaGRXeDBMbWgxWW1Kc1pTMW5jbkJqTG1OcGJHbDFiUzVwCmJ6Q0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5WQkFscHAyU2I0V0pJK3V1eEkKNmVxTlNJWmZiaFVKclZNTGR4SURCU0FCb0p3YmFqSGZRSG9IOG5TdnB4dXdjcStPYmx3cDB4WG5SQnpKK3pZagpwNzFRZlFidDF2L2hJV01rQ3R3eHdlZVhUQjNwQ1BvMEpUakJSa3Bxaytad1NwVklsQkFhT3ZER01jKzhXOEh6CkR1UXRTclRadmtMTVMrR3BhQzZzU2RiZDFXby9BNDZqTFhvSnFtM2YxTG4vTEdwQUZ5MElDdWhhNWp3SWhSb28KaGFTeXJJUnN6NkZKZStPdVk2dGphQXZhYUdsUnlDVE5lUGdYWWl1K1B3aFpDVXN1Q3IyMk9jNW5JL21aU1oreQpERVE1NnkybXJIUHdjLzlPTjd3WVdjeDJxbXNadCtQc1owL1ZzMkZhSXZWNnBzNEF1Tlpma1N3UkpnMitYa01hCnN0TUNBd0VBQWFPQmpUQ0JpakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdIUVlEVlIwbEJCWXdGQVlJS3dZQkJRVUgKQXdFR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3SHdZRFZSMGpCQmd3Rm9BVUpYWG11N3lpOHFSYwp6SU0rRXFMUEE0eGpZWlF3S2dZRFZSMFJCQ013SVlJZktpNWtaV1poZFd4MExtaDFZbUpzWlMxbmNuQmpMbU5wCmJHbDFiUzVwYnpBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQVZNQjRFWGVQM1RzSmNGUXVUallwZDFWdGdkSkoKT21wSTBrM09pZkhlMEppS1JqbDRnbUdiYURwcmlIcWNiZnN2ZVlzZ1ErOElUU2p0TXh1YkNTUTFlMUNGMUtMSAo3TWhjNFJDTUtjU0wzc1lCRHFqeGJJNGZqaWd0SEdSbHY5YjJwRlJLTXBnc0VtNUNmZmI4UFdHUGZNdmVTRXh4Cm5QTXNXY2h3Zi9xZ0M1dkNSbXdqVm9MekJiK2FnYm50TVR6OW9DOTRHTWpnaHpYb3huQll4SDI1cmZmNk56T1AKcXc1UUN0eEg1dlZqYkhHZnIzWnRxOGpja2VjcHMwbmpLYW5rU2ozZUFlc1B5UUVBVHJRblc4QnVaUThLOFJGYwpWNXprd24yWnFYdHY0U0hoaU96ZWNzRitJQUp1NURwa1d4V2lsVjRLZ3dXRnl1bUluRTBQdysyTnR3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMVVFQ1dtblpKdmhZa2o2NjdFanA2bzFJaGw5dUZRbXRVd3QzRWdNRklBR2duQnRxCk1kOUFlZ2Z5ZEsrbkc3QnlyNDV1WENuVEZlZEVITW43TmlPbnZWQjlCdTNXLytFaFl5UUszREhCNTVkTUhla0kKK2pRbE9NRkdTbXFUNW5CS2xVaVVFQm82OE1ZeHo3eGJ3Zk1PNUMxS3RObStRc3hMNGFsb0xxeEoxdDNWYWo4RApqcU10ZWdtcWJkL1V1ZjhzYWtBWExRZ0s2RnJtUEFpRkdpaUZwTEtzaEd6UG9VbDc0NjVqcTJOb0M5cG9hVkhJCkpNMTQrQmRpSzc0L0NGa0pTeTRLdmJZNXptY2orWmxKbjdJTVJEbnJMYWFzYy9Cei8wNDN2QmhaekhhcWF4bTMKNCt4blQ5V3pZVm9pOVhxbXpnQzQxbCtSTEJFbURiNWVReHF5MHdJREFRQUJBb0lCQUdCcUdlUkN3dGpwb3pITApocnRaWTlpVnM5cDh2c3BvSzZMR0pqbFFnRHF1UWEwU2YvcTRVdkJaTTNjcUMwVnJpdzV3T05rV1Y4Y3BYaFFlCkJhTytqeEg2bCt4UUQ4cDBRS0lRSTVEV05qSzhwcjlISXJYc2FYKzFjbEFteTJOK0ZWcFZEQXdUcjk0MzNVRnMKaVpld2ltVURUU2xpNExCV3FXQUhOWUVVaC9YS1ZndWV4OWw4T1JicFlRQ0VDd3ZwdThuU2FvYjg4V1ljMk5udwpUZW5VY3hWNUo3bTZIbUdKdHVuVVJ5THp0UEdBbEZVNEwyUnplaWNzS2FYYnRMWTVycUg0NW1pa2JwMXZQdktXClUvbVFuU0lVWkwwZ3VHTmNNMjczR1lZcWoyNUFoeUp4eUhjOEFyQUJHeU5vd2U1QWhqNEJlejZidEEzKzR0NWIKcTlYR0R3RUNnWUVBMjdQa0twaXpKKy9ZaEJiNmI2a1dmYnh2ZTI2UGtyMDdTOFpxdk1HWWdWeDR6K3BFakVqLwpYYmtQT0R0RE9xQ3BhK3lZK0R0aStsU2VaejNpMHVNcnV3QW9PbmhrdmR4UXc3SWM2UGNaK1pKb1o3WDR3RjFOCkg5NEEyMFh2SFJJSVpsb3lJRjBPL3dnSy9LQVEwcXlST0Z3YkJCZS9ZNWtxZWsvaHEzendJa0VDZ1lFQStIeGYKd25GZHovcmVrZ2NvZW91U1JNclRiQVc4cTF2OUVlelIwRFVEMFdUU2JvNlFXd1ZxTTliYk9kSkk3STZuVWRWeQorckRuVVpFNDd5bFNpMXBsTXRJNVNXZndrWG8zLzg3Q0hTb25HK3FBRExDOTdXK0F1a2N4TU43NndoRys3dWNoCnFWRHlwUlora2J5K0lYS241dmVCeUpTL280YzZwS1ZzZjhCdUtCTUNnWUFwVGVPcWduVEVJRnBqVXZLWVJZQzkKK013NHQydDBtZkRvNlErdUZ2TjE5bzJjQVI0TUJibEV4SUx3L210QVBXNDhwUW1KT1pqOUdTV0NvV2JnWU9jYQp6QWZFSGxoS1BYNU5uRkhGRnBlaWpQemw2cGN1aXh2eHpzbjRiMmhwM2JjSWp4SjNkU2RabVFoL3dCUUpsM25oCno2Y2dtTnBaZmpVM000ZG90eDlxUVFLQmdRQ2RnQmdpWTBFWFJ1ZzBueHpsTC9weWFDMUNWeENUZlNjWGFZaEQKOUphSzd1RUMrcEk5WDExRnBuWW1YRWVreVhiOHc5S3hXOWdETjQxaTZrcEwwZXc3SGt6NVhreDVxWUk5UG95RApkK2g2SlZVc3RncHNxVFJxM2gwcjRPb0lnTDhKSnErTFpxZW1SRy9OYUZrTFVtVmlYSmVDeitYNGZRcUt1ZC9mCnlkVUl5UUtCZ0VjWUxhdGhVLzhUT3phZnM1NjhDUURRZUVqS1QzbzZ2cTlKZjY4U3Q4Zk0zMnZjZGhMeW5yMi8KOC9UUC9sS2RodFE1QnZteGEvYU12Yi81WS9rb0pvREZ2UDlsYmhGcFNpQ3c1MW92b1FPZnY5b1pYY2NZMGtOSgpWaHF6MnlPT3d1d3JnNHdFclVRek9pY21MUWpiaENmemcwU0VnYkRXcDc5MmdibDhXbzA0Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
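# The server certificate appears to be issued for
# *.default.hubble-grpc.cilium.io (the cluster name is embedded in the SAN);
# this can be verified, e.g. (illustrative):
#   kubectl -n kube-system get secret hubble-server-certs -o jsonpath='{.data.tls\.crt}' \
#     | base64 -d | openssl x509 -noout -subject -ext subjectAltName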
---
# Source: cilium/templates/cilium-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  # Identity allocation mode selects how identities are shared between cilium
  # nodes by setting how they are stored. The options are "crd" or "kvstore".
  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
  #   These can be queried with:
  #     kubectl get ciliumid
  # - "kvstore" stores identities in an etcd kvstore, that is
  #   configured below. Cilium versions before 1.6 supported only the kvstore
  #   backend. Upgrades from these older cilium versions should continue using
  #   the kvstore by commenting out the identity-allocation-mode below, or
  #   setting it to "kvstore".
  identity-allocation-mode: crd
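  # A hedged sketch of the alternative backend (not part of this rendering;
  # the values are illustrative): switching to the kvstore backend would look
  # roughly like this.
  #   identity-allocation-mode: kvstore
  #   kvstore: etcd
  #   kvstore-opt: '{"etcd.config": "/var/lib/etcd-config/etcd.config"}'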
  identity-heartbeat-timeout: "30m0s"
  identity-gc-interval: "15m0s"
  cilium-endpoint-gc-interval: "5m0s"
  nodes-gc-interval: "5m0s"
  skip-cnp-status-startup-clean: "false"
  # If you want to run cilium in debug mode change this value to true
  debug: "false"
  debug-verbose: ""
  # The agent can be put into one of the following three policy enforcement
  # modes: default, always and never.
  # https://docs.cilium.io/en/latest/security/policy/intro/#policy-enforcement-modes
  enable-policy: "default"
policy-cidr-match-mode: ""
# Port to expose Envoy metrics (e.g. "9964"). Envoy metrics listener will be disabled if this
# field is not set.
proxy-prometheus-port: "9964"
# If you want metrics enabled in cilium-operator, set the port for
# which the Cilium Operator will have their metrics exposed.
# NOTE that this will open the port on the nodes where Cilium operator pod
# is scheduled.
operator-prometheus-serve-addr: ":9963"
enable-metrics: "true"
# Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
# address.
enable-ipv4: "true"
# Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
# address.
enable-ipv6: "false"
# Users who wish to specify their own custom CNI configuration file must set
# custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
custom-cni-conf: "false"
enable-bpf-clock-probe: "false"
# If you want cilium monitor to aggregate tracing for packets, set this level
# to "low", "medium", or "maximum". The higher the level, the less packets
# that will be seen in monitor output.
monitor-aggregation: medium
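  # To see what the chosen aggregation level lets through, the monitor can be
  # run from inside any agent pod, e.g. (illustrative):
  #   kubectl -n kube-system exec ds/cilium -c cilium-agent -- \
  #     cilium-dbg monitor --type trace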
  # The monitor aggregation interval governs the typical time between monitor
  # notification events for each allowed connection.
  #
  # Only effective when monitor aggregation is set to "medium" or higher.
  monitor-aggregation-interval: "5s"
  # The monitor aggregation flags determine which TCP flags, upon the first
  # observation, cause monitor notifications to be generated.
  #
  # Only effective when monitor aggregation is set to "medium" or higher.
  monitor-aggregation-flags: all
  # Specifies the ratio (0.0, 1.0] of total system memory to use for dynamic
  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
  bpf-map-dynamic-size-ratio: "0.0025"
  # bpf-policy-map-max specifies the maximum number of entries in endpoint
  # policy map (per endpoint)
  bpf-policy-map-max: "16384"
  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
  # backend and affinity maps.
  bpf-lb-map-max: "65536"
  bpf-lb-external-clusterip: "false"
  # Pre-allocation of map entries allows per-packet latency to be reduced, at
  # the expense of up-front memory allocation for the entries in the maps. The
  # default value below will minimize memory usage in the default installation;
  # users who are sensitive to latency may consider setting this to "true".
  #
  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
  # this option and behave as though it is set to "true".
  #
  # If this value is modified, then during the next Cilium startup the restore
  # of existing endpoints and tracking of ongoing connections may be disrupted.
  # As a result, reply packets may be dropped and the load-balancing decisions
  # for established connections may change.
  #
  # If this option is set to "false" during an upgrade from 1.3 or earlier to
  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
  preallocate-bpf-maps: "false"
  # Regular expression matching compatible Istio sidecar istio-proxy
  # container image names
  sidecar-istio-proxy-image: "cilium/istio_proxy"
  # Name of the cluster. Only relevant when building a mesh of clusters.
  cluster-name: default
  # Unique ID of the cluster. Must be unique across all connected clusters and
  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
  cluster-id: "0"
  # Encapsulation mode for communication between nodes
  # Possible values:
  #   - disabled
  #   - vxlan (default)
  #   - geneve
  # Default case
  routing-mode: "tunnel"
  tunnel-protocol: "vxlan"
  service-no-backend-response: "reject"
  # Enables L7 proxy for L7 policy enforcement and visibility
  enable-l7-proxy: "true"
  enable-ipv4-masquerade: "true"
  enable-ipv4-big-tcp: "false"
  enable-ipv6-big-tcp: "false"
  enable-ipv6-masquerade: "true"
  enable-masquerade-to-route-source: "false"
  enable-xt-socket-fallback: "true"
  install-no-conntrack-iptables-rules: "false"
  auto-direct-node-routes: "false"
  enable-local-redirect-policy: "false"
  kube-proxy-replacement: "true"
  kube-proxy-replacement-healthz-bind-address: ""
  bpf-lb-sock: "false"
  enable-health-check-nodeport: "true"
  enable-health-check-loadbalancer-ip: "false"
  node-port-bind-protection: "true"
  enable-auto-protect-node-port-range: "true"
  bpf-lb-acceleration: "disabled"
  enable-svc-source-range-check: "true"
  enable-l2-neigh-discovery: "true"
  arping-refresh-period: "30s"
  enable-k8s-networkpolicy: "true"
  # Tell the agent to generate and write a CNI configuration file
  write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
  cni-exclusive: "true"
  cni-log-file: "/var/run/cilium/cilium-cni.log"
  enable-endpoint-health-checking: "true"
  enable-health-checking: "true"
  enable-well-known-identities: "false"
  enable-remote-node-identity: "true"
  synchronize-k8s-nodes: "true"
  operator-api-serve-addr: "127.0.0.1:9234"
  # Enable Hubble gRPC service.
  enable-hubble: "true"
  # UNIX domain socket for Hubble server to listen to.
  hubble-socket-path: "/var/run/cilium/hubble.sock"
  hubble-export-file-max-size-mb: "10"
  hubble-export-file-max-backups: "5"
  # An additional address for Hubble server to listen to (e.g. ":4244").
  hubble-listen-address: ":4244"
  hubble-disable-tls: "false"
  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
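  # With Hubble enabled, flows can be inspected directly from an agent pod even
  # without deploying hubble-relay or hubble-ui, e.g. (illustrative):
  #   kubectl -n kube-system exec ds/cilium -c cilium-agent -- \
  #     hubble observe --last 20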
ipam: "kubernetes"
ipam-cilium-node-update-rate: "15s"
egress-gateway-reconciliation-trigger-interval: "1s"
enable-vtep: "false"
vtep-endpoint: ""
vtep-cidr: ""
vtep-mask: ""
vtep-mac: ""
enable-bgp-control-plane: "false"
procfs: "/host/proc"
bpf-root: "/sys/fs/bpf"
cgroup-root: "/sys/fs/cgroup"
enable-k8s-terminating-endpoint: "true"
enable-sctp: "false"
k8s-client-qps: "10"
k8s-client-burst: "20"
remove-cilium-node-taints: "true"
set-cilium-node-taints: "true"
set-cilium-is-up-condition: "true"
unmanaged-pod-watcher-interval: "15"
# default DNS proxy to transparent mode in non-chaining modes
dnsproxy-enable-transparent-mode: "true"
tofqdns-dns-reject-response-code: "refused"
tofqdns-enable-dns-compression: "true"
tofqdns-endpoint-max-ip-per-hostname: "50"
tofqdns-idle-connection-grace-period: "0s"
tofqdns-max-deferred-connection-deletes: "10000"
tofqdns-proxy-response-max-delay: "100ms"
agent-not-ready-taint-key: "node.cilium.io/agent-not-ready"
mesh-auth-enabled: "true"
mesh-auth-queue-size: "1024"
mesh-auth-rotated-identities-queue-size: "1024"
mesh-auth-gc-interval: "5m0s"
proxy-connect-timeout: "2"
proxy-max-requests-per-connection: "0"
proxy-max-connection-duration-seconds: "0"
external-envoy-proxy: "false"
max-connected-clusters: "255"
# Extra config allows adding arbitrary properties to the cilium config.
# By putting it at the end of the ConfigMap, it's also possible to override existing properties.
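  # A hedged illustration (assumes the chart's `extraConfig` value; the entry
  # below is hypothetical for this rendering): user-supplied properties would
  # be rendered here verbatim and, coming last, would take precedence, e.g.
  #   enable-ipv6: "true"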
---
# Source: cilium/templates/cilium-agent/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cilium
  labels:
    app.kubernetes.io/part-of: cilium
rules:
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  - services
  - pods
  - endpoints
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - list
  - watch
  # This is used when validating policies in preflight. This will need to stay
  # until we figure out how to avoid "get" inside the preflight, and then it
  # should ideally be removed.
  - get
- apiGroups:
  - cilium.io
  resources:
  - ciliumloadbalancerippools
  - ciliumbgppeeringpolicies
  - ciliumbgpnodeconfigs
  - ciliumbgpadvertisements
  - ciliumbgppeerconfigs
  - ciliumclusterwideenvoyconfigs
  - ciliumclusterwidenetworkpolicies
  - ciliumegressgatewaypolicies
  - ciliumendpoints
  - ciliumendpointslices
  - ciliumenvoyconfigs
  - ciliumidentities
  - ciliumlocalredirectpolicies
  - ciliumnetworkpolicies
  - ciliumnodes
  - ciliumnodeconfigs
  - ciliumcidrgroups
  - ciliuml2announcementpolicies
  - ciliumpodippools
  verbs:
  - list
  - watch
- apiGroups:
  - cilium.io
  resources:
  - ciliumidentities
  - ciliumendpoints
  - ciliumnodes
  verbs:
  - create
- apiGroups:
  - cilium.io
  # To synchronize garbage collection of such resources
  resources:
  - ciliumidentities
  verbs:
  - update
- apiGroups:
  - cilium.io
  resources:
  - ciliumendpoints
  verbs:
  - delete
  - get
- apiGroups:
  - cilium.io
  resources:
  - ciliumnodes
  - ciliumnodes/status
  verbs:
  - get
  - update
- apiGroups:
  - cilium.io
  resources:
  - ciliumnetworkpolicies/status
  - ciliumclusterwidenetworkpolicies/status
  - ciliumendpoints/status
  - ciliumendpoints
  - ciliuml2announcementpolicies/status
  - ciliumbgpnodeconfigs/status
  verbs:
  - patch
---
# Source: cilium/templates/cilium-operator/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cilium-operator
  labels:
    app.kubernetes.io/part-of: cilium
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  # to automatically delete [core|kube]dns pods so that they start being
  # managed by Cilium
  - delete
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  # To remove node taints
  - nodes
  # To set NetworkUnavailable to false on startup
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  # to perform LB IP allocation for BGP
  - services/status
  verbs:
  - update
  - patch
- apiGroups:
  - ""
  resources:
  # to check apiserver connectivity
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  # to perform the translation of a CNP that contains `ToGroup` to its endpoints
  - services
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - cilium.io
  resources:
  - ciliumnetworkpolicies
  - ciliumclusterwidenetworkpolicies
  verbs:
  # Create auto-generated CNPs and CCNPs from Policies that have 'toGroups'
  - create
  - update
  - deletecollection
  # To update the status of the CNPs and CCNPs
  - patch
  - get
  - list
  - watch
- apiGroups:
  - cilium.io
  resources:
  - ciliumnetworkpolicies/status
  - ciliumclusterwidenetworkpolicies/status
  verbs:
  # Update the auto-generated CNPs and CCNPs status.
  - patch
  - update
- apiGroups:
  - cilium.io
  resources:
  - ciliumendpoints
  - ciliumidentities
  verbs:
  # To perform garbage collection of such resources
  - delete
  - list
  - watch
- apiGroups:
  - cilium.io
  resources:
  - ciliumidentities
  verbs:
  # To synchronize garbage collection of such resources
  - update
- apiGroups:
  - cilium.io
  resources:
  - ciliumnodes
  verbs:
  - create
  - update
  - get
  - list
  - watch
  # To perform CiliumNode garbage collection
  - delete
- apiGroups:
  - cilium.io
  resources:
  - ciliumnodes/status
  verbs:
  - update
- apiGroups:
  - cilium.io
  resources:
  - ciliumendpointslices
  - ciliumenvoyconfigs
  - ciliumbgppeerconfigs
  - ciliumbgpadvertisements
  - ciliumbgpnodeconfigs
  verbs:
  - create
  - update
  - get
  - list
  - watch
  - delete
  - patch
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - create
  - get
  - list
  - watch
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - update
  resourceNames:
  - ciliumloadbalancerippools.cilium.io
  - ciliumbgppeeringpolicies.cilium.io
  - ciliumbgpclusterconfigs.cilium.io
  - ciliumbgppeerconfigs.cilium.io
  - ciliumbgpadvertisements.cilium.io
  - ciliumbgpnodeconfigs.cilium.io
  - ciliumbgpnodeconfigoverrides.cilium.io
  - ciliumclusterwideenvoyconfigs.cilium.io
  - ciliumclusterwidenetworkpolicies.cilium.io
  - ciliumegressgatewaypolicies.cilium.io
  - ciliumendpoints.cilium.io
  - ciliumendpointslices.cilium.io
  - ciliumenvoyconfigs.cilium.io
  - ciliumexternalworkloads.cilium.io
  - ciliumidentities.cilium.io
  - ciliumlocalredirectpolicies.cilium.io
  - ciliumnetworkpolicies.cilium.io
  - ciliumnodes.cilium.io
  - ciliumnodeconfigs.cilium.io
  - ciliumcidrgroups.cilium.io
  - ciliuml2announcementpolicies.cilium.io
  - ciliumpodippools.cilium.io
- apiGroups:
  - cilium.io
  resources:
  - ciliumloadbalancerippools
  - ciliumpodippools
  - ciliumbgpclusterconfigs
  - ciliumbgpnodeconfigoverrides
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - cilium.io
  resources:
  - ciliumpodippools
  verbs:
  - create
- apiGroups:
  - cilium.io
  resources:
  - ciliumloadbalancerippools/status
  verbs:
  - patch
# For cilium-operator running in HA mode.
#
# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
# between multiple running instances.
# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
# common and fewer objects in the cluster watch "all Leases".
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - get
  - update
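# The resulting Lease shows which operator replica currently holds leadership,
# e.g. (illustrative; the lease name is an assumption based on upstream
# defaults):
#   kubectl -n kube-system get lease cilium-operator-resource-lock -o yaml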
---
# Source: cilium/templates/cilium-agent/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cilium
  labels:
    app.kubernetes.io/part-of: cilium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cilium
subjects:
- kind: ServiceAccount
  name: "cilium"
  namespace: kube-system
---
# Source: cilium/templates/cilium-operator/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cilium-operator
  labels:
    app.kubernetes.io/part-of: cilium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cilium-operator
subjects:
- kind: ServiceAccount
  name: "cilium-operator"
  namespace: kube-system
---
# Source: cilium/templates/cilium-agent/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cilium-config-agent
  namespace: kube-system
  labels:
    app.kubernetes.io/part-of: cilium
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
---
# Source: cilium/templates/cilium-agent/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cilium-config-agent
  namespace: kube-system
  labels:
    app.kubernetes.io/part-of: cilium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cilium-config-agent
subjects:
- kind: ServiceAccount
  name: "cilium"
  namespace: kube-system
---
# Source: cilium/templates/hubble/peer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hubble-peer
  namespace: kube-system
  labels:
    k8s-app: cilium
    app.kubernetes.io/part-of: cilium
    app.kubernetes.io/name: hubble-peer
spec:
  selector:
    k8s-app: cilium
  ports:
  - name: peer-service
    port: 443
    protocol: TCP
    targetPort: 4244
  internalTrafficPolicy: Local
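# Hubble clients such as hubble-relay discover each agent's Hubble endpoint
# (port 4244) through this service; the backing agents can be listed with,
# e.g. (illustrative):
#   kubectl -n kube-system get endpointslices -l kubernetes.io/service-name=hubble-peer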
---
# Source: cilium/templates/cilium-agent/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cilium
  namespace: kube-system
  labels:
    k8s-app: cilium
    app.kubernetes.io/part-of: cilium
    app.kubernetes.io/name: cilium-agent
spec:
  selector:
    matchLabels:
      k8s-app: cilium
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 2
    type: RollingUpdate
  template:
    metadata:
      annotations:
        # Set the AppArmor profile to "unconfined". The value of this
        # annotation can be modified as long as users know which profiles they
        # have available in AppArmor.
        container.apparmor.security.beta.kubernetes.io/cilium-agent: "unconfined"
        container.apparmor.security.beta.kubernetes.io/clean-cilium-state: "unconfined"
      labels:
        k8s-app: cilium
        app.kubernetes.io/name: cilium-agent
        app.kubernetes.io/part-of: cilium
    spec:
      containers:
      - name: cilium-agent
        image: "quay.io/cilium/cilium:v1.15.1@sha256:351d6685dc6f6ffbcd5451043167cfa8842c6decf80d8c8e426a417c73fb56d4"
        imagePullPolicy: IfNotPresent
        command:
        - cilium-agent
        args:
        - --config-dir=/tmp/cilium/config-map
        startupProbe:
          httpGet:
            host: "127.0.0.1"
            path: /healthz
            port: 9879
            scheme: HTTP
            httpHeaders:
            - name: "brief"
              value: "true"
          failureThreshold: 105
          periodSeconds: 2
          successThreshold: 1
          initialDelaySeconds: 5
        livenessProbe:
          httpGet:
            host: "127.0.0.1"
            path: /healthz
            port: 9879
            scheme: HTTP
            httpHeaders:
            - name: "brief"
              value: "true"
          periodSeconds: 30
          successThreshold: 1
          failureThreshold: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            host: "127.0.0.1"
            path: /healthz
            port: 9879
            scheme: HTTP
            httpHeaders:
            - name: "brief"
              value: "true"
          periodSeconds: 30
          successThreshold: 1
          failureThreshold: 3
          timeoutSeconds: 5
        env:
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: CILIUM_K8S_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: CILIUM_CLUSTERMESH_CONFIG
          value: /var/lib/cilium/clustermesh/
        - name: GOMEMLIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
        - name: KUBERNETES_SERVICE_HOST
          value: "localhost"
        - name: KUBERNETES_SERVICE_PORT
          value: "7445"
        lifecycle:
          postStart:
            exec:
              command:
              - "bash"
              - "-c"
              - |
                set -o errexit
                set -o pipefail
                set -o nounset
                # When running in AWS ENI mode, it's likely that 'aws-node' has
                # had a chance to install SNAT iptables rules. These can result
                # in dropped traffic, so we should attempt to remove them.
                # We do it using a 'postStart' hook since this may need to run
                # for nodes which might have already been init'ed but may still
                # have dangling rules. This is safe because there are no
                # dependencies on anything that is part of the startup script
                # itself, and can be safely run multiple times per node (e.g. in
                # case of a restart).
                if [[ "$(iptables-save | grep -E -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
                then
                    echo 'Deleting iptables rules created by the AWS CNI VPC plugin'
                    iptables-save | grep -E -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
                fi
                echo 'Done!'
          preStop:
            exec:
              command:
              - /cni-uninstall.sh
        securityContext:
          seLinuxOptions:
            level: s0
            type: spc_t
          capabilities:
            add:
            - CHOWN
            - KILL
            - NET_ADMIN
            - NET_RAW
            - IPC_LOCK
            - SYS_ADMIN
            - SYS_RESOURCE
            - DAC_OVERRIDE
            - FOWNER
            - SETGID
            - SETUID
            drop:
            - ALL
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        # Unprivileged containers need to mount /proc/sys/net from the host
        # to have write access
        - mountPath: /host/proc/sys/net
          name: host-proc-sys-net
        # Unprivileged containers need to mount /proc/sys/kernel from the host
        # to have write access
        - mountPath: /host/proc/sys/kernel
          name: host-proc-sys-kernel
        - name: bpf-maps
          mountPath: /sys/fs/bpf
          # Unprivileged containers can't set mount propagation to bidirectional
          # in this case we will mount the bpf fs from an init container that
          # is privileged and set the mount propagation from host to container
          # in Cilium.
          mountPropagation: HostToContainer
        # Check for duplicate mounts before mounting
        - name: cilium-cgroup
          mountPath: /sys/fs/cgroup
        - name: cilium-run
          mountPath: /var/run/cilium
        - name: etc-cni-netd
          mountPath: /host/etc/cni/net.d
        - name: clustermesh-secrets
          mountPath: /var/lib/cilium/clustermesh
          readOnly: true
        # Needed to be able to load kernel modules
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: xtables-lock
          mountPath: /run/xtables.lock
        - name: hubble-tls
          mountPath: /var/lib/cilium/tls/hubble
          readOnly: true
        - name: tmp
          mountPath: /tmp
      initContainers:
      - name: config
        image: "quay.io/cilium/cilium:v1.15.1@sha256:351d6685dc6f6ffbcd5451043167cfa8842c6decf80d8c8e426a417c73fb56d4"
        imagePullPolicy: IfNotPresent
        command:
        - cilium-dbg
        - build-config
        env:
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: CILIUM_K8S_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: KUBERNETES_SERVICE_HOST
          value: "localhost"
        - name: KUBERNETES_SERVICE_PORT
          value: "7445"
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        terminationMessagePolicy: FallbackToLogsOnError
      # Mount the bpf fs if it is not mounted. We will perform this task
      # from a privileged container because bidirectional mount propagation
      # only works from privileged containers.
      - name: mount-bpf-fs
        image: "quay.io/cilium/cilium:v1.15.1@sha256:351d6685dc6f6ffbcd5451043167cfa8842c6decf80d8c8e426a417c73fb56d4"
        imagePullPolicy: IfNotPresent
        args:
        - 'mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf'
        command:
        - /bin/bash
        - -c
        - --
        terminationMessagePolicy: FallbackToLogsOnError
        securityContext:
          privileged: true
        volumeMounts:
        - name: bpf-maps
          mountPath: /sys/fs/bpf
          mountPropagation: Bidirectional
      - name: clean-cilium-state
        image: "quay.io/cilium/cilium:v1.15.1@sha256:351d6685dc6f6ffbcd5451043167cfa8842c6decf80d8c8e426a417c73fb56d4"
        imagePullPolicy: IfNotPresent
        command:
        - /init-container.sh
        env:
        - name: CILIUM_ALL_STATE
          valueFrom:
            configMapKeyRef:
              name: cilium-config
              key: clean-cilium-state
              optional: true
        - name: CILIUM_BPF_STATE
          valueFrom:
            configMapKeyRef:
              name: cilium-config
              key: clean-cilium-bpf-state
              optional: true
        - name: WRITE_CNI_CONF_WHEN_READY
          valueFrom:
            configMapKeyRef:
              name: cilium-config
              key: write-cni-conf-when-ready
              optional: true
        - name: KUBERNETES_SERVICE_HOST
          value: "localhost"
        - name: KUBERNETES_SERVICE_PORT
          value: "7445"
        terminationMessagePolicy: FallbackToLogsOnError
        securityContext:
          seLinuxOptions:
            level: s0
            type: spc_t
          capabilities:
            add:
            - NET_ADMIN
            - SYS_ADMIN
            - SYS_RESOURCE
            drop:
            - ALL
        volumeMounts:
        - name: bpf-maps
          mountPath: /sys/fs/bpf
        # Required to mount cgroup filesystem from the host to cilium agent pod
        - name: cilium-cgroup
          mountPath: /sys/fs/cgroup
          mountPropagation: HostToContainer
        - name: cilium-run
          mountPath: /var/run/cilium # wait-for-kube-proxy
      # Install the CNI binaries in an InitContainer so we don't have a writable host mount in the agent
      - name: install-cni-binaries
        image: "quay.io/cilium/cilium:v1.15.1@sha256:351d6685dc6f6ffbcd5451043167cfa8842c6decf80d8c8e426a417c73fb56d4"
        imagePullPolicy: IfNotPresent
        command:
        - "/install-plugin.sh"
        resources:
          requests:
            cpu: 100m
            memory: 10Mi
        securityContext:
          seLinuxOptions:
            level: s0
            type: spc_t
          capabilities:
            drop:
            - ALL
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - name: cni-path
          mountPath: /host/opt/cni/bin # .Values.cni.install
      restartPolicy: Always
      priorityClassName: system-node-critical
      serviceAccount: "cilium"
      serviceAccountName: "cilium"
      automountServiceAccountToken: true
      terminationGracePeriodSeconds: 1
      hostNetwork: true
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                k8s-app: cilium
            topologyKey: kubernetes.io/hostname
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
      - operator: Exists
      volumes:
      # For sharing configuration between the "config" initContainer and the agent
      - name: tmp
        emptyDir: {}
      # To keep state between restarts / upgrades
      - name: cilium-run
        hostPath:
          path: /var/run/cilium
          type: DirectoryOrCreate
      # To keep state between restarts / upgrades for bpf maps
      - name: bpf-maps
        hostPath:
          path: /sys/fs/bpf
          type: DirectoryOrCreate
      # To keep state between restarts / upgrades for cgroup2 filesystem
      - name: cilium-cgroup
        hostPath:
          path: /sys/fs/cgroup
          type: DirectoryOrCreate
      # To install cilium cni plugin in the host
      - name: cni-path
        hostPath:
          path: /opt/cni/bin
          type: DirectoryOrCreate
      # To install cilium cni configuration in the host
      - name: etc-cni-netd
        hostPath:
          path: /etc/cni/net.d
          type: DirectoryOrCreate
      # To be able to load kernel modules
      - name: lib-modules
        hostPath:
          path: /lib/modules
      # To access iptables concurrently with other processes (e.g. kube-proxy)
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
      # To read the clustermesh configuration
      - name: clustermesh-secrets
        projected:
          # note: the leading zero means this number is in octal representation: do not remove it
          defaultMode: 0400
          sources:
          - secret:
              name: cilium-clustermesh
              optional: true
              # note: items are not explicitly listed here, since the entries of this secret
              # depend on the peers configured, and that would cause a restart of all agents
              # at every addition/removal. Leaving the field empty causes each secret entry
              # to be automatically projected into the volume as a file whose name is the key.
          - secret:
              name: clustermesh-apiserver-remote-cert
              optional: true
              items:
              - key: tls.key
                path: common-etcd-client.key
              - key: tls.crt
                path: common-etcd-client.crt
              - key: ca.crt
                path: common-etcd-client-ca.crt
      - name: host-proc-sys-net
        hostPath:
          path: /proc/sys/net
          type: Directory
      - name: host-proc-sys-kernel
        hostPath:
          path: /proc/sys/kernel
          type: Directory
      - name: hubble-tls
        projected:
          # note: the leading zero means this number is in octal representation: do not remove it
          defaultMode: 0400
          sources:
          - secret:
              name: hubble-server-certs
              optional: true
              items:
              - key: tls.crt
                path: server.crt
              - key: tls.key
                path: server.key
              - key: ca.crt
                path: client-ca.crt
---
# Source: cilium/templates/cilium-operator/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cilium-operator
  namespace: kube-system
  labels:
    io.cilium/app: operator
    name: cilium-operator
    app.kubernetes.io/part-of: cilium
    app.kubernetes.io/name: cilium-operator
spec:
  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
  # for more details.
  replicas: 2
  selector:
    matchLabels:
      io.cilium/app: operator
      name: cilium-operator
  # Ensure the operator can be updated on single-node k8s clusters by using a
  # rolling update with maxUnavailable=100% in case of one replica and no
  # user-configured Recreate strategy. Otherwise an update might get stuck due
  # to the default maxUnavailable=50% in combination with the podAntiAffinity,
  # which prevents deployments of multiple operator replicas on the same node.
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: "9963"
        prometheus.io/scrape: "true"
      labels:
        io.cilium/app: operator
        name: cilium-operator
        app.kubernetes.io/part-of: cilium
        app.kubernetes.io/name: cilium-operator
    spec:
      containers:
      - name: cilium-operator
        image: "quay.io/cilium/operator-generic:v1.15.1@sha256:819c7281f5a4f25ee1ce2ec4c76b6fbc69a660c68b7825e9580b1813833fa743"
        imagePullPolicy: IfNotPresent
        command:
        - cilium-operator-generic
        args:
        - --config-dir=/tmp/cilium/config-map
        - --debug=$(CILIUM_DEBUG)
        env:
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: CILIUM_K8S_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: CILIUM_DEBUG
          valueFrom:
            configMapKeyRef:
              key: debug
              name: cilium-config
              optional: true
        - name: KUBERNETES_SERVICE_HOST
          value: "localhost"
        - name: KUBERNETES_SERVICE_PORT
          value: "7445"
        ports:
        - name: prometheus
          containerPort: 9963
          hostPort: 9963
          protocol: TCP
        livenessProbe:
          httpGet:
            host: "127.0.0.1"
            path: /healthz
            port: 9234
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 3
        readinessProbe:
          httpGet:
            host: "127.0.0.1"
            path: /healthz
            port: 9234
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 5
        volumeMounts:
        - name: cilium-config-path
          mountPath: /tmp/cilium/config-map
          readOnly: true
        terminationMessagePolicy: FallbackToLogsOnError
      hostNetwork: true
      restartPolicy: Always
      priorityClassName: system-cluster-critical
      serviceAccount: "cilium-operator"
      serviceAccountName: "cilium-operator"
      automountServiceAccountToken: true
      # In HA mode, cilium-operator pods must not be scheduled on the same
      # node as they will clash with each other.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                io.cilium/app: operator
            topologyKey: kubernetes.io/hostname
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
      - operator: Exists
      volumes:
      # To read the configuration from the config map
      - name: cilium-config-path
        configMap:
          name: cilium-config
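# This file is a complete rendered chart, so it can be applied as-is and the
# rollout followed, e.g. (illustrative; the file name is an assumption):
#   kubectl apply -f cilium.yaml
#   kubectl -n kube-system rollout status ds/cilium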