View foo.diff
ε ~/go/src/k8s.io/kubernetes $ git diff
diff --git a/cmd/kubeadm/app/cmd/upgrade/diff.go b/cmd/kubeadm/app/cmd/upgrade/diff.go
index c0223fedb9..46404a2901 100644
--- a/cmd/kubeadm/app/cmd/upgrade/diff.go
+++ b/cmd/kubeadm/app/cmd/upgrade/diff.go
@@ -26,6 +26,7 @@ import (
"github.com/spf13/cobra"
corev1 "k8s.io/api/core/v1"
kubeadmapiv1alpha3 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3"
+ "k8s.io/kubernetes/cmd/kubeadm/app/cmd/options"
View git-sync-operator.mmo
sequenceDiagram
participant Dev
participant App Repo
participant Jenkins
participant DockerHub
participant Config Repo
participant Operator
participant K8s API
participant S3
participant App
View gist:b6033b3fdc388581e98579cba3563bd2
Error: module.mdn-primary-cloudfront-stage.aws_cloudfront_distribution.mdn-primary-cf-dist: cache_behavior.10.forwarded_values.0.headers: should be a list
Error: module.mdn-primary-cloudfront-stage.aws_cloudfront_distribution.mdn-primary-cf-dist: cache_behavior.11.forwarded_values.0.headers: should be a list
Error: module.mdn-primary-cloudfront-stage.aws_cloudfront_distribution.mdn-primary-cf-dist: cache_behavior.12.forwarded_values.0.headers: should be a list
View gist:f7f2d1eaed9ed07e34ae39ed06471607
MDN uses Celery, which is an asynchronous task queue/job queue.
In SCL3, Celery was configured to use RabbitMQ as its broker. AWS
doesn't provide a managed RabbitMQ service, but Celery can use Redis via
ElastiCache (which it now does).
MDN uses Memcached as a generic shared cache in kuma, kumascript,
waffle (feature switches), rate limiting, and Cacheback.
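Concretely, the RabbitMQ-to-Redis swap is mostly a broker-URL change in Celery's settings. A minimal sketch of the relevant settings lines, assuming a hypothetical ElastiCache endpoint hostname (the real endpoint comes from the AWS infrastructure, not from this document):

```python
# Hypothetical ElastiCache (Redis) endpoint -- placeholder hostname.
REDIS_HOST = "mdn-redis.abc123.use1.cache.amazonaws.com"

# Broker and result backend both move from RabbitMQ (amqp://) to Redis.
# Using db 0 for the broker and db 1 for results keeps their keys separate.
CELERY_BROKER_URL = f"redis://{REDIS_HOST}:6379/0"
CELERY_RESULT_BACKEND = f"redis://{REDIS_HOST}:6379/1"
```

Note that the Memcached usage (generic shared cache, waffle, rate limiting, Cacheback) is a separate concern from the Celery broker and is unaffected by this change.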
View gist:215795f8d74e5d3738ce7e201e8bdb81
diff --git a/k8s/tools/node_upgrades/generate_update_nodes.py b/k8s/tools/node_upgrades/generate_update_nodes.py
index 3b3dc0e..596d8b1 100755
--- a/k8s/tools/node_upgrades/generate_update_nodes.py
+++ b/k8s/tools/node_upgrades/generate_update_nodes.py
@@ -13,9 +13,10 @@ def get_public_ip(addresses):
def format_node_command(node):
node_name = node.metadata.name
+ external_id = node.spec.external_id
node_type = node.metadata.labels['kubernetes.io/role']
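For context, the patch reads the node's external_id (on AWS, the EC2 instance id) from the node spec alongside the fields the function already used. A minimal sketch of what format_node_command might look like after the change, with a stand-in for the kubernetes client's V1Node object and a hypothetical command template (the real template is not shown in the diff):

```python
from types import SimpleNamespace


def format_node_command(node):
    # Fields the patch touches: node name, the cloud-provider external id
    # (the EC2 instance id on AWS), and the role label.
    node_name = node.metadata.name
    external_id = node.spec.external_id
    node_type = node.metadata.labels['kubernetes.io/role']
    # Hypothetical command template -- the real script's output format
    # is not visible in the diff preview.
    return f"./update_node.sh {node_name} {external_id} {node_type}"


# Stand-in for a kubernetes.client V1Node object:
node = SimpleNamespace(
    metadata=SimpleNamespace(
        name="ip-10-0-1-23",
        labels={'kubernetes.io/role': 'node'},
    ),
    spec=SimpleNamespace(external_id="i-0abc123"),
)
```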
View gist:f21a034ebbc1c3602baec8645d569ace
Running load tests for 300s
54.152.126.220 | SUCCESS | rc=0 >>
Requests [total, rate] 30000, 100.00
Duration [total, attack, wait] 5m0.219038353s, 4m59.989999813s, 229.03854ms
Latencies [mean, 50, 95, 99, max] 179.381517ms, 163.054728ms, 331.153473ms, 724.411335ms, 1.447728273s
Bytes In [total, mean] 968726862, 32290.90
Bytes Out [total, mean] 0, 0.00
Success [ratio] 96.63%
Status Codes [code:count] 502:1006 200:28990 504:4
Error Set:
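As a sanity check, the 96.63% success ratio in the run above follows directly from the status-code counts:

```python
# Status-code counts from the run above.
codes = {200: 28990, 502: 1006, 504: 4}

total = sum(codes.values())      # 30000 requests, matching the report
success = codes[200] / total     # only 200s count as successes
print(f"{success:.2%}")          # -> 96.63%, matching Success [ratio]
```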
View gist:6cb6fff51056631fafab2c8808849204
Running load tests for 300s
54.152.126.220 | SUCCESS | rc=0 >>
Requests [total, rate] 30000, 100.00
Duration [total, attack, wait] 5m0.204257341s, 4m59.989999821s, 214.25752ms
Latencies [mean, 50, 95, 99, max] 185.613624ms, 163.910638ms, 346.136775ms, 595.41838ms, 5.844076108s
Bytes In [total, mean] 928687099, 30956.24
Bytes Out [total, mean] 0, 0.00
Success [ratio] 92.78%
Status Codes [code:count] 200:27835 502:2162 504:3
Error Set:
View gist:ba87506db39110db696101a52939e4cf
54.166.57.59 | SUCCESS | rc=0 >>
Requests [total, rate] 15000, 50.00
Duration [total, attack, wait] 5m0.180475655s, 4m59.979999871s, 200.475784ms
Latencies [mean, 50, 95, 99, max] 183.118764ms, 147.634505ms, 310.999401ms, 852.353221ms, 6.008968224s
Bytes In [total, mean] 499487952, 33299.20
Bytes Out [total, mean] 0, 0.00
Success [ratio] 99.22%
Status Codes [code:count] 200:14883 502:117
Error Set:
502 BAD_GATEWAY
View gist:e9b7578a27e264e0ff3b85cd0a471184
Running load tests for 300s
54.152.126.220 | SUCCESS | rc=0 >>
Requests [total, rate] 15000, 50.00
Duration [total, attack, wait] 5m0.228512532s, 4m59.979999818s, 248.512714ms
Latencies [mean, 50, 95, 99, max] 167.283077ms, 147.409802ms, 320.27304ms, 415.598466ms, 1.236314495s
Bytes In [total, mean] 502479135, 33498.61
Bytes Out [total, mean] 0, 0.00
Success [ratio] 99.79%
Status Codes [code:count] 200:14968 502:32
Error Set:
View gist:20dfdfe20e2d2960c3a62aa6131fb126
metadave@epsilon:~/moz/ee-infra-private-backup/k8s/load_testing (dp_sumo_load_testing *)$ ./load.sh
Testing on the following instances:
54.89.240.71
34.227.143.30
54.89.107.176
Running load tests for 300s
54.89.240.71 | SUCCESS | rc=0 >>
Requests [total, rate] 30000, 100.00
Duration [total, attack, wait] 5m0.074464169s, 4m59.989999805s, 84.464364ms
Latencies [mean, 50, 95, 99, max] 1.134179072s, 87.370007ms, 10.665514561s, 12.852770726s, 18.560192096s