Lei Zhang (Harry) (resouer)
Sunnyvale
@resouer
resouer / git_rebase.md
Created November 10, 2018 08:39 — forked from ravibhure/git_rebase.md
Git rebase from remote fork repo

In your local clone of your forked repository, you can add the original GitHub repository as a "remote". ("Remotes" are like nicknames for the URLs of repositories; origin is one, for example.) Then you can fetch all the branches from that upstream repository and rebase your work on top of the upstream version. In terms of commands, that might look like:

Add the remote, call it "upstream":

git remote add upstream https://github.com/whoever/whatever.git

Fetch all the branches of that remote into remote-tracking branches, such as upstream/master:

git fetch upstream
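The preview cuts off before the rebase itself. As a minimal sketch of the remaining step the text describes (assuming your work is on master, mirroring the upstream/master example above):

```
# Switch to the branch carrying your local work
git checkout master

# Replay your commits on top of the fetched upstream branch.
# This rewrites history, so avoid it on branches others have already pulled.
git rebase upstream/master
```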

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
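The preview is cut off at the hostPath stanza. Assuming the completed manifest is saved as pv0001.yaml (a hypothetical filename), creating and checking the volume is just:

```
# Register the PersistentVolume and confirm it shows up as Available
kubectl create -f pv0001.yaml
kubectl get pv pv0001
```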
# Check out Kubernetes into the expected GOPATH location
mkdir -p $GOPATH/src/k8s.io/kubernetes
cd $GOPATH/src/k8s.io/kubernetes
git clone https://github.com/kubernetes/kubernetes .

# Install the etcd version vendored by the Kubernetes tree and put it on PATH
./hack/install-etcd.sh
export PATH=$GOPATH/src/k8s.io/kubernetes/third_party/etcd:${PATH}

# Run the node e2e conformance suite against frakti's CRI socket
make test-e2e-node ARCH=arm64 PARALLELISM=1 RUNTIME=remote CONTAINER_RUNTIME_ENDPOINT=/var/run/frakti.sock IMAGE_SERVICE_ENDPOINT=/var/run/frakti.sock TEST_ARGS="--prepull-images=false" FOCUS="\[Conformance\]"
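Not part of the original gist, but a quick sanity check that the vendored etcd from install-etcd.sh is the one the suite will pick up:

```
# Verify the vendored etcd is first on PATH before running the suite
which etcd
etcd --version
```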
@resouer
resouer / gist:5f9cdcd2889da7c4a183c277016b5412
Last active May 19, 2018 02:18
Test result of equivalence cache
- 8-core VM on GCE
- Code base: master branch
- Commit ID: 1e689a8
- Performance test with `./test-performance.sh`

// 100 nodes 3k pods
@resouer
resouer / scale down
Created March 5, 2018 20:35
scale down
```
# --nodes=MIN:MAX:NAME bounds the dlws-worker-asg node group at 1 to 10 nodes;
# --scale-down-enabled=true lets the autoscaler remove underutilized nodes,
# whose decisions appear in the log below.
./cluster-autoscaler --v=5 --stderrthreshold=error --logtostderr=true --cloud-provider=aztools --skip-nodes-with-local-storage=false --nodes=1:10:dlws-worker-asg --leader-elect=false --scale-down-enabled=true --kubeconfig=./deploy/kubeconfig.yaml
```
```
I0305 20:20:54.209418 2996 scale_down.go:175] Scale-down calculation: ignoring 2 nodes, that were unremovable in the last 5m0s
I0305 20:20:54.209449 2996 scale_down.go:207] Node harrydevbox-worker01 - utilization 0.883802
I0305 20:20:54.209475 2996 scale_down.go:211] Node harrydevbox-worker01 is not suitable for removal - utilization too big (0.883802)
I0305 20:20:54.209494 2996 scale_down.go:207] Node harrydevbox-worker03 - utilization 0.030000
I0305 20:20:54.209651 2996 static_autoscaler.go:332] harrydevbox-worker03 is unneeded since 2018-03-05 20:10:54.176130596 +0000 UTC m=+71.982770098 duration 10m0.032698796s
```
@resouer
resouer / scale.md
Last active February 27, 2018 19:45
Autoscaling demo
  1. Start the autoscaler:
root@d8dd5a58836b:~/workspace/DLworkspace/src/ClusterBootstrap# ./cluster-autoscaler --v=4 --stderrthreshold=error --logtostderr=true --cloud-provider=aztools --skip-nodes-with-local-storage=false --nodes=1:10:dlws-worker-asg --leader-elect=false --scale-down-enabled=false --kubeconfig=./deploy/kubeconfig.yaml
  2. Scale an existing deployment to many instances:
root@d8dd5a58836b:~/workspace/DLworkspace/src/ClusterBootstrap# ./deploy.py kubectl scale --replicas=8 deploy/nginx-deployment
===============================================
Checking Available Nodes for Deployment...
travis_fold:start:worker_info
Worker information
hostname: 0bd5fe9b-a6d4-4e91-bcf4-3c0a1707bc11@1.production-1-worker-org-b-1-gce
version: v3.4.0 https://github.com/travis-ci/worker/tree/ce0440bc30c289a49a9b0c21e4e1e6f7d7825101
instance: travis-job-f1ff641f-9d87-40ba-b503-651b933ba313 travis-ci-sardonyx-xenial-1517746024-4d52a99 (via amqp)
startup: 21.230888467s
travis_fold:end:worker_info
mode of '/usr/local/clang-5.0.0/bin' changed from 0777 (rwxrwxrwx) to 0775 (rwxrwxr-x)
travis_fold:start:system_info
Build system information
@resouer
resouer / rook.io
Created January 20, 2018 01:10
Rook.io in Kubernetes
# On every node, install the Ceph client
$ apt-get update && apt-get install ceph-common -y
$ kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-operator.yaml
clusterrole "rook-operator" created
serviceaccount "rook-operator" created
clusterrolebinding "rook-operator" created
deployment "rook-operator" created
$ kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-cluster.yaml
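The preview stops after the cluster manifest is applied. A typical next step in the Rook release-0.5 examples is to define a replicated pool and a StorageClass backed by it; a hedged sketch follows, where the pool name, StorageClass name, and replica size are illustrative, and rook.io/block is the block provisioner as that release shipped it:

```
# Sketch: create a replicated pool and a StorageClass backed by Rook block storage
cat <<EOF | kubectl apply -f -
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: replicapool
  namespace: rook
spec:
  replicated:
    size: 1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block
parameters:
  pool: replicapool
EOF
```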
@resouer
resouer / issue.md
Last active January 11, 2018 18:17
Moving the equivalence class cache from alpha to beta

Feature Description

  • One-line feature description (can be used as a release note): Moving the equivalence class cache from alpha to beta.
  • Primary contact (assignee): @resouer, @misterikkit
  • Responsible SIGs: @kubernetes/sig-scheduling-feature-requests
  • Design proposal link (community repo): Equivalence class cache scheduling design doc
  • Link to e2e and/or unit tests: e2e test of equivalence class cache
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: @davidopp, @bsalamat, @erictune, @k82cn, @timothysc, @wojtek-t
  • Approver (likely from SIG/area to which feature belongs): @davidopp, @wojtek-t
  • Feature target (which target eq
Stackube is a Kubernetes-centric OpenStack distro. It allows you to run a Kubernetes cluster with standalone OpenStack components, with both soft and hard multi-tenancy. It works as a thin layer of production-ready cloud infrastructure that is fully based on containers.
Stackube aligns with the OpenStack mission:
We have a clear and defined scope, which aims at building a native Kubernetes cluster working alongside standalone vanilla OpenStack components, using OpenStack to provide production-ready multi-tenant networking and persistent volumes. Please check our [scope documentation](https://github.com/openstack/stackube/blob/master/doc/source/stackube_scope_clarification.rst) for more detail.
The 4 opens: