Hong (honghuac), TWO-peat speaker and content contributor for Red Hat Summit #rhsummit

Description

When using Homebrew (http://brew.sh) and searching formulae or pull requests, you may get the dreaded error message: GitHub API Rate limit exceeded

Let's fix that! (yeah!)


Short version

Create a new Personal Access Token in your GitHub Account Settings (sidebar: Applications) and copy the token. In the Terminal, run export HOMEBREW_GITHUB_API_TOKEN=YOURAPITOKENWITHFUNKYNUMBERSHERE (replace that with your own token), or add that line to your ~/.bash_profile and then run source ~/.bash_profile.
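A minimal sketch of the whole flow, assuming a bash shell and that YOURAPITOKENWITHFUNKYNUMBERSHERE is replaced with the token you just generated:

# Persist the token so Homebrew picks it up in every new shell
echo 'export HOMEBREW_GITHUB_API_TOKEN=YOURAPITOKENWITHFUNKYNUMBERSHERE' >> ~/.bash_profile

# Load it into the current shell
source ~/.bash_profile

# Sanity check: this should print your token
echo "$HOMEBREW_GITHUB_API_TOKEN"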

% ansible-playbook -i ~/hosts -u raffles --private-key=~/.ssh/raff-vm1_key.pem playbooks/dd_agent.yml
[WARNING]: Skipping plugin (/Users/raffles/DDrepos/dd_ansible_example/playbooks/callback_plugins/datadog_callback.py), cannot load: Missing
parentheses in call to 'print'. Did you mean print(...)? (datadog_callback.py, line 22)
PLAY [all] ***********************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************
Enter passphrase for key '/Users/raffles/.ssh/raff-vm1_key.pem':
ok: [202.21.31.119]
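The [WARNING] above is Python 3 refusing to load a callback plugin that still uses Python 2 style print statements (line 22 of datadog_callback.py). A hedged fix, assuming you want to keep the local copy rather than fetch a newer Python 3 compatible release of the plugin, is to convert it in place and re-run the play:

# Rewrite Python 2 print statements as print() calls, editing the file in place
2to3 -w playbooks/callback_plugins/datadog_callback.py

# Re-run the playbook; the plugin should now load without the warning
ansible-playbook -i ~/hosts -u raffles --private-key=~/.ssh/raff-vm1_key.pem playbooks/dd_agent.yml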
honghuac / sidecar.yaml
Created November 28, 2022 05:44
Example of sidecar containers
# Example YAML configuration for the sidecar pattern.
# It defines a main application container which writes
# the current date to a log file every five seconds.
# The sidecar container is nginx serving that log file.
# (In practice, your sidecar is likely to be a log collection
# container that uploads to external storage.)
# To run:
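The run instructions are truncated at "# To run:" above. For reference, a manifest like this would typically be applied and checked roughly as follows (a sketch; the file name sidecar.yaml and the container names are assumptions, since the manifest body is not shown here):

# Create the pod defined in the manifest
kubectl apply -f sidecar.yaml

# Confirm both the main and sidecar containers are running
kubectl get pods

# Tail the sidecar's output (substitute the real pod and container names)
kubectl logs -f <pod-name> -c <sidecar-container-name>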
honghuac / logcopy.sh
Last active August 10, 2021 13:15
Hong's log file copy script
#!/bin/bash
# Make it executable with chmod +x logcopy.sh
# Recursively copy a remote directory to the local machine over the given SSH port
/usr/bin/scp -r -P {REMOTEPORT} user@serveripaddress:/home/remoteuser/directory /home/localuser/directory
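A hedged usage sketch: {REMOTEPORT}, user, serveripaddress and the two directory paths in the script are placeholders to substitute before running.

# Make the script executable, then run it; scp prompts for the remote credentials or key passphrase
chmod +x logcopy.sh
./logcopy.sh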
NAME REVISION DESIRED CURRENT TRIGGERED BY
deploymentconfigs/jenkins 1 1 1 config,image(jenkins:2)
NAME TYPE FROM LATEST
buildconfigs/mlbparks-pipeline JenkinsPipeline Git@master 2
buildconfigs/nationalparks-pipeline JenkinsPipeline Git@master 2
buildconfigs/parksmap-pipeline JenkinsPipeline Git@master 2
NAME TYPE FROM STATUS STARTED DURATION
builds/mlbparks-pipeline-1 JenkinsPipeline Git Running About an hour ago
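The listing above looks like the combined tables that oc get prints for those three resource types; a likely way to reproduce it (an assumption, since the original command is not shown) is:

# DeploymentConfigs, BuildConfigs and Builds of the current project in one listing
oc get dc,bc,builds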
[root@doubleh ansible]# oc project lab1-kafka-project
Now using project "lab1-kafka-project" on server "https://master.dev39.openshift.opentlc.com:443".
[root@doubleh ansible]# oc get events
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
10m 11m 11 strimzi-cluster-operator-5b675565f7.154d268e96bd6d2b ReplicaSet Warning FailedCreate replicaset-controller Error creating: No API token found for service account "strimzi-cluster-operator", retry after the token is automatically created and added to the service account
12m 12m 1 strimzi-cluster-operator.154d268c7d229e32 Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set strimzi-cluster-operator-5b675565f7 to 1
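The FailedCreate warning above normally clears on its own once OpenShift finishes generating the service account's token. A few hedged checks, assuming the same lab1-kafka-project project:

# Confirm the service account exists
oc get sa strimzi-cluster-operator -n lab1-kafka-project

# Its Tokens section should list at least one generated secret
oc describe sa strimzi-cluster-operator -n lab1-kafka-project

# Watch for the replica set to retry successfully
oc get events -w -n lab1-kafka-project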
org.apache.camel.spring.Main.main() INFO [org.apache.camel.spring.SpringCamelContext] - Route: dataformat started and consuming from: file://./target/test-classes/camel/csv
org.apache.camel.spring.Main.main() INFO [org.apache.camel.spring.SpringCamelContext] - Total 1 routes, of which 1 are started
org.apache.camel.spring.Main.main() INFO [org.apache.camel.spring.SpringCamelContext] - Apache Camel 2.21.0.fuse-000112-redhat-3 (CamelContext: camel-1) started in 0.433 seconds
org.apache.camel.spring.Main.main() INFO [org.springframework.context.support.DefaultLifecycleProcessor] - Starting beans in phase 2147483646
Camel (camel-1) thread #2 - file://./target/test-classes/camel/csv INFO [dataformat] - >> Student : jeff,fuse-camel-training,1,5-10-2012
Camel (camel-1) thread #2 - file://./target/test-classes/camel/csv INFO [dataformat] - >> Student Registered : jeff,fuse-camel-training,1,05-10-2012,Follow,jboss-fuse-0123
Camel (camel-1) thread #2 - file://./target/test-classes/camel/csv INFO [dataformat] - >> Stud
503 git clone https://github.com/scholzj/strimzi-training.git
504 cd strimzi-training/
506 cd examples/install/cluster-operator/
508 oc login https://master.6d13.openshift.opentlc.com
554 oc login -u user2 -p r3dh4t1!
555 oc new-project user2-kafka-project
556 oc adm policy add-role-to-user admin developer -n user2-kafka-project
557 oc new-project user2-kafka-project-2
558 oc adm policy add-role-to-user admin developer -n user2-kafka-project-2
566 oc apply -f examples/install/cluster-operator/01-ServiceAccount-strimzi-cluster-operator.yaml
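The history stops at the ServiceAccount manifest. A hedged continuation, assuming the remaining numbered files in that directory are the usual RoleBindings, CRDs and Deployment that ship with the Strimzi Cluster Operator install, is to apply the whole directory and watch the operator start:

# Apply every manifest in the cluster-operator install directory to the current project
oc apply -f examples/install/cluster-operator/

# Watch the Cluster Operator pod come up
oc get pods -w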
[root@doubleh ~]# oc logs -f po/mongodb-1
=> sourcing /usr/share/container-scripts/mongodb/pre-init//10-check-env-vars.sh ...
=> sourcing /usr/share/container-scripts/mongodb/pre-init//20-setup-wiredtiger-cache.sh ...
=> [Wed May 30 14:09:55] wiredTiger cacheSizeGB set to 1
=> sourcing /usr/share/container-scripts/mongodb/pre-init//30-set-config-file.sh ...
=> sourcing /usr/share/container-scripts/mongodb/pre-init//35-setup-default-datadir.sh ...
=> sourcing /usr/share/container-scripts/mongodb/pre-init//40-setup-keyfile.sh ...
=> [Wed May 30 14:09:55] Waiting for local MongoDB to accept connections ...
2018-05-30T14:09:55.929+0000 I CONTROL [initandlisten] MongoDB starting : pid=32 port=27017 dbpath=/var/lib/mongodb/data 64-bit host=mongodb-1
2018-05-30T14:09:55.929+0000 I CONTROL [initandlisten] db version v3.4.9
<repositories>
  <repository>
    <id>syndesis-local</id>
    <name>Repository for Syndesis builds</name>
    <url>file:///root/.m2/repository</url>
    <layout>default</layout>
  </repository>
  <repository>
    <id>maven-redhat</id>
    <name>Repository for Maven builds</name>