
# Host scaling using selector labels
1. Create a host, say `dev1`, and add a unique label to it, `foo=bar`.
All hosts with the label `foo=bar` will fall under the same HostScalingGroup. When adding new hosts, one of the hosts with this label will be used for cloning.
2. Create a webhook to scale hosts from scaling group with label `foo=bar`. Creation fails if no hosts are found with that label.
3. In Execute, get all hosts sorted in descending order of created time. Find all hosts with label `foo=bar`. Number of such hosts is the length of the HostScalingGroup. If scaling up, consider the least recently created host with that label as the base host. Then from the HostScalingGroup, find the first host that matches base host prefix. Since it is the most recently created one, use its suffix to calculate suffix for the next clone host. Add the selector label `foo=bar` to this clone. Create as many clones as specified by the amount.
4. If scaling down, among all hosts sorted by created time, get the `amount` number of most recently created hosts with that label and remove them.
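The suffix calculation in step 3 can be sketched as below. This is a hypothetical helper, not the actual webhook-service code: given the base host name and the group's host names sorted most recently created first, it increments the numeric suffix of the newest host sharing the base prefix.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// nextCloneName derives the next clone's name. hostsNewestFirst is the
// scaling group's host names in descending order of created time, so the
// first name matching the base prefix is the most recently created clone.
func nextCloneName(base string, hostsNewestFirst []string) string {
	for _, name := range hostsNewestFirst {
		if strings.HasPrefix(name, base) {
			suffix := strings.TrimPrefix(name, base)
			n, err := strconv.Atoi(strings.TrimPrefix(suffix, "-"))
			if err != nil {
				break // only the base host exists; start numbering at 1
			}
			return fmt.Sprintf("%s-%d", base, n+1)
		}
	}
	return base + "-1"
}

func main() {
	hosts := []string{"dev1-3", "dev1-2", "dev1"} // newest first
	fmt.Println(nextCloneName("dev1", hosts))     // dev1-4
}
```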
# Webhooks
## New drivers
In 1.5, we have added two additional drivers for webhook-service: one for scaling hosts, and a second for redeploying a service based on Docker Hub webhooks. The rest of the framework remains the same.
## Scaling host
The driver for scaling hosts works by cloning a base host. Hosts are differentiated into scaling groups by labels, which are provided by the users. The labels should be unique so that different host scaling groups can be distinguished properly. All hosts with the same label, whether created manually or added through webhook-service, will be considered part of a single host scaling group. This label is a required field when adding the receiver hook. It is not mandatory to have any hosts with this label when adding the hook, but at execution time at least one such host must exist; this host needs to be added by the user manually. The rest of the hosts added through webhook-service for any scaling group will be clones of the first added (least recently created) host.
1. PingFederate server -> Server Configuration -> System settings -> Data Stores -> Add
2. PingFederate server -> IdP Configuration -> Create new -> Metadata should be Rancher's generated.... -> ACS should be updated to `<rancherIP>:8080/v1-auth/saml/acs`
InvalidXMLException -> NameIDPolicy struct tag
ProtocolBinding -> Add to AuthnRequest
asn1 error: tags don't match -> `x509.ParsePKCS1PrivateKey` expects an RSA (PKCS#1) private key. Meaning in place of a key with:
-----BEGIN PRIVATE KEY-----
it should have
-----BEGIN RSA PRIVATE KEY-----
To convert your private key to an RSA private key, use:
`openssl rsa -in server.key -out server_new.key`
This applies if the key was generated with a command like:
`openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt`
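The distinction behind that asn1 error can be seen in a small Go sketch (a hypothetical `parseKey` helper, not the Rancher code): a `BEGIN PRIVATE KEY` block is PKCS#8, which `x509.ParsePKCS1PrivateKey` cannot parse; only `BEGIN RSA PRIVATE KEY` (PKCS#1) works with it.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// parseKey dispatches to the right parser based on the PEM block type:
// "RSA PRIVATE KEY" is PKCS#1, "PRIVATE KEY" is PKCS#8.
func parseKey(pemBytes []byte) error {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return fmt.Errorf("no PEM block found")
	}
	switch block.Type {
	case "RSA PRIVATE KEY": // PKCS#1: what ParsePKCS1PrivateKey expects
		_, err := x509.ParsePKCS1PrivateKey(block.Bytes)
		return err
	case "PRIVATE KEY": // PKCS#8: needs a different parser
		_, err := x509.ParsePKCS8PrivateKey(block.Bytes)
		return err
	default:
		return fmt.Errorf("unexpected PEM type %q", block.Type)
	}
}

func main() {
	// Non-PEM input yields "no PEM block found".
	fmt.Println(parseKey([]byte("not a PEM key")))
}
```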
1. In case of errors, log the error only if you handle it; if it is going to bubble up, then only return the error.
2. If the error might expose too many details to the user, log the error and return a human-readable message.
3. Follow the import patterns specific to the programming language you use.
4. It is bad practice to use data structure names in your variable names.
5. The code should be self-documenting or self-explanatory. If that means adding comments, add comments.
But you should also strive to name and structure things so that the code is self-evident without comments.
6. Write better commit messages.
7. Close responses, HTTP connections, and other resources.
## Create multiple AD groups
source: http://www.signalwarrant.com/create-active-directory-groups-bulk-csv-w-powershell/
```powershell
# bulk_input2.csv needs the columns: GroupName,GroupCategory,GroupScope,OU
$csv = Import-Csv -Path ".\bulk_input2.csv"
ForEach ($item In $csv)
{
    Write-Host "item: $($item)"
    New-ADGroup -Name $item.GroupName -GroupCategory $item.GroupCategory -GroupScope $item.GroupScope -Path $item.OU
    Write-Host -ForegroundColor Green "Group $($item.GroupName) created!"
}
```
## DNF (mrajashree, created September 15, 2017 21:16)
`dnf download <package>`: downloads the binaries (RPM packages) without installing them

Pods: Unique IP address accessible within the cluster. One or more containers, usually one. The smallest deployable concept.
Deployment: If a pod goes down, we need something to monitor it and notify the api-server to bring up another one.
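That monitoring relationship is what a Deployment manifest expresses. A minimal sketch (name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # placeholder name
spec:
  replicas: 1              # the Deployment recreates the Pod if it dies
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: nginx:1.25  # placeholder image
```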

RANCHER_URL="myrancherhost:8080"
# External ID value: the value of the attribute you want to use as the User ID in Rancher, e.g. sAMAccountName
EXTERNAL_ID="value of sAMAccountName for admin"
# Double quotes are needed so the shell expands the variables inside the JSON payload
curl -H "Accept: application/json" -H "Content-Type: application/json" -d "{\"name\":\"api.host\", \"value\":\"${RANCHER_URL}\"}" "http://${RANCHER_URL}/v2-beta/settings"
# Define the admin user
curl -H "Content-Type: application/json" -X PUT -d "{\"externalId\":\"${EXTERNAL_ID}\", \"externalIdType\":\"shibboleth_user\"}" "http://${RANCHER_URL}/v2-beta/accounts/1a1"
# Get API keys for Admin user