THIS GIST WAS MOVED TO THE TERMSTANDARD/COLORS REPOSITORY.
PLEASE ASK QUESTIONS OR ADD SUGGESTIONS AS REPOSITORY ISSUES OR PULL REQUESTS INSTEAD!
# Template for rendering the datepicker with Django-Crispy-Forms
# datepicker.html
{% load crispy_forms_field %}
<div id="div_{{ field.auto_id }}" class="clearfix control-group{% if field.errors %} error{% endif %}">
{% if field.label %}
<label for="{{ field.id_for_label }}" class="control-label{% if field.field.required %} requiredField{% endif %}">
{{ field.label|safe }}{% if field.field.required %}<span class="asteriskField">*</span>{% endif %}
</label>
{% endif %}
upstream plex-upstream {
# Change plex-server.example.com:32400 to the hostname:port of your Plex server.
# This can be "localhost:32400", for instance, if Plex is running on the same server as nginx.
server plex-server.example.com:32400;
}
server {
listen 80;
# Server names for this server.
[run]
branch = True
source = __YOUR_PROJECT_FOLDER__
[report]
exclude_lines =
    if self.debug:
    pragma: no cover
    raise NotImplementedError
    if __name__ == .__main__.:
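As a sketch of what the `exclude_lines` patterns above match, consider a hypothetical module (the class name `Client` is invented for illustration). Lines matching `if self.debug:` or `raise NotImplementedError` are excluded from the report rather than counted as missing, so tests do not need to exercise them.

```python
# Hypothetical module illustrating the exclude_lines patterns above.
class Client:
    def __init__(self, debug=False):
        self.debug = debug

    def fetch(self, url):
        # The "if self.debug:" branch is excluded from the coverage
        # report, so untested debug output does not lower the score.
        if self.debug:
            print("GET", url)
        return url.upper()

    def stream(self, url):
        # Matched by the "raise NotImplementedError" pattern: stub
        # methods like this are not reported as uncovered.
        raise NotImplementedError
```

With the config saved as `.coveragerc`, running `coverage run -m pytest` followed by `coverage report` applies these exclusions automatically.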
#!/bin/sh
echo "#From dshield.org" > /tmp/blacklist
wget -q -O - http://feeds.dshield.org/block.txt | awk '/^[0-9]/ { print "DROP", "net:"$1"/24", "all"}' >> /tmp/blacklist
echo "#From spamhaus.org" >> /tmp/blacklist
wget -q -O - http://www.spamhaus.org/drop/drop.lasso | awk '/^[0-9]/ { print "DROP", "net:"$1, "all"}' >> /tmp/blacklist
mv /tmp/blacklist /etc/shorewall/blrules
# "&>" is a bashism; redirect both streams portably under /bin/sh.
shorewall refresh > /dev/null 2>&1
The official guide for setting up Kubernetes using `kubeadm` works well for clusters of a single architecture. The main problem that crops up is that the `kube-proxy` image defaults to the architecture of the master node (where `kubeadm` was first run). This causes issues when arm nodes join the cluster: they will try to execute the amd64 version of `kube-proxy` and fail.

It turns out that the pod running `kube-proxy` is configured using a DaemonSet. With a small edit to that configuration, it is possible to create multiple DaemonSets, one for each architecture.
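The edit can be sketched roughly as follows: copy the default `kube-proxy` DaemonSet (`kubectl -n kube-system get ds kube-proxy -o yaml`), rename it, pin it to arm nodes with a `nodeSelector`, and point it at the arm build of the image. The label key and image tag below are assumptions that depend on your Kubernetes version.

```yaml
# Hypothetical second DaemonSet for arm nodes, derived from the default
# kube-proxy DaemonSet; trimmed to the fields that change.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy-arm
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy-arm
  template:
    metadata:
      labels:
        k8s-app: kube-proxy-arm
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: arm   # schedule only on arm nodes
      containers:
      - name: kube-proxy
        image: k8s.gcr.io/kube-proxy-arm:v1.12.0   # arm build; match your cluster version
```

The original DaemonSet gets the analogous `beta.kubernetes.io/arch: amd64` selector so the two never overlap.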
Follow the instructions at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ for setting up the master node. I've been using Weave Net as the network plugin; it see
#!/usr/bin/env python3
# Fork of https://gist.github.com/davej/113241
# Requirements:
# - Twitter API credentials (replace the corresponding variables)
# - the tweet.js file you get by extracting your Twitter archive zip file (located in data/)
# License: Unlicense http://unlicense.org/
import tweepy
import