pprof is the profiling tool that ships with gperftools, Google's performance toolkit whose best-known component is tcmalloc, a high-performance multi-threaded malloc implementation.
- Add OPENSHIFT_PROFILE=web to /etc/sysconfig/atomic-openshift-node, openshift-master, or origin-master
- Restart OpenShift
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 8000, host: 8000
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"  # assumed playbook path; adjust to your repo
  end
end
import os
import time
from datetime import datetime, timedelta
from tzlocal import get_localzone

def check_file_time_elapsed(file_path):
    # mtime of the file, rendered and re-parsed as a naive local datetime
    ctime = time.ctime(os.path.getmtime(file_path))
    last_updated = datetime.strptime(ctime, "%a %b %d %H:%M:%S %Y")
    # attach the local timezone so the value can be compared to aware datetimes
    tz = get_localzone()
    then = tz.normalize(tz.localize(last_updated))
    # return how long ago the file was modified
    return datetime.now(tz) - then
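The same check can be done more directly with aware datetimes from the standard library, avoiding the ctime round-trip entirely; a minimal sketch (the file path and staleness threshold are illustrative):

```python
import os
from datetime import datetime, timedelta, timezone

def file_age(path):
    # mtime as an aware UTC datetime; subtracting from "now" gives the age
    mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
    return datetime.now(timezone.utc) - mtime

# e.g. treat a file as stale once it has gone an hour without modification
# stale = file_age("data.txt") > timedelta(hours=1)
```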
import ephem

def is_daytime(city):
    sun = ephem.Sun()
    locale = ephem.city(city)
    sun.compute(locale)
    # nautical twilight: the sun more than 12 degrees below the horizon counts as night
    twilight = -12 * ephem.degree
    is_sunlight = sun.alt > twilight
    return is_sunlight
SHOW_APPLICATION:
  404 Not Found:
    101: "Application '#{id}' not found"
    127: "Domain '#{domain_id}' not found"
UPDATE_CARTRIDGE:
  404 Not Found:
    163: "Cartridge '#{cartridge_name}' for application '#{app_id}' not found"
    101: "Application '#{app_id}' not found for domain '#{domain_id}'"
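For illustration, a table like this could be consumed from client code roughly as follows; the `ERRORS` dict and `error_message` helper are hypothetical, not part of OpenShift:

```python
# Hypothetical lookup table mirroring the mapping above: (action, exit code)
# resolves to a message template with named placeholders
ERRORS = {
    ("SHOW_APPLICATION", 101): "Application '{id}' not found",
    ("SHOW_APPLICATION", 127): "Domain '{domain_id}' not found",
    ("UPDATE_CARTRIDGE", 163): "Cartridge '{cartridge_name}' for application '{app_id}' not found",
    ("UPDATE_CARTRIDGE", 101): "Application '{app_id}' not found for domain '{domain_id}'",
}

def error_message(action, exit_code, **params):
    # Fill the placeholders with the identifiers from the failed request
    return ERRORS[(action, exit_code)].format(**params)
```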
# Count TCP connection attempts (SYN-only packets) between source and destination apps
tshark -n -r example-file.pcap -T fields -e ip.src -e ip.dst -e tcp.dstport -Y "tcp.flags == 0x0002" | sort | uniq -c | sort -nr
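The `sort | uniq -c | sort -nr` aggregation at the end of that pipeline can also be done in Python once the tab-separated tshark field output is parsed; a sketch with made-up sample rows:

```python
from collections import Counter

# Sample (src, dst, dstport) tuples as they would be parsed from the
# tab-separated tshark output (the addresses here are made up)
rows = [
    ("10.0.0.1", "10.0.0.2", "443"),
    ("10.0.0.1", "10.0.0.2", "443"),
    ("10.0.0.3", "10.0.0.2", "80"),
]

# Counter replaces sort | uniq -c; most_common() replaces sort -nr
counts = Counter(rows)
for (src, dst, port), n in counts.most_common():
    print(n, src, dst, port)
```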
# Consider adding image: jwilder/nginx-proxy for managing routes
version: '2'
services:
  arangodb:
    image: arangodb/arangodb:3.0.0
    ports:
      - "8529:8529"
    environment:
      - ARANGO_NO_AUTH=1
  app:
This describes deploying and running OpenShift Origin on Amazon Web Services.
It is based on the code and installer as of 2014-02-26, so YMMV.
We will be using a VPC in us-east-1 for deployment and Route53 for DNS. I will leave the VPC setup as an exercise for the reader.
from werkzeug.datastructures import CallbackDict
from flask.sessions import SessionInterface, SessionMixin
from itsdangerous import URLSafeTimedSerializer, BadSignature
from datetime import datetime, timedelta

class ItsdangerousSession(CallbackDict, SessionMixin):
    def __init__(self, initial=None):
        # CallbackDict calls on_update (passing the dict itself as the
        # argument) whenever the session contents are mutated
        def on_update(self):
            self.modified = True
        CallbackDict.__init__(self, initial, on_update)
        self.modified = False
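The point of the CallbackDict is the `modified` flag: Flask only re-serializes and re-sets the session cookie when the session has actually changed. A minimal stdlib-only sketch of the same pattern (`TrackedDict` is illustrative, not part of Flask or Werkzeug):

```python
class TrackedDict(dict):
    """Dict that records whether it has been written to since creation."""

    def __init__(self, initial=None):
        super().__init__(initial or {})
        self.modified = False

    def __setitem__(self, key, value):
        # Any write flips the flag, signalling the cookie needs re-serializing
        self.modified = True
        super().__setitem__(key, value)

    def __delitem__(self, key):
        self.modified = True
        super().__delitem__(key)
```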
The fine folks at Datamountaineer have developed stream-reactor, which makes it easier to integrate data pipelines with Kafka.
As a demo I'm streaming Twitter data to Kafka, and then from Kafka to RethinkDB.
Configs using Landoop's fast-data-dev environment:
Source: Twitter
name=TwitterSourceConnector
connector.class=com.eneco.trading.kafka.connect.twitter.TwitterSourceConnector