Emmett Butler (emmettbutler)
emmettbutler / celery_beat_tracing_example.py
Last active March 22, 2023 17:40
A minimal example of how to use dd-trace-py v1.9.3 to trace celery.beat and redbeat scheduling functionality
import logging
import time
import celery
from ddtrace import Pin
from ddtrace import config
# this should be omitted if running celery under ddtrace-run
import ddtrace.bootstrap.sitecustomize # noqa
import datetime as dt
import random
from collections import namedtuple
Heartbeat = namedtuple("Heartbeat", ["nginx_ts", "inc"])
# how many seconds should a pageview be assumed to last for each test?
# it might be useful to include the actual average pageview length
# or to set these to quartiles from actual data
EXAMPLE_PAGEVIEW_LENGTHS = (5, 20, 120, 600, 1200)
# how many iterations of each backoff strategy to run
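The constants above suggest a simulation that replays pageviews of varying lengths and counts heartbeat pings. A minimal sketch of that idea, assuming a fixed 10-second heartbeat interval and a `heartbeats_for` helper (both are illustrative assumptions, not part of the gist):

```python
from collections import namedtuple

Heartbeat = namedtuple("Heartbeat", ["nginx_ts", "inc"])

# pageview durations to test, as in the gist
EXAMPLE_PAGEVIEW_LENGTHS = (5, 20, 120, 600, 1200)

def heartbeats_for(pageview_seconds, interval=10):
    """Emit one Heartbeat per interval over the life of a pageview (assumed helper)."""
    return [
        Heartbeat(nginx_ts=ts, inc=interval)
        for ts in range(interval, pageview_seconds + 1, interval)
    ]

# a 120-second pageview at a 10-second interval yields 12 heartbeats
print(len(heartbeats_for(120)))  # → 12
```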
emmettbutler / example_strategy.js
Last active August 22, 2018 22:36
An example script demonstrating how to track custom video players with Parse.ly's JavaScript API
var platform = "myVideoPlatform";
var strategy = {
    platform: platform,
    searchTags: ["DIV"],
    verify: function(elem) {
        return (' ' + elem.className + ' ').indexOf(" my_video ") !== -1;
    },
    subscribe: function(elem) {
        var playerApi = myVideoJsApi(elem);
        playerApi.on("play", function(playedVideoMetadata) {
https://www.flickr.com/photos/amontalenti/29920282384/in/album-72157675666411885/
https://www.flickr.com/photos/amontalenti/29949417843/in/album-72157675666411885/
https://www.flickr.com/photos/amontalenti/30554215586/in/album-72157675666411885/
https://www.flickr.com/photos/amontalenti/albums/72157675666411885/with/29920282384/
2016-07-15 15:56:02,923 - pykafka.producer - DEBUG - Sending 795 messages to broker 12
2016-07-15 15:56:02,927 - pykafka.producer - DEBUG - Sending 949 messages to broker 11
2016-07-15 15:56:03,024 - pykafka.producer - DEBUG - Successfully sent 949/949 messages to broker 11
2016-07-15 15:56:08,025 - pykafka.producer - DEBUG - Sending 60 messages to broker 11
2016-07-15 15:56:08,041 - pykafka.producer - DEBUG - Successfully sent 60/60 messages to broker 11
2016-07-15 15:56:08,041 - pykafka.producer - INFO - Worker exited for broker ue1d-kafka-beta1b.lan.cogtree.com:9092
2016-07-15 15:56:08,041 [INFO] pykafka.producer - Worker exited for broker ue1d-kafka-beta1b.lan.cogtree.com:9092
2016-07-15 15:56:15,000 - pykafka.producer - INFO - Starting new produce worker for broker 11
2016-07-15 15:56:15,000 [INFO] pykafka.producer - Starting new produce worker for broker 11
2016-07-15 15:56:15,006 - pykafka.producer - INFO - Starting new produce worker for broker 12
#!/bin/bash
# ssh-multi
# D.Kovalov
# Based on http://linuxpixies.blogspot.jp/2011/06/tmux-copy-mode-and-how-to-control.html
# a script to ssh multiple servers over multiple tmux panes
starttmux() {
    if [ -z "$HOSTS" ]; then
try:
    msg = self.consumer.consume()
except ConsumerStoppedException:
    self.consumer.stop()
    log.info('Restarting stopped pykafka consumer')
    self.consumer.start()
    msg = self.consumer.consume()
if not msg or not msg.value:
    continue
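The snippet above restarts a consumer that has raised `ConsumerStoppedException` and retries the read. The same control flow can be sketched standalone with a stand-in consumer class (`FakeConsumer` and its behavior are assumptions for illustration; pykafka's real exception lives at `pykafka.exceptions.ConsumerStoppedException`):

```python
class ConsumerStoppedException(Exception):
    """Stand-in for pykafka.exceptions.ConsumerStoppedException."""

class FakeConsumer:
    """Raises once to simulate a stopped consumer, then serves a message."""
    def __init__(self):
        self.stopped = True

    def consume(self):
        if self.stopped:
            raise ConsumerStoppedException
        return "payload"

    def stop(self):
        self.stopped = True

    def start(self):
        self.stopped = False

def consume_with_restart(consumer):
    try:
        return consumer.consume()
    except ConsumerStoppedException:
        # mirror the gist: stop, restart, then retry the read once
        consumer.stop()
        consumer.start()
        return consumer.consume()

print(consume_with_restart(FakeConsumer()))  # → payload
```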
emmettbutler / log_pure_python.txt
Last active February 5, 2016 23:44
Message too large
DEBUG:pykafka.connection:Connecting to localhost:9092
DEBUG:pykafka.connection:Successfully connected to localhost:9092
INFO:pykafka.handlers:RequestHandler.stop: about to flush requests queue
INFO:pykafka.cluster:Discovered 3 brokers
DEBUG:pykafka.cluster:Discovered broker id 0: emmett-debian:9092
DEBUG:pykafka.connection:Connecting to emmett-debian:9092
DEBUG:pykafka.connection:Successfully connected to emmett-debian:9092
DEBUG:pykafka.cluster:Discovered broker id 1: emmett-debian:9093
DEBUG:pykafka.connection:Connecting to emmett-debian:9093
DEBUG:pykafka.connection:Successfully connected to emmett-debian:9093
INFO:pykafka.producer:Worker exited for broker emmett-debian:9092
ERROR:pykafka.producer:Exception encountered in worker thread:
  File "/home/emmett/git/parsely/pykafka/pykafka/producer.py", line 438, in queue_reader
    self.producer._send_request(batch, self)
  File "/home/emmett/git/parsely/pykafka/pykafka/producer.py", line 366, in _send_request
    self._update()
  File "/home/emmett/git/parsely/pykafka/pykafka/producer.py", line 201, in _update
    self._cluster.update()
  File "/home/emmett/git/parsely/pykafka/pykafka/cluster.py", line 382, in update
    metadata = self._get_metadata()
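The "Message too large" failure above occurs when a produced message exceeds the broker's configured size limit. One common client-side mitigation is to guard or split oversized payloads before producing; a minimal sketch, where the 1 MB limit and the `chunk_payload` helper are assumptions (match the limit to the broker's `message.max.bytes`):

```python
MAX_MESSAGE_BYTES = 1_000_000  # assumed limit; match the broker's message.max.bytes

def chunk_payload(payload: bytes, limit: int = MAX_MESSAGE_BYTES):
    """Split an oversized payload into produce-sized chunks (assumed helper)."""
    return [payload[i:i + limit] for i in range(0, len(payload), limit)]

# a 2.5 MB payload becomes two full chunks and one remainder
chunks = chunk_payload(b"x" * 2_500_000)
print([len(c) for c in chunks])  # → [1000000, 1000000, 500000]
```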
from pykafka import KafkaClient

client = KafkaClient(hosts="mycoolkafkahost:9092")
topic = client.topics["mycooltopic"]
consumer = topic.get_simple_consumer()
while True:
    zmq_message = get_from_zmq(block=False)
    kafka_message = consumer.consume(block=False)
    do_something_with_messages((zmq_message, kafka_message))
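The loop above polls ZeroMQ and Kafka without blocking, so neither source starves the other. The same pattern can be exercised with two in-memory queues standing in for the sockets (`poll_nonblocking` is an assumed helper, not part of either library's API):

```python
from queue import Empty, Queue

def poll_nonblocking(q):
    """Return the next item, or None when the source has nothing ready."""
    try:
        return q.get(block=False)
    except Empty:
        return None

# stand-ins for the ZeroMQ socket and Kafka consumer
zmq_q, kafka_q = Queue(), Queue()
zmq_q.put("pageview")

# one pass of the polling loop: one source has data, the other does not
results = (poll_nonblocking(zmq_q), poll_nonblocking(kafka_q))
print(results)  # → ('pageview', None)
```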