Sreedhar Bukya (sreedharbukya)
@clemfromspace
clemfromspace / tasks.py
Created November 24, 2017 04:34
Running a scrapy spider from a celery task
from billiard.context import Process
from scrapy.crawler import Crawler
from scrapy import signals
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor
from celery_app import app

class CrawlerProcess(Process):
    # Run the crawl in a separate billiard process so a fresh
    # Twisted reactor can be started for each Celery task.
    def __init__(self, spider_cls):
        Process.__init__(self)
        self.crawler = Crawler(spider_cls, get_project_settings())
        self.crawler.signals.connect(reactor.stop, signal=signals.spider_closed)

    def run(self):
        self.crawler.crawl()
        reactor.run()
@miticojo
miticojo / k8s-centralized-logging.yaml
Last active October 21, 2019 09:43
K8S - Centralized logging with ELK and Fluentd (kubernetes >= 1.6)
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
@luckydev
luckydev / gist:b2a6ebe793aeacf50ff15331fb3b519d
Last active October 22, 2022 14:03
Increase the max number of open files in Ubuntu 16.04/18.04 for Nginx
# maximum capability of system
user@ubuntu:~$ cat /proc/sys/fs/file-max
708444
# available limit
user@ubuntu:~$ ulimit -n
1024
# To increase the available limit to say 200000
user@ubuntu:~$ sudo vim /etc/sysctl.conf
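The limits shown in the transcript can also be read and raised per-process from Python's standard library; a minimal sketch using the Unix-only `resource` module (the numbers on your system will differ):

```python
import resource

# Per-process open-file limits, equivalent to `ulimit -n` (soft)
# and `ulimit -Hn` (hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raise the soft limit to the hard limit for this process only.
# System-wide changes still go through /etc/sysctl.conf and
# /etc/security/limits.conf, as the gist describes.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

This only affects the current process and its children; the `sysctl.conf` edit above is still what changes the system-wide ceiling.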
@coingraham
coingraham / updateredrivesqs.py
Created March 21, 2016 19:27
Python - Update the redrive policy for existing queues
import boto3
import json
# update these with your actual settings
maxreceivecount = 3
queueurl = "https://sqs.region.amazonaws.com/accountnumber/queuename"
deadlettertargetarn = "arn:aws:sqs:region:accountnumber:deadletterqueuename"
# build the redrive policy as a JSON string and apply it to the queue
policy = {"maxReceiveCount": maxreceivecount, "deadLetterTargetArn": deadlettertargetarn}
sqs = boto3.client("sqs")
sqs.set_queue_attributes(
    QueueUrl=queueurl,
    Attributes={"RedrivePolicy": json.dumps(policy)}
)
@drmalex07
drmalex07 / celeryconfig.py
Last active August 31, 2023 03:53
A quickstart example with a Celery queue. #celery
# This is a quickstart! In the real world use a real broker (message queue)
# such as Redis or RabbitMQ !!
BROKER_URL = 'sqlalchemy+sqlite:///tasks.db'
CELERY_RESULT_BACKEND = "db+sqlite:///results.db"
CELERY_IMPORTS = ['tasks']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
@javierarilos
javierarilos / CollectionPipeline.js
Last active January 27, 2016 13:49
Main examples from Martin Fowler's article on the Collection Pipeline programming pattern, implemented with JavaScript and underscore.js
'use strict';
/**
* Implementing main examples from Martin Fowler article
* on Collection Pipeline programming pattern:
* http://martinfowler.com/articles/collection-pipeline/
* with #javascript and underscore.js: http://underscorejs.org/
*
* tested with nodejs v0.10.32
*
* underscore.js is required:
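The gist body is cut off above, but the pattern itself is easy to show: each stage of a collection pipeline consumes the previous stage's output. A small sketch of the same filter/map/reduce shape in Python (the data and field names are invented for illustration):

```python
# Hypothetical orders; mirrors the filter -> map -> reduce shape
# of the article's examples.
orders = [
    {"customer": "alice", "amount": 250},
    {"customer": "bob", "amount": 75},
    {"customer": "carol", "amount": 400},
]

high_value = [o for o in orders if o["amount"] > 100]    # filter
customers = [o["customer"] for o in high_value]          # map
total = sum(o["amount"] for o in high_value)             # reduce

print(customers)  # ['alice', 'carol']
print(total)      # 650
```

Underscore's `_.chain(...).filter(...).map(...).reduce(...)` expresses the same stages as one fluent chain; comprehensions just name each intermediate collection.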
@somandubey
somandubey / gist:52bff8c7cc8639292629
Created August 28, 2014 19:56
How to increase ulimit in Linux

Step 1 (ulimit): open /etc/sysctl.conf and add the line below.

    vi /etc/sysctl.conf

Add the following at the end of the file:

    fs.file-max = 65536

Save and exit, then run sysctl -p to apply the change.

@prschmid
prschmid / webservicethreadingtestcase.py
Last active March 31, 2022 13:21
An example of how to perform a multi-threaded unit test of a web service. The particulars of this example make use of some conventions used when developing a web service with the Flask microframework, but it can be modified to fit most needs.
"""An example of how to perform a multi-threaded unit test of a web service.
The particulars of this example make use of some conventions used when
developing a web service when using the Flask microframework, but can be
modified to fit most needs.
"""
import json
import threading
import time
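The file is truncated after its imports; the approach it describes can be sketched with the standard library alone. Everything below (the handler, the `http.server` stand-in for Flask, the request count) is an assumption for illustration, not the gist's actual code:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        # keep test output quiet
        pass

# Serve on an ephemeral port in a background thread, like running
# the app under test in-process.
server = ThreadingHTTPServer(("127.0.0.1", 0), PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

# Hit the endpoint from several client threads at once and collect
# the status codes (list.append is thread-safe in CPython).
results = []

def worker():
    with urllib.request.urlopen(url) as resp:
        results.append(resp.status)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.shutdown()

assert results == [200] * 8
```

With Flask you would instead start the app (e.g. via its development server) in the background thread and point the workers at it; the threading and assertion structure stays the same.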
@ibeex
ibeex / foo.log
Created August 4, 2012 13:46
Flask logging example
A warning occurred (42 apples)
An error occurred
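The two log lines above are the example's output. A hedged reconstruction of logging setup that would produce them, using only the stdlib (with Flask you would call `app.logger.warning(...)` and `app.logger.error(...)`; the handler wiring here is an assumption):

```python
import io
import logging

# Log to an in-memory stream so the output is easy to inspect.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(message)s"))

logger = logging.getLogger("foo")
logger.setLevel(logging.WARNING)
logger.addHandler(handler)

# With Flask these would be app.logger.warning(...) / app.logger.error(...).
logger.warning("A warning occurred (%d apples)", 42)
logger.error("An error occurred")

print(stream.getvalue())
```

In a real app the handler would write to `foo.log` via `logging.FileHandler` rather than an in-memory stream.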