Asiku Vitalis (vitasiku)

@vitasiku
vitasiku / gym.py
Created September 29, 2021 20:32 — forked from Alir3z4/gym.py
import os
import pickle
import warnings
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
@vitasiku
vitasiku / rabbitmq_notes.md
Created June 13, 2021 19:14 — forked from Dev-Dipesh/rabbitmq_notes.md
Why RabbitMQ is better than Redis, and notes on RabbitMQ.

Redis is a database, whereas RabbitMQ was designed as a message router, or message-oriented middleware (MOM), so if you look for benchmarks you'll find that RabbitMQ outperforms Redis when it comes to message routing.

RabbitMQ is written in Erlang, a language the telecom industry designed specifically for routing messages. Because it is written in Erlang you get clustering out of the box, which means that in a clustered environment RabbitMQ will outperform Redis even further.

Furthermore, the AMQP protocol gives you guaranteed delivery of messages. If the network drops while a message is being consumed, the consumer won't be able to acknowledge it, so the consumer drops the message and RabbitMQ requeues it. If you publish a message and the queue can't acknowledge it back to the publisher due to network problems or timeouts, RabbitMQ drops the message and the publisher keeps trying to publish it. You can also have publish retries with backoff policies.
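
A minimal sketch of that acknowledge-or-requeue behaviour using the pika client (the local broker, the "tasks" queue name, and the process() helper are assumptions for illustration, not part of the original notes):

import pika

# Assumption: a RabbitMQ broker on localhost; the queue name is illustrative.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)

# Publisher side: enable publisher confirms so an unconfirmed publish can be retried.
channel.confirm_delivery()
channel.basic_publish(
    exchange="",
    routing_key="tasks",
    body=b"hello",
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)

# Consumer side: acknowledge only after the work succeeds. If the connection
# drops before basic_ack, RabbitMQ requeues the message for another consumer.
def on_message(ch, method, properties, body):
    process(body)  # hypothetical work function
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="tasks", on_message_callback=on_message)
channel.start_consuming()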

@vitasiku
vitasiku / celery.sh
Created June 10, 2021 21:08 — forked from amatellanes/celery.sh
Celery handy commands
# Useful celery config.
app = Celery('tasks',
             broker='redis://localhost:6379',
             backend='redis://localhost:6379')
app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
    CELERY_QUEUES=(
        Queue('default', routing_key='tasks.#'),
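
The preview stops mid-config. For context, a hedged sketch of how a task might be defined and enqueued against an app configured like this (the add task and the worker command are assumptions, not the gist's own content):

from celery import Celery

# Assumes a broker/backend on local Redis, as in the config above.
app = Celery('tasks',
             broker='redis://localhost:6379',
             backend='redis://localhost:6379')

@app.task
def add(x, y):
    return x + y

# Enqueue from application code; a worker started with
#   celery -A tasks worker --loglevel=info
# consumes the task and stores the result in the Redis backend:
#   result = add.delay(2, 3); result.get(timeout=10)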
@vitasiku
vitasiku / start-celery-for-dev.py
Created June 10, 2021 10:04 — forked from chenjianjx/start-celery-for-dev.py
A Python script which starts a Celery worker and auto-reloads it when any code change happens.
'''
A Python script which starts a Celery worker and auto-reloads it when any code change happens.
I did this because the Celery worker's "--autoreload" option seems not to work for a lot of people.
'''
import time
from watchdog.observers import Observer  # pip install watchdog
from watchdog.events import PatternMatchingEventHandler
import psutil  # pip install psutil
import os
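
Only the imports survive in this preview. A hedged sketch of the same restart-on-change idea with watchdog and subprocess (the worker command and app name are assumptions; the original script may differ, e.g. it also imports psutil to clean up child processes):

import subprocess
import time

from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

CELERY_CMD = ["celery", "-A", "tasks", "worker", "--loglevel=info"]  # assumed app name

class RestartOnChange(PatternMatchingEventHandler):
    """Restart the worker process whenever a *.py file changes."""

    def __init__(self):
        super().__init__(patterns=["*.py"])
        self.proc = subprocess.Popen(CELERY_CMD)

    def on_any_event(self, event):
        self.proc.terminate()
        self.proc.wait()
        self.proc = subprocess.Popen(CELERY_CMD)

if __name__ == "__main__":
    handler = RestartOnChange()
    observer = Observer()
    observer.schedule(handler, path=".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
        handler.proc.terminate()
    observer.join()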
@vitasiku
vitasiku / README-python-framework-benchmark.md
Created February 7, 2021 14:38 — forked from nhymxu/README-python-framework-benchmark.md
Flask vs Falcon vs FastAPI benchmark
gunicorn run:app --workers=9
gunicorn run:app --workers=9 --worker-class=meinheld.gmeinheld.MeinheldWorker

MacBook Pro 2015, Python 3.7

Framework | Server | Req/s | Max latency | +/- Stdev
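
The result rows of the table are cut off in this preview. For context, the app such a benchmark serves is typically trivial; a hedged sketch of a run.py matching the gunicorn commands above (Flask variant shown; the route and payload are assumptions):

# run.py: a minimal WSGI app of the kind the gunicorn commands above would serve.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    return jsonify(message="hello")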
@vitasiku
vitasiku / py-args-for-bash.sh
Created December 22, 2020 07:56 — forked from benkehoe/py-args-for-bash.sh
Python argument parsing for bash scripts
#!/bin/sh
# This is free and unencumbered software released into the public domain.
#
# Anyone is free to copy, modify, publish, use, compile, sell, or
# distribute this software, either in source code form or as a compiled
# binary, for any purpose, commercial or non-commercial, and by any
# means.
#
# In jurisdictions that recognize copyright laws, the author or authors
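
The preview of py-args-for-bash.sh shows only its license header. The pattern the gist title describes is delegating a bash script's argument parsing to Python's argparse; a hedged sketch of that idea (the script name, options, and variable names are all assumptions, not the gist's):

# parse_args.py (hypothetical helper). A bash script would use it roughly as:
#   eval "$(python3 parse_args.py "$@")"
import argparse
import shlex

parser = argparse.ArgumentParser()
parser.add_argument("--name", required=True)
parser.add_argument("--count", type=int, default=1)

try:
    args = parser.parse_args()
except SystemExit:
    # argparse already printed usage to stderr; make the eval'ing bash script exit too.
    print("exit 1")
    raise

# Emit shell-evaluable assignments for the calling bash script.
print(f"NAME={shlex.quote(args.name)}")
print(f"COUNT={args.count}")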
def is_cyclic(input_list):
"""
The intuition is very simple and can be thought of as traversing a double-linked list or tree-traversals.
- For the given_list to be cyclic, the first and last chars in words that form the list should match.
- Which means, these chars should form even pairs.
Thus, this function,
1. Creates a new list consisting of only the first and last character of every word in the list.
2. Convert the new list into a string.
3. Counts the number of occurences of every character in the new string in step 2.
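
The preview cuts off before the function body. A sketch that follows the counting approach the docstring describes (the final even-pair check is an inference from step 3 above, not something visible in the preview):

from collections import Counter

def is_cyclic(input_list):
    # Steps 1-2: first and last character of every word, joined into one string.
    edge_chars = "".join(word[0] + word[-1] for word in input_list)
    # Step 3: count the occurrences of every character.
    counts = Counter(edge_chars)
    # Inferred check: every character must pair up evenly for the chain to close.
    return all(count % 2 == 0 for count in counts.values())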
@vitasiku
vitasiku / pyproject.toml
Last active May 14, 2019 06:53
pyproject
[tool.poetry]
name = "ds-3_6_6_poetry"
version = "0.1.0"
description = ""
authors = ["Vitalis <asikuvitalis@gmail.com>"]
[tool.poetry.dependencies]
python = "3.6.6"
numpy = "^1.16"
pandas = "^0.24.2"
@vitasiku
vitasiku / MonteCarloKMeansClustering.py
Created February 26, 2019 14:58 — forked from StuartGordonReid/MonteCarloKMeansClustering.py
Monte Carlo K-Means Clustering
import math
import random
import csv
import numpy as np
import cProfile
import hashlib
memoization = {}

class Clustering:
    def k_means_clustering(self, n, s=1.0):
        """
        This method performs the K-means clustering algorithm on the data for n iterations. This involves updating
        the centroids using the mean-shift heuristic n times and reassigning the patterns to their closest centroids.
        :param n: number of iterations to complete
        :param s: the scaling factor to use when updating the centroids
        pick one which has a better solution (according to some measure of cluster quality)
        """