@Santhin
Santhin / test.txt
Created November 29, 2021 21:20
An interesting write-up
Lol, this API is really neat

Update

sudo apt-get update

Install ZSH & Oh-My-ZSH

sudo apt-get install zsh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
@Santhin
Santhin / add_python_to_WINDOWS_WSL_ubuntu.md
Created June 6, 2021 13:23 — forked from ikhsanalatsary/add_python_to_WINDOWS_WSL_ubuntu.md
Add python to Windows Subsystem for Linux (WSL) [ubuntu]
#source: https://www.kaggle.com/gemartin/load-data-reduce-memory-usage
import pandas as pd
import numpy as np

def reduce_mem_usage(df):
    """Iterate through all the columns of a dataframe and modify the data
    type to reduce memory usage (condensed from the original)."""
    for col in df.select_dtypes(include=["number"]).columns:
        kind = "integer" if pd.api.types.is_integer_dtype(df[col]) else "float"
        df[col] = pd.to_numeric(df[col], downcast=kind)
    return df
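The memory savings in the gist above come from downcasting wide numeric dtypes (e.g. int64 to int8) whenever the column's value range allows it. A minimal self-contained illustration of that idea (not the gist's exact function, which follows the Kaggle source linked above):

```python
import numpy as np
import pandas as pd

# Small frame whose columns default to 64-bit dtypes.
df = pd.DataFrame({"a": [1, 2, 3], "b": [0.5, 1.5, 2.5]})
before = df.memory_usage(deep=True).sum()

# Downcast each numeric column to the narrowest dtype that fits its values.
for col in df.columns:
    kind = "integer" if pd.api.types.is_integer_dtype(df[col]) else "float"
    df[col] = pd.to_numeric(df[col], downcast=kind)

after = df.memory_usage(deep=True).sum()
print(df.dtypes.to_dict())        # 'a' becomes int8, 'b' becomes float32
print(f"{before} -> {after} bytes")
```

On real datasets with millions of rows, the same loop routinely cuts DataFrame memory by half or more.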
import sqlalchemy as sa
import urllib
import pandas as pd
import pickle5 as pickle
from tqdm import tqdm

class PandasToSQL:
    """This class wraps pandas' to_sql function with a tqdm progress bar."""
@Santhin
Santhin / remove-all-from-docker.sh
Created January 13, 2021 22:51
Docker cleanup script, taken from Stack Overflow/GitHub
#!/bin/bash
# Stop and remove all containers
echo "Removing containers:"
if [ -n "$(docker container ls -aq)" ]; then
  docker container stop $(docker container ls -aq);
  docker container rm $(docker container ls -aq);
fi;
# Remove all images
echo "Removing images:"
if [ -n "$(docker image ls -aq)" ]; then
  docker image rm -f $(docker image ls -aq);
fi;
@Santhin
Santhin / asyncio_instalation.py
Created January 13, 2021 19:09
asyncio reactor installation
# asyncio reactor installation (CORRECT) - `reactor` must not be defined at this point
# https://docs.scrapy.org/en/latest/_modules/scrapy/utils/reactor.html?highlight=asyncio%20reactor#
import scrapy.utils.reactor
import asyncio
from twisted.internet import asyncioreactor
scrapy.utils.reactor.install_reactor('twisted.internet.asyncioreactor.AsyncioSelectorReactor')
is_asyncio_reactor_installed = scrapy.utils.reactor.is_asyncio_reactor_installed()
print(f"Is asyncio reactor installed: {is_asyncio_reactor_installed}")
from twisted.internet import reactor
@Santhin
Santhin / scheduler.py
Last active July 5, 2022 02:51
APscheduler with scrapy
# source: Stack Overflow
from scrapy import spiderloader
from scrapy.utils import project
from scrapy.crawler import CrawlerRunner
from twisted.internet import reactor, defer
from scrapy.utils.log import configure_logging
import logging
from datetime import datetime
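The imports above set up the pattern of running Scrapy crawls on a timer via `CrawlerRunner` and the Twisted reactor. The scheduling half of that pattern, independent of Scrapy/APScheduler (which may not be installed), can be sketched with the stdlib `sched` module; `crawl_job` here is a hypothetical stand-in for kicking off a spider:

```python
import sched
import time

runs = []

def crawl_job(name):
    # Hypothetical stand-in for starting a Scrapy crawl via CrawlerRunner.
    runs.append((name, time.monotonic()))

scheduler = sched.scheduler(time.monotonic, time.sleep)
# Queue three "crawls" 0.1 s apart, like a cron-style APScheduler trigger.
for i in range(3):
    scheduler.enter(0.1 * i, priority=1, action=crawl_job,
                    argument=(f"spider-{i}",))
scheduler.run()   # blocks until every queued job has fired

print([name for name, _ in runs])
```

The real gist replaces the stand-in with a deferred-returning function so the Twisted reactor can chain crawls without blocking.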