Bartek Skwira (BartlomiejSkwira)

@BartlomiejSkwira
BartlomiejSkwira / url_add_query.py
Last active October 9, 2020 09:37 — forked from sam-ghosh/url_add_query.py
Django template tag to append a query string to a url - Python 3
# Python 3 version
from urllib.parse import urlsplit, urlunsplit
from django import template
from django.http import QueryDict
register = template.Library()
@register.simple_tag
def url_add_query(url, **kwargs):
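The preview above stops at the function signature. A minimal sketch of a body consistent with those imports (the kwargs handling is an assumption, not necessarily the gist's exact code):

# Sketch of a possible body (assumed, not the gist's verbatim code)
@register.simple_tag
def url_add_query(url, **kwargs):
    scheme, netloc, path, query, fragment = urlsplit(url)
    query_dict = QueryDict(query, mutable=True)
    for key, value in kwargs.items():
        query_dict[key] = value  # add or overwrite each query parameter
    return urlunsplit((scheme, netloc, path, query_dict.urlencode(), fragment))

In a template this would be used roughly as {% url_add_query request.get_full_path page=2 %} after loading the tag library.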
@BartlomiejSkwira
BartlomiejSkwira / path_operations.py
Created April 26, 2019 10:50
Python path operations #python
# get the directory one level up from the current one, like '../'
from pathlib import Path
Path.cwd().parents[0]
@BartlomiejSkwira
BartlomiejSkwira / pass_task_id_to_celery.py
Created April 17, 2019 11:34
Specify task_id and pass it to a Celery task #python #celery
add.apply_async(args, kwargs, task_id=i)
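For context, a self-contained sketch of the same idea, with the task definition and an explicit UUID as the task id (the task, broker URL, and id here are illustrative, not from the gist):

import uuid
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # broker URL is illustrative

@app.task
def add(x, y):
    return x + y

# Generate the id up front so it can be stored or logged before the task runs
task_id = str(uuid.uuid4())
result = add.apply_async(args=(2, 3), task_id=task_id)
assert result.id == task_id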
# pytest: the -s switch disables per-test output capturing
pytest -s testfile
@BartlomiejSkwira
BartlomiejSkwira / wget_website
Created November 16, 2018 08:43
Mirror/copy/download a website for offline browsing
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.com
# list the IDs of all running containers
docker ps -q
# stop all containers (-a includes stopped ones, for which stop is a no-op)
docker stop $(docker ps -a -q)
# kill all running containers
docker kill $(docker ps -q)
# prune stale remote-tracking branches for origin
git fetch --prune origin
# prune stale remote-tracking branches from all remotes
git fetch --prune --all
# recreate a single service's container in detached mode
docker-compose up -d --force-recreate <service>
@BartlomiejSkwira
BartlomiejSkwira / backup_docker_postgres.sh
Last active April 3, 2018 14:42
Backup and restore postgres in Docker
# Back up a database (custom-format dump)
docker exec -u <your_postgres_user> <postgres_container_name> pg_dump -Fc <db_name> > db.dump
# Restore it (-C makes pg_restore recreate the database from the dump)
docker exec -i -u <your_postgres_user> <postgres_container_name> pg_restore -C -d <db_name> < db.dump
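Filled in with concrete, purely illustrative names (a container called pg, the default postgres user, a database mydb), the two commands would look like this; note that with -C, pg_restore only uses the -d database for the initial connection and recreates the dumped database itself:

# Dump the "mydb" database from the "pg" container in custom format
docker exec -u postgres pg pg_dump -Fc mydb > db.dump
# Restore it, connecting to the existing "postgres" database; "mydb" is recreated from the dump
docker exec -i -u postgres pg pg_restore -C -d postgres < db.dump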
Wget is a command-line utility that can retrieve all kinds of files over the HTTP and FTP protocols. Since websites are served through HTTP and most web media files are accessible through HTTP or FTP, this makes Wget an excellent tool for ripping websites.
While Wget is typically used to download single files, it can be used to recursively download all pages and files that are found through an initial page:
wget -r -p https://www.makeuseof.com
However, some sites may detect and prevent what you’re trying to do because ripping a website can cost them a lot of bandwidth. To get around this, you can disguise yourself as a web browser with a user agent string:
wget -r -p -U Mozilla https://www.makeuseof.com
If you want to be polite, you should also limit your download speed (so you don’t hog the web server’s bandwidth) and pause between each download (so you don’t overwhelm the web server with too many requests):
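The command itself is cut off here; a plausible completion using wget's --limit-rate and --wait options (the specific values are only examples) would be:

wget -r -p -U Mozilla --limit-rate=200k --wait=5 https://www.makeuseof.com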