# set http proxy
export http_proxy=http://PROXYHOST:PROXYPORT
# set http proxy with user and password
export http_proxy=http://USERNAME:PASSWORD@PROXYHOST:PROXYPORT
# set http proxy with user and password (with special characters)
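If the password contains characters that are special in a URL (such as @, :, or /), percent-encode them; an illustrative example with placeholder values:

# a password like p@ss:w/rd is written as p%40ss%3Aw%2Frd
export http_proxy=http://USERNAME:p%40ss%3Aw%2Frd@PROXYHOST:PROXYPORT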
Many users of Git are curious about the lack of delta compression at the object (blob) level when commits are first written. This efficiency is saved until the pack file is written. Loose objects are written in compressed, but non-delta format at the time of each commit.
A simple run-through of a commit sequence, making only the smallest change to an image (stored as uncompressed TIFF to amplify the observable behavior), helps illustrate this deferred and different approach to efficiency.
Create the repo:
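A minimal sketch of the experiment (the sample file path and the way the image is edited are illustrative):

git init tiff-demo && cd tiff-demo
cp /path/to/sample.tif image.tif    # large, uncompressed TIFF
git add image.tif
git commit -m "add image"
du -sh .git/objects                 # one full, zlib-compressed loose blob

# change a single pixel in image.tif with any editor, then:
git add image.tif
git commit -m "tiny change"
du -sh .git/objects                 # roughly doubles: a second full loose blob, still no delta

git gc                              # repacking is where delta compression happens
du -sh .git/objects                 # shrinks again, since the second blob becomes a small delta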
I have run an nginx container...
docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS             NAMES
6d67de07731d   nginx   "nginx -g 'daemon ..."   40 minutes ago   Up 40 minutes   80/tcp, 443/tcp   epic_goldberg
I want to use Debian for debugging:
docker run -it --pid=container:6d67de07731d --net=container:6d67de07731d --cap-add sys_admin debian
I can see the nginx process:
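For example (the debian base image typically does not ship ps, so either read /proc directly or install procps first):

cat /proc/1/comm                              # nginx is PID 1 in the shared PID namespace
apt-get update && apt-get install -y procps   # optional, for the usual tooling
ps aux | grep nginx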
# configure proxy for git while on corporate network
# From https://gist.github.com/garystafford/8196920
function proxy_on(){
    # assumes $USERDOMAIN, $USERNAME, $USERDNSDOMAIN
    # are existing Windows system-level environment variables
    # assumes $PASSWORD, $PROXY_SERVER, $PROXY_PORT
    # are existing Windows current user-level environment variables (your user)
    # environment variables are UPPERCASE even in git bash
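    # the rest of the gist is not reproduced above; a hedged sketch of what the
    # body typically does with those variables (domain/escaping details may differ):
    export http_proxy="http://${USERDOMAIN}%5C${USERNAME}:${PASSWORD}@${PROXY_SERVER}:${PROXY_PORT}"
    export https_proxy="${http_proxy}"
    git config --global http.proxy "${http_proxy}"
    git config --global https.proxy "${http_proxy}"
}

function proxy_off(){
    unset http_proxy https_proxy
    git config --global --unset http.proxy
    git config --global --unset https.proxy
}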
From: https://github.com/search?q=pydantic+basemodel+__root__&type=Code
- https://github.com/Food-X-Technologies/foodx_devops_tools/blob/832b930ff59ededd6ba96a42e86d93ab9fce87e1/foodx_devops_tools/puff/_puff_parameters.py
- https://github.com/DHARPA-Project/kiara/blob/023caa381df08bd276c9c9327727c047f490efd0/src/kiara/utils/models.py
- https://github.com/DGEXSolutions/osrd/blob/5b9bdb9c0bcb70dc4d1a12d4eabc4aa494d5bf56/api/osrd_infra/schemas/path.py
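Those results use pydantic v1's custom root type (__root__). A minimal, self-contained illustration of the pattern (not code from those repositories; in pydantic v2 this was replaced by RootModel):

from typing import List
from pydantic import BaseModel  # pydantic v1 API

class Tags(BaseModel):
    __root__: List[str]

    def __iter__(self):
        return iter(self.__root__)

    def __getitem__(self, item):
        return self.__root__[item]

tags = Tags.parse_obj(['alpha', 'beta'])
print(tags[0], list(tags))  # alpha ['alpha', 'beta']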
This gist is an example of how you can simply install and run an extended Postgres using docker-compose. It assumes that you have docker and docker-compose installed and running on your workstation.

- Requires docker and docker-compose
- Clone via http: git clone https://gist.github.com/b0b7e06943bd389560184d948bdc2d5b.git
- Make load-extensions.sh executable
- Build the image: docker-compose build
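Once the image is built, bringing the stack up and checking that the extensions were created might look like this (the service name postgres and the psql defaults are assumptions, not taken from the gist):

docker-compose up -d
docker-compose exec postgres psql -U postgres -c 'SELECT extname FROM pg_extension;'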
function main {
    Update-Windows-Configuration
    Install-Utils
    Install-Browsers
    Install-Fonts
import abc
import dataclasses
from typing import Callable, Generic, TypeVar

# TODO: pip install returns
# TODO: add `returns.contrib.mypy.returns_plugin` to mypy plugins
# TODO: read the docs at https://github.com/dry-python/returns
from returns.primitives.hkt import Kind1, SupportsKind1, kinded

_ValueType = TypeVar('_ValueType')
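Roughly what those imports enable: SupportsKind1 marks a one-type-argument container as usable in higher-kinded positions, Kind1 is the "Container[Value]" form used in signatures, and kinded restores precise inference for callers once the mypy plugin is enabled. A rough, illustrative sketch (the names below are mine, not from the returns docs):

_NewValueType = TypeVar('_NewValueType')

@dataclasses.dataclass
class Box(SupportsKind1['Box', _ValueType]):
    # toy single-argument container, purely illustrative
    inner_value: _ValueType

    def map(self, function: Callable[[_ValueType], _NewValueType]) -> 'Box[_NewValueType]':
        return Box(function(self.inner_value))

@kinded
def double(container: Kind1['Box', int]) -> Kind1['Box', int]:
    # at runtime the Kind1 annotation is erased; a concrete Box is passed in
    return container.map(lambda value: value * 2)

print(double(Box(21)).inner_value)  # 42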
- Create an UNLOGGED table. This reduces the amount of data written to persistent storage by up to 2x.
- Set WITH (autovacuum_enabled=false) on the table. This saves CPU time and IO bandwidth on useless vacuuming of the table (since we never DELETE or UPDATE the table).
- Insert rows with COPY FROM STDIN. This is the fastest possible approach to insert rows into a table.
- Minimize the number of indexes in the table, since they slow down inserts. Usually an index on time timestamp with time zone is enough.
- Add synchronous_commit = off to postgresql.conf.
- Use table inheritance for fast removal of old data:
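A minimal sketch of that inheritance layout (table and column names are illustrative, not from the original list):

-- parent table; rows are COPYed into the per-day children, not into the parent
CREATE UNLOGGED TABLE samples (
    time  timestamp with time zone NOT NULL,
    value double precision NOT NULL
) WITH (autovacuum_enabled = false);

-- one child per day, constrained to its date range
CREATE UNLOGGED TABLE samples_2024_01_01 (
    CHECK (time >= '2024-01-01' AND time < '2024-01-02')
) INHERITS (samples) WITH (autovacuum_enabled = false);

CREATE INDEX ON samples_2024_01_01 (time);

-- removing a day of old data is a cheap, metadata-only operation
DROP TABLE samples_2024_01_01;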
NOTE: This is a question I found on StackOverflow which I’ve archived here, because the answer is so effing phenomenal.
If you are not into long explanations, see [Paolo Bergantino’s answer][2].