This is a list of URLs for PostgreSQL EXTENSION repos, sorted alphabetically by parent repo, with active forks listed under each parent.
⭐️ >= 10 stars
⭐️⭐️ >= 100 stars
⭐️⭐️⭐️ >= 1000 stars
Star counts may not be up to date.
This logging setup configures Structlog to output pretty logs in development and JSON log lines in production.
You can then use Structlog loggers or standard logging loggers, and both will be processed by the Structlog pipeline (see the hello() endpoint for reference). That way, any log generated by your dependencies will also be processed and enriched, even if they know nothing about Structlog!
Requests are assigned a correlation ID by the asgi-correlation-id middleware (either captured from the incoming request or generated on the fly).
All logs are linked to the correlation ID, and to the Datadog trace/span if instrumented.
This data, "global to the request", is stored in context vars and automatically added to all logs produced during the request, thanks to Structlog.
You can add to these "global local variables" at any point in an endpoint with `structlog.contextvars.bind_contextvars()`, passing whatever keyword arguments you want attached to all subsequent logs.
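A minimal sketch of that idea with plain Structlog, outside any framework glue (the processor list and the bound key below are illustrative, not this project's exact configuration):

import structlog

def setup_logging(env: str = "dev") -> None:
    # Pretty console output in development, JSON log lines in production.
    renderer = (
        structlog.processors.JSONRenderer()
        if env == "prod"
        else structlog.dev.ConsoleRenderer()
    )
    structlog.configure(
        processors=[
            structlog.contextvars.merge_contextvars,   # pull in request-scoped vars
            structlog.processors.add_log_level,
            structlog.processors.TimeStamper(fmt="iso"),
            renderer,
        ]
    )

setup_logging()
# Bind request-scoped data once; every log line afterwards carries it.
structlog.contextvars.bind_contextvars(correlation_id="hypothetical-id")
structlog.get_logger().info("request handled")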
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 12 * * *"
  jobTemplate:
    spec:
      backoffLimit: 0
      template:
package main

import (
	"database/sql"
	"testing"
	"time"

	_ "github.com/lib/pq" // blank import registers the "postgres" driver with database/sql
)
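
// A hypothetical continuation, purely as a sketch (the DSN handling and the
// helper's name are assumptions): wait until Postgres accepts connections
// before the tests proceed, failing after a fixed deadline.
func waitForDB(t *testing.T, dsn string) *sql.DB {
	t.Helper()
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		t.Fatal(err)
	}
	deadline := time.Now().Add(10 * time.Second)
	for db.Ping() != nil {
		if time.Now().After(deadline) {
			t.Fatal("database not reachable before deadline")
		}
		time.Sleep(250 * time.Millisecond)
	}
	return db
}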
Go         19 hrs 25 mins ██████████████▏░░░░░░ 67.3%
Bash       2 hrs 53 mins  ██░░░░░░░░░░░░░░░░░░░ 10.0%
JSON       2 hrs 16 mins  █▋░░░░░░░░░░░░░░░░░░░ 7.9%
JavaScript 1 hr 52 mins   █▎░░░░░░░░░░░░░░░░░░░ 6.5%
Makefile   1 hr 37 mins   █▏░░░░░░░░░░░░░░░░░░░ 5.6%
For this configuration you can use any web server you like; I decided to use nginx because it is the one I mostly work with.
Generally, a properly configured nginx can handle up to 400K to 500K requests per second when clustered; the most I have seen myself is 50K to 80K requests per second (non-clustered) at around 30% CPU load. Granted, that was on two Intel Xeon CPUs with Hyper-Threading enabled, but it also works without problems on slower machines.
Keep in mind that this config is used in a testing environment, not in production, so you will need to work out how best to implement most of these features for your own servers.
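As a starting point, the usual tuning knobs look roughly like this (the values are illustrative defaults to adjust for your own hardware, not benchmark-backed recommendations):

worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # per-worker cap on open connections
    multi_accept on;            # accept all pending connections at once
}

http {
    sendfile on;                # zero-copy file transfers in the kernel
    tcp_nopush on;              # coalesce headers and body into fewer packets
    keepalive_timeout 30;       # close idle keep-alive connections sooner
    gzip on;                    # compress responses to save bandwidth
}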
" Maintainer: Khoa <khoa.hd96@gmail.com> | |
" We want the latest vim settings/options, it must be first because it changes other options as a side effect | |
set nocompatible | |
" Vim plug settings ------------------- {{{ | |
if empty(glob('~/.vim/autoload/plug.vim')) | |
silent !curl -fLo ~/.vim/autoload/plug.vim --create-dirs | |
\ https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim | |
autocmd VimEnter * PlugInstall --sync | source $HOME/.vimrc |
Quite a lot of different people have been on the same train of thought. Gary Bernhardt's formulation of a "functional core, imperative shell" seems to be the most often cited.
The "imperative shell" wraps and uses your "functional core". The result is that the shell has fewer paths but more dependencies, while the core has no dependencies but encapsulates the different logic paths. So we're encapsulating dependencies on one side and business logic on the other. Put another way: find the separation by doing as much as you can without mutation, then encapsulating the mutation separately. The functional core gets many fast unit tests; the imperative shell gets a few integration tests.
function! s:corona_stats() abort
  let l:lines = []
  " Pairs of API field keys and their display labels
  let l:keys = [
        \ ['country', 'Country'],
        \ ['cases', 'Cases'],
        \ ['todayCases', 'Cases (today)'],
        \ ['deaths', 'Deaths'],
        \ ['todayDeaths', 'Deaths (today)'],
        \ ['recovered', 'Recovered'],
        \]