Hey! I saw this has been indexed by the search engines. It is a first draft of a post I ended up publishing on my blog at: Scaling PostgreSQL With Pgpool and PgBouncer
Thanks for stopping by!
This playbook has been removed as it is now very outdated.
def s3_form_tag(options = {})
  bucket            = options[:bucket]
  access_key_id     = options[:access_key_id]
  secret_access_key = options[:secret_access_key]
  key               = options[:key] || ''
  content_type      = options[:content_type] || '' # Defaults to binary/octet-stream if blank
  redirect          = options[:redirect] || '/'
  acl               = options[:acl] || 'public-read'
  expiration_date   = options[:expiration_date].strftime('%Y-%m-%dT%H:%M:%S.000Z') if options[:expiration_date]
  max_filesize      = options[:max_filesize] || 671088640 # 640 MB
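For illustration, assuming the rest of the helper goes on to build the signed POST policy and emit the form markup, it might be invoked from a view roughly like this (every value below is a placeholder; "${filename}" is S3's own substitution token for the uploaded file's name):

<%= s3_form_tag(bucket: 'my-bucket',
                access_key_id: ENV['AWS_ACCESS_KEY_ID'],
                secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
                key: 'uploads/${filename}',
                redirect: '/uploads/done',
                expiration_date: 1.hour.from_now) %>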
I frequently administer remote servers over SSH and need to copy data to my clipboard. If the text I want to copy fits on one screen, I simply select it with my mouse and press CMD-C, which relies on my terminal emulator (iTerm2) to put it on the clipboard.
This isn't practical for larger texts, like when I want to copy the whole contents of a file.
If I had been editing large-file.txt locally, I could easily copy its contents by using the pbcopy command:
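pbcopy reads standard input, so the whole file can be piped straight to the clipboard:

pbcopy < large-file.txt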
For this configuration you can use any web server you like; I decided to use nginx because it is what I work with most.
Generally, a properly configured nginx can handle up to 400K-500K requests per second (clustered); the most I have seen is 50K-80K requests per second (non-clustered) at about 30% CPU load. Of course, that was on 2 x Intel Xeon with Hyper-Threading enabled, but it works without problems on slower machines too.
Keep in mind that this config is used in a testing environment, not in production, so you will need to adapt most of these features as best as possible for your own servers.
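Below is a minimal sketch of the kind of worker and connection tuning this refers to; the numbers are placeholders to adjust for your hardware, not the configuration used in my tests:

worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 65535;     # raise the per-worker open-file limit

events {
    worker_connections 8192;    # concurrent connections per worker
    multi_accept on;            # accept as many new connections as possible at once
}

http {
    sendfile on;                # serve static files from the kernel
    tcp_nopush on;
    keepalive_timeout 15;

    server {
        listen 80;
        return 200 "ok\n";      # trivial endpoint, handy for load-testing the tuning
    }
}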
-- Concatenate two json objects by re-aggregating every key/value pair from both.
CREATE OR REPLACE FUNCTION public.json_append(data json, insert_data json)
RETURNS json
IMMUTABLE
LANGUAGE sql
AS $$
    SELECT ('{' || string_agg(to_json(key) || ':' || value, ',') || '}')::json
    FROM (
        SELECT * FROM json_each(data)
        UNION ALL
        SELECT * FROM json_each(insert_data)
    ) pairs;
$$;
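A quick illustration with hypothetical values (note that string_agg has no ORDER BY here, so the key order is not guaranteed, though in practice the pairs come out in the order passed in):

SELECT public.json_append('{"a": 1}'::json, '{"b": 2}'::json);
-- typically returns {"a":1,"b":2}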
// You can edit this code!
// Click here and start typing.
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// identifyPanic inspects the call stack (via the runtime package) to report
// where a panic originated.
func identifyPanic() string {
	var name, file string
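As an illustration (not part of the original snippet, and the safeRun wrapper is just an example name), a helper like this is normally called from a deferred recover so the panic's origin can be logged:

func safeRun(f func()) {
	defer func() {
		if r := recover(); r != nil {
			fmt.Printf("recovered from panic at %s: %v\n", identifyPanic(), r)
		}
	}()
	f()
}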
-- Expand every btree index into one row per indexed column, keeping the
-- index's reltuples/relpages statistics from pg_class.
WITH btree_index_atts AS (
    SELECT nspname, relname, reltuples, relpages, indrelid, relam,
           regexp_split_to_table(indkey::text, ' ')::smallint AS attnum,
           indexrelid AS index_oid
    FROM pg_index
    JOIN pg_class ON pg_class.oid = pg_index.indexrelid
    JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace
    JOIN pg_am ON pg_class.relam = pg_am.oid
    WHERE pg_am.amname = 'btree'
),
#!/bin/bash
#
# In incremental mode, pgbadger builds an index (.bin files) that it uses
# to parse the log, and it has no built-in mechanism for cleaning up files
# that are no longer in use.
#
# This script removes the obsolete .bin files, i.e. those belonging to days
# before the current week.
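A rough sketch of what such a cleanup can look like (the directory is a placeholder, and deleting .bin files older than seven days only approximates "anything before the current week"):

find /var/log/pgbadger -name '*.bin' -mtime +7 -print -delete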
memuse measures the total unique physical memory used by a process and its children, ignoring duplicate copy-on-write pages and shared memory.
This is a solution for http://serverfault.com/questions/676335/how-measure-memory-without-copy-on-write-pages
It's a quick and dirty utility, but feel free to fork & improve.
Example:
~ » sudo ./memuse.py 15897 eugene@eugene-thinkpad
PID Commandline Frames (+unique) VMEM
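To make the mechanism concrete, here is a rough, hypothetical sketch (not the actual memuse.py) of counting unique physical page frames through /proc/<pid>/pagemap. It assumes 4 KiB pages, needs root to see frame numbers, and unlike memuse it looks at a single PID rather than a whole process tree:

import struct
import sys

PAGE = 4096  # assumes 4 KiB pages

def unique_frames(pid):
    """Return the set of physical page frame numbers mapped by `pid`."""
    frames = set()
    with open(f"/proc/{pid}/maps") as maps, open(f"/proc/{pid}/pagemap", "rb") as pagemap:
        for line in maps:
            start, end = (int(x, 16) for x in line.split()[0].split("-"))
            pagemap.seek(start // PAGE * 8)             # one 8-byte entry per virtual page
            data = pagemap.read((end - start) // PAGE * 8)
            for (entry,) in struct.iter_unpack("<Q", data):
                if entry >> 63:                          # bit 63: page present in RAM
                    frames.add(entry & ((1 << 55) - 1))  # bits 0-54: physical frame number
    return frames

if __name__ == "__main__":
    frames = unique_frames(int(sys.argv[1]))
    print(f"{len(frames)} unique frames, about {len(frames) * PAGE // 1024} KiB")

Because two processes that share a copy-on-write page report the same frame number, putting the frames into a set is what makes the count "unique" rather than double-counted.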