<Additional information about your API call. Try to use verbs that match both request type (fetching vs modifying) and plurality (one vs multiple).>

URL:
  <The URL Structure (path only, no root url)>

Method:
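As a minimal illustration (the endpoint and values here are hypothetical, not from any real API), a filled-in entry for fetching one user might begin:

  Returns json data about a single user.

  URL:
    /users/:id

  Method:
    GET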
# 301 Redirects for .htaccess

# Redirect a single page:
Redirect 301 /pagename.php http://www.domain.com/pagename.html

# Redirect an entire site:
Redirect 301 / http://www.domain.com/

# Redirect an entire site to a subfolder:
Redirect 301 / http://www.domain.com/subfolder/
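These Redirect directives are handled by Apache's mod_alias module: each one maps the old path to the new absolute URL and answers with a permanent (301) status, which tells browsers and search engines that the move is permanent.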
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
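To put those numbers on one scale: a single random 4K SSD read (150 us) costs as much as fifty 1K Zippy compressions (50 x 3 us) or fifteen 1K sends over a 1 Gbps network (15 x 10 us).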
-- show running queries (pre 9.2)
SELECT procpid, age(clock_timestamp(), query_start), usename, current_query
FROM pg_stat_activity
WHERE current_query != '<IDLE>' AND current_query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;

-- show running queries (9.2+; idle sessions are now flagged by the state
-- column, and the query column holds the last statement run)
SELECT pid, age(clock_timestamp(), query_start), usename, query
FROM pg_stat_activity
WHERE state != 'idle' AND query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
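Once a runaway pid has been identified, it can be stopped with Postgres's built-in signalling functions (12345 below is a placeholder for a pid returned by the query above):

-- cancel the running query but keep the backend alive
SELECT pg_cancel_backend(12345);
-- or terminate the backend entirely
SELECT pg_terminate_backend(12345);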
wget --no-check-certificate --content-disposition https://github.com/joyent/node/tarball/v0.7.1
# --no-check-certificate was necessary for me to have wget not puke about https

# curl equivalent: -L follows redirects, -J names the file from the
# Content-Disposition header, -O writes to a file instead of stdout
curl -LJO https://github.com/joyent/node/tarball/v0.7.1
RDBMS-based job queues have been criticized recently for being unable to handle heavy loads. And they deserve it, to some extent, because the queries used to safely lock a job have been pretty hairy. SELECT FOR UPDATE followed by an UPDATE works fine at first, but then you add more workers, and each is trying to SELECT FOR UPDATE the same row (and maybe throwing NOWAIT in there, then catching the errors and retrying), and things slow down.
On top of that, they have to actually update the row to mark it as locked, so the rest of your workers are sitting there waiting while one of them propagates its lock to disk (and the disks of however many servers you're replicating to). QueueClassic got some mileage out of the novel idea of randomly picking a row near the front of the queue to lock, but I still can't seem to get more than an extra few hundred jobs per second out of it under heavy load.
So, many developers have started going straight to Redis instead.
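A minimal sketch of the hairy locking pattern described above (the jobs table and its columns are illustrative assumptions, not any particular library's schema):

BEGIN;

-- Every worker races to claim the job at the front of the queue.
-- NOWAIT makes the SELECT raise an error immediately instead of
-- blocking when another worker already holds the row lock.
SELECT id
FROM jobs
WHERE locked_at IS NULL
ORDER BY run_at
LIMIT 1
FOR UPDATE NOWAIT;

-- The winner must then write its lock to the row, so everyone else
-- waits on that write (and on replication) before they can retry.
UPDATE jobs
SET locked_at = now()
WHERE id = 42;  -- placeholder for the id returned by the SELECT above

COMMIT;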
git branch -m old_branch new_branch         # Rename branch locally
git push origin :old_branch                 # Delete the old branch
git push --set-upstream origin new_branch   # Push the new branch, set local branch to track the new remote
import "bytes" | |
func StreamToByte(stream io.Reader) []byte { | |
buf := new(bytes.Buffer) | |
buf.ReadFrom(stream) | |
return buf.Bytes() | |
} | |
func StreamToString(stream io.Reader) string { | |
buf := new(bytes.Buffer) |
Automated analysis is the main advantage of working with a modern statically typed, compiled language like C++. Code analysis tools can tell us when we have implemented an operator overload in a non-canonical form, when a method should have been declared const, or when the scope of a variable can be narrowed.
WITH table_scans as (
    SELECT relid,
        tables.idx_scan + tables.seq_scan as all_scans,
        ( tables.n_tup_ins + tables.n_tup_upd + tables.n_tup_del ) as writes,
        pg_relation_size(relid) as table_size
    FROM pg_stat_user_tables as tables
),
all_writes as (
    SELECT sum(writes) as total_writes
    FROM table_scans