My Elasticsearch cheatsheet with example usage via the REST API (still a work in progress)
# use ImageMagick convert
# the order is important: -density applies to input.pdf, while -rotate, -attenuate, +noise and -colorspace apply to the output
convert -density 90 input.pdf -rotate 0.5 -attenuate 0.2 +noise Multiplicative -colorspace Gray output.pdf
// Blogpost: https://rz.my/2017/11/decrypting-cordova-crypt-file-plugin.html
var fs = require("fs"),
    path = require("path"),
    crypto = require("crypto");

// Placeholders: replace with the key/IV values used by the crypt-file plugin in the target app.
var config = {
    key : 'CRYPT_KEY',
    iv  : 'CRYPT_IV'
};
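Continuing from the snippet above, a minimal decryption sketch, assuming the assets were encrypted with AES-256-CBC and that `config.key`/`config.iv` already hold the raw 32- and 16-character values (check the plugin's hook script and the blog post for the exact key/IV handling in your version); `decryptFile` and the file paths are just illustrative names:

```js
// Sketch only: AES-256-CBC with the key/iv strings from the config above.
function decryptFile(srcPath, destPath) {
  var data = fs.readFileSync(srcPath);
  // Some plugin versions store the ciphertext base64-encoded; if so, decode first:
  // data = Buffer.from(data.toString(), 'base64');
  var decipher = crypto.createDecipheriv('aes-256-cbc', config.key, config.iv);
  var decrypted = Buffer.concat([decipher.update(data), decipher.final()]);
  fs.writeFileSync(destPath, decrypted);
}

// Example: decrypt a single asset pulled out of the app's assets/www folder.
decryptFile(path.join(__dirname, 'index.js'), path.join(__dirname, 'index.decrypted.js'));
```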
0-mail.com
007addict.com
020.co.uk
027168.com
0815.ru
0815.su
0clickemail.com
0sg.net
0wnd.net
0wnd.org
Service | SSL | Status | Response Type | Allowed Methods | Allowed Headers
---|---|---|---|---|---
Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources (e.g. fonts) on a web page to be requested from another domain outside the domain from which the first resource was served. This is set on the server side and there is nothing you can do from the client side to change that setting; that is up to the server/API. There are some ways to get around it, though.

Sources: MDN - HTTP Access Control | Wikipedia - CORS

CORS is configured server-side by adding headers to each response which allow the resource to be requested from outside its own domain, for example from your localhost. This is primarily controlled by the header:

Access-Control-Allow-Origin
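As a sketch of what that looks like on the server side, here is a plain Node.js `http` server that permits cross-origin requests (the port, allowed methods, and wildcard origin are just example values, not a recommendation for production):

```js
var http = require('http');

http.createServer(function (req, res) {
  // Allow any origin to read this response; restrict this to a specific
  // origin (e.g. 'https://example.com') for anything real.
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');

  // Preflight requests just need the headers above and an empty response.
  if (req.method === 'OPTIONS') {
    res.writeHead(204);
    return res.end();
  }

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);
```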
Server Price Breakdown: DigitalOcean, Amazon AWS LightSail, Vultr, Linode, OVH, Hetzner, Scaleway/Online.net:
Permalink: git.io/vps
Provider | Type | RAM | Cores | Storage | Transfer | Network | Price
---|---|---|---|---|---|---|---
/**
 * Crawl the sitemap.xml for 301 redirections and 404 errors.
 * Source: http://edmondscommerce.github.io/php/crawl-an-xml-sitemap-quality-check-301-and-404.html
 *
 * To use this script you need to set a very large max_execution_time to
 * avoid "Fatal error: Maximum execution time..."; I suggest running the script from the terminal.
 * Ex: $ php test-xml.php > ~/Desktop/sitemap-curl-result.txt
 *
 * For 3000 links the script takes around 45 minutes to 1 hour on average.
 */
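The PHP script itself is not included here; as a rough sketch of the same check in Node.js (not the script from the linked post; the sitemap URL is hypothetical and only built-in modules are used), you could fetch each `<loc>` entry and report its HTTP status:

```js
var https = require('https');

// Hypothetical sitemap URL; replace with your own (assumes https links).
var SITEMAP_URL = 'https://www.example.com/sitemap.xml';

function fetch(url, cb) {
  https.get(url, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { cb(res.statusCode, body); });
  }).on('error', function (err) { cb(0, String(err)); });
}

fetch(SITEMAP_URL, function (status, xml) {
  // Naive <loc> extraction; fine for a quick check, use a real XML parser otherwise.
  var urls = (xml.match(/<loc>(.*?)<\/loc>/g) || []).map(function (m) {
    return m.replace(/<\/?loc>/g, '');
  });

  // Fired concurrently here; throttle or run sequentially for thousands of links.
  urls.forEach(function (u) {
    fetch(u, function (code) {
      // Flag the interesting cases: 301 redirects, 404s, and request errors.
      if (code === 301 || code === 404 || code === 0) {
        console.log(code, u);
      }
    });
  });
});
```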
<table>
  <thead>
    <tr>
      <th>Payment</th>
      <th>Issue Date</th>
      <th>Amount</th>
      <th>Period</th>
    </tr>
  </thead>
  <tbody>
RDBMS-based job queues have been criticized recently for being unable to handle heavy loads. And they deserve it, to some extent, because the queries used to safely lock a job have been pretty hairy. SELECT FOR UPDATE followed by an UPDATE works fine at first, but then you add more workers, and each is trying to SELECT FOR UPDATE the same row (and maybe throwing NOWAIT in there, then catching the errors and retrying), and things slow down.
On top of that, they have to actually update the row to mark it as locked, so the rest of your workers are sitting there waiting while one of them propagates its lock to disk (and the disks of however many servers you're replicating to). QueueClassic got some mileage out of the novel idea of randomly picking a row near the front of the queue to lock, but I still can't seem to get more than an extra few hundred jobs per second out of it under heavy load.
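To make the pattern being criticized concrete, here is a minimal sketch of the naive SELECT ... FOR UPDATE approach using node-postgres; the `jobs` table, its columns, and the worker flow are made up for illustration, not taken from any particular queue library:

```js
var { Client } = require('pg');

// Hypothetical table: jobs(id, payload, locked boolean, done boolean).
async function workOneJob() {
  var client = new Client(); // connection settings come from PG* env vars
  await client.connect();
  try {
    await client.query('BEGIN');

    // Every worker races for the row at the front of the queue; with many
    // workers they all contend for the same row, which is the slowdown
    // described above. NOWAIT makes the losers error out instead of blocking.
    var res = await client.query(
      'SELECT id, payload FROM jobs ' +
      'WHERE NOT locked AND NOT done ' +
      'ORDER BY id LIMIT 1 FOR UPDATE NOWAIT'
    );

    if (res.rows.length === 0) {
      await client.query('ROLLBACK');
      return;
    }

    var job = res.rows[0];
    // The row also has to be updated to mark it locked, which is the extra
    // write (and replication) cost mentioned above.
    await client.query('UPDATE jobs SET locked = true WHERE id = $1', [job.id]);
    await client.query('COMMIT');

    // ...run the job here, then mark it done and unlock it...
  } catch (err) {
    await client.query('ROLLBACK');
    // Postgres error 55P03 (lock_not_available) means another worker won; retry later.
  } finally {
    await client.end();
  }
}
```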
So, many developers have started going straight t