This is inspired by *A half-hour to learn Rust* and *Zig in 30 minutes*.
Your first Go program, the classic "Hello World", is pretty simple.
First we create a workspace for our project:
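As a sketch of those two steps using Go modules (the directory name `hello` and the module path `example.com/hello` are placeholders, and the original may describe a GOPATH-style workspace instead):

```sh
# Create a workspace (module) for the project -- names are placeholders.
mkdir hello && cd hello
go mod init example.com/hello
```

```go
// hello.go -- the classic "Hello World".
package main

import "fmt"

func main() {
	fmt.Println("Hello, World!")
}
```

Running `go run .` inside the directory prints the greeting.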
""" | |
License: MIT - https://opensource.org/licenses/MIT

ChromeLogger is a protocol which allows sending logging messages to the browser.
This module implements simple support for Django. It consists of two components:

* `LoggingMiddleware`, which is responsible for sending all log messages
  associated with the request to the browser.
* `ChromeLoggerHandler`, a Python logging handler which collects all messages.
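As a sketch of how the two pieces might be wired into a project's settings (the module path `chromelogger` is an assumption about where the file is saved, and older Django versions would use `MIDDLEWARE_CLASSES` instead of `MIDDLEWARE`):

```python
# settings.py -- hedged sketch; "chromelogger" is a placeholder module path.
MIDDLEWARE = [
    # ... Django's usual middleware ...
    "chromelogger.LoggingMiddleware",
]

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        # Route log records to the handler that collects them for the browser.
        "chrome": {"class": "chromelogger.ChromeLoggerHandler"},
    },
    "root": {"handlers": ["chrome"], "level": "DEBUG"},
}
```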
Ansible playbook to set up HTTPS using Let's Encrypt on nginx.

The Ansible playbook installs everything needed to serve static files from an nginx server over HTTPS.
The server gets an A rating on [SSL Labs](https://www.ssllabs.com/).

To use:

1. Install [Ansible](https://www.ansible.com/)
2. Set up an Ubuntu 16.04 server accessible over ssh
3. Create `/etc/ansible/hosts` according to the template below and change example.com to your domain
4. Copy the rest of the files to an empty directory (`playbook.yml` in the root of that folder and the rest in the `templates` subfolder)
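The gist's actual `hosts` template isn't reproduced above; as a rough sketch, a minimal Ansible inventory for a single host might look like this (the group name and SSH user are made-up placeholders):

```ini
# /etc/ansible/hosts -- a hedged sketch, not the playbook's real template.
# Replace example.com with your domain and adjust the user as needed.
[webservers]
example.com ansible_user=ubuntu
```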
Backup:

docker exec -t -u postgres your-db-container pg_dumpall -c > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql

Restore:

cat your_dump.sql | docker exec -i your-db-container psql -Upostgres
All of the below properties or methods, when requested/called in JavaScript, will trigger the browser to synchronously calculate the style and layout*. This is also called reflow or layout thrashing, and is a common performance bottleneck.
Generally, all APIs that synchronously provide layout metrics will trigger forced reflow / layout. Read on for additional cases and details.
elem.offsetLeft
elem.offsetTop
elem.offsetWidth
elem.offsetHeight
elem.offsetParent
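As a small sketch of why this matters (the `.box` selector is made up for illustration): interleaving style writes with reads of these properties forces a synchronous layout on every iteration, while batching reads before writes does not.

```js
const boxes = document.querySelectorAll('.box');

// Bad: each write invalidates layout, and the following offsetWidth read
// forces the browser to recalculate it synchronously -- once per iteration.
boxes.forEach(box => {
  box.style.width = '100px';
  console.log(box.offsetWidth); // forced reflow
});

// Better: read everything first, then write, so layout is recalculated
// at most once after the loop.
const widths = Array.from(boxes, box => box.offsetWidth);
boxes.forEach((box, i) => {
  box.style.width = (widths[i] + 10) + 'px';
});
```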
import jwt
from django.conf import settings
from django.contrib.auth.models import User
from rest_framework import exceptions
from rest_framework.authentication import TokenAuthentication


class JSONWebTokenAuthentication(TokenAuthentication):
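    # The class body is cut off above. What follows is a hedged sketch of a
    # typical authenticate_credentials override, not the gist's actual code;
    # signing with settings.SECRET_KEY and a "user_id" claim are assumptions.
    def authenticate_credentials(self, key):
        try:
            # Verify the token's signature and decode its payload.
            payload = jwt.decode(key, settings.SECRET_KEY, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            raise exceptions.AuthenticationFailed("Invalid token")
        try:
            # Look up the user referenced by the token.
            user = User.objects.get(pk=payload["user_id"])
        except (KeyError, User.DoesNotExist):
            raise exceptions.AuthenticationFailed("No such user")
        return (user, key)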
data:text/html, <style type="text/css">.e{position:absolute;top:0;right:0;bottom:0;left:0;}</style><div class="e" id="editor"></div><script src="http://d1n0x3qji82z53.cloudfront.net/src-min-noconflict/ace.js" type="text/javascript" charset="utf-8"></script><script>var e=ace.edit("editor");e.setTheme("ace/theme/monokai");e.getSession().setMode("ace/mode/ruby");</script>
<!--
For other languages, instead of `ace/mode/ruby`, use:
Markdown   -> `ace/mode/markdown`
Python     -> `ace/mode/python`
C/C++      -> `ace/mode/c_cpp`
JavaScript -> `ace/mode/javascript`
Java       -> `ace/mode/java`
Scala      -> `ace/mode/scala`
#! /usr/bin/env python
"""{escher} -- one-file key-value storage.

What?

This is a toy application to manage persistent key-value string data.
The file {escher} is *both* the application and its data.
When you run any of the commands below, the file will be executed and,
after a data change, it will rewrite itself with the updated data.
You can copy the file under any name to create multiple datasets.
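The gist's own commands aren't shown above. As a sketch of the underlying trick only (the file name, command names, and `DATA` marker below are made up, not escher's actual code), a script can persist data by re-rendering its own source with the updated values embedded:

```python
# store.py -- hedged sketch of the "one file is both code and data" trick.
import json
import sys

DATA = {}  # this line is rewritten in place whenever the data changes

def save():
    """Rewrite this very file with the current DATA baked into the source."""
    with open(__file__) as f:
        lines = f.readlines()
    for i, line in enumerate(lines):
        if line.startswith("DATA = "):
            lines[i] = ("DATA = " + json.dumps(DATA)
                        + "  # this line is rewritten in place whenever the data changes\n")
            break
    with open(__file__, "w") as f:
        f.writelines(lines)

if __name__ == "__main__":
    cmd, *args = sys.argv[1:]
    if cmd == "set":
        DATA[args[0]] = args[1]
        save()
    elif cmd == "get":
        print(DATA.get(args[0], ""))
```

For example, `python store.py set name alice` followed by `python store.py get name` prints `alice`, with the pair stored directly in the script's own source.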
-- show running queries (pre 9.2)
SELECT procpid, age(clock_timestamp(), query_start), usename, current_query
FROM pg_stat_activity
WHERE current_query != '<IDLE>' AND current_query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;

-- show running queries (9.2)
SELECT pid, age(clock_timestamp(), query_start), usename, query
FROM pg_stat_activity
WHERE query != '<IDLE>' AND query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs