This script loads any Stack Exchange site from the XML dump (retrievable at https://archive.org/details/stackexchange via torrent) into Elasticsearch.
To use it, just call:
python load_stack.py PATH
#!/bin/bash
set -e
##
# Script for parallel execution of django's test suite.
#
# Usage: place it in the same directory as your django checkout (not inside)
# and run it
#
# Params: you can optionally supply the number of processes to spawn
import pygame
from pygame.locals import *
from random import randint
import os, sys

ARRAY_SIZE = 50

DIRECTIONS = {
    "LEFT": (-1, 0),
    "RIGHT": (1, 0),
import re
from collections import defaultdict, Counter

def bold(txt):
    return '\x1b[1m%s\x1b[0m' % txt

DATA = [
    {
        'title': 'Django',
        'description': 'Django is a high-level Python Web framework that '
""" | |
This is a simple benchmark that tests the performance of several | |
locking implementations. Each example implements a connection | |
pool that provides `get_connection()` to retrieve a connection | |
from the pool and `release()` to return a connection back to the | |
pool. | |
The `test()` function creates an instance of `pool_class` and | |
creates `num_threads` `threading.Thread` instances that simply call | |
`pool.get_connection()` and `pool.release()` repeatedly until |
#!/bin/bash
PRIMARY=$(xrandr | grep -P '^[[:alnum:]-]+ connected primary' | cut -d ' ' -f 1)
SECONDARY=$(xrandr | grep -P '^[[:alnum:]-]+ connected (?!primary)' | head -n 1 | cut -d ' ' -f 1)
OUTPUTS=$(xrandr | sed -ne '2,$s;^\([^ ]\{1,\}\).*;\1;p' | grep -v "^$PRIMARY\$")
CMD="xrandr --output $PRIMARY --primary --auto "

function resolution {
    xrandr | sed -ne "/$1/,/^[^ ]/s;^ [[:space:]]*\([0-9xip]\{1,\}\) .*;\1;p"
Experimental CLI interface for the helpers in the Python library.
Its main purpose is to expose the bulk functionality to enable rapid loading of data into an Elasticsearch cluster. Combined with the scan command, it can also be used to reindex data from Elasticsearch into a different index or cluster.
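The scan-then-bulk reindex pattern this describes can be sketched in plain Python. The action-building helper below is an assumed name (`as_bulk_action`); it produces dicts in the shape accepted by `elasticsearch.helpers.bulk` from hits in the shape yielded by `elasticsearch.helpers.scan`:

```python
def as_bulk_action(hit, target_index):
    """Convert one scanned hit into a bulk index action (illustrative helper).

    `hit` is a dict like those yielded by elasticsearch.helpers.scan,
    i.e. containing at least "_id" and "_source".
    """
    return {
        "_index": target_index,
        "_id": hit["_id"],
        "_source": hit["_source"],
    }

# Against a live cluster this would be wired up roughly as follows
# (index names are placeholders):
#
#   from elasticsearch import Elasticsearch
#   from elasticsearch.helpers import scan, bulk
#
#   es = Elasticsearch()
#   bulk(es, (as_bulk_action(h, "new-index")
#             for h in scan(es, index="old-index")))
```

Because both `scan` and the generator passed to `bulk` are lazy, the whole source index streams through without being held in memory at once.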
I hereby claim:
To claim this, I am signing this object:
from elasticsearch_dsl import DocType, Object, MetaField

class MyDoc(DocType):
    inner = Object()

    class Meta:
        dynamic_templates = MetaField([
            {
                'strings_in_inner': {
                    'path_match': 'inner.*',