Roman Ring (inoryy)
from heapq import heappop, heappush
def dijkstra(g, s, f):
    # g: adjacency map {node: [(weight, neighbour), ...]} (format assumed); s: source, f: target
    h = [(0, s)]
    cost, prev = {}, {}
    cost[s], prev[s] = 0, None
    while len(h) > 0:
        d, v = heappop(h)
        if v == f:
            break
        # the preview cuts off here; standard relaxation step assumed below
        for w, u in g.get(v, []):
            if u not in cost or d + w < cost[u]:
                cost[u], prev[u] = d + w, v
                heappush(h, (d + w, u))
    return cost, prev
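
For illustration, a quick usage sketch of the snippet above with a hypothetical toy graph in the assumed adjacency-map format:

graph = {'a': [(1, 'b'), (4, 'c')], 'b': [(2, 'c')], 'c': []}  # hypothetical example graph
cost, prev = dijkstra(graph, 'a', 'c')
print(cost['c'])  # 3, going a -> b -> c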
inoryy / tf_seq2seq_single_str_inference.py
Created May 12, 2017 11:54 — forked from noname01/tf_seq2seq_single_str_inference.py
Quick hack for loading seq2seq model and inference via feed_dict.
from pydoc import locate
import tensorflow as tf
import numpy as np
from seq2seq import tasks, models
from seq2seq.training import utils as training_utils
from seq2seq.tasks.inference_task import InferenceTask, unbatch_dict

class DecodeOnce(InferenceTask):
    '''
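
The preview above relies on the google/seq2seq InferenceTask machinery. As a rough, generic sketch of the same feed_dict idea in plain TF1, one could restore a saved graph and feed a single tokenized string directly; the checkpoint path and tensor names below are hypothetical placeholders, not the library's actual ones:

import tensorflow as tf

# Generic TF1-style feed_dict inference sketch (not the gist's DecodeOnce class).
saver = tf.train.import_meta_graph('model.ckpt.meta')  # assumed checkpoint path
with tf.Session() as sess:
    saver.restore(sess, 'model.ckpt')
    graph = tf.get_default_graph()
    # tensor names are illustrative assumptions
    source_tokens = graph.get_tensor_by_name('source_tokens:0')
    source_len = graph.get_tensor_by_name('source_len:0')
    predictions = graph.get_tensor_by_name('predicted_tokens:0')
    tokens = ['hello', 'world', 'SEQUENCE_END']
    out = sess.run(predictions, feed_dict={
        source_tokens: [tokens],
        source_len: [len(tokens)],
    })
    print(out)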
#!/usr/bin/env python
import sys, re, os

DEFAULT_MAIN = 'main.cpp'
DEFAULT_OUT = 'arena.cpp'
BUNDLE_SYM = '// ###'

def bundle(flname, main=False):
    base = os.path.dirname(flname)
import tensorflow as tf

class TfCategorical:
    # Thin TF1-style wrapper around categorical logits.
    def __init__(self, logits):
        self.logits = logits
        self.probs = tf.nn.softmax(logits)

    def sample(self):
        u = tf.random_uniform(tf.shape(self.logits))
        # Gumbel-max trick; the preview cuts off here, this is the usual completion
        return tf.argmax(self.logits - tf.log(-tf.log(u)), axis=-1)
inoryy / mcts.cpp
Last active January 25, 2019 10:29
Node* mcts(Node* root) {
    save();  // snapshot the game state before the search loop
    while (true) {
        if (TIME >= TIME_LIMIT) break;

        // selection: descend with UCT while nodes are expanded and non-terminal
        Node* node = root;
        while (!isTerminal() && node->isExpanded()) {
            node = node->uctChild();
            apply(node->action);
        }
inoryy / _results.md
Last active December 28, 2022 07:54
Fixing MKL on AMD Zen CPU.

Fixing MKL on AMD Zen CPU

As per the discussion on Reddit, it seems a workaround for Intel MKL's notorious SIMD throttling of AMD Zen CPUs is as simple as setting the MKL_DEBUG_CPU_TYPE=5 environment variable.
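
The variable only needs to be visible before MKL is loaded, so it can either be exported in the shell or, as a minimal sketch, set at the top of a Python script:

import os

# must be set before numpy (and therefore MKL) is imported
os.environ["MKL_DEBUG_CPU_TYPE"] = "5"

import numpy as np  # MKL should now take the AVX2 code path on Zen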

Benchmarks

All three scripts are executed in the same Python 3.7 environment on a first-gen AMD Zen CPU (Threadripper 1950X).
The difference should be even bigger on newer models, as first-gen Zen executes 256-bit AVX2 operations as two 128-bit micro-ops.