Jakub Arnold (darthdeus)

  • Rush, A. M., Chopra, S., and Weston, J. (2015). A neural attention model for abstractive sentence summarization.
  • Chopra, S., Auli, M., and Rush, A. M. (2016). Abstractive sentence summarization with attentive recurrent neural networks.
  • Nallapati, R., Zhou, B., and Ma, M. (2016). Classify or select: Neural architectures for extractive document summarization.
  • Nallapati, R., Zhou, B., dos Santos, C. N., Gülçehre, Ç., and Xiang, B. (2016). Abstractive text summarization using sequence-to-sequence RNNs and beyond.
  • Nallapati, R., Zhai, F., and Zhou, B. (2017). SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents.
  • See, A., Liu, P. J., and Manning, C. D. (2017). Get to the point: Summarization with pointer-generator networks.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need.
  • Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz K
View vycislitelnost.tex
\title{Státnice - Vyčíslitelnost} % "State Exams - Computability"
View glow-layers.txt
QuantizeImage/Forward/ : x=[8, 48, 48, 3] z=[None] logdet=[8]
SqueezingLayer/Forward/Scale1 : x=[8, 24, 24, 12] z=[None] logdet=[8]
ActnormBiasLayer/Forward/ : x=[8, 24, 24, 12] z=[None] logdet=[8]
ActnormScaleLayer/Forward/ : x=[8, 24, 24, 12] z=[None] logdet=[8]
ChainLayer/Forward/ : x=[8, 24, 24, 12] z=[None] logdet=[8]
ActnormLayer/Forward/ : x=[8, 24, 24, 12] z=[None] logdet=[8]
InvertibleConv1x1Layer/Forward/ : x=[8, 24, 24, 12] z=[None] logdet=[8]
AffineCouplingLayer/Forward/ : x=[8, 24, 24, 12] z=[None] logdet=[8]
ChainLayer/Forward/Step1 : x=[8, 24, 24, 12] z=[None] logdet=[8]
ActnormBiasLayer/Forward/ : x=[8, 24, 24, 12] z=[None] logdet=[8]
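Each log line above traces one layer's forward pass through a Glow-style flow: x is the activation of shape [batch, height, width, channels], and logdet carries one accumulated log-determinant term per batch element. A rough numpy sketch of how a single actnorm scale layer could produce such a logdet (function and variable names here are illustrative, not taken from the actual code):

```python
import numpy as np

def actnorm_scale_forward(x, log_scale):
    # x: [batch, H, W, C]; a per-channel scale is applied at every
    # spatial position, so each of the H*W positions contributes the
    # per-channel log|scale| to the log-determinant.
    z = x * np.exp(log_scale)  # log_scale: [C]
    h, w = x.shape[1], x.shape[2]
    logdet = np.full(x.shape[0], h * w * np.sum(log_scale))
    return z, logdet

x = np.ones((8, 24, 24, 12))
z, logdet = actnorm_scale_forward(x, np.zeros(12))
print(z.shape, logdet.shape)  # (8, 24, 24, 12) (8,)
```

With log_scale initialized to zero the layer is the identity, so logdet starts at zero and the shapes match the log above: x=[8, 24, 24, 12], logdet=[8].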
import numpy as np
import numpy.linalg as linalg
import logging

def jitchol(A, maxtries=6):
    # The tail of the gist is truncated; the retry loop below is
    # reconstructed from the standard jittered-Cholesky pattern.
    A = np.ascontiguousarray(A)
    diagA = np.diag(A)
    if np.any(diagA <= 0.):
        raise linalg.LinAlgError("non-positive diagonal elements")
    jitter = diagA.mean() * 1e-6
    for _ in range(maxtries):
        try:
            return linalg.cholesky(A + np.eye(A.shape[0]) * jitter)
        except linalg.LinAlgError:
            logging.warning("jitchol: increasing jitter to %g", jitter * 10)
            jitter *= 10
    raise linalg.LinAlgError("not positive definite, even with jitter")
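The idea behind jitchol: when a matrix is positive semi-definite only up to floating-point error, adding a small multiple of the identity to the diagonal restores positive definiteness. A self-contained illustration of the retry loop (independent of the gist's exact implementation):

```python
import numpy as np

# Rank-deficient PSD matrix: plain np.linalg.cholesky rejects it.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

jitter = np.diag(A).mean() * 1e-6
for _ in range(6):
    try:
        L = np.linalg.cholesky(A + np.eye(2) * jitter)
        break
    except np.linalg.LinAlgError:
        jitter *= 10  # grow the jitter and retry

print(np.allclose(L @ L.T, A, atol=1e-4))  # True
```

The factor returned is for the perturbed matrix A + jitter*I, so the reconstruction only matches A up to the jitter magnitude.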
View config.yml
# This file is a template for a new experiment.
# It specifies how the experiment is to be created,
# but does not hold its state.
# Name of the experiment
name: {{experiment_name}}
# A list of hyperparameters to be tuned.
# Each hyperparameter needs to specify:
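The {{experiment_name}} placeholder suggests the template is rendered before use; a minimal Mustache-style substitution sketch (the experiment name below is made up):

```python
# Hypothetical rendering step: fill in the placeholder before parsing the YAML.
template = "name: {{experiment_name}}\n"
rendered = template.replace("{{experiment_name}}", "my-experiment")
print(rendered.strip())  # name: my-experiment
```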
import tensorflow as tf

input = tf.placeholder(dtype=tf.float32, shape=None)

def encoder(x):
    return x * x

def decoder(x):
    ...  # body truncated in the gist
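The toy encoder squares its input, so the natural decoder would take the square root; the decoder body is cut off in the snippet, so that pairing is an assumption. The round trip in plain numpy:

```python
import numpy as np

def encoder(x):
    return x * x

def decoder(x):
    return np.sqrt(x)  # assumed inverse of the squaring encoder

x = np.array([1.0, 2.0, 3.0])
print(np.allclose(decoder(encoder(x)), x))  # True
```

Note the round trip only holds for non-negative inputs, since squaring discards the sign.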
darthdeus / bednarek-testbed.hpp
Created Dec 23, 2018
when you open Bednárek's code and generic_generator appears
View bednarek-testbed.hpp
template< typename SP, typename SQ>
class abstract_generator {
    virtual ~abstract_generator() {}
    void label( logger & log) const { label_( log); }

    template< bool debug>
    void run( logger & log, const SP & sp, const SQ & sq) const
    {
        if ( debug )
            run_debug_( log, sp, sq);
View sequence.txt
View tf-error.log
$ bazel build --jobs=12 --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package --verbose_failures @93bc2e2072
WARNING: The following configs were expanded more than once: [cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
WARNING: /home/darth/.cache/bazel/_bazel_darth/08554d152596e5a7df399506682a63f3/external/protobuf_archive/WORKSPACE:1: Workspace name in /home/darth/.cache/bazel/_bazel_darth/08554d152596e5a7df399506682a63f3/external/protobuf_archive/WORKSPACE (@com_google_protobuf) does not match the name given in the repository's definition (@protobuf_archive); this will cause a build error in future versions
WARNING: /home/darth/.cache/bazel/_bazel_darth/08554d152596e5a7df399506682a63f3/external/grpc/WORKSPACE:1: Workspace name in /home/darth/.cache/bazel/_bazel_darth/08554d152596e5a7df399506682a63f3/external/grpc/WORKSPACE (@com_github_grpc_grpc) does not match the name given in the repository's definition (@grpc); this will cause a build error in future versions
darthdeus /
Created Mar 16, 2018 — forked from karpathy/
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
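The full min-char-rnn gist next maps each character to an integer index and back; a minimal sketch of that step (using sorted() for a deterministic ordering, unlike the bare set() above):

```python
data = "hello world"
chars = sorted(set(data))  # deterministic; the gist uses list(set(data))
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}
print(len(chars), char_to_ix["h"])  # 8 3
```

These two dictionaries are what let the model consume one-hot index vectors and turn sampled indices back into text.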