raytroop
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_seq_item extends uvm_sequence_item;
  `uvm_object_utils(my_seq_item)

  rand logic [7:0] addr;
  rand logic [7:0] data;

  constraint addr_range_cn {
    addr inside {[10:20]};
  }
  constraint data_range_cn {
    data inside {[0:255]};  // body truncated in the original gist; range assumed
  }

  function new(string name = "my_seq_item");
    super.new(name);
  endfunction
endclass
raytroop / pycurses.py
Created October 16, 2019 13:22 — forked from claymcleod/pycurses.py
Python curses example
import sys, os
import curses

def draw_menu(stdscr):
    k = 0
    cursor_x = 0
    cursor_y = 0

    # Clear and refresh the screen for a blank canvas
    stdscr.clear()
    stdscr.refresh()
    # (the gist is truncated here; the full version continues with an
    # event loop -- see the sketch below)
raytroop / static_inline_example.md
Created December 5, 2018 16:32 — forked from htfy96/static_inline_example.md
static inline vs inline vs static in C++

In this article we compare the behavior of static, inline, and static inline free functions in the compiled binary. All of the following tests were done under g++ 7.1.1 on Linux amd64, ELF64.

Test sources

header.hpp

#pragma once

inline int only_inline() { return 42; }
static int only_static() { return 42; }
static inline int only_static_inline() { return 42; }  // gist truncated here; the third variant under test (name assumed)
raytroop / tf_gradient_clip_lr_decay.py
Created September 2, 2018 08:31 — forked from InnerPeace-Wu/tf_gradient_clip_lr_decay.py
ways to do gradient clipping and learning rate decay in tensorflow
import tensorflow as tf

# Apply exponential decay to the learning rate
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = lr  # `lr` is the initial learning rate, defined elsewhere
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           decay_steps, decay_rate, staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate)
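The excerpt covers only the learning-rate-decay half of the gist's title. For the gradient-clipping half, a common TF1 pattern is to clip by global norm between computing and applying gradients; a minimal sketch, assuming `loss` is defined elsewhere and the clip threshold of 5.0 is a placeholder:

# Clip gradients by global norm before applying them.
grads_and_vars = optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)
clipped_grads, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = optimizer.apply_gradients(zip(clipped_grads, variables),
                                     global_step=global_step)

Passing global_step to apply_gradients increments it on every step, which is what drives the exponential_decay schedule above.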