@GhibliField
GhibliField / 计算机类学术论文 28个常见出版社一般写法.md
Last active April 28, 2024 02:54
Standard forms of 28 common publishers for computer science academic papers (for use in reference lists)

| No. | Publisher (standard form) | Place of publication | Notes |
| --- | --- | --- | --- |
| 1 | AAAI | Menlo Park, CA | Association for the Advancement of Artificial Intelligence |
| 2 | Academic | Same as Elsevier | Academic Press is part of Elsevier |
| 3 | Academy Press | New York / London / Paris / San Diego, CA / San Francisco, CA / São Paulo / Sydney / Tokyo / Toronto | AP, Academy Press |
| 4 | ACL | Stroudsburg, PA | Association for Computational Linguistics |
| 5 | ACM | New York, NY | ACM Press, Association for Computing Machinery |
| 6 | AP Professional | Boston, MA / San Diego, CA / New York / London / Sydney / Tokyo / Toronto | |
| 7 | Chapman & Hall | London / Glasgow / Weinheim / New York / Madras | CH |
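For example, row 4 would produce a reference entry along these lines (an illustrative entry, not taken from the gist):

A. Author. Paper Title. In Proceedings of ACL. Stroudsburg, PA: Association for Computational Linguistics, 2020.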
@cbaziotis
cbaziotis / AttentionWithContext.py
Last active April 25, 2022 14:37
Keras layer that implements an attention mechanism with a context/query vector for temporal data. Supports masking. Follows Yang et al., "Hierarchical Attention Networks for Document Classification" (https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf).
from keras import backend as K

def dot_product(x, kernel):
    """
    Wrapper for the dot product operation, in order to be compatible with
    both the Theano and TensorFlow backends.
    Args:
        x: input tensor, e.g. (batch, timesteps, features)
        kernel: weight vector of shape (features,)
    Returns:
        A tensor with the last axis of x contracted against kernel.
    """
    if K.backend() == 'tensorflow':
        # TF cannot dot a 3D tensor with a 1D vector directly, so expand the
        # kernel to (features, 1) and squeeze the extra axis back out.
        return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
    else:
        return K.dot(x, kernel)
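The layer's scoring step (not shown in this preview) follows Yang et al.: project each timestep, compare it with a learned context vector u, and normalize over time. A minimal sketch, assuming weights W of shape (features, features), b of shape (features,), and u of shape (features,) — the names mirror the paper, not necessarily the gist:

def attention_weights(x, W, b, u):
    # x: (batch, timesteps, features) hidden states from an RNN.
    uit = K.tanh(K.dot(x, W) + b)   # per-timestep hidden representation
    ait = dot_product(uit, u)       # similarity with the context vector u
    a = K.exp(ait)
    # Normalize to a distribution over timesteps (a masked variant would
    # zero out padded steps before this division).
    a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
    return a                        # (batch, timesteps) attention weights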
@Mistobaan
Mistobaan / tensorflow_confusion_metrics.py
Created March 3, 2016 22:25
Confusion-matrix metrics written with TensorFlow ops
# from https://cloud.google.com/solutions/machine-learning-with-financial-time-series-data
import tensorflow as tf

def tf_confusion_metrics(model, actual_classes, session, feed_dict):
    # Reduce one-hot model outputs and labels to class indices.
    predictions = tf.argmax(model, 1)
    actuals = tf.argmax(actual_classes, 1)

    # Constant tensors for testing membership in the positive (1) and
    # negative (0) class.
    ones_like_actuals = tf.ones_like(actuals)
    zeros_like_actuals = tf.zeros_like(actuals)
    ones_like_predictions = tf.ones_like(predictions)
    zeros_like_predictions = tf.zeros_like(predictions)
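The preview cuts off here. A hedged sketch of how the linked Google Cloud example plausibly continues — count each confusion-matrix cell with logical_and/equal, then derive the usual metrics (TF1-style session code, matching the 2016-era API):

    # True/false positives/negatives as float counts.
    tp_op = tf.reduce_sum(tf.cast(tf.logical_and(
        tf.equal(actuals, ones_like_actuals),
        tf.equal(predictions, ones_like_predictions)), tf.float32))
    tn_op = tf.reduce_sum(tf.cast(tf.logical_and(
        tf.equal(actuals, zeros_like_actuals),
        tf.equal(predictions, zeros_like_predictions)), tf.float32))
    fp_op = tf.reduce_sum(tf.cast(tf.logical_and(
        tf.equal(actuals, zeros_like_actuals),
        tf.equal(predictions, ones_like_predictions)), tf.float32))
    fn_op = tf.reduce_sum(tf.cast(tf.logical_and(
        tf.equal(actuals, ones_like_actuals),
        tf.equal(predictions, zeros_like_predictions)), tf.float32))

    tp, tn, fp, fn = session.run([tp_op, tn_op, fp_op, fn_op], feed_dict)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1_score = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {'precision': precision, 'recall': recall,
            'f1_score': f1_score, 'accuracy': accuracy}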
@japerk
japerk / nltk_tokenize_tag_chunk.rst
Created February 25, 2012 16:36
NLTK Tokenization, Tagging, Chunking, Treebank

Sentence Tokenization
=====================

>>> from nltk import tokenize
>>> para = "Hello. My name is Jacob. Today you'll be learning NLTK."
>>> sents = tokenize.sent_tokenize(para)
>>> sents
['Hello.', 'My name is Jacob.', "Today you'll be learning NLTK."]
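The gist's title also covers tagging and chunking, but the preview stops after sentence tokenization. A minimal follow-on sketch (the POS tags shown are illustrative; exact output depends on the installed tagger model):

>>> from nltk import tokenize, tag
>>> words = tokenize.word_tokenize(sents[1])
>>> words
['My', 'name', 'is', 'Jacob', '.']
>>> tag.pos_tag(words)
[('My', 'PRP$'), ('name', 'NN'), ('is', 'VBZ'), ('Jacob', 'NNP'), ('.', '.')]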