import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM
import logging

logging.basicConfig(level=logging.INFO)  # optional: surfaces tokenizer/model download logs

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()  # inference mode: disables dropout
# -*- coding: utf-8 -*-
u"""
Beta regression for modeling rates and proportions.

References
----------
Grün, Bettina, Ioannis Kosmidis, and Achim Zeileis. Extended beta regression
in R: Shaken, stirred, mixed, and partitioned. No. 2011-22. Working Papers in
Economics and Statistics, 2011.
"""
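For context on the docstring above: beta regression (in the mean–precision parameterization used by the cited Grün, Kosmidis, and Zeileis paper) models a response $y \in (0,1)$ with mean $\mu$ and precision $\phi$:

```latex
f(y;\mu,\phi) = \frac{\Gamma(\phi)}{\Gamma(\mu\phi)\,\Gamma\bigl((1-\mu)\phi\bigr)}
\, y^{\mu\phi-1} (1-y)^{(1-\mu)\phi-1}, \qquad 0 < y < 1,
```

where "extended" beta regression links both parameters to covariates, typically $\operatorname{logit}(\mu_i) = x_i^\top \beta$ and $\log(\phi_i) = z_i^\top \gamma$.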
'''
Original implementation:
https://github.com/clab/dynet_tutorial_examples/blob/master/tutorial_parser.ipynb

The code structure and variable names are kept similar for easier cross-reference.
Not for serious business, just for some comparison between PyTorch and DyNet
(and I still prefer PyTorch).
'''
import torch as T
import torch
import torch.nn as nn
from torch.autograd import Variable  # deprecated since PyTorch 0.4; plain Tensors now autograd directly
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
import torch.nn.functional as F
import numpy as np
import itertools


def flatten(l):
    # Flatten exactly one level of nesting: [[a, b], [c]] -> [a, b, c]
    return list(itertools.chain.from_iterable(l))
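A quick sanity check of `flatten`, restated here so the snippet runs standalone. Note that it removes only one level of nesting, which matters when the input mixes depths.

```python
import itertools


def flatten(l):
    # Flatten exactly one level of nesting: [[a, b], [c]] -> [a, b, c]
    return list(itertools.chain.from_iterable(l))


print(flatten([[1, 2], [3], [4, 5]]))  # [1, 2, 3, 4, 5]
print(flatten([[[1], 2], [3]]))        # [[1], 2, 3] -- deeper lists are left intact
```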