@bentrevett
bentrevett / example-post.json
Last active October 27, 2022 23:17
linustechtips-forum-scraper
{"name": "linux", "headline": "linux", "text": "I am trying to build an os from scratch and wanted to look at Linux source code for some reference but I do not know where to find the source code ty in advance \ud83d\ude42\n \n\n\n\t\u00a0\n \n", "dateCreated": "2022-10-26T05:06:30+0000", "datePublished": "2022-10-26T05:06:30+0000", "dateModified": "2022-10-27T14:26:46+0000", "image": "https://linustechtips.com/uploads/monthly_2022_06/comic-book.thumb.gif.78e4e143db467ea95ca54a8d79a4f022.gif", "author": {"@type": "Person", "name": "swabro", "image": "https://linustechtips.com/uploads/monthly_2022_06/comic-book.thumb.gif.78e4e143db467ea95ca54a8d79a4f022.gif", "url": "https://linustechtips.com/profile/991006-swabro/"}, "interactionStatistic": [{"@type": "InteractionCounter", "interactionType": "http://schema.org/ViewAction", "userInteractionCount": 162}, {"@type": "InteractionCounter", "interactionType": "http://schema.org/CommentAction", "userInteractionCount": 9}, {"@type": "InteractionCounter", "interactionTy
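The scraper's output above is a schema.org-style JSON record; a minimal sketch of pulling the common fields out of such a record with only the standard library (field names taken from the example above, the trimmed record here is illustrative):

```python
import json

# A trimmed record in the same shape as the scraper output above.
record = json.loads("""
{"name": "linux",
 "dateCreated": "2022-10-26T05:06:30+0000",
 "author": {"@type": "Person", "name": "swabro",
            "url": "https://linustechtips.com/profile/991006-swabro/"},
 "interactionStatistic": [
   {"@type": "InteractionCounter",
    "interactionType": "http://schema.org/ViewAction",
    "userInteractionCount": 162},
   {"@type": "InteractionCounter",
    "interactionType": "http://schema.org/CommentAction",
    "userInteractionCount": 9}]}
""")

# Map each interaction type to its count, keyed by the last URL segment.
counts = {s["interactionType"].rsplit("/", 1)[-1]: s["userInteractionCount"]
          for s in record["interactionStatistic"]}

print(record["author"]["name"])  # swabro
print(counts["ViewAction"])      # 162
```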
import re

def is_basic_expr(expr):
    """
    A basic expression has one or more positive integers separated by +-*/.
    """
    # finds more than one space
    if re.search(r"\s\s+", expr):
        return False
    # finds negative numbers: a minus attached to a digit at the start of
    # the expression or directly after another operator
    if re.search(r"(?:^|[-+*/]\s?)-\s*\d", expr):
        return False
    # accept positive integers separated by single +-*/ operators
    return re.fullmatch(r"\d+(?:\s?[-+*/]\s?\d+)*", expr) is not None
import torch
import torch.nn as nn

class LSTM(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        # forget gate: input-to-hidden and hidden-to-hidden projections
        self.w_f = nn.Linear(in_dim, hid_dim, bias=False)
        self.u_f = nn.Linear(hid_dim, hid_dim, bias=False)
@bentrevett
bentrevett / RAM-notes.md
Created February 25, 2020 15:27
Notes on the Recurrent Models of Visual Attention paper

Recurrent Models of Visual Attention

Convolutional neural networks (CNNs) have computation that scales linearly with the number of pixels in the input. What if we could build a model that only looks at a sequence of small regions (patches) within the input image? The amount of computation would then be independent of the size of the image, depending only on the size and number of the patches extracted. This also reduces task complexity, since the model can focus on the object of interest and ignore any surrounding clutter. This work is related to three branches of research: reducing computation in computer vision, "saliency detectors", and computer vision as a sequential decision task.
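As a toy illustration of the fixed-cost idea (not the paper's code): extracting a k-by-k glimpse at a given location touches k*k values no matter how large the image is. The helper name `extract_glimpse` is hypothetical.

```python
def extract_glimpse(image, center, k):
    """Return a k x k patch of `image` (a list of rows) centered at
    `center` = (row, col), clamped so the patch stays inside the image.
    Work done is O(k*k), independent of the image dimensions."""
    h, w = len(image), len(image[0])
    r0 = min(max(center[0] - k // 2, 0), h - k)
    c0 = min(max(center[1] - k // 2, 0), w - k)
    return [row[c0:c0 + k] for row in image[r0:r0 + k]]

# The glimpse is the same size whether the image is 8x8 or 800x800.
small = [[0] * 8 for _ in range(8)]
large = [[0] * 800 for _ in range(800)]
p1 = extract_glimpse(small, (4, 4), 4)
p2 = extract_glimpse(large, (400, 400), 4)
assert len(p1) == len(p1[0]) == len(p2) == len(p2[0]) == 4
```

The model's per-step cost is then set by the glimpse size and the number of glimpses taken, not the resolution of the full image.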

Biological inspirations: Humans do not perceive a whole scene at once; instead, they focus attention on parts of the visual space to acquire information, then combine it to build an internal representation of the scene. The locations at which humans fixate have been shown to be task-specific.

Recurrent Attention Model (RAM)

@bentrevett
bentrevett / english-setentence-seq2seq-with-attention-reconstruction.ipynb
Created December 17, 2019 19:23
English Sentence Seq2Seq with Attention Reconstruction
@bentrevett
bentrevett / convLSTM.py
Created July 18, 2019 16:03
convLSTM model from https://arxiv.org/abs/1901.03559 without all the bells and whistles (encoded observation skip connection, top down skip connection, pool-and-inject)
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self,
                 input_size,
                 input_dim,
                 hidden_dim,
                 kernel_size,
                 bias):
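The cell above is cut off by the page. As a rough sketch of how the standard ConvLSTM formulation completes such a cell (not necessarily this gist's exact code), one convolution over the concatenated input and hidden state can produce all four gates at once:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, input_dim, hidden_dim, kernel_size):
        super().__init__()
        self.hidden_dim = hidden_dim
        # one conv computes all four gates (i, f, o, g) in a single pass
        self.conv = nn.Conv2d(input_dim + hidden_dim, 4 * hidden_dim,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = ConvLSTMCell(input_dim=3, hidden_dim=8, kernel_size=3)
x = torch.zeros(2, 3, 16, 16)
h = c = torch.zeros(2, 8, 16, 16)
h, c = cell(x, (h, c))
print(h.shape)  # torch.Size([2, 8, 16, 16])
```

The `padding=kernel_size // 2` keeps the spatial size of the hidden state equal to that of the input, so the cell can be unrolled over a sequence of frames.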