
@dfalbel
dfalbel / [Guild AI] denoising.md
Created February 14, 2023 10:14
Guild AI Repository

This is a Guild AI runs repository. To access runs, install Guild AI and run guild pull gist:dfalbel/denoising. For more information about Guild AI Gist based repositories, see Guild AI - Gists.

This is a Guild AI runs repository. To access runs, install Guild AI and run guild pull gist:dfalbel/denoising-diffusion-runs. For more information about Guild AI Gist based repositories, see Guild AI - Gists.
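The pull workflow described above can be run from a terminal. A minimal sketch, assuming Guild AI is installed from PyPI (`pip install guildai` and `guild runs` are not from the gist itself):

```shell
# install Guild AI, then pull the runs stored in this gist
pip install guildai
guild pull gist:dfalbel/denoising-diffusion-runs

# list the runs that were pulled into the local Guild home
guild runs
```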

@dfalbel
dfalbel / parallel-dataloaders.R
Created July 29, 2021 13:57
Benchmark torch parallel dataloaders
library(torch)

# dataset simulating a slow data source: each item sleeps `time`
# seconds, then returns a random tensor of dimension `size`
# (body completed for runnability; the gist preview is truncated)
dat <- dataset(
  "mydataset",
  initialize = function(time, size, len = 100 * 32) {
    self$time <- time
    self$len <- len
    self$size <- size
  },
  .getitem = function(i) {
    Sys.sleep(self$time)
    torch_randn(self$size)
  },
  .length = function() self$len
)
@dfalbel
dfalbel / keras.R
Created June 19, 2021 13:39
Multiple outputs Keras
library(keras)
library(tensorflow)

input <- layer_input(shape = list(365, 10))
representation <- input %>%
  layer_lstm(units = 32, input_shape = list(365, 10)) %>%
  layer_dropout(rate = 0.2)
output1 <- representation %>%
  layer_dense(units = 2, name = "out1")
# second head and model assembly, completed (the preview is truncated here)
output2 <- representation %>%
  layer_dense(units = 1, name = "out2")
model <- keras_model(input, list(output1, output2))
@dfalbel
dfalbel / example_00.R
Created January 28, 2021 20:42
torch for R examples
library(torch)
library(ggplot2)

# we want to find the minimum of this function
# using gradient descent
f <- function(x) {
  x^2 - x
}
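The preview cuts off here. A minimal sketch of how the minimization might continue with torch's autograd — the starting point, learning rate, and step count are assumptions for illustration, not from the original gist:

```r
library(torch)

f <- function(x) {
  x^2 - x
}

# start from an arbitrary point, tracking gradients
x <- torch_tensor(3, requires_grad = TRUE)
lr <- 0.1

for (step in 1:100) {
  y <- f(x)
  y$backward()            # compute df/dx
  with_no_grad({
    x$sub_(lr * x$grad)   # gradient descent update
    x$grad$zero_()        # reset the accumulated gradient
  })
}

x  # approaches 0.5, the analytic minimum of x^2 - x
```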
todoist_token <- config::get("TODOIST_TOKEN")

# fetch a page of activity from the Todoist sync API
get_tasks_week <- function(week = 0, offset = 0) {
  res <- httr::POST(
    url = "https://api.todoist.com/sync/v8/activity/get",
    body = list(
      token = todoist_token,
      limit = 100,
      page = week,
      offset = offset
    )
  )
  # closing code completed; the gist preview is truncated here
  httr::content(res)
}

dataset

Daniel Falbel 4/13/2019

Context

In tf.data in Python, the API for iterating over the elements of a dataset is the following:
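The preview ends before the example itself. In the TF 1.x API that was current at the time, iteration looked roughly like this — the concrete dataset is an assumption for illustration:

```python
import tensorflow as tf  # TF 1.x-era API

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

# build an iterator and a tensor that yields the next element
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

# evaluating next_element inside a session advances the iterator
with tf.Session() as sess:
    for _ in range(3):
        print(sess.run(next_element))
```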

---
title: "Quora Question Pairs"
output:
  flexdashboard::flex_dashboard:
    orientation: columns
runtime: shiny
---
```{r global, include=FALSE}
library(keras)
library(readr)
library(purrr)

FLAGS <- flags(
  flag_integer("vocab_size", 50000),
  flag_integer("max_len_padding", 20),
  flag_integer("embedding_size", 256),
  flag_numeric("regularization", 0.0001),
  flag_integer("seq_embedding_size", 512)
)
```
# Amazon Fine Foods reviews: download, then keep only the review text
download.file("https://snap.stanford.edu/data/finefoods.txt.gz", "finefoods.txt.gz")

library(readr)
library(stringr)
library(purrr)

reviews <- read_lines("finefoods.txt.gz")
# keep lines starting with "review/text:" and strip that prefix
reviews <- reviews[str_sub(reviews, 1, 12) == "review/text:"]
reviews <- str_sub(reviews, start = 14)