# There are essentially two prominent ways to model a TSP as a MILP. One is to formulate the full model using the Miller–Tucker–Zemlin (MTZ) formulation; the other is to use so-called sub-tour elimination constraints. [1](https://www.unc.edu/~pataki/papers/teachtsp.pdf)
#
# The first formulation is fairly compact (quadratically many constraints and variables) but its weaker LP relaxation makes it unsuitable once n gets larger. The second formulation has up to exponentially many constraints, but can solve larger TSPs thanks to its stronger LP relaxation. The idea of the latter approach is to add constraints to the model *during* the solution process as soon as a solution is found that contains a sub-tour. For solution strategies like this, solvers usually offer callbacks that let you modify the model during the branch-and-cut process - this is, however, not currently supported by `ompr`.
#
# Therefore we will use the MTZ formulation and solve a fairly small TSP.
library(ompr)
library(magrittr)
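# Below is a minimal sketch of the MTZ model in ompr. The instance size `n`
# and the distance matrix `dist_mat` are illustrative placeholders, not part
# of the original gist; the modelling verbs follow ompr's API.
n <- 5
dist_mat <- as.matrix(dist(matrix(runif(n * 2), ncol = 2)))
model <- MIPModel() %>%
  # x[i, j] is 1 if we travel directly from city i to city j
  add_variable(x[i, j], i = 1:n, j = 1:n, type = "binary") %>%
  # auxiliary MTZ variables encoding the position of each city in the tour
  add_variable(u[i], i = 1:n, lb = 1, ub = n) %>%
  # minimize total travel distance
  set_objective(sum_expr(dist_mat[i, j] * x[i, j], i = 1:n, j = 1:n), "min") %>%
  # we cannot travel from a city to itself
  set_bounds(x[i, i], ub = 0, i = 1:n) %>%
  # leave and enter each city exactly once
  add_constraint(sum_expr(x[i, j], j = 1:n) == 1, i = 1:n) %>%
  add_constraint(sum_expr(x[i, j], i = 1:n) == 1, j = 1:n) %>%
  # MTZ constraints exclude sub-tours
  add_constraint(u[i] - u[j] + 1 <= (n - 1) * (1 - x[i, j]), i = 2:n, j = 2:n)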
# Linear regression with tensorflow and R
library(tensorflow)
# Model: y = X * beta + epsilon
# The ordinary least squares estimate solves the normal equation:
# beta_hat = (X'X)^(-1) X'y
# first we build the computational graph
X <- tf$placeholder(tf$float64, name = "X")
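# A hedged sketch completing the graph with the normal equation; the names
# below (y, beta, X_data, y_data) are illustrative, not from the original gist.
y <- tf$placeholder(tf$float64, name = "y")
Xt <- tf$matrix_transpose(X)
beta <- tf$matmul(tf$matrix_inverse(tf$matmul(Xt, X)), tf$matmul(Xt, y))

# then we evaluate the graph on simulated data (TF 1.x session API)
sess <- tf$Session()
X_data <- cbind(1, runif(100))
y_data <- X_data %*% c(1, 2) + rnorm(100, sd = 0.1)
sess$run(beta, feed_dict = dict(X = X_data, y = y_data))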
# a quick script to test the rhxl package; just a first look at the data
# Ethiopia Who is doing What Where - 3W December 2017
# source: https://data.humdata.org/dataset/3w-december-2017
url <- "https://data.humdata.org/dataset/615416d2-457b-461a-8155-090f0ced0bf8/resource/f71bf111-8706-42f4-ba46-4ce3c8c949dc/download/3w_hxl.xlsx"
# load the rhxl package
# https://github.com/dirkschumacher/rhxl
library(rhxl)
download.file(url, "file.xlsx")
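# A minimal sketch of the quick look: as_hxl() is rhxl's entry point for
# tagging a data set, and readxl is assumed here for reading the xlsx file.
library(readxl)
raw <- read_excel("file.xlsx")
hxl_data <- as_hxl(raw)
head(hxl_data)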
# Build a named list of quosures, one per column name passed via `...`,
# where each quosure extracts that column from `x`.
splice_df <- function(x, ...) {
  expr <- rlang::enquo(x)
  # capture the bare column names as symbols and convert them to strings
  cols <- lapply(rlang::ensyms(..., .named = TRUE), as.character)
  lapply(cols, function(col_name) {
    # a quosure equivalent to x[[col_name]]
    rlang::quo(`[[`(!!expr, !!col_name))
  })
}
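# Example usage on hypothetical data: each element is a quosure that
# evaluates to the corresponding column of mtcars.
quos <- splice_df(mtcars, cyl, mpg)
rlang::eval_tidy(quos$cyl)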
a <- Matrix::sparseVector(1:2, i = 1:2, length = 2)
b <- Matrix::sparseVector(1:2, i = 1:2, length = 2)
class(a * b)
#> [1] "dsparseVector"
#> attr(,"package")
#> [1] "Matrix"
class(a / b) # bug? numeric instead of sparseVector
#> [1] "numeric"
library(armacmp)

# taken from https://gallery.rcpp.org/articles/black-scholes-three-ways/
put_option_pricer_arma <- armacmp(function(s = type_colvec(),
                                           k = type_scalar_numeric(),
                                           r = type_scalar_numeric(),
                                           y = type_scalar_numeric(),
                                           t = type_scalar_numeric(),
                                           sigma = type_scalar_numeric()) {
  # standard Black-Scholes put price; the body is completed here following
  # the linked article's formula (pnorm support in armacmp is an assumption)
  d1 <- (log(s / k) + (r - y + sigma^2 / 2) * t) / (sigma * sqrt(t))
  d2 <- d1 - sigma * sqrt(t)
  return(pnorm(-d2) * k * exp(-r * t) - s * exp(-y * t) * pnorm(-d1))
})
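# illustrative call with made-up parameters
put_option_pricer_arma(s = c(55, 57, 60), k = 60, r = 0.01, y = 0, t = 1, sigma = 0.2)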
library(armacmp)
# Arnold, T., Kane, M., & Lewis, B. W. (2019). A Computational Approach to Statistical Learning. CRC Press.
# logistic regression using the Newton-Raphson
log_reg <- armacmp(function(X, y) {
  beta <- rep.int(0, ncol(X))
  for (i in seq_len(25)) {
    b_old <- beta
    alpha <- X %*% beta
    p <- 1 / (1 + exp(-alpha))
    # Newton-Raphson step: beta <- beta_old + (X'WX)^(-1) X'(y - p) with
    # W = diag(p * (1 - p)). This completes the truncated preview; whether
    # armacmp supports these exact element-wise operations is an assumption.
    W <- p * (1 - p)
    beta <- b_old + solve(t(X * W) %*% X, t(X) %*% (y - p))
  }
  return(beta)
})
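# sanity check against glm.fit on simulated data (illustrative only)
X <- cbind(1, matrix(rnorm(300), ncol = 3))
y <- rbinom(100, 1, plogis(X %*% c(-0.5, 1, 0.5, -1)))
cbind(log_reg(X, y), coef(glm.fit(X, y, family = binomial())))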
library(armacmp)

# some of julia's microbenchmarks translated to C++
# https://github.com/JuliaLang/Microbenchmarks/blob/master/perf.R

fib_cpp <- armacmp(function(n = type_scalar_int()) {
  # recursive Fibonacci; body completed from the standard recursion
  fib_rec <- function(nr = type_scalar_int()) {
    if (nr < 2) {
      return(nr, type = type_scalar_int())
    }
    return(fib_rec(nr - 1) + fib_rec(nr - 2), type = type_scalar_int())
  }
  return(fib_rec(n), type = type_scalar_int())
})
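# quick check against a plain R implementation
fib_r <- function(n) if (n < 2) n else fib_r(n - 1) + fib_r(n - 2)
stopifnot(fib_cpp(20L) == fib_r(20L))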
library(armacmp)
# code from https://nextjournal.com/wolfv/how-fast-is-r-with-fastr-pythran
# which in turn comes in part from http://www.tylermw.com/throwing-shade/
# Author: Tyler Morgan-Wall

# first the R version

faster_bilinear <- function (Z, x0, y0){
  i = floor(x0)
  j = floor(y0)
  XT = x0 - i
  YT = y0 - j
  # bilinear blend of the four surrounding grid values; this completes the
  # truncated preview (a full version would also guard the grid boundaries)
  (1 - YT) * (1 - XT) * Z[i, j] +
    (1 - YT) * XT * Z[i + 1, j] +
    YT * (1 - XT) * Z[i, j + 1] +
    YT * XT * Z[i + 1, j + 1]
}
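# illustrative call on a small random grid
Z <- matrix(runif(100), nrow = 10)
faster_bilinear(Z, 4.5, 6.2)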
# A ternary-like `?` operator: `cond ? a : b` evaluates a if cond is TRUE,
# otherwise b. With a single argument it falls back to the usual help operator.
`?` <- function(lhs, rhs) {
  if (missing(rhs)) {
    return(eval(bquote(utils::`?`(.(substitute(lhs))))))
  }
  rhs <- substitute(rhs)
  envir <- parent.frame()
  split_colon <- strsplit(deparse(rhs), ":")
  stopifnot(length(split_colon) == 1L, length(split_colon[[1L]]) == 2L)
  rhs_chr <- split_colon[[1L]]
  # pick a branch based on the condition (completion of the truncated
  # preview, assuming ternary semantics)
  if (isTRUE(lhs)) {
    eval(parse(text = rhs_chr[[1L]]), envir)
  } else {
    eval(parse(text = rhs_chr[[2L]]), envir)
  }
}
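# usage sketch, assuming ternary semantics
x <- 5
x > 3 ? "big" : "small"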