GitHub Gists by Simon Babayan (SimonAB)
SimonAB / compact.bbx
Created February 22, 2020 18:08
BibLaTeX compact style
%% ---------------------------------------------------------------
%% biblatex-compact
%% Version: 2020-02-22
%% ---------------------------------------------------------------
%%
\ProvidesFile{compact.bbx}
% Load the standard style to avoid copy-pasting unnecessary material
\RequireBibliographyStyle{numeric-comp}
SimonAB / pnas.bbx
Created February 22, 2020 18:05
BibLaTeX PNAS style
%% ---------------------------------------------------------------
%% biblatex-pnas --- A biblatex implementation
%% of the PNAS bibliography style
%% Version: 2020-02-22
%% ---------------------------------------------------------------
%%
\ProvidesFile{pnas.bbx}
% Load the standard style to avoid copy-pasting unnecessary material
SimonAB / counter.jl
Last active February 20, 2020 23:02
"""
This function takes a dataframe and a column name (as symbol), and
prints out the count and the proportion of items within that column.
"""
using DataFrames, Query
function counter(df::DataFrame, col::Symbol)
for level in unique(df[!, col])
t = df |>
@filter(_[col] == level)|>
DataFrame
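A quick usage sketch against the completed function above (the toy dataframe and the :species column are purely illustrative):

julia> df = DataFrame(species = ["cat", "dog", "cat", "cat"]);

julia> counter(df, :species)
cat: 3 (0.75)
dog: 1 (0.25)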
"""
This function uses gradient descent to search for the weights
that minimises the logit cost function.
A tuple with learned weights vector (θ) and the cost vector (𝐉)
are returned.
"""
function logistic_regression_sgd(X, y, λ, fit_intercept=true, η=0.01, max_iter=1000)
# Initialize some useful values
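The listing truncates the function body after this comment. Below is a minimal sketch of how such a regularised gradient descent loop could be completed against the same signature; the body is my assumption, not the gist's actual code, and despite the sgd in the name it follows the docstring in using full-batch gradient descent:

sigmoid(z) = 1 ./ (1 .+ exp.(-z))   # logistic link, applied elementwise

function logistic_regression_sgd(X, y, λ, fit_intercept=true, η=0.01, max_iter=1000)
    # Initialize some useful values
    m = size(X, 1)                              # number of examples
    X = fit_intercept ? hcat(ones(m), X) : X    # prepend a bias column
    θ = zeros(size(X, 2))                       # weights
    𝐉 = zeros(max_iter)                         # cost at each iteration
    pen = fit_intercept ? (2:length(θ)) : (1:length(θ))  # weights to regularise

    for i in 1:max_iter
        h = sigmoid(X * θ)                      # predicted probabilities
        # Regularised cross-entropy (logit) cost
        𝐉[i] = -(1 / m) * sum(y .* log.(h) .+ (1 .- y) .* log.(1 .- h)) +
               (λ / (2m)) * sum(θ[pen] .^ 2)
        # Gradient with an L2 penalty on the non-intercept weights
        ∇ = (1 / m) * (X' * (h .- y))
        ∇[pen] .+= (λ / m) .* θ[pen]
        θ .-= η .* ∇                            # descent step
    end
    return θ, 𝐉
end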
"""
lin_reg_grad_descent(X, y, α, fit_intercept=true, n_iter=2000)
This function uses gradient descent algorithm to find the best weights (θ)
that minimises the mean squared loss between the predictions that the model
generates and the target vector (y).
A tuple of 1D vectors representing the weights (θ)
and a history of loss at each iteration (𝐉) is returned.
"""