
Gabe gdbassett

  • Liberty Mutual
  • US
gdbassett / flip.R
Created September 27, 2017 19:31
coord_flip for ggvis
#' Flip the x and y axes
#'
#' This is accomplished by updating the x & y marks, flipping the
#' scales, and updating the axis labels.
#'
#' WARNING: This currently works for single rectangular-layer figures. It may not
#' work with multiple-layer figures, other marks, or signals.
#'
#' WARNING: No tests currently exist for this function.
#'
gdbassett / schema_to_graph.py
Last active September 21, 2017 20:55
Function to convert a schema to a networkx graph
import networkx as nx  # NOTE: written against dev networkx 2.0
import logging
import inspect
import json
import os

logger = logging.getLogger()
# FileHandler does not expand "~" itself, so expanduser() is needed here
fileLogger = logging.FileHandler(os.path.expanduser("~/Documents/Development/tmp/vega.log"))
fileLogger.setLevel(logging.DEBUG)
logger.addHandler(fileLogger)
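The conversion function itself is truncated in this preview. As a hypothetical sketch of the general idea (the function name and recursion scheme here are assumptions, not the gist's actual code), a schema-like nested dict can be walked into a networkx graph like so:

```python
import networkx as nx

def schema_to_graph(schema, graph=None, parent=None):
    """Hypothetical sketch: add one node per key of a nested dict.

    Not the gist's actual (truncated) function; it only illustrates
    the schema -> graph conversion idea.
    """
    if graph is None:
        graph = nx.DiGraph()
    for key, value in schema.items():
        graph.add_node(key)
        if parent is not None:
            graph.add_edge(parent, key)
        if isinstance(value, dict):
            # recurse into nested objects, linking children to this key
            schema_to_graph(value, graph, parent=key)
    return graph

g = schema_to_graph({"incident": {"action": {}, "victim": {}}})
```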
gdbassett / two_barcharts.json
Last active September 15, 2017 23:30
Two bar charts with the goal of controlling one from another
{
  "$schema": "https://vega.github.io/schema/vega-lite/v2.json",
  "vconcat": [
    {
      "data": {
        "values": [
          {
            "enum": "victim.industry2.52",
            "x": 471,
            "n": 1935,
gdbassett / bayesian_credible_intervals.R
Last active August 15, 2017 18:13
bayesian credible intervals on veris data
# pick an enumeration
enum <- "action.*.variety"
# establish filter criteria (easier than a complex standard-eval filter_ line)
df <- vcdb %>%
dplyr::filter(plus.dbir_year == 2016, subset.2017dbir) %>%
dplyr::filter(attribute.confidentiality.data_disclosure.Yes) %>%
dplyr::filter(victim.industry2.92)
# establish priors from previous year
priors <- df %>%
gdbassett / livesplit.R
Last active June 26, 2017 20:02
basic R code to parse livesplit splits into a dataframe
speedrun <- XML::xmlParse("/livesplit.lss")
speedrun <- XML::xmlToList(speedrun)
chunk <- do.call(rbind, lapply(speedrun[['Segments']], function(segments) {
segments.df <- do.call(rbind, lapply(segments[['SegmentHistory']], function(segment) {
if ('RealTime' %in% names(segment))
data.frame(`attemptID` = segment$.attrs['id'], RealTime = segment$RealTime)
}))
segments.df$name <- rep(segments$Name, nrow(segments.df))
---
title: "Test"
author: "Gabe"
date: "November 03, 2016"
output: html_document
params:
  df: data.frame()
  a: ""
  b: ""
  c: "FALSE"
gdbassett / linearKMeans.R
Last active February 27, 2016 16:45
A quick function to produce a k-means-like calculation, but using a line in place of the point centroid. Used to try to classify multiple linear relationships in a dataset.
#' @param df Dataframe with x and y columns. (Hopefully in the future this can be x)
#' @param nlines The number of clusters.
#' @param ab A dataframe with a 'slopes' and an 'intercepts' column and one row per initial line. The number of rows must match nlines.
#' @param maxiter The maximum number of iterations to perform.
#' @export
#' @examples
linearKMeans <- function(df, ab=NULL, nlines=0, maxiter=1000) {
# default number of lines
nlines_default <- 5
gdbassett / gist:6438b4036a501eba9f5e
Created January 28, 2015 14:11
Association Rules Console Output 1
> df <- df[!names(df) %in% c('root.victim.region',
+ 'root.victim.country',
+ 'root.summary',
+ 'root.summary=Source_Category',
+ 'root.victim.industry',
+ 'root.timeline.incident.year',
+ 'root.plus.dbir_year',
+ 'root.action.social.notes',
+ 'root.victim.secondary.notes',
+ 'root.action.hacking.notes',
gdbassett / text_cluster.py
Last active October 2, 2018 07:19
Basic script for text->vectorization->TF-IDF->canopies->kmeans->clusters. Initially tested on VCDB breach summaries.
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# based on http://scikit-learn.org/stable/auto_examples/document_clustering.html
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.metrics.pairwise import pairwise_distances
import numpy as np
from time import time
from collections import defaultdict
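The body of the script is truncated in this preview. A minimal sketch of the pipeline the description names (vectorize -> TF-IDF -> k-means) might look like the following; the documents here are invented placeholders, not VCDB breach summaries:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# invented breach-summary-style snippets, for illustration only
docs = [
    "stolen laptop with patient records",
    "laptop stolen from employee car",
    "phishing email harvested credentials",
    "credentials phished via spoofed login page",
]

# vectorize the text into TF-IDF weighted term vectors
X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# cluster the document vectors; KMeans accepts the sparse matrix directly
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

The full gist also inserts a canopy step before k-means to pick sensible initial centroids, which this sketch omits.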
gdbassett / canopy.py
Created December 12, 2014 21:59
Efficient python implementation of canopy clustering. (A method for efficiently generating centroids and clusters, most commonly as input to a more robust clustering algorithm.)
from sklearn.metrics.pairwise import pairwise_distances
import numpy as np
# X should be a numpy matrix, very likely a sparse matrix: http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.csr_matrix.html#scipy.sparse.csr_matrix
# T1 = distance from the canopy center within which a point is included in the canopy
# T2 = distance from the canopy center within which a point is removed from consideration for other canopies
# T1 > T2 produces overlapping clusters
# T1 < T2 will leave some points which reside in no cluster
# T1 == T2 will cause all points to reside in mutually exclusive clusters
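The implementation itself is truncated in this preview. A minimal dense-matrix sketch of canopy clustering consistent with the T1/T2 comments above (the function name and random center choice are assumptions, not the gist's code) could be:

```python
import numpy as np
from sklearn.metrics.pairwise import pairwise_distances

def canopy(X, t1, t2):
    """Hypothetical canopy clustering sketch, not the gist's truncated code.

    t1: points within t1 of a center are included in that canopy.
    t2: points within t2 of a center are removed from candidacy
        for future canopies (t1 >= t2 for full coverage).
    """
    canopies = []
    remaining = list(range(X.shape[0]))
    rng = np.random.default_rng(0)
    while remaining:
        # pick a random remaining point as the canopy center
        center = remaining.pop(rng.integers(len(remaining)))
        d = pairwise_distances(X[center].reshape(1, -1), X).ravel()
        members = [i for i in remaining if d[i] < t1] + [center]
        canopies.append((center, members))
        # drop tightly-bound points from future canopies
        remaining = [i for i in remaining if d[i] >= t2]
    return canopies

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
canopies = canopy(X, t1=1.0, t2=1.0)
```

With t1 == t2 as in this usage, the two tight pairs land in two mutually exclusive canopies, matching the last comment above.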