Utkarsh Upadhyay (musically-ut)

🐙
🐢 🎣 🐠
View GitHub Profile
function (eset, cl, mfrow = c(1, 1), colo, min.mem = 0, time.labels,
          new.window = TRUE)
{
    # Cluster assignments and fuzzy membership values from the clustering result
    clusterindex <- cl[[3]]
    memship <- cl[[4]]
    memship[memship < min.mem] <- -1  # drop genes below the membership threshold
    colorindex <- integer(dim(exprs(eset))[[1]])
    if (missing(colo)) {
        # Default colour ramp used to encode membership strength
        colo <- c("#FF8F00", "#FFA700", "#FFBF00", "#FFD700",
                  "#FFEF00", "#F7FF00", "#DFFF00", "#C7FF00", "#AFFF00",
# One panel per cluster, stacked vertically
par(mfrow = c(max(clustering$cluster), 1))
for (j in 1:max(clustering$cluster)) {
    x <- 1:12
    dj <- d[clustering$cluster == j, ]  # expression rows belonging to cluster j
    plot.default(x = NA, xlim = c(1, 12),
                 ylim = c(min(dj), max(dj)), xlab = "Time",
                 ylab = "Expression changes",
                 main = paste("Cluster", j), axes = FALSE)
musically-ut / bench_semi_supervised_n_iter.py
Last active July 7, 2017 06:35 — forked from jnothman/bench_semi_supervised_n_iter
Benchmarking `sklearn.semi_supervised` `n_iter_` as a function of model and data characteristics
import numpy as np
from sklearn import datasets
from sklearn.semi_supervised import LabelPropagation, LabelSpreading
###for n_samples in [20, 200, 2000, 20000]:
### X, y = datasets.make_classification(n_samples=n_samples, n_classes=3, n_informative=3)
for (X, y) in [datasets.load_iris(return_X_y=True)]:
for model in [LabelPropagation(max_iter=1000),
#LabelSpreading(alpha=0.01),
#LabelSpreading(alpha=0.1),
#LabelSpreading(alpha=0.3)
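
The gist preview cuts off before the loop body. Below is a minimal sketch of how such a benchmark can be completed; the 80% label-masking fraction, the random seed, the LabelSpreading(alpha=0.2) variant, and the print format are illustrative assumptions, not taken from the original gist. The idea is to hide most labels (scikit-learn marks unlabelled samples with -1), fit each model, and report its n_iter_ attribute.

import numpy as np
from sklearn import datasets
from sklearn.semi_supervised import LabelPropagation, LabelSpreading

rng = np.random.RandomState(0)

for X, y in [datasets.load_iris(return_X_y=True)]:
    # Hide 80% of the labels; unlabelled samples are marked with -1.
    y_masked = y.copy()
    y_masked[rng.rand(len(y)) < 0.8] = -1

    for model in [LabelPropagation(max_iter=1000),
                  LabelSpreading(alpha=0.2, max_iter=1000)]:
        model.fit(X, y_masked)
        # n_iter_ records how many iterations the propagation loop ran before converging.
        print(type(model).__name__, "n_iter_ =", model.n_iter_)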