Kanav Gupta (kanav99)

final.md

Final Blog

This blog marks the end of a really wonderful experience with amazing people! A special shoutout to Chris Rackauckas, Yingbo Ma, and David Widmann, who helped me immensely in getting my work done throughout the summer. I will always be grateful to them for accepting me as a student.

As for my work, my project dealt mostly with general fixes across many of the JuliaDiffEq repositories. It focused primarily on callbacks, nonlinear solvers, and other derivative utilities. It involved a lot of moving code from one repository to another as we consolidated the tools common to all the repositories into DiffEqBase. I also built a small benchmarking service called DiffEqBot for the organization, which even got featured on the JuliaLang Blog! Below is the list of my contributions, grouped by type of work:

Derivative Utilities and Nonlinear Solvers

We used to have separate copies of the NLSolver methods in each of the repositories. All of them were basically the same.
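For context, the kind of routine that was duplicated across packages is a Newton-type iteration for solving the implicit stage equations at every step. Below is a minimal standalone sketch of such an iteration, simplified to a scalar interface; the function name and signature are hypothetical, for illustration only, and are not DiffEqBase's actual API.

```julia
# Simplified Newton iteration z_{k+1} = z_k - J \ g(z_k) for a root of g,
# the core loop that implicit ODE solvers repeat at every time step.
# Names and interface here are hypothetical, for illustration only.
function newton_solve(g, dg, z0; tol=1e-10, maxiters=10)
    z = z0
    for _ in 1:maxiters
        dz = dg(z) \ g(z)          # Newton step (scalar: division via \)
        z -= dz
        abs(dz) <= tol && return z # converged
    end
    return z                       # best iterate if not converged
end

# Example: solve z = cos(z) by finding the root of g(z) = z - cos(z)
newton_solve(z -> z - cos(z), z -> 1 + sin(z), 1.0)
```

Consolidating one implementation like this into DiffEqBase lets every solver package share the same convergence logic instead of maintaining its own copy.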

blog-diffeqbot.md

Hello @DiffEqBot!

Hi! Today the DiffEq family got a new member. Say hi to our own DiffEqBot, a bot which runs benchmarks on a pull request and compares them against the current master of the given package. It also generates reports and stores them in a repository. What's special about it is that it is completely stateless (no databases involved at all, just juggling between repositories!) and it has no exposed public URLs. Even though it is highly inspired by Nanosoldier, it has a completely unique workflow.

How do you make it work?

All you need to do is comment @DiffEqBot runbenchmarks on a pull request in a JuliaDiffEq repository, and it will do all the work for you: it benchmarks your pull request against the current master and posts a link to the report when the job completes. Found a bug in your PR and no longer need the previous job to finish? Just comment @DiffEqBot abort and the job is cancelled.
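The command dispatch described above can be sketched as simple string matching on the comment body; this is an illustrative toy, not DiffEqBot's actual implementation, and the function name is hypothetical.

```julia
# Toy dispatcher for comment-triggered bot commands (illustrative only).
# Returns a symbol describing which action the bot should take.
function dispatch(comment::AbstractString)
    if occursin("@DiffEqBot runbenchmarks", comment)
        return :runbenchmarks   # queue a benchmark job for this PR
    elseif occursin("@DiffEqBot abort", comment)
        return :abort           # cancel the pending job
    else
        return :ignore          # comment is not addressed to the bot
    end
end

dispatch("@DiffEqBot runbenchmarks")  # -> :runbenchmarks
```

A real bot would receive such comments via GitHub webhooks and act on the PR's head commit, but the command grammar itself is this simple.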

mwe-hightol.jl
using OrdinaryDiffEq
# First time
# Note: despite the name `lorenz` (kept from the gist), this is a stiff
# Van der Pol-type system with stiffness parameter 1e6
function lorenz(du,u,p,t)
  du[1] = 1e6*((1-u[2]*u[2])*u[1] - u[2])
  du[2] = 1*u[1]
end
u0 = [0;2.]
tspan = (0.0,6.3)
# Completing the MWE: set up and solve with a stiff solver (solver choice illustrative)
prob = ODEProblem(lorenz,u0,tspan)
sol = solve(prob,Rodas5())
stiff.jl (created May 26, 2019): no recompile mwe
using OrdinaryDiffEq
# First time
# Same stiff system as above; the name `lorenz` is kept from the gist
function lorenz(du,u,p,t)
  du[1] = 1e6*((1-u[2]*u[2])*u[1] - u[2])
  du[2] = 1*u[1]
end
u0 = [0;2.]
tspan = (0.0,6.3)
# Completing the MWE: solve with a stiff solver (solver choice illustrative)
prob = ODEProblem(lorenz,u0,tspan)
sol = solve(prob,Rodas5())
benchmark.md (created May 14, 2019): A benchmark for PkgBenchmark

Benchmark Report for OrdinaryDiffEq

Job Properties

  • Time of benchmarks:
    • Target: 14 May 2019 - 19:43
    • Baseline: 14 May 2019 - 19:44
  • Package commits:
    • Target: 44564c
    • Baseline: 44564c
  • Julia commits:
dprkn7.jl
struct DPRKN7ConstantCache{T,T2} <: OrdinaryDiffEqConstantCache
  c1::T2
  c2::T2
  c3::T2
  c4::T2
  c5::T2
  c6::T2
  c7::T2
  c8::T2
  a21::T
gist:c7025a3a69b3577af849e86e3abda214
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.naive_bayes import MultinomialNB
# train_test_split moved from sklearn.cross_validation to
# sklearn.model_selection in scikit-learn 0.18+
from sklearn.model_selection import train_test_split
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt