Andrew Tulloch ajtulloch

ajtulloch / Block-Sparse GEMM.ipynb
Last active Aug 28, 2019
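The notebook itself doesn't render here, but the idea its title names, a block-sparse GEMM, can be sketched in plain NumPy. Everything below (the dict-of-blocks layout, the 2x2 block size, the function name) is an illustrative assumption, not code from the notebook:

```python
import numpy as np

def block_sparse_gemm(blocks, B, M, bs):
    """Multiply a block-sparse matrix A by a dense matrix B.

    A is given as {(bi, bj): bs x bs dense block}; absent blocks are
    all-zero and are skipped entirely, which is where the speedup comes from.
    """
    C = np.zeros((M, B.shape[1]))
    for (bi, bj), blk in blocks.items():
        C[bi * bs:(bi + 1) * bs, :] += blk @ B[bj * bs:(bj + 1) * bs, :]
    return C

# Reference check: materialize the same matrix densely and compare.
bs = 2
blocks = {(0, 0): np.ones((2, 2)), (1, 1): 2 * np.ones((2, 2))}
A = np.zeros((4, 4))
for (bi, bj), blk in blocks.items():
    A[bi * bs:(bi + 1) * bs, bj * bs:(bj + 1) * bs] = blk
B = np.arange(16.0).reshape(4, 4)
assert np.allclose(block_sparse_gemm(blocks, B, 4, bs), A @ B)
```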
minimal.py
from tvm import relay
from mxnet.gluon import nn
import mxnet as mx


class TestBlock(nn.HybridBlock):
    def __init__(self):
        super(TestBlock, self).__init__()
        # 8 output channels, 3x3 kernel, stride 1, padding 1, no bias.
        self.conv = nn.Conv2D(8, 3, 1, 1, use_bias=False)
        self.a000 = nn.Activation("relu")
        self.a0_0 = nn.MaxPool2D(pool_size=2, strides=2)
gist:8a2d68deec59045d471b3debdf5aeefc
diff --git a/src/relay/pass/quantize.cc b/src/relay/pass/quantize.cc
index 3a2e54c8..4059dc3a 100644
--- a/src/relay/pass/quantize.cc
+++ b/src/relay/pass/quantize.cc
@@ -340,18 +340,9 @@ Expr MulRealize(const Call& ref_call,
const auto* rhs = new_args[1].as<QRealizeIntExprNode>();
Expr ldata = lhs->data;
Expr rdata = rhs->data;
-
DataType dtype = cfg->dtype_activation;
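For context, the MulRealize pass touched by this diff multiplies two already-quantized operands and folds their scales together, accumulating in a wider activation dtype. A minimal NumPy sketch of that arithmetic (the function name and scale values are hypothetical; TVM's real pass rewrites Relay expressions, not arrays):

```python
import numpy as np

def quantized_mul(ql, sl, qr, sr, acc_dtype=np.int32):
    """Multiply two quantized tensors q = round(x / scale).

    The integer products are accumulated in a wider dtype; the real
    value of the result is (ql * qr) * (sl * sr).
    """
    return ql.astype(acc_dtype) * qr.astype(acc_dtype), sl * sr

ql = np.array([10, -20], dtype=np.int8)  # 1.0 and -2.0 at scale 0.1
qr = np.array([5, 5], dtype=np.int8)     # 1.0 and 1.0 at scale 0.2
prod, scale = quantized_mul(ql, 0.1, qr, 0.2)
assert np.allclose(prod * scale, [1.0, -2.0])
```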
Untitled.ipynb
ajtulloch / Untitled41.ipynb
Last active Apr 30, 2019
RelayTVMFusionE2E.ipynb
foo.diff
diff --git a/tutorials/optimize/opt_gemm.py b/tutorials/optimize/opt_gemm.py
index 44ee53a7..c9785cbf 100644
--- a/tutorials/optimize/opt_gemm.py
+++ b/tutorials/optimize/opt_gemm.py
@@ -44,24 +44,24 @@ import timeit
# The size of the matrix
# (M, K) x (K, N)
# You are free to try out different shapes, sometimes TVM optimization outperforms numpy with MKL.
-M = 1024
-K = 1024
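The diff shrinks the tutorial's M and K constants. The numpy baseline those constants feed is just a timed matmul, which can be sketched like this (the shapes here are an assumption for a quick run; the tutorial itself uses 1024):

```python
import timeit
import numpy as np

# The size of the matrix: (M, K) x (K, N), as in the tutorial's comment.
M = K = N = 256
a = np.random.rand(M, K).astype("float32")
b = np.random.rand(K, N).astype("float32")
# Time the numpy (possibly MKL-backed) matmul as the baseline to beat.
baseline = timeit.timeit(lambda: a.dot(b), number=10)
print("numpy baseline over 10 runs: %.4fs" % baseline)
```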
ajtulloch / -
Created Jun 15, 2018
opt_gemm.diff
master.py
#!/usr/bin/env python
import pexpect
import pexpect.replwrap

# Wrap the stock Lua REPL; its prompt is "> " and we leave it unchanged.
repl = pexpect.replwrap.REPLWrapper("lua", u"> ", None, u"> ")
# "= expr" makes the Lua REPL print the expression's value; drop the
# echoed command line and keep only the printed output.
output = repl.run_command("= 1 + 1", timeout=1).splitlines()[1:]
assert int(output[0]) == 2
DR.hs
module DR where
import Control.Applicative
import Data.Graph.Inductive.Graph
import Data.Graph.Inductive.PatriciaTree
import Data.Graph.Inductive.Query
import qualified Data.Map as M
newtype Task = Task Int deriving (Eq, Ord, Show)
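The module above pulls fgl's graph queries in over `Task` nodes, which suggests dependency resolution via a topological ordering. The same idea in Python's standard library (an illustrative analogue under that assumption, not a port of the Haskell code):

```python
from graphlib import TopologicalSorter

# Map each task id to the set of task ids it depends on:
# task 3 needs 1 and 2; task 2 needs 1.
deps = {3: {1, 2}, 2: {1}}
order = list(TopologicalSorter(deps).static_order())
# Every task appears after all of its dependencies.
assert order.index(1) < order.index(2) < order.index(3)
```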