@recolic
recolic / README.md
Last active January 20, 2023 11:14
Android QQ chat history export

A rough workflow for exporting Android QQ chat history

Tested on Android 6 with Tencent QQ

  1. Find a way to copy out the /data/data/com.tencent.*/databases directory; I assume you know how to do this.

  2. Run the following command. I assume you know how to install and use sqlite, and that you know basic Linux.

$ sqlite3 872222222-IndexQQMsg.db
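
If you are not sure which table holds the messages, sqlite3's built-in dot-commands can list the tables and show their layout (SomeTable below is a placeholder, not an actual table name from the QQ database):

sqlite> .tables
sqlite> .schema SomeTable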

Papers from Super SloMo references

  • Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation [Paper]
    • Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, Jan Kautz
    • CVPR 2018 (spotlight)
  • Video frame synthesis using deep voxel flow [Paper] [Code]
    • Ziwei Liu, Raymond A. Yeh, Xiaoou Tang, Yiming Liu, Aseem Agarwala
    • ICCV 2017
  • Video frame interpolation via adaptive separable convolution. [Paper] [Code]
require 'torch'
require 'xlua'
require 'image'
require 'cunn'
require 'cudnn'
require 'nn'
require 'optim'
require 'paths'
@ryerh
ryerh / tmux-cheatsheet.markdown
Last active June 10, 2024 17:21 — forked from MohamedAlaa/tmux-cheatsheet.markdown
Tmux shortcuts & cheatsheet & quick tutorial

Note: this guide targets Tmux 2.3 and above. Most features also work on older versions, but mouse support, vi mode, and plugin management on older versions may not match this guide.

Tmux shortcuts & cheatsheet & quick tutorial

Start a new session:

tmux [new -s session-name -n window-name]

Restore a session:
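
tmux attach [-t session-name]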

@toshi-k
toshi-k / LagrangeDualDict.r
Last active December 28, 2017 08:45
Learning bases using the Lagrange dual (R)
# = = = = = include = = = = = #
library(MASS)

# = = = = = function = = = = = #
# Dual objective: trace( Y'Y - YA'(AA' + Lambda)^(-1) (YA')' - Lambda )
Obj_func <- function(Y, B, A, Lambda){
  MAT <- t(Y) %*% Y - Y %*% t(A) %*% solve(A %*% t(A) + Lambda) %*% t(Y %*% t(A)) - Lambda
  return(sum(diag(MAT)))
}
@szagoruyko
szagoruyko / vgg.lua
Last active September 11, 2017 08:32
require 'nn'
local vgg = nn.Sequential()
-- building block
local function ConvBNReLU(nInputPlane, nOutputPlane)
  vgg:add(nn.SpatialConvolution(nInputPlane, nOutputPlane, 3,3, 1,1, 1,1))
  vgg:add(nn.SpatialBatchNormalization(nOutputPlane, 1e-3))
  vgg:add(nn.ReLU(true))
  return vgg
end
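A sketch of how the block composes the network; the channel sizes here are illustrative, not the gist's full configuration:

ConvBNReLU(3, 64)
ConvBNReLU(64, 64)
vgg:add(nn.SpatialMaxPooling(2, 2, 2, 2))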
@paidi
paidi / gist:310d2d869ef74794b239
Last active February 14, 2017 10:29
Batch SparseLinear
-- Clone one SparseLinear per example, sharing parameters, so a table of
-- sparse inputs can be processed as a batch and joined into one tensor.
m = nn.ParallelTable()
layer = nn.SparseLinear(inputSize, outputSize)
m:add(nn.Sequential():add(layer):add(nn.Reshape(1, outputSize)))
for i = 2, batchSize do
  local repLayer = layer:clone('weight', 'bias', 'gradWeight', 'gradBias')
  m:add(nn.Sequential():add(repLayer):add(nn.Reshape(1, outputSize)))
end
batchLayer = nn.Sequential():add(m):add(nn.JoinTable(1))
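A minimal usage sketch with hypothetical input; nn.SparseLinear expects each example as an n-by-2 tensor of {index, value} pairs:

-- every example reuses the same two non-zero entries, for brevity
local ex = torch.Tensor{{1, 0.5}, {4, 1.2}}
local input = {}
for i = 1, batchSize do input[i] = ex end
local output = batchLayer:forward(input)  -- batchSize x outputSize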
@gcr
gcr / alexnet-BETTER.lua
Last active October 12, 2016 08:16
AlexNet in Torch.
------- AlexNet: Using my own weight initialization
require 'nn'
require 'cudnn' -- cuDNN-backed convolution and ReLU
require 'inn'   -- provides SpatialCrossResponseNormalization

model = nn.Sequential()
model:add(cudnn.SpatialConvolution(3,96,11,11,4,4,2,2))
model.modules[#model.modules].weight:normal(0, 0.01)
model.modules[#model.modules].bias:fill(0)
model:add(cudnn.ReLU())
model:add(inn.SpatialCrossResponseNormalization(5, 0.0001, 0.75, 1))
model:add(nn.SpatialMaxPooling(3,3,2,2))
model:add(cudnn.SpatialConvolution(96,256,5,5,1,1,2,2))
model.modules[#model.modules].weight:normal(0, 0.01)
@dwf
dwf / gist:1335246
Created November 2, 2011 23:10
My implementation of feature sign search for L1 minimization.
"""
L1-penalized minimization using the feature sign search algorithm.
"""
import logging
import numpy as np
log = logging.getLogger("feature_sign")
log.setLevel(logging.INFO)