@YimianDai
Last active November 17, 2020 08:09
xUnit: Learning a Spatial Activation Function for Efficient Image Restoration

The motivation of this paper is a learnable nonlinear function with spatial connections, making the nonlinear activations more effective. In fact, xUnit, a layer with spatial and learnable connections, can also be understood as an attention module in the same spirit as SENet and GENet. Judging from the block diagram in the paper, xUnit is essentially the same module as GENet, a point that is also mentioned in the GENet paper.

For me, the paper's biggest contribution is pointing out that a nonlinear activation function can be written in element-wise multiplication form.

The original nonlinear activation has the form

$$z_k = f(y_k)$$

where $y_k$ is the $k$-th channel of the preceding convolution's output, and $f(\cdot)$ is some nonlinear activation function which operates element-wise on its argument.

The authors point out that the above can be rewritten as

$$z_k = y_k \odot g_k$$

where $\odot$ denotes element-wise (Hadamard) multiplication.

Here $g_k$ takes the form

$$[g_k]_i = \frac{f([y_k]_i)}{[y_k]_i}$$

so that $g_k$ acts as a multiplicative gating map on $y_k$.

Taking ReLU as an example, it can be written as

$$[g_k]_i = \begin{cases} 1 & [y_k]_i > 0 \\ 0 & \text{otherwise} \end{cases}$$

i.e., ReLU amounts to a binary gating of $y_k$.
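This multiplicative view of ReLU can be verified with a tiny NumPy check (variable names here are illustrative, not from the paper's code):

```python
import numpy as np

# ReLU rewritten as element-wise multiplication: z = y ⊙ g, with g = [y > 0].
y = np.array([-2.0, -0.5, 0.0, 0.5, 3.0])
g = (y > 0).astype(y.dtype)  # binary gating map: 1 where y is positive, else 0
z = y * g                    # element-wise multiplication form

# Identical to applying ReLU directly.
assert np.allclose(z, np.maximum(y, 0.0))
```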

Therefore, it suffices to design $g_k$: by embedding spatial context into $g_k$, we obtain an activation function that takes spatial context into account. In xUnit, the gating map is chosen as

$$g_k = \exp(-d_k^2)$$

a Gaussian applied element-wise, which guarantees $g_k \in (0, 1]$.

where $d_k$ is

$$d_k = H_k * \mathrm{ReLU}(y_k)$$

with $H_k$ denoting a depth-wise convolution kernel (one spatial filter per channel).

The idea is to introduce

  1. nonlinearity (ReLU),
  2. spatial processing (depth-wise convolution),
  3. construction of gating maps in the range [0, 1] (Gaussian).
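The three ingredients above can be sketched as a minimal NumPy forward pass. This is an illustrative reimplementation under the formulas quoted here, not the authors' PyTorch code; `depthwise_conv` and `xunit` are assumed names, and the normalization layers of the full xUnit block in the paper are omitted for brevity:

```python
import numpy as np

def relu(x):
    """Nonlinearity applied before the spatial processing."""
    return np.maximum(x, 0.0)

def depthwise_conv(x, kernels):
    """Depth-wise 'same' convolution: one k x k filter per channel.

    x: (C, H, W) feature map; kernels: (C, k, k), k odd.
    """
    C, H, W = x.shape
    k = kernels.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x)
    for c in range(C):          # each channel uses its own filter H_k
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * kernels[c])
    return out

def xunit(y, kernels):
    """z_k = y_k ⊙ g_k with g_k = exp(-d_k^2), d_k = H_k * ReLU(y_k)."""
    d = depthwise_conv(relu(y), kernels)
    g = np.exp(-d ** 2)         # Gaussian keeps the gating map in (0, 1]
    return y * g
```

With all-zero kernels, $d_k = 0$ and $g_k = 1$ everywhere, so the unit reduces to the identity on $y$; in general $g_k \le 1$, so the gating can only attenuate activations.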

Code: https://github.com/kligvasser/xUnit
