
@Aakash-kaushik
Last active September 26, 2020 13:19
Declaring the model architecture
#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>

using namespace mlpack::ann;

// Feed-forward network with negative log-likelihood loss and random
// weight initialization.
FFN<NegativeLogLikelihood<>, RandomInitialization> model;

// First convolution: 1 input map (a grayscale 28x28 image), 6 output maps.
model.Add<Convolution<>>(1,  // Number of input activation maps.
                         6,  // Number of output activation maps.
                         5,  // Filter width.
                         5,  // Filter height.
                         1,  // Stride along width.
                         1,  // Stride along height.
                         0,  // Padding width.
                         0,  // Padding height.
                         28, // Input width.
                         28  // Input height.
);
model.Add<ReLULayer<>>();
model.Add<MaxPooling<>>(2,     // Width of field.
                        2,     // Height of field.
                        2,     // Stride along width.
                        2,     // Stride along height.
                        true); // Use floor rounding for the output size.
// Second convolution: 6 input maps, 16 output maps, applied to the
// 12x12 output of the pooling layer.
model.Add<Convolution<>>(6,  // Number of input activation maps.
                         16, // Number of output activation maps.
                         5,  // Filter width.
                         5,  // Filter height.
                         1,  // Stride along width.
                         1,  // Stride along height.
                         0,  // Padding width.
                         0,  // Padding height.
                         12, // Input width.
                         12  // Input height.
);
model.Add<ReLULayer<>>();
model.Add<MaxPooling<>>(2, 2, 2, 2, true);
// Flatten the 16 feature maps of size 4x4 into a vector of 256 values
// and map them to 10 class scores.
model.Add<Linear<>>(16 * 4 * 4, 10);
model.Add<LogSoftMax<>>();