@marty1885
Last active October 24, 2022 06:45
#include <iostream>
#include <vector>

#include "tiny_dnn/tiny_dnn.h"

using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;

int main()
{
    // 2-3-1 multilayer perceptron: one hidden layer is enough for XOR
    network<sequential> net;
    net << fully_connected_layer(2, 3) << sigmoid_layer()
        << fully_connected_layer(3, 1) << sigmoid_layer();

    // The four XOR input/target pairs
    std::vector<vec_t> trainIn  = {{0,0}, {0,1}, {1,0}, {1,1}};
    std::vector<vec_t> trainOut = {{0}, {1}, {1}, {0}};

    // Plain SGD; the learning rate is the public alpha member
    gradient_descent optimizer;
    optimizer.alpha = 0.53f;

    // Regression-style training: MSE loss, batch size 1, 1000 epochs
    net.fit<mse>(optimizer, trainIn, trainOut, 1, 1000);

    net.save("net");

    // Should print a value close to 1 (XOR of 1 and 0)
    std::cout << net.predict({1, 0})[0] << std::endl;
}
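
Since the example saves the trained model with net.save("net"), a separate program can restore it later. A minimal sketch of that (not part of the original gist; by default load restores both the architecture and the weights from the file written above):

#include <iostream>
#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;

int main()
{
    network<sequential> net;
    net.load("net"); // restores layer structure and trained weights
    std::cout << net.predict({1, 0})[0] << std::endl; // same output as before saving
}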

dFohlen commented Oct 24, 2018

Good job, but in this case you can also treat the output as a classification problem, like this:

  // Same MLP, but with two output units: one per class (0 and 1)
  network<sequential> net;
  net << fully_connected_layer(2,3) << sigmoid_layer()
      << fully_connected_layer(3,2) << sigmoid_layer();

  std::vector<vec_t> trainIn    = {{0,0}, {0,1}, {1,0}, {1,1}};
  std::vector<label_t> trainOut = {0, 1, 1, 0}; // class labels instead of target vectors

  gradient_descent optimizer;
  optimizer.alpha = 0.53f;

  // Time the run; train<> takes labels, while fit<> takes target vectors
  timer t;
  t.start();
  net.train<cross_entropy>(optimizer, trainIn, trainOut, 1, 1000);
  t.stop();
  std::cout << "duration: " << t.total() << " s" << std::endl;

  net.save("net");

  // predict_label returns the index of the most activated output unit
  std::cout << "result: " << net.predict_label({1,0}) << std::endl;
  std::cout << "probability: " << net.predict_max_value({1,0}) * 100 << " %" << std::endl;

duration: 51.0456 s
result: 1
probability: 99.5675 %
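
As a quick sanity check, one could also print the predicted label for all four XOR inputs. A short sketch, assuming net is the trained classifier from the snippet above:

  for (const vec_t &in : std::vector<vec_t>{{0,0}, {0,1}, {1,0}, {1,1}})
      std::cout << in[0] << " XOR " << in[1] << " -> "
                << net.predict_label(in) << std::endl;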
