@danielharan
Created November 21, 2017 22:11
from __future__ import print_function
import numpy as np
import tflearn
from tflearn.data_utils import load_csv

# categorical_labels=True returns each label as a one-hot binary vector,
# which is what the 2-unit softmax output layer below expects.
data, labels = load_csv('extracted_histograms.csv', target_column=0,
                        categorical_labels=True, n_classes=2)
def preprocess(data, columns_to_ignore):
    # Delete the ignored columns, highest index first so earlier
    # indices stay valid while popping
    for column_id in sorted(columns_to_ignore, reverse=True):
        for r in data:
            r.pop(column_id)
    return np.array(data, dtype=np.float32)
to_ignore = [0]  # first remaining column is the file name
data = preprocess(data, to_ignore)
# print(len(data[0])) # 30 elements, as expected
# Build neural network
net = tflearn.input_data(shape=[None, 30])
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net)
# Define model
model = tflearn.DNN(net)
# Start training (apply gradient descent algorithm)
model.fit(data, labels, n_epoch=10, validation_set=0.1, show_metric=True)
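Not part of the original gist, but a minimal sketch of what inference could look like once training finishes, assuming a new row has the same layout as the CSV after the target column is removed (file name followed by 30 histogram values); the row contents below are hypothetical placeholders:

# Hypothetical example row: file name followed by 30 histogram values
raw_row = ['some_image.jpg'] + [0.0] * 30
sample = preprocess([raw_row], to_ignore)  # drop the file-name column
print(model.predict(sample))               # softmax probabilities for the 2 classes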
@danielharan
Author

Oh, JFC, nice error message, tflearn!

Turns out the problem was in how I was loading the data:

categorical_labels: bool. If True, labels are returned as binary vectors (to be used with 'categorical_crossentropy').

If I don't set that to True, it throws an error about the size of the input. Clear as mud.
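For reference, a minimal sketch of the difference the flag makes, assuming a two-class target in column 0 of extracted_histograms.csv as in the script above:

from tflearn.data_utils import load_csv

# Without the flag: labels come back as plain scalar values, one per row,
# which does not match the network's 2-unit softmax output.
_, plain_labels = load_csv('extracted_histograms.csv', target_column=0)

# With the flag: each label is a one-hot binary vector such as [1., 0.]
# or [0., 1.], which is what the softmax / categorical loss expects.
_, onehot_labels = load_csv('extracted_histograms.csv', target_column=0,
                            categorical_labels=True, n_classes=2)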
