@maxpagels
Last active October 13, 2020 12:22
# Sample code for building a multi-layer perceptron
# that predicts the brightness of a light bulb based
# on the month, weekday, hour and minute.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.utils import to_categorical
from sklearn import preprocessing
# Seed the RNG so that results are reproducible
# and different runs can be meaningfully compared
np.random.seed(1337)
number_of_labels = 4
number_of_iterations = 3000
# Training examples are initially represented as vectors
# [month, weekday, hour, minute]
# with the following possible values:
# month: 0-11
# weekday: 0-6 (sunday: 0, saturday: 6)
# hour: 0-23
# minute: 0-59
#
# This example includes only 5 training examples;
# *lots* more are needed to reach high accuracy and
# train a model that generalises well to new examples.
#
# Plug in your own data and experiment!
training_set = np.array([
    [9, 5, 21, 13],
    [9, 6, 22, 59],
    [10, 1, 1, 7],
    [6, 3, 13, 20],
    [2, 4, 16, 25],
])
test_set = np.array([
    [9, 5, 22, 54],
    [9, 6, 6, 15],
    [2, 4, 15, 23],
    [10, 1, 2, 19],
    [5, 3, 12, 4],
])
batch_size = training_set.shape[0]
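# Note that the batch size equals the size of the training set,
# so each epoch below performs a single full-batch gradient
# update rather than several mini-batch updates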
# Convert training & test sets to one-hot vectors
encoder = preprocessing.OneHotEncoder(
    categories=[np.arange(12), np.arange(7), np.arange(24), np.arange(60)],
    sparse_output=False
)
encoder.fit(training_set)
x_train = encoder.transform(training_set)
x_test = encoder.transform(test_set)
print(x_train)
number_of_features = len(x_train[0])
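# Sanity check: the encoder concatenates four one-hot blocks of
# sizes 12, 7, 24 and 60, so each example becomes a
# 12 + 7 + 24 + 60 = 103-dimensional vector. The first training
# example [9, 5, 21, 13] has ones at indices 9 (month),
# 12 + 5 = 17 (weekday), 19 + 21 = 40 (hour) and
# 43 + 13 = 56 (minute)
assert number_of_features == 12 + 7 + 24 + 60 == 103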
# Labels are initially represented as numbers:
# 0 = off, 1 = dim, 2 = medium, 3 = bright
training_labels = np.array([1, 1, 0, 3, 2])
test_labels = np.array([1, 1, 2, 0, 3])
# Convert labels to one-hot vectors
y_train = to_categorical(training_labels, num_classes=number_of_labels)
y_test = to_categorical(test_labels, num_classes=number_of_labels)
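# As a concrete example, to_categorical turns label 1 into the
# one-hot row [0., 1., 0., 0.] and label 3 into [0., 0., 0., 1.]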
# Build the network architecture:
# - input layer with 103 neurons
# - one hidden layer with 200 neurons
# - output layer with 4 neurons
#
# Keras provides a sequential API that makes
# it easy to add layers to the network
model = Sequential()
# Add a fully-connected hidden layer, i.e. connect each of
# the 103 neurons in the input layer to each of the 200
# neurons in the hidden layer
model.add(Dense(200, input_shape=(number_of_features,)))
# We'll use a rectified linear unit as the activation
# function of each neuron in the hidden layer
model.add(Activation('relu'))
# Connect each of the 200 neurons in the hidden layer to
# each of the four neurons in the output layer
model.add(Dense(number_of_labels))
# Apply softmax so that each output neuron's activation can be
# interpreted as the probability of the corresponding class (label)
model.add(Activation('softmax'))
# Print summary of the neural network architecture
model.summary()
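# The summary should show 20,800 parameters for the hidden layer
# (103 inputs * 200 neurons + 200 biases), 804 for the output
# layer (200 * 4 + 4 biases), and 21,604 trainable parameters in
# total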
# Calculate the loss (how far the predictions are from the
# correct labels) during each iteration using the categorical
# cross-entropy function. Use the stochastic gradient descent
# optimisation algorithm to compute new weights that hopefully
# minimise the loss in subsequent iterations
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
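# For a single training example with one-hot target y and
# predicted probabilities p, the categorical cross-entropy
# -sum(y[i] * log(p[i])) reduces to -log(p[correct class]):
# e.g. a 0.7 probability on the correct class gives a loss of
# -log(0.7) ≈ 0.357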
# Train the model
model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=number_of_iterations,
    validation_data=(x_test, y_test)
)
# Evaluate against the test set: print the loss and the accuracy
# (the fraction of examples in the test set that were classified
# correctly)
print(model.evaluate(x_test, y_test))
# Make a test prediction for a Friday in February (month index 1,
# weekday index 5) at 10:32 pm (22:32 using a 24-hour clock).
# The predicted class is 0 if the bulb should be off, 1 for dim,
# 2 for medium and 3 for bright
prediction_features = np.array([[1, 5, 22, 32]])
one_hot_encoded_features = encoder.transform(prediction_features)
print("predicted class:")
print(np.argmax(model.predict(one_hot_encoded_features), axis=1))
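# Optionally, inspect the full softmax distribution over the four
# classes to see how confident the network is in its prediction
print("class probabilities:")
print(model.predict(one_hot_encoded_features))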
@ksetdekov

Hello, Max!
I have just found your work after thinking about this exact idea. Thanks a lot for sharing the training code.
I was wondering about a related issue: how do you determine what state the bulb should be in when the user and the model control the bulb together? And how did you collect the desired states between user inputs?
For example:

time | model prediction | user input | desired state
-----|------------------|------------|----------------------------------
0    | off              | no input   | ?
1    | on               | on         | on
2    | on               | no input   | on
3    | off              | no input   | ? (on, but how to determine it?)
4    | off              | off        | off
5    | on               | no input   | ? (off, but how to determine it?)
6    | on               | off        | off

I would be very grateful if you could share your approach to this.
