Train AlexNet over CIFAR-10 and do prediction
.gitignore

.project
.pydevproject
data_
parameter_
*.pyc

README.md

Train AlexNet over CIFAR-10

This example provides the training and serving scripts for AlexNet over CIFAR-10 data. The best validation accuracy (without data augmentation) we achieved was about 82%.

SINGA version

Note that every example should clearly specify the SINGA version against which its scripts were tested, in the format Apache SINGA-<VERSION>-<COMMITID>. For example: all scripts in this gist have been tested against Apache SINGA-v1.0.0-fac3af9.

Folder layout

The folder structure for an example is as follows, where README.md is required and the other files are optional.

  • README.md. Every example should have a README.md file with the model description, SINGA version and running instructions.
  • train.py. The training script. Users should be able to run it directly via python train.py. It is optional if the model is shared only for prediction or serving tasks.
  • serve.py. The serving script. Users can submit queries via the web front-end provided by Rafiki or via curl. It is optional if the model is shared only for training tasks.
  • model.py. It has the functions for creating the neural net. It could be merged into train.py and serve.py, hence it is optional.
  • data.py. This file includes functions for downloading and extracting data and parameters. These functions could be merged into train.py and serve.py, hence it is optional.
  • index.html. This file is used for the serving task; it provides a web page for users to submit queries and view the results. It is required for running the serving task in the cloud mode. If the model is shared only for training or run in the local mode, it is optional.
  • requirements.txt. Specifies the Python libraries used by the user's code. It is optional if no third-party libraries are used.

Some models may have other files and scripts. Typically, it is not recommended to put large files (e.g. >10MB) into this folder as it would be slow to clone the gist repo.
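For concreteness, a shared example folder typically looks like the sketch below (the data_ and parameter_ folders are created at runtime by data.py and are excluded by .gitignore):

    .
    ├── README.md
    ├── data.py
    ├── model.py
    ├── train.py
    ├── serve.py
    ├── index.html
    └── requirements.txt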

Instructions

Local mode

To run the scripts on your local computer, you need to install SINGA. Please refer to the installation page for detailed instructions.

To install the libraries in requirements.txt, please run

    pip install -r requirements.txt

Training

The training program can be started by

    python train.py

By default, the training is conducted on a GPU card. To use the CPU instead (very slow), run

    python train.py --use_cpu

The model parameters are dumped periodically into parameter_<epoch ID> under the parameter_ folder, and the final checkpoint is saved as parameter_last (see train.py).
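The dumped parameters can also be reloaded outside the scripts, e.g. for ad-hoc predictions in a Python shell. Below is a minimal sketch reusing the gist's own modules; it assumes SINGA is installed and that training has finished, so the parameter_last checkpoint exists:

    # minimal sketch; assumes training has finished (parameter_last exists)
    import os
    from singa import device
    import data
    import model

    net = model.create_net(use_cpu=True)
    net.load(os.path.join(data.parameter_folder, 'parameter_last'))
    net.to_device(device.get_default_device())
    # net.predict(x) can now be called on a batch tensor x, as in serve.py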

Serving

This example does not have a dedicated serving script for the local mode. To simulate the local mode, start the prediction script as a daemon and use curl to pass the query images.

    python serve.py &
    curl -i -F image=@image1.jpg http://localhost:9999/api
    curl -i -F image=@image2.jpg http://localhost:9999/api
    curl -i -F image=@image3.jpg http://localhost:9999/api

The above commands start the serving program with the trained AlexNet model as a daemon, and then submit three queries (image1.jpg, image2.jpg, image3.jpg) to the default port 9999. To use another port, add -p PORT_NUMBER to the serving command. If you run the serving task after finishing the training task, the parameters from the last checkpoint are used; otherwise, the parameters downloaded via data.py are used.
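The same query can also be issued from Python. The sketch below uses the third-party requests library (an assumption; install it with pip install requests) and targets the default port:

    # sketch: query the serving endpoint from Python (assumes requests is installed)
    import requests

    with open('image1.jpg', 'rb') as f:
        r = requests.post('http://localhost:9999/api', files={'image': f})
    print r.text  # top-5 labels and probabilities, separated by <br/>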

Cloud mode

To run the scripts on the Rafiki platform, you don't need to install SINGA, but you do need to add any libraries your code depends on to the requirements.txt file.

Adding model

The Rafiki front-end provides a web page for users to import gist repos directly. Users just specify the HTTPS clone link (NOT the git web URL) and click load to import the repo.

Training

The Rafiki front-end has a Job view for adding a new training job. Users need to set the job type to 'training' and select the model (i.e. the repo added in the step above) and its version. With these fields configured, the job can be started by clicking the start button; the user is then redirected to the monitoring view. Note that it may take some time to download the data for the first time. The Rafiki backend runs python train.py.

Serving

The serving job is similar to the training job except that the job type is 'serving'; the Rafiki backend runs python serve.py. Users can jump to the serving view, which is rendered using the index.html from the gist repo. Note that it may take some time to download the parameter files for the first time.

data.py

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# =============================================================================
import os
import urllib
import cPickle
import numpy as np

data_folder = "data_"
tar_data_url = 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
tar_data_name = 'cifar-10-python.tar.gz'
data_path = 'cifar-10-batches-py'
parameter_folder = "parameter_"
parameter_name = "parameter"
tar_parameter_url = "http://comp.nus.edu.sg/~dbsystem/singa/assets/file/cifar10-alexnet.tar.gz"
tar_parameter_name = tar_parameter_url.split('/')[-1]
mean_name = 'mean.npy'


def load_dataset(filepath):
    '''load images and labels from one pickled CIFAR-10 batch file'''
    print 'Loading data file %s' % filepath
    with open(filepath, 'rb') as fd:
        cifar10 = cPickle.load(fd)
    image = cifar10['data'].astype(dtype=np.uint8)
    image = image.reshape((-1, 3, 32, 32))
    label = np.asarray(cifar10['labels'], dtype=np.uint8)
    label = label.reshape(label.size, 1)
    return image, label


def load_train_data(num_batches=5):
    '''load and concatenate the five training batches'''
    labels = []
    batchsize = 10000
    images = np.empty((num_batches * batchsize, 3, 32, 32), dtype=np.uint8)
    for did in range(1, num_batches + 1):
        fname_train_data = os.path.join(data_folder, data_path,
                                        "data_batch_{}".format(did))
        image, label = load_dataset(fname_train_data)
        images[(did - 1) * batchsize:did * batchsize] = image
        labels.extend(label)
    images = np.array(images, dtype=np.float32)
    labels = np.array(labels, dtype=np.int32)
    return images, labels


def load_test_data():
    images, labels = load_dataset(
        os.path.join(data_folder, data_path, "test_batch"))
    return np.array(images, dtype=np.float32), np.array(labels, dtype=np.int32)


def save_mean_data(mean):
    mean_path = os.path.join(parameter_folder, mean_name)
    np.save(mean_path, mean)


def prepare_train_files():
    '''download and extract the training data, unless it already exists'''
    if os.path.exists(os.path.join(data_folder, data_path)):
        return
    print "download file"
    download_file(tar_data_url, data_folder)
    untar_data(os.path.join(data_folder, tar_data_name), data_folder)
    if not os.path.exists(parameter_folder):
        os.makedirs(parameter_folder)


def prepare_serve_files():
    '''download the parameter files, including the mean file'''
    if not os.path.exists(os.path.join(parameter_folder, tar_parameter_name)):
        if not os.path.exists(parameter_folder):
            os.makedirs(parameter_folder)
        print "download parameter file"
        download_file(tar_parameter_url, parameter_folder)
        untar_data(
            os.path.join(parameter_folder, tar_parameter_name),
            parameter_folder)


def download_file(url, dest):
    '''download one file into the dest folder'''
    if not os.path.exists(dest):
        os.makedirs(dest)
    if url.startswith('http'):
        file_name = url.split('/')[-1]
        target = os.path.join(dest, file_name)
        urllib.urlretrieve(url, target)


def get_parameter(file_name=None, auto_find=False):
    '''return the path of a parameter checkpoint, or None if there is none'''
    if not os.path.exists(parameter_folder):
        os.makedirs(parameter_folder)
        return None
    if file_name is not None and len(file_name):
        return os.path.join(parameter_folder, file_name)
    # find the latest parameter file if auto_find is True
    if auto_find:
        parameter_list = []
        for f in os.listdir(os.path.join(parameter_folder, parameter_name)):
            # strip the suffix that SINGA appends to checkpoint files
            if f.endswith(".model"):
                parameter_list.append(
                    os.path.join(parameter_folder, parameter_name, f[0:-6]))
            if f.endswith(".bin"):
                parameter_list.append(
                    os.path.join(parameter_folder, parameter_name, f[0:-4]))
        if len(parameter_list) == 0:
            return None
        parameter_list.sort()
        return parameter_list[-1]
    else:
        return None


def load_mean_data():
    mean_path = os.path.join(parameter_folder, parameter_name, mean_name)
    if os.path.exists(mean_path):
        return np.load(mean_path)
    return None


def untar_data(file_path, dest):
    '''extract a .tar.gz archive into the dest folder'''
    import tarfile
    print 'untar data ..................', file_path
    tar = tarfile.open(file_path)
    tar.extractall(dest)
    tar.close()
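A usage sketch for the helpers above (not part of the original file); the first run downloads the CIFAR-10 tarball into data_, so it assumes network access:

    # usage sketch for data.py; run from the gist folder
    import data

    data.prepare_train_files()          # downloads and extracts on the first run
    train_x, train_y = data.load_train_data()
    print train_x.shape, train_y.shape  # 50000 training images of shape 3x32x32
    test_x, test_y = data.load_test_data()
    print test_x.shape, test_y.shape    # 10000 test images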
index.html

<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!-->
<html class="no-js">
<!--<![endif]-->
<head>
  <meta charset="utf-8">
  <title>CIFAR</title>
  <meta name="description" content="">
  <script type="text/javascript" src="http://code.jquery.com/jquery-1.12.4.min.js"></script>
  <style>
    body {
      text-align: center;
    }
    h2 {
      text-align: center;
    }
    #result {
      font-size: 20px;
      font-weight: bold;
      color: #333;
    }
  </style>
</head>
<body>
  <h2>CIFAR-10, Image Classification Live Service</h2>
  <!--
  <button id="stop">STOP</button>
  <button id="go">Predict</button>
  -->
  <form>
    <p>Please upload a jpg file!</p>
    <!-- input is a void element, so it takes no closing tag -->
    <input id="file-input" type="file" accept="image/*;capture=camera">
  </form>
  <br/>
  <img id="image" src=""/>
  <div id="result"></div>

  <script type="text/javascript">
    // when a file is chosen, preview it and submit it for prediction
    $("#file-input").change(function() {
      var f = $("#file-input")[0].files[0];
      ReadFile(f, function(result) {
        var file = DataURItoBlob(result);
        $("#image").attr("src", result);
        predict_dish(file);
      });
    });

    $("#stop").click(function() {
      $.ajax({
        url: "/command/stop",
        type: "POST",
        processData: false,  // don't process the files
        contentType: false,
        success: function(response) {
          alert("Stop Success!");
        },
        error: function(e) {
          console.log(e);
          alert("Stop Failed!");
        }
      });
    });

    // POST the image blob to the /api endpoint and show the response
    function predict_dish(file) {
      var formData = new FormData();
      formData.append('image', file, "image.jpg");
      $.ajax({
        url: "/api",
        data: formData,
        type: "POST",
        processData: false,  // don't process the files
        contentType: false,
        success: function(response) {
          $("#result").html(response);
        },
        error: function(e) {
          console.log(e);
          $("#result").html("Error Occurs!");
        }
      });
    }

    var ReadFile = function(file, callback) {
      var reader = new FileReader();
      reader.onloadend = function() {
        ProcessFile(reader.result, file.type, callback);
      }
      reader.onerror = function() {
        alert('There was an error reading the file!');
      }
      reader.readAsDataURL(file);
    }

    // downscale the image to at most 400x400 before uploading
    var ProcessFile = function(dataURL, fileType, callback) {
      var maxWidth = 400;
      var maxHeight = 400;
      var image = new Image();
      image.src = dataURL;
      image.onload = function() {
        var width = image.width;
        var height = image.height;
        var shouldResize = (width > maxWidth) || (height > maxHeight);
        if (!shouldResize) {
          callback(dataURL);
          return;
        }
        var newWidth;
        var newHeight;
        if (width > height) {
          newHeight = height * (maxWidth / width);
          newWidth = maxWidth;
        } else {
          newWidth = width * (maxHeight / height);
          newHeight = maxHeight;
        }
        var canvas = document.createElement('canvas');
        canvas.width = newWidth;
        canvas.height = newHeight;
        var context = canvas.getContext('2d');
        context.drawImage(this, 0, 0, newWidth, newHeight);
        dataURL = canvas.toDataURL(fileType);
        callback(dataURL);
      };
      image.onerror = function() {
        alert('There was an error processing your file!');
      };
    }

    var DataURItoBlob = function(dataURI) {
      // convert base64 to raw binary data held in a string
      // doesn't handle URLEncoded DataURIs - see SO answer #6850276 for code that does this
      var byteString = atob(dataURI.split(',')[1]);
      // separate out the mime component
      var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];
      // write the bytes of the string to an ArrayBuffer
      var ab = new ArrayBuffer(byteString.length);
      var ia = new Uint8Array(ab);
      for (var i = 0; i < byteString.length; i++) {
        ia[i] = byteString.charCodeAt(i);
      }
      try {
        return new Blob([ab], {type: mimeString});
      } catch (e) {
        // The BlobBuilder API has been deprecated in favour of Blob, but older
        // browsers don't know about the Blob constructor.
        // IE10 also supports BlobBuilder, but since the `Blob` constructor
        // also works, there's no need to add `MSBlobBuilder`.
        var BlobBuilder = window.WebKitBlobBuilder || window.MozBlobBuilder;
        var bb = new BlobBuilder();
        bb.append(ab);
        return bb.getBlob(mimeString);
      }
    }
  </script>
</body>
</html>
model.py

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# =============================================================================
from singa import layer
from singa import metric
from singa import loss
from singa import net as ffnet


def create_net(use_cpu=False):
    if use_cpu:
        layer.engine = 'singacpp'

    net = ffnet.FeedForwardNet(loss.SoftmaxCrossEntropy(), metric.Accuracy())
    # weight and bias initialization specs for the conv and dense layers
    W0_specs = {'init': 'gaussian', 'mean': 0, 'std': 0.0001}
    W1_specs = {'init': 'gaussian', 'mean': 0, 'std': 0.01}
    W2_specs = {'init': 'gaussian', 'mean': 0, 'std': 0.01, 'decay_mult': 250}
    b_specs = {'init': 'constant', 'value': 0, 'lr_mult': 2, 'decay_mult': 0}
    net.add(layer.Conv2D('conv1', 32, 5, 1, W_specs=W0_specs.copy(),
                         b_specs=b_specs.copy(), pad=2,
                         input_sample_shape=(3, 32, 32,)))
    net.add(layer.MaxPooling2D('pool1', 3, 2, pad=1))
    net.add(layer.Activation('relu1'))
    net.add(layer.LRN(name='lrn1', size=3, alpha=5e-5))
    net.add(layer.Conv2D('conv2', 32, 5, 1, W_specs=W1_specs.copy(),
                         b_specs=b_specs.copy(), pad=2))
    net.add(layer.Activation('relu2'))
    net.add(layer.AvgPooling2D('pool2', 3, 2, pad=1))
    net.add(layer.LRN('lrn2', size=3, alpha=5e-5))
    net.add(layer.Conv2D('conv3', 64, 5, 1, W_specs=W1_specs.copy(),
                         b_specs=b_specs.copy(), pad=2))
    net.add(layer.Activation('relu3'))
    net.add(layer.AvgPooling2D('pool3', 3, 2, pad=1))
    net.add(layer.Flatten('flat'))
    net.add(layer.Dense('dense', 10, W_specs=W2_specs.copy(),
                        b_specs=b_specs.copy()))
    return net
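A quick way to inspect the resulting net (a sketch, assuming a SINGA v1.0 installation) is to build it and walk over its parameters, mirroring what train.py's initialize() does:

    # usage sketch for model.py; assumes SINGA v1.0 is importable
    import model

    net = model.create_net(use_cpu=True)
    for specs, p in zip(net.param_specs(), net.param_values()):
        print specs.name, p.l1()  # parameter name and its L1 norm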
requirements.txt

flask>=0.10.1
flask_cors>=3.0.2
pillow>=2.3.0
serve.py

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# =============================================================================
import sys
import time
import traceback
import numpy as np
from argparse import ArgumentParser

from singa import tensor, device
from singa import image_tool
from rafiki.agent import Agent, MsgType

import data
import model

top_k = 5
tool = image_tool.ImageTool()
num_augmentation = 10


def image_transform(image):
    '''Input an image path and return a set of augmented images (type Image)'''
    global tool
    return tool.load(image).resize_by_list([40]).crop5((32, 32), 5).get()


label_map = {
    0: 'airplane',
    1: 'automobile',
    2: 'bird',
    3: 'cat',
    4: 'deer',
    5: 'dog',
    6: 'frog',
    7: 'horse',
    8: 'ship',
    9: 'truck'
}


def get_name(label):
    return label_map[label]


def serve(agent, use_cpu, parameter_file):
    net = model.create_net(use_cpu)
    if use_cpu:
        print "running with cpu"
        dev = device.get_default_device()
    else:
        print "running with gpu"
        dev = device.create_cuda_gpu()

    data.prepare_serve_files()
    print 'Start initialization............'
    parameter = data.get_parameter(parameter_file, True)
    print 'initialize with %s' % parameter
    net.load(parameter)
    net.to_device(dev)
    print 'End initialization............'

    mean = data.load_mean_data()
    while True:
        key, val = agent.pull()
        if key is None:
            time.sleep(0.1)
            continue
        msg_type = MsgType.parse(key)
        if msg_type.is_request():
            try:
                response = ""
                images = []
                # average the prediction over the augmented crops
                for im in image_transform(val):
                    ary = np.array(im.convert('RGB'), dtype=np.float32)
                    images.append(ary.transpose(2, 0, 1) - mean)
                images = np.array(images)
                x = tensor.from_numpy(images.astype(np.float32))
                x.to_device(dev)
                y = net.predict(x)
                y.to_host()
                y = tensor.to_numpy(y)
                prob = np.average(y, 0)
                # sort the labels by probability in descending order
                labels = np.flipud(np.argsort(prob))
                for i in range(top_k):
                    response += "%s:%s<br/>" % (get_name(labels[i]),
                                                prob[labels[i]])
            except:
                traceback.print_exc()
                response = "Sorry, system error during prediction."
            agent.push(MsgType.kResponse, response)
        elif MsgType.kCommandStop.equal(msg_type):
            print 'get stop command'
            agent.push(MsgType.kStatus, "success")
            break
        else:
            print 'get unsupported message %s' % str(msg_type)
            agent.push(MsgType.kStatus, "Unknown command")
            break
    print "server stop"


def main():
    try:
        # Setup argument parser
        parser = ArgumentParser(description="SINGA CIFAR-10 serving model")
        parser.add_argument("-p", "--port", type=int, default=9999,
                            help="listen port")
        parser.add_argument("-C", "--use_cpu", action="store_true")
        parser.add_argument("--parameter_file", help="relative path")
        # Process arguments
        args = parser.parse_args()
        port = args.port

        # start the serving loop
        agent = Agent(port)
        serve(agent, args.use_cpu, args.parameter_file)
        agent.stop()
    except SystemExit:
        return
    except:
        traceback.print_exc()
        sys.stderr.write(" for help use --help \n\n")
        return 2


if __name__ == '__main__':
    main()
train.py

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# =============================================================================
import sys
import os
import traceback
import time
import numpy as np
from argparse import ArgumentParser

from singa import tensor, device, optimizer
from singa import utils
from singa.proto import core_pb2
from rafiki.agent import Agent, MsgType

import model
import data


def main():
    '''Command line options'''
    try:
        # Setup argument parser
        parser = ArgumentParser(description="Train AlexNet over CIFAR-10")
        parser.add_argument('-p', '--port', type=int, default=9999,
                            help='listening port')
        parser.add_argument('-C', '--use_cpu', action="store_true")
        parser.add_argument('--max_epoch', type=int, default=140)
        # Process arguments
        args = parser.parse_args()
        port = args.port
        use_cpu = args.use_cpu
        if use_cpu:
            print "running with cpu"
            dev = device.get_default_device()
        else:
            print "running with gpu"
            dev = device.create_cuda_gpu()

        # start to train
        net = model.create_net(use_cpu)
        agent = Agent(port)
        train(net, dev, agent, args.max_epoch)
        agent.stop()
    except SystemExit:
        return
    except:
        traceback.print_exc()
        sys.stderr.write(" for help use --help \n\n")


def initialize(net, dev, opt):
    '''initialize all parameters in the model'''
    print 'Start initialization............'
    for (p, specs) in zip(net.param_values(), net.param_specs()):
        filler = specs.filler
        if filler.type == 'gaussian':
            p.gaussian(filler.mean, filler.std)
        else:
            p.set_value(0)
        opt.register(p, specs)
        print specs.name, filler.type, p.l1()
    net.to_device(dev)
    print 'End initialization............'


def get_data():
    '''load the train and test data, subtracting the mean image'''
    data.prepare_train_files()
    train_x, train_y = data.load_train_data()
    test_x, test_y = data.load_test_data()
    mean = data.load_mean_data()
    if mean is None:
        mean = np.average(train_x, axis=0)
        data.save_mean_data(mean)
    train_x -= mean
    test_x -= mean
    return train_x, train_y, test_x, test_y


def handle_cmd(agent):
    '''poll pause/resume/stop commands; return True if training should stop'''
    pause = False
    stop = False
    while not stop:
        key, val = agent.pull()
        if key is not None:
            msg_type = MsgType.parse(key)
            if msg_type.is_command():
                if MsgType.kCommandPause.equal(msg_type):
                    agent.push(MsgType.kStatus, "Success")
                    pause = True
                elif MsgType.kCommandResume.equal(msg_type):
                    agent.push(MsgType.kStatus, "Success")
                    pause = False
                elif MsgType.kCommandStop.equal(msg_type):
                    agent.push(MsgType.kStatus, "Success")
                    stop = True
            else:
                agent.push(MsgType.kStatus, "Warning, unknown message type")
                print "Unsupported command %s" % str(key)
        if pause and not stop:
            # block here until a resume or stop command arrives
            time.sleep(0.1)
        else:
            break
    return stop


def get_lr(epoch):
    '''decay the learning rate as the epoch number goes up'''
    if epoch < 120:
        return 0.001
    elif epoch < 130:
        return 0.0001
    else:
        return 0.00001


def train(net, dev, agent, max_epoch, batch_size=100):
    agent.push(MsgType.kStatus, 'Downloading data...')
    train_x, train_y, test_x, test_y = get_data()
    print 'training shape', train_x.shape, train_y.shape
    print 'validation shape', test_x.shape, test_y.shape
    agent.push(MsgType.kStatus, 'Finish downloading data')

    opt = optimizer.SGD(momentum=0.9, weight_decay=0.0005)
    initialize(net, dev, opt)

    tx = tensor.Tensor((batch_size, 3, 32, 32), dev)
    ty = tensor.Tensor((batch_size, ), dev, core_pb2.kInt)
    num_train_batch = train_x.shape[0] / batch_size
    num_test_batch = test_x.shape[0] / batch_size
    idx = np.arange(train_x.shape[0], dtype=np.int32)
    for epoch in range(max_epoch):
        if handle_cmd(agent):
            break
        np.random.shuffle(idx)
        print 'Epoch %d' % epoch

        # evaluate on the test set
        loss, acc = 0.0, 0.0
        for b in range(num_test_batch):
            x = test_x[b * batch_size:(b + 1) * batch_size]
            y = test_y[b * batch_size:(b + 1) * batch_size]
            tx.copy_from_numpy(x)
            ty.copy_from_numpy(y)
            l, a = net.evaluate(tx, ty)
            loss += l
            acc += a
        print 'testing loss = %f, accuracy = %f' % (loss / num_test_batch,
                                                    acc / num_test_batch)
        # put test status info into a shared queue
        info = dict(
            phase='test',
            step=epoch,
            accuracy=acc / num_test_batch,
            loss=loss / num_test_batch,
            timestamp=time.time())
        agent.push(MsgType.kInfoMetric, info)

        # train for one epoch
        loss, acc = 0.0, 0.0
        for b in range(num_train_batch):
            x = train_x[idx[b * batch_size:(b + 1) * batch_size]]
            y = train_y[idx[b * batch_size:(b + 1) * batch_size]]
            tx.copy_from_numpy(x)
            ty.copy_from_numpy(y)
            grads, (l, a) = net.train(tx, ty)
            loss += l
            acc += a
            for (s, p, g) in zip(net.param_specs(),
                                 net.param_values(), grads):
                opt.apply_with_lr(epoch, get_lr(epoch), g, p, str(s.name))
            info = 'training loss = %f, training accuracy = %f' % (l, a)
            utils.update_progress(b * 1.0 / num_train_batch, info)
        # put training status info into a shared queue
        info = dict(
            phase='train',
            step=epoch,
            accuracy=acc / num_train_batch,
            loss=loss / num_train_batch,
            timestamp=time.time())
        agent.push(MsgType.kInfoMetric, info)
        info = 'training loss = %f, training accuracy = %f' \
            % (loss / num_train_batch, acc / num_train_batch)
        print info
        # checkpoint the parameters every 30 epochs
        if epoch > 0 and epoch % 30 == 0:
            net.save(os.path.join(data.parameter_folder,
                                  'parameter_%d' % epoch))
    net.save(os.path.join(data.parameter_folder, 'parameter_last'))


if __name__ == '__main__':
    main()