@sin32775
Created July 29, 2021 00:08
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TFLearnでテクニカル指標  Ichimokuの場合\n",
"TFLearnっていう、TensorFlowをもっと簡単に使えるライブラリがあるということなので、それも入れてみた。\n",
"http://tflearn.org/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# グラフの定義(可視化)\n",
"\n",
"TFlearnで使うグラフの定義ですが、今回は線形モデルだとわかっているものとして、単に入力層から線形結合させたものを定義します。あと、バイアスも使わないので、bias=Falseとしておきます。\n",
"\n",
"ただ、regressionについては、デフォルトのままだとあまりよい結果がでなかったので、SGDメソッドを使って学習率を徐々に下げていくよう調整を行いました。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Graph definition\n",
"layer_in = tflearn.input_data(shape=[None, N])\n",
"layer1 = tflearn.fully_connected(layer_in, 1, activation='linear', bias=False)\n",
"sgd = tflearn.optimizers.SGD(learning_rate=0.01, lr_decay=0.95, decay_step=100)\n",
"regression = tflearn.regression(layer1, optimizer=sgd, loss='mean_square')"
]
},
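{
"cell_type": "markdown",
"metadata": {},
"source": [
"(Added sketch, not from the original run.) TFLearn's `SGD` with `lr_decay` and `decay_step` applies exponential decay, so the effective learning rate after `step` training steps should be roughly `learning_rate * lr_decay ** (step / decay_step)`:\n",
"\n",
"```python\n",
"lr, decay, decay_step = 0.01, 0.95, 100\n",
"for step in (0, 100, 500, 1000):\n",
"    print(step, lr * decay ** (step / decay_step))\n",
"```\n",
"\n",
"After 1000 steps the rate has fallen to about 0.006, which matches the stated goal of gradually lowering the learning rate."
]
},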
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 教師データの処理\n",
"\n",
"前回の例は、2入力の線形和を求めるだけのモデルだったのですが、二つの入力データの差が小さいと誤差が小さくなってしまい、学習がうまくできていませんでした。\n",
"\n",
"そこで、こんどは、二つのデータの差が1以上のみを教師データとしてみました。\n",
"\n",
"## 5分足(2007/07~2017/07) "
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training Step: 10000 | total loss: \u001b[1m\u001b[32m3.34792\u001b[0m\u001b[0m | time: 0.011s\n",
"| SGD | epoch: 10000 | loss: 3.34792 -- iter: 31/31\n",
"\n",
"weights\n",
"W[0] = [ 0.63396788]\n",
"W[1] = [ 0.37587214]\n"
]
}
],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"import tensorflow as tf\n",
"import tflearn\n",
"\n",
"file = 'USDJPY5 Ichimoku.txt'\n",
"ohlc = pd.read_csv(file, index_col='Time', parse_dates=True)\n",
"close = ohlc.Close.values\n",
"ind1 = ohlc.Ind1.values\n",
"\n",
"N = 2\n",
"X = np.empty((0,N))\n",
"Y = np.empty((0,1))\n",
"for i in range(200):\n",
" if abs(close[i]-close[i+1]) >= 1.0:\n",
" X = np.vstack((X, close[i:i+N]))\n",
" Y = np.vstack((Y, ind1[i+N-1:i+N]))\n",
"\n",
"# Graph definition\n",
"layer_in = tflearn.input_data(shape=[None, N])\n",
"layer1 = tflearn.fully_connected(layer_in, 1, activation='linear', bias=False)\n",
"sgd = tflearn.optimizers.SGD(learning_rate=0.01, lr_decay=0.95, decay_step=100)\n",
"regression = tflearn.regression(layer1, optimizer=sgd, loss='mean_square')\n",
"\n",
"# Model training\n",
"m = tflearn.DNN(regression)\n",
"m.fit(X, Y, n_epoch=10000, snapshot_epoch=False, run_id='Ichimokulearn')\n",
"\n",
"# Weights\n",
"print('\\nweights')\n",
"for i in range(N):\n",
" print('W['+str(i)+'] =' ,m.get_weights(layer1.W)[i])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ノイズ入りの教師データ\n",
"\n",
"そもそもニューラルネットワークは、正確な数値予測のためではなく、結構アバウトな予測を行うためのものみたいです。\n",
"\n",
"そこで、こんどは指標値に平均0、標準偏差0.1のガウスノイズを付加したものを教師データとしてみました。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training Step: 10000 | total loss: \u001b[1m\u001b[32m0.50975\u001b[0m\u001b[0m | time: 0.011s\n",
"| SGD | epoch: 10000 | loss: 0.50975 -- iter: 31/31\n",
"\n",
"weights\n",
"W[0] = [ 0.66791338]\n",
"W[1] = [ 0.33452979]\n"
]
}
],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"import tensorflow as tf\n",
"import tflearn\n",
"\n",
"file = 'USDJPY5 Ichimoku.txt'\n",
"ohlc = pd.read_csv(file, index_col='Time', parse_dates=True)\n",
"close = ohlc.Close.values\n",
"ind1 = ohlc.Ind1.values\n",
"\n",
"N = 2\n",
"X = np.empty((0,N))\n",
"Y = np.empty((0,1))\n",
"for i in range(200):\n",
" if abs(close[i]-close[i+1]) >= 1.0:\n",
" X = np.vstack((X, close[i:i+N]))\n",
" noise = np.random.normal(0,0.1)\n",
" Y = np.vstack((Y, ind1[i+N-1:i+N]+noise))\n",
"# Graph definition\n",
"layer_in = tflearn.input_data(shape=[None, N])\n",
"layer1 = tflearn.fully_connected(layer_in, 1, activation='linear', bias=False)\n",
"sgd = tflearn.optimizers.SGD(learning_rate=0.01, lr_decay=0.95, decay_step=100)\n",
"regression = tflearn.regression(layer1, optimizer=sgd, loss='mean_square')\n",
"\n",
"# Model training\n",
"m = tflearn.DNN(regression)\n",
"m.fit(X, Y, n_epoch=10000, snapshot_epoch=False, run_id=' Ichimokulearn')\n",
"\n",
"# Weights\n",
"print('\\nweights')\n",
"for i in range(N):\n",
" print('W['+str(i)+'] =' ,m.get_weights(layer1.W)[i])"
]
},
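{
"cell_type": "markdown",
"metadata": {},
"source": [
"(Added note, a sketch not in the original run.) The two learned weights sum to roughly 1 (0.668 + 0.335 = 1.002), consistent with the indicator being an average-like combination of the two closes. A prediction can be reproduced by hand from the printed weights (the input values below are hypothetical):\n",
"\n",
"```python\n",
"W = [0.66791338, 0.33452979]   # weights printed above\n",
"x = [110.00, 110.50]           # two hypothetical consecutive closes\n",
"print(W[0] * x[0] + W[1] * x[1])\n",
"# m.predict([x]) should give approximately the same value\n",
"```"
]
},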
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
"autocomplete": true,
"bibliofile": "biblio.bib",
"cite_by": "apalike",
"current_citInitial": 1,
"eqLabelWithNumbers": true,
"eqNumInitial": 1,
"hotkeys": {
"equation": "Ctrl-E",
"itemize": "Ctrl-I"
},
"labels_anchors": false,
"latex_user_defs": false,
"report_style_numbering": false,
"user_envs_cfg": false
},
"toc": {
"colors": {
"hover_highlight": "#DAA520",
"navigate_num": "#000000",
"navigate_text": "#333333",
"running_highlight": "#FF0000",
"selected_highlight": "#FFD700",
"sidebar_border": "#EEEEEE",
"wrapper_background": "#FFFFFF"
},
"moveMenuLeft": true,
"nav_menu": {
"height": "264px",
"width": "252px"
},
"navigate_menu": true,
"number_sections": true,
"sideBar": true,
"threshold": 4,
"toc_cell": false,
"toc_section_display": "block",
"toc_window_display": false,
"widenNotebook": false
}
},
"nbformat": 4,
"nbformat_minor": 1
}