
@Kensuke-Mitsuzawa
Kensuke-Mitsuzawa / config.yaml
Last active February 23, 2022 12:41
monitor your process on server
SLACK_WEBHOOK:
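The config suggests the monitor reports process status through a Slack incoming webhook. A minimal sketch of the posting side, using only the standard library; the function names and message format here are assumptions, not the gist's actual code:

```python
import json
import urllib.request


def build_payload(process_name: str, status: str) -> bytes:
    """Build the JSON body for a Slack incoming-webhook message."""
    message = {"text": f"process `{process_name}` is {status}"}
    return json.dumps(message).encode("utf-8")


def notify_slack(webhook_url: str, process_name: str, status: str) -> None:
    """POST the status message to the URL configured under SLACK_WEBHOOK."""
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(process_name, status),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```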
Kensuke-Mitsuzawa / generate_dirac_delta_distribution.py
Last active November 5, 2021 21:20
generate_dirac_delta_distribution.py
import typing
from pathlib import Path
import pandas
from tqdm import tqdm
import os
import numpy as np
def ddf(x: np.ndarray, sig: float):
val = []
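The preview cuts off after `val = []`. A common way to generate a Dirac-delta-like distribution is a narrow Gaussian whose width `sig` controls how sharply it peaks; a sketch of that idea (an assumption, not necessarily the gist's implementation):

```python
import numpy as np


def dirac_delta_approx(x: np.ndarray, sig: float) -> np.ndarray:
    """Approximate the Dirac delta with a narrow Gaussian of width `sig`.

    As sig -> 0 the curve becomes an ever-sharper spike at 0 while its
    integral over x stays approximately 1.
    """
    return np.exp(-(x ** 2) / (2 * sig ** 2)) / (sig * np.sqrt(2 * np.pi))


# usage: sample the approximation on a fine grid
xs = np.linspace(-1.0, 1.0, 2001)
values = dirac_delta_approx(xs, sig=0.05)
area = np.trapz(values, xs)  # should be close to 1
```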
Kensuke-Mitsuzawa / README.md
Last active September 1, 2021 13:30
Running SUMO traffic simulator on Google Colab

Using SUMO in Google Colab

It is possible to use SUMO in Google Colab.

Procedure

  1. bash install_sumo.sh
  2. import your scenario data (either from google drive or upload directly)
  3. edit run_sumo.sh
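The two helper scripts referenced in the steps above are not shown. A sketch of what they might contain, assuming the standard Ubuntu PPA install; the gist's actual scripts may differ:

```shell
# install_sumo.sh -- hypothetical sketch of step 1
add-apt-repository -y ppa:sumo/stable
apt-get update
apt-get install -y sumo sumo-tools sumo-doc

# run_sumo.sh -- headless run of an imported scenario (step 3:
# edit the .sumocfg path to point at your own scenario config)
export SUMO_HOME=/usr/share/sumo
sumo -c your_scenario.sumocfg --fcd-output trace.xml
```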
Kensuke-Mitsuzawa / render_scatter_with_iteration.py
Created February 12, 2021 10:48
Code to display a scatter plot and weights at the same time
import numpy as np
import pandas
import random
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib
import typing
def fix_min_max(x_tensor: np.ndarray,
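The preview stops mid-signature. When rendering scatter plots across iterations, a helper like this typically freezes the axis limits to the global min/max of all frames so the frames stay visually comparable; a hypothetical sketch of that idea:

```python
import typing

import numpy as np


def fix_min_max(x_tensor: np.ndarray,
                margin: float = 0.05) -> typing.Tuple[float, float]:
    """Return global (min, max) axis limits over all iterations,
    padded by `margin` so boundary points stay visible."""
    lo = float(x_tensor.min())
    hi = float(x_tensor.max())
    pad = (hi - lo) * margin
    return lo - pad, hi + pad


# usage: one shared x-range for every frame of the animation
frames = np.random.uniform(low=1.0, high=20.0, size=(3, 10, 2))
x_min, x_max = fix_min_max(frames[:, :, 0])
```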
Kensuke-Mitsuzawa / render.py
Created February 9, 2021 15:11
render multi-row scatter graph
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pandas
# build a multi-dimensional array
x_tensor = np.random.uniform(size=(3, 10, 2), low=1.0, high=20)
#
mark_value = [2.5, 5.5]
Kensuke-Mitsuzawa / input_test_data.csv
Last active June 19, 2016 16:21
Group by items via python native way.
S.No. Datetime Details
1 2010/6/7 19:01 asd
1 2010/6/8 4:00 dfg
2 2010/6/9 0:00 dfg
2 2010/6/10 0:00 gfd
2 2010/6/11 0:00 gfd
3 2010/6/12 0:00 gfd
3 2010/6/13 0:00 abc
4 2010/6/14 0:00 abc
4 2010/6/15 0:00 def
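The description says "Group by items via python native way"; with `itertools.groupby` on input sorted by the key, the Details can be collected per S.No. without pandas. A sketch over the sample rows above:

```python
import itertools

# the sample rows from input_test_data.csv as (s_no, datetime, details)
rows = [
    (1, "2010/6/7 19:01", "asd"),
    (1, "2010/6/8 4:00", "dfg"),
    (2, "2010/6/9 0:00", "dfg"),
    (2, "2010/6/10 0:00", "gfd"),
    (2, "2010/6/11 0:00", "gfd"),
    (3, "2010/6/12 0:00", "gfd"),
    (3, "2010/6/13 0:00", "abc"),
    (4, "2010/6/14 0:00", "abc"),
    (4, "2010/6/15 0:00", "def"),
]

# itertools.groupby only merges consecutive items, so the
# input must already be sorted by the grouping key
grouped = {
    s_no: [details for _, _, details in group]
    for s_no, group in itertools.groupby(rows, key=lambda r: r[0])
}
```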
Kensuke-Mitsuzawa / cluster_analysis.py
Last active August 29, 2015 14:05
cluster analysis with PCA using scikit-learn
#! /usr/bin/python
# -*- coding: utf-8 -*-
import numpy as np
from sklearn.decomposition import PCA
import pandas
import logging
import os, sys, codecs, json
from sklearn.cluster import KMeans
from sklearn import datasets
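The imports suggest a pipeline that reduces the data with PCA and then clusters it with KMeans. A minimal sketch on the iris dataset bundled with scikit-learn; the gist's actual pipeline and parameters may differ:

```python
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

iris = datasets.load_iris()

# project the 4-D measurements down to 2 principal components
pca = PCA(n_components=2)
reduced = pca.fit_transform(iris.data)

# cluster in the reduced space
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(reduced)
```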
# -*- coding: utf-8 -*-
"""
You need to fill in your API key from Google below. Note that querying
supported languages is not implemented.
Language Code
-------- ----
Afrikaans af
Albanian sq
Arabic ar
#! /usr/bin/python
# -*- coding: utf-8 -*-
import sys, codecs, subprocess, readline, re
#sys.stdout = codecs.getwriter('utf-8')(sys.stdout)
## define the dictionary
info_dic = {"structure": "none", "Ga": "none", "Wo": "none", "Ni": "none",
            "He": "none", "To": "none", "Kara": "none", "Yori": "none",
            "De": "none", "Time": "none", "Predict": "none"}