@smly
smly / http_418.md
Created February 24, 2022 08:45
HTTP/1.1 418
$ curl -I https://eng.mil.ru/
HTTP/1.1 418
Date: Thu, 24 Feb 2022 07:58:56 GMT
Content-Length: 0
Connection: keep-alive
Server: Ministry of Defence of the Russian Federation
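The capture above shows the server answering `curl -I` with status 418 ("I'm a teapot", the joke code from RFC 2324). A minimal sketch of parsing such a status line with only the Python standard library (the function name `parse_status_line` is mine, not from the gist):

```python
def parse_status_line(line: str) -> tuple:
    """Split a status line like 'HTTP/1.1 418' or 'HTTP/1.1 200 OK'
    into (http_version, status_code, reason_phrase)."""
    parts = line.strip().split(" ", 2)
    version = parts[0]
    code = int(parts[1])
    # The reason phrase is optional; the capture above omits it.
    reason = parts[2] if len(parts) > 2 else ""
    return version, code, reason

print(parse_status_line("HTTP/1.1 418"))  # ('HTTP/1.1', 418, '')
```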
@smly
smly / dba.py
Created June 2, 2021 09:46
query expansion and database augmentation
import numpy as np
from .functions import l2norm_numpy
def qe_dba(
feats_test, feats_index, sims, topk_idx, alpha=3.0, qe=True, dba=True
):
    # alpha-query expansion (DBA)
    feats_concat = np.concatenate([feats_test, feats_index], axis=0)
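The preview cuts off after the concatenation. A self-contained sketch of the alpha-query-expansion step under my own assumptions (the names `alpha_qe` and `l2norm` are mine; the gist's `l2norm_numpy` and the rest of `qe_dba` are not shown): each descriptor is replaced by a similarity-weighted average of its top-k neighbors, with weights `sims ** alpha`, then re-normalized.

```python
import numpy as np

def l2norm(x: np.ndarray) -> np.ndarray:
    # Row-wise L2 normalization (stand-in for the gist's l2norm_numpy).
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def alpha_qe(feats: np.ndarray, neighbor_feats: np.ndarray,
             sims: np.ndarray, alpha: float = 3.0) -> np.ndarray:
    """alpha-query expansion (sketch, not the gist's implementation).

    feats:          (n, d) query descriptors
    neighbor_feats: (n, k, d) descriptors of each query's top-k neighbors
    sims:           (n, k) cosine similarities to those neighbors
    """
    # Weight each neighbor by its similarity raised to alpha;
    # negative similarities are clipped to contribute nothing.
    weights = np.power(np.clip(sims, 0.0, None), alpha)        # (n, k)
    expanded = feats + np.einsum("nk,nkd->nd", weights, neighbor_feats)
    return l2norm(expanded)
```

Database-side augmentation (DBA) applies the same averaging to the index descriptors themselves, which is presumably why the gist concatenates test and index features first.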
@smly
smly / pr3_test_code.out
Created August 6, 2020 07:51
(xfeat:PR#3) Test code and the result
   col  target
0  Cat       1
1  Dog       1
2  Dog       0
3  Dog       0
4  Fox       0
<class 'cudf.core.dataframe.DataFrame'>
   col  target  col_le
0  Cat       1       0
1  Dog       1       1
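The test output shows a label-encoded column `col_le` being added (`Cat` → 0, `Dog` → 1). The exact encoding scheme used by the xfeat PR is not visible in the preview; a plain-Python sketch that assigns codes by order of first appearance reproduces the rows shown:

```python
def label_encode(values):
    """Map each category to an integer code by order of first
    appearance. Sketch only; xfeat/cudf may order codes differently."""
    codes = {}
    out = []
    for v in values:
        if v not in codes:
            codes[v] = len(codes)  # next unused code
        out.append(codes[v])
    return out

print(label_encode(["Cat", "Dog", "Dog", "Dog", "Fox"]))  # [0, 1, 1, 1, 2]
```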
diff --git a/optuna/integration/lightgbm_tuner/optimize.py b/optuna/integration/lightgbm_tuner/optimize.py
index f11c8326..4434e51c 100644
--- a/optuna/integration/lightgbm_tuner/optimize.py
+++ b/optuna/integration/lightgbm_tuner/optimize.py
@@ -112,7 +112,14 @@ class BaseTuner(object):
         else:
             raise NotImplementedError
+        if self.lgbm_params.get("metric") == "None":
+            if len(booster.best_score[valid_name].keys()) > 0:
from typing import Dict

import numpy as np

import optuna
import optuna.integration.lightgbm as lgb


def test_run() -> None:
    params = {
        "objective": "binary",
        "metric": "binary_logloss",
    }  # type: Dict
import numpy as np
import sklearn.datasets
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import mlflow
import optuna
import optuna.integration.lightgbm as lgb
FROM ubuntu:18.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        bc \
        bzip2 \
        ca-certificates \
        curl \
        git \
        less \
[I 2020-04-16 15:04:56,418] Using an existing study with name 'parallel' instead of creating a new one.
feature_fraction, val_score: inf: 0%| | 0/7 [00:00<?, ?it/s][W 2020-04-16 15:04:56,643] Setting status of trial#15 as TrialState.FAIL because of the following error: AttributeError("'numpy.float64' object has no attribute 'translate'")
Traceback (most recent call last):
  File "/home/smly/gitws/optuna/optuna/study.py", line 682, in _run_trial
    result = func(trial)
  File "/home/smly/gitws/optuna/optuna/integration/lightgbm_tuner/optimize.py", line 229, in __call__
    param_value = min(trial.suggest_uniform("feature_fraction", 0.4, 1.0 + EPS), 1.0)
  File "/home/smly/gitws/optuna/optuna/trial.py", line 550, in suggest_uniform
    return self._suggest
@smly
smly / result.log
Created April 14, 2020 05:49
optgbm_multinode_rdb_error.log
[I 2020-04-14 14:44:15,580] Using an existing study with name 'digits_optgbm' instead of creating a new one.
[I 2020-04-14 14:44:15,596] Searching the best hyperparameters...
[LightGBM] [Warning] feature_fraction is set=0.1, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.1
[LightGBM] [Warning] bagging_freq is set=10, subsample_freq=0 will be ignored. Current value: bagging_freq=10
[LightGBM] [Warning] lambda_l1 is set=6.0670460192270264e-05, reg_alpha=0.0 will be ignored. Current value: lambda_l1=6.0670460192270264e-05
[LightGBM] [Warning] lambda_l2 is set=0.06434253899747878, reg_lambda=0.0 will be ignored. Current value: lambda_l2=0.06434253899747878
[LightGBM] [Warning] bagging_fraction is set=0.7, subsample=1.0 will be ignored. Current value: bagging_fraction=0.7
[LightGBM] [Warning] min_data_in_leaf is set=64, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=64
[W 2020-04-14 14:44:17,804] Setting status of trial#7 as TrialState.FAIL because of the following e
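The `[LightGBM] [Warning]` lines above all have the same cause: both a native LightGBM parameter and its sklearn-style alias are present, and LightGBM keeps the native one and ignores the alias. A sketch of cleaning such a params dict before training (the alias table below is partial and hand-picked from the warnings, not LightGBM's full alias list):

```python
# Maps sklearn-style alias -> native LightGBM parameter name,
# covering only the pairs seen in the warnings above.
ALIASES = {
    "colsample_bytree": "feature_fraction",
    "subsample_freq": "bagging_freq",
    "reg_alpha": "lambda_l1",
    "reg_lambda": "lambda_l2",
    "subsample": "bagging_fraction",
    "min_child_samples": "min_data_in_leaf",
}

def drop_shadowed_aliases(params: dict) -> dict:
    """Remove alias keys whose native counterpart is also set,
    mirroring what LightGBM does (it ignores the alias)."""
    return {k: v for k, v in params.items()
            if not (k in ALIASES and ALIASES[k] in params)}

print(drop_shadowed_aliases({"feature_fraction": 0.1,
                             "colsample_bytree": 1.0}))
# {'feature_fraction': 0.1}
```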
best_params, tuning_history = dict(), list()
booster = lgb.train(params, dtrain, valid_sets=dval,
                    verbose_eval=0,
                    best_params=best_params,
                    tuning_history=tuning_history)
print('Best Params:', best_params)
print('Tuning history:', tuning_history)