Kento Nozawa (nzw0301)

  • Preferred Networks, Inc. / Preferred Elements, Inc.
  • Japan
"""
Modification version of https://github.com/optuna/optuna/pull/2303 with nccl backend
Optuna example that optimizes multi-layer perceptrons using PyTorch distributed.
In this example, we optimize the validation accuracy of hand-written digit recognition using
PyTorch distributed data parallel and MNIST. We optimize the neural network architecture as well
as the optimizer configuration. As it is too time consuming to use the whole MNIST dataset, we
here use a small subset of it.
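A minimal sketch (not the gist's full code) of the core idea behind the nccl modification: hyperparameters suggested by Optuna on rank 0 are packed into a CUDA tensor and broadcast to the other ranks, since NCCL only moves tensors that live on a GPU. The function name suggest_and_broadcast and the particular hyperparameters are illustrative assumptions, not taken from the gist.

import torch
import torch.distributed as dist


def suggest_and_broadcast(trial, device):
    """Rank 0 asks Optuna for hyperparameters; every rank receives the same values."""
    if dist.get_rank() == 0:
        lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
        dropout = trial.suggest_float("dropout", 0.0, 0.5)
        params = torch.tensor([lr, dropout], device=device)
    else:
        params = torch.empty(2, device=device)
    # NCCL only supports CUDA tensors, so the buffer must already be on `device`.
    dist.broadcast(params, src=0)
    lr, dropout = params.tolist()
    return lr, dropout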
# CIFAR-100
import numpy as np
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR100

train_transform = transforms.Compose(
    [
        # typical training augmentation for 32x32 CIFAR images
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ]
)
train_set = CIFAR100(root="./data", train=True, download=True, transform=train_transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

fmatrix.py

  • Replace print data with print(data)
  • Add from six.moves import range and replace xrange with range (see the sketch after this list)
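A minimal sketch of the two fixes above, assuming a hypothetical helper that prints the rows of data; fmatrix.py itself is not reproduced here.

from six.moves import range  # xrange on Python 2, the built-in range on Python 3


def dump_rows(data):
    # was: for i in xrange(len(data)): print data[i]
    for i in range(len(data)):
        print(data[i])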

logging.py

In Python 3, logging is a standard-library module. Because the names collide at import time, this file needs to be renamed.
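A self-contained sketch of the collision: if a file named logging.py sits next to the script being run, import logging resolves to that file instead of the standard-library module, so standard calls can fail; renaming the local file avoids the shadowing.

import logging

# With the standard-library module this prints its location; if a local
# logging.py shadowed it, logging.__file__ would point at that file instead,
# and attributes such as getLogger might be missing.
logger = logging.getLogger(__name__)
logger.warning("logging resolved to %s", logging.__file__)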

# -*- coding: utf-8 -*-
require 'twitter'
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''

# Build an authenticated REST client from the credentials above.
client = Twitter::REST::Client.new do |config|
  config.consumer_key = consumer_key
  config.consumer_secret = consumer_secret
  config.access_token = access_token
  config.access_token_secret = access_token_secret
end