First, back up the configuration file:
sudo cp -a /etc/apt/sources.list /etc/apt/sources.list.bak
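If a later edit breaks `apt`, the backup made above can be restored with the same paths (the `apt update` check is just a suggested verification step):

```shell
# restore the original sources list from the backup
sudo cp -a /etc/apt/sources.list.bak /etc/apt/sources.list
sudo apt update  # verify the restored configuration still resolves
```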
# refer to https://github.com/rwightman/pytorch-image-models/blob/master/train.py
import yaml
import argparse

# The first arg parser parses out only the --config argument; this argument is used to
# load a YAML file containing key-values that override the defaults for the main parser below.
parser_config = argparse.ArgumentParser(description='Training Config', add_help=False)
parser_config.add_argument('-c', '--config', default='', type=str, metavar='FILE',
                           help='YAML config file specifying default arguments')
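The second stage then merges the YAML values in as defaults for the main parser, so explicit CLI flags still win. A minimal sketch of that pattern; the `--lr` option and the `parse_args` helper are illustrative, not from the original:

```python
import argparse

# Stage 1: the config-only parser, as defined above.
parser_config = argparse.ArgumentParser(description='Training Config', add_help=False)
parser_config.add_argument('-c', '--config', default='', type=str, metavar='FILE',
                           help='YAML config file specifying default arguments')

def parse_args(argv=None):
    # Pull out --config first; everything else is deferred to the main parser.
    args_config, remaining = parser_config.parse_known_args(argv)
    parser = argparse.ArgumentParser(description='Training', parents=[parser_config])
    parser.add_argument('--lr', type=float, default=0.1)  # hypothetical option
    if args_config.config:
        import yaml  # deferred so the sketch runs without PyYAML when no config is given
        with open(args_config.config) as f:
            parser.set_defaults(**yaml.safe_load(f))
    # YAML values only become *defaults*, so flags given on the CLI override them.
    return parser.parse_args(remaining)
```

For example, `parse_args([]).lr` gives the built-in default `0.1`, while `parse_args(['--lr', '0.2']).lr` gives `0.2`.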
from collections import OrderedDict

import torch
from mindspore import Tensor, Parameter, save_checkpoint

from mindcv.models import resnet50

model = resnet50()
model_weights_ms = model.parameters_dict()
# load on CPU so the conversion does not require a GPU build of torch
model_weights_pt = torch.load("./resnet50.pth", map_location="cpu")
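The conversion then walks the loaded state dict, translating parameter names and wrapping values into the record list that `mindspore.save_checkpoint` expects. A minimal sketch, assuming identical layer structure on both sides; the suffix table below covers only BatchNorm running statistics, and a real converter needs the complete name mapping for its model:

```python
# Hypothetical pt -> ms name translation; only BatchNorm running stats shown.
PT_TO_MS_SUFFIX = {
    "running_mean": "moving_mean",
    "running_var": "moving_variance",
}

def translate_name(pt_name):
    """Map a PyTorch parameter name to its MindSpore counterpart."""
    head, _, tail = pt_name.rpartition(".")
    ms_tail = PT_TO_MS_SUFFIX.get(tail, tail)
    return f"{head}.{ms_tail}" if head else ms_tail

def build_ckpt(model_weights_pt, to_tensor):
    """Build the record list mindspore.save_checkpoint expects.

    `to_tensor` converts one torch tensor to a mindspore Tensor,
    e.g. lambda t: Tensor(t.numpy()).
    """
    return [{"name": translate_name(name), "data": to_tensor(value)}
            for name, value in model_weights_pt.items()]

# save_checkpoint(build_ckpt(model_weights_pt, lambda t: Tensor(t.numpy())),
#                 "resnet50_ms.ckpt")
```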
import numpy as np
import mindspore as ms
from mindspore import nn, ops

class Fold(nn.Cell):
    def __init__(self, channels, output_size, kernel_size, dilation=1, padding=0, stride=1) -> None:
        """Alternative implementation of the fold layer via transposed convolution.

        All parameters are the same as `torch.nn.Fold <https://pytorch.org/docs/stable/generated/torch.nn.Fold.html>`_,
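To make the fold semantics concrete, here is a framework-free NumPy sketch of what fold (col2im) computes: each column of the input is one flattened patch, and overlapping patches are summed back onto the output grid. This illustrates the operation the class above reimplements, not its transposed-convolution implementation:

```python
import numpy as np

def fold_np(cols, output_size, kernel_size, stride=1):
    """cols: (C*kh*kw, L) patch matrix, laid out as torch.nn.Unfold produces."""
    H, W = output_size
    kh, kw = kernel_size
    C = cols.shape[0] // (kh * kw)
    out = np.zeros((C, H, W), dtype=cols.dtype)
    col = 0
    for i in range(0, H - kh + 1, stride):
        for j in range(0, W - kw + 1, stride):
            # overlapping contributions are summed, as in torch.nn.Fold
            out[:, i:i + kh, j:j + kw] += cols[:, col].reshape(C, kh, kw)
            col += 1
    return out
```

With stride equal to the kernel size the patches do not overlap, so fold exactly inverts unfold.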
# Basic
set -g default-terminal "screen-256color"  # enable 256 color
set -g mouse on                            # enable mouse
set -g history-limit 50000

# Use Alt-arrow keys to switch panes
bind -n M-Left select-pane -L
bind -n M-Right select-pane -R
bind -n M-Up select-pane -U
bind -n M-Down select-pane -D
#!/bin/bash
export RANK_SIZE=4
echo "Command: $@"
# trap SIGINT to execute kill 0, which will kill all processes
trap 'kill 0' SIGINT
for ((i = 0; i < ${RANK_SIZE}; i++)); do
    export RANK_ID=$i
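The loop body that actually launches one process per rank is cut off above. A hedged sketch of the usual pattern; the `train.py` entry point and its flags are assumptions, not from the original:

```shell
#!/bin/bash
export RANK_SIZE=4
trap 'kill 0' SIGINT
for ((i = 0; i < ${RANK_SIZE}; i++)); do
    export RANK_ID=$i
    echo "launching rank ${RANK_ID}"
    # python train.py --device_id=$i ... &   # assumed entry point, run in background
done
wait  # block until every rank finishes; Ctrl-C kills them all via the trap
```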
State-of-the-art diffusion models for image and audio generation in MindSpore. We aim to keep the interface and usage fully consistent with huggingface/diffusers, making only the changes necessary so that users coming from torch can switch seamlessly.
🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model's output (the sample the diffusion process is iterating on) and a timestep, and returns a denoised sample. The timestep is important because it dictates where in the diffusion process the step is: data is generated by iterating forward n timesteps, and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be discrete, in which case the timestep is an int, or continuous, in which case the timestep is a float.
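The backward iteration described above can be sketched with a toy discrete scheduler; here `model` and the update rule are stand-ins, not the real 🤗 Diffusers API:

```python
import numpy as np

def denoise(sample, num_steps, model):
    """Toy discrete sampling loop: walk the timesteps backward, one step per t."""
    for t in reversed(range(num_steps)):   # t is an int -- a discrete scheduler
        model_output = model(sample, t)    # e.g. the predicted noise at timestep t
        sample = sample - model_output / num_steps  # stand-in for a scheduler step
    return sample
```

With the real library, the analogous loop instead calls `scheduler.step(model_output, t, sample).prev_sample` at each timestep.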
# Copyright 2023 UC Berkeley Team and The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,