See also: https://gist.github.com/wey-gu/54db65ac5551b473de2feb34c0985110
Enable GPU
Follow https://web.eecs.umich.edu/~justincj/teaching/eecs442/WI2021/colab.html
Get Demo code and ControlNet
!nvidia-smi
import torch
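After switching the Colab runtime type to GPU, a quick sanity check with torch confirms the notebook can actually see the device (a minimal sketch; it assumes torch is already installed, as it is on Colab):

```python
import torch

# True only when the runtime has a CUDA-capable GPU attached
has_gpu = torch.cuda.is_available()
print("CUDA available:", has_gpu)
if has_gpu:
    # Name of the first visible device, e.g. a Tesla T4 on Colab
    print(torch.cuda.get_device_name(0))
```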
# nebulagraph --> networkX
## Fetch a graph from NebulaGraph and store it as a pandas DataFrame
from nebula3.gclient.net import ConnectionPool
from nebula3.Config import Config
import pandas as pd
from typing import Dict
from nebula3.data.ResultSet import ResultSet
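A sketch of the conversion these imports set up. `result_to_df` relies on the real `ResultSet.keys()` / `column_values()` methods from nebula3-python, but the edge DataFrame below is hypothetical stand-in data (a live query via `ConnectionPool` would produce it instead):

```python
import pandas as pd
import networkx as nx

def result_to_df(result) -> pd.DataFrame:
    """Convert a nebula3 ResultSet into a pandas DataFrame.

    `keys()` and `column_values()` are ResultSet methods; `cast()`
    unwraps each ValueWrapper into a plain Python value.
    """
    return pd.DataFrame(
        {col: [v.cast() for v in result.column_values(col)] for col in result.keys()}
    )

# Hypothetical edge table, shaped like the output of a MATCH / GET SUBGRAPH query
edges = pd.DataFrame({"src": ["a", "a", "b"], "dst": ["b", "c", "c"]})
G = nx.from_pandas_edgelist(edges, source="src", target="dst", create_using=nx.DiGraph())
```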
services:
  NebulaGraphDD:
    image: ${DESKTOP_PLUGIN_IMAGE}
  metad0:
    profiles: ["core"]
    labels:
      - "com.vesoft.scope=core"
    image: vesoft/nebula-metad:v3.3.0
    environment:
Generated by OpenAI ChatGPT; data fetched from https://en.wikipedia.org/wiki/2022_FIFA_World_Cup_squads
import requests
from bs4 import BeautifulSoup
import csv
# Define the URL of the Wikipedia page
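The parsing pattern these imports point to can be sketched offline. The inline HTML below is a hypothetical stand-in for one squad table from the Wikipedia page; in the real script you would fetch the page with `requests.get(url).text` first:

```python
import csv
import io
from bs4 import BeautifulSoup

# Hypothetical sample mimicking a Wikipedia "wikitable"; a live run would
# parse requests.get(url).text instead of this inline string.
html = """
<table class="wikitable">
  <tr><th>No.</th><th>Player</th></tr>
  <tr><td>1</td><td>Alireza Beiranvand</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

rows = []
for tr in soup.find("table", class_="wikitable").find_all("tr"):
    # Collect both header and data cells as plain text
    cells = [c.get_text(strip=True) for c in tr.find_all(["th", "td"])]
    rows.append(cells)

# Write the extracted rows out as CSV
buf = io.StringIO()
csv.writer(buf).writerows(rows)
```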
brew install node_exporter
Deploy the dashboard in docker-compose ...
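One way to wire the exporter into the compose file (a minimal sketch; the `prom/node-exporter` image and port 9100 are the upstream defaults, and the service name is arbitrary):

```yaml
services:
  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "9100:9100"   # default node_exporter metrics port
```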
Add gh action:

name: release
on:
  release:
    types:
      - published
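The trigger alone only fires the workflow; a jobs section to build and push an image when a release is published could look like this (the image name is a placeholder, and a real workflow would also need a registry login step):

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/build-push-action@v4
        with:
          push: true
          tags: your-org/your-image:${{ github.event.release.tag_name }}
```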
docker-compose.yaml
mkdir nebulagraph_docker
cd nebulagraph_docker
vim docker-compose.yaml
It could be like this:
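A minimal single-replica sketch (the service names, `--meta_server_addrs`/`--local_ip` flags, and the v3.3.0 tags follow the upstream NebulaGraph compose examples; a production file needs more wiring, such as volumes and healthchecks):

```yaml
services:
  metad0:
    image: vesoft/nebula-metad:v3.3.0
    command:
      - --meta_server_addrs=metad0:9559
      - --local_ip=metad0
  storaged0:
    image: vesoft/nebula-storaged:v3.3.0
    command:
      - --meta_server_addrs=metad0:9559
      - --local_ip=storaged0
  graphd:
    image: vesoft/nebula-graphd:v3.3.0
    command:
      - --meta_server_addrs=metad0:9559
    ports:
      - "9669:9669"
```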
For k8s deployment, see https://gist.github.com/wey-gu/699b9a2ef5dff5f0fb5f288d692ddfd5
If we are not leveraging multiple network interfaces, we have to use TLS with SNI routing instead.
ip address add 10.1.1.157/24 dev eth0
ip address add 10.1.1.156/24 dev eth0
ip address add 10.1.1.155/24 dev eth0
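For the TLS/SNI alternative, an L4 proxy can route on the server name in the client's TLS ClientHello without terminating TLS. A sketch using nginx's `ssl_preread` module (the hostnames are hypothetical; the backend addresses reuse the IPs assigned above):

```nginx
stream {
    # Route by the SNI hostname the client sends in its TLS ClientHello
    map $ssl_preread_server_name $backend {
        graphd0.example.com 10.1.1.155:9669;
        graphd1.example.com 10.1.1.156:9669;
        graphd2.example.com 10.1.1.157:9669;
    }
    server {
        listen 443;
        ssl_preread on;      # read the SNI without decrypting traffic
        proxy_pass $backend;
    }
}
```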