@ericxtang
Last active March 31, 2021 09:56

Hosting the Community Grants Orchestrator

The community grants orchestrator switched from a cloud-hosted CPU node to a GPU node on Cherry Servers on 12/27/2020.

As a result, transcoding performance as measured by the leaderboard improved from a 15% success rate to over 99%.

We want to share the exact steps we took for the migration.

Step 1: Provisioning a Cherry Servers GPU Machine

There are many dedicated GPU server providers out there. We chose Cherry Servers, which lets you create dedicated GPU machines in a few clicks, so we could bring up GPU servers without needing to build or host a rig ourselves. A machine with GPUs attached costs around $155/mo; we chose a node with 2x GTX 1070 GPUs. After adding the server to the cart and paying the invoice, it took a few hours for it to be provisioned. The SSH access details were visible in the user interface for up to 24 hours, and their support staff was helpful over email as well.

Step 2: Configure the Cherry Servers GPU Machine

Once we got access to the server, the first thing we did was enable key-based login and disable password-based login. You can follow this guide for more information.
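For reference, the key-based-only login setup boils down to a few sshd_config settings. This is a minimal sketch assuming a standard Ubuntu OpenSSH install; make sure your public key is in ~/.ssh/authorized_keys on the server before disabling passwords.

```
# /etc/ssh/sshd_config — allow only key-based logins
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
```

After editing, reload the daemon with sudo systemctl reload sshd, and keep your current session open until you've confirmed that key-based login works.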

After that, we set up a basic firewall. We chose ufw.

# Install ufw
sudo apt install ufw

# Allow ssh connection
sudo ufw allow OpenSSH

# Allow Livepeer orchestrator service port
sudo ufw allow 8935/tcp

# Enable the firewall
sudo ufw enable

If you run ufw status, you should see an ALLOW rule for 8935/tcp (and its v6 counterpart) in the output.

We also installed fail2ban to further harden the server. It bans offending IP addresses by scanning the logs for repeated failed login attempts.

# Install fail2ban
sudo apt install fail2ban -y

# Change the config
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Restart the service to enable new config
sudo service fail2ban restart
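For illustration, the part of jail.local that governs SSH bans looks roughly like this. The values shown are assumptions to adapt, not necessarily the Ubuntu defaults.

```
# /etc/fail2ban/jail.local — example sshd jail settings (illustrative values)
[sshd]
enabled = true
# ban after 5 failed attempts within a 10-minute window, for 10 minutes
maxretry = 5
findtime = 600
bantime = 600
```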

Step 3: Configure Local GPU

We first installed the Ubuntu common drivers package.

sudo apt install ubuntu-drivers-common

After that, we installed version 440 of the Nvidia driver.

sudo apt install nvidia-driver-440

After this, you should be able to see your Nvidia cards by running nvidia-smi.

Note that the driver imposes a limit of 3 concurrent sessions per card on the 1070s. There are open-source software patches that remove the limit, but using them is against the Nvidia EULA.

Step 4: Test Livepeer Setup

The first things to test are whether networking is set up properly and whether transcoding works. To do this, we'll bring up an orchestrator/transcoder in offchain mode.

# Download Livepeer
wget https://github.com/livepeer/go-livepeer/releases/download/v0.5.13/livepeer-linux-amd64.tar.gz && tar -xvf livepeer-linux-amd64.tar.gz

# Run Livepeer Orchestrator
./livepeer-linux-amd64/livepeer -orchestrator -transcoder -httpAddr 0.0.0.0:8935 -serviceAddr {primary-ip}:8935 -nvidia 0,1 -v 99

We then bring up a broadcaster in offchain mode from another computer:

./livepeer-darwin-amd64/livepeer -broadcaster -orchAddr {orchestrator-primary-ip}:8935 -v 99

You should be able to stream a test video into the Livepeer broadcaster node in local mode. We use the Big Buck Bunny video (https://ia600501.us.archive.org/10/items/BigBuckBunny_310/big_buck_bunny_640_512kb.mp4):

ffmpeg -re -i big_buck_bunny_640_512kb.mp4 -c:v h264 -c:a aac -keyint_min 30 -g 60 -f flv rtmp://localhost:1935

You should see the video being transcoded by the GPU on the orchestrator/transcoder.

Step 5: Run Livepeer Orchestrator On Mainnet

After testing your setup, you can run your orchestrator on the Livepeer mainnet.

We run this by using:

./livepeer-linux-amd64/livepeer -datadir {data-dir} -orchestrator -transcoder -network mainnet -pricePerUnit 10000 -serviceAddr {primary-ip}:8935 -httpAddr 0.0.0.0:8935 -ethPassword {password-file} -ethUrl {infura-provider-url} -v 99 -nvidia 0 -maxSessions 10

This can be wrapped into a systemd service, or simply kept alive by a monitoring script.
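As a sketch of the systemd approach, a unit file like the following would keep the orchestrator running across crashes and reboots. The install path, unit file name, and placeholders are assumptions to adapt; the flags mirror the command above.

```
# /etc/systemd/system/livepeer-orchestrator.service (a sketch — adapt paths and flags)
[Unit]
Description=Livepeer Orchestrator
After=network-online.target

[Service]
ExecStart=/root/livepeer-linux-amd64/livepeer -datadir {data-dir} -orchestrator -transcoder -network mainnet -pricePerUnit 10000 -serviceAddr {primary-ip}:8935 -httpAddr 0.0.0.0:8935 -ethPassword {password-file} -ethUrl {infura-provider-url} -v 99 -nvidia 0 -maxSessions 10
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now livepeer-orchestrator.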

If you are new to setting up the orchestrator, you'll need to go through the Livepeer CLI to register as an orchestrator. Make sure you have ETH and LPT in the wallet.

After the orchestrator becomes active, you can test it by running an on-chain broadcaster like this:

./livepeer-darwin-amd64/livepeer -network mainnet -broadcaster -orchAddr https://{orchestrator-primary-ip}:8935 -ethUrl {infura-provider-url} -v 99

You will need to make a deposit using the Livepeer CLI for the on-chain broadcaster to work. After that, you can send a test stream to it using the same ffmpeg command, and you should see transcoding success messages in the logs.
