stralex7, last active Aug 16, 2018
CERBERUS-GTX1070TI-A8G: ASUS Cerberus GTX 1070 Ti for Ether mining using Ubuntu 17.10

Recently I got my hands on this fairly new card at a "decent" price. These notes are mainly for myself, but comments are welcome.

Hardware specs:

  • H81 Pro BTC R2.0
  • Intel(R) Celeron(R) CPU G1840 @ 2.80GHz
  • 4 GB RAM


I switched to Ubuntu 18.04.1 without any problems. To use the latest stable version of ethminer, 0.15, CUDA 9.2 is required. Package names are different for driver version 396; to get the package name, use:

/opt/miner# ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==
modalias : pci:v000010DEd00001B81sv00001043sd00008599bc03sc00i00
vendor   : NVIDIA Corporation
model    : GP104 [GeForce GTX 1070]
driver   : nvidia-driver-396 - third-party free recommended
driver   : nvidia-driver-390 - third-party free
driver   : xserver-xorg-video-nouveau - distro free builtin
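The recommended package can be picked straight out of that listing if you want to script the install; a small sketch, assuming the sample output captured above:

```shell
# pick the "recommended" driver package from `ubuntu-drivers devices` output
# (the canned listing below is the one shown above)
list='driver   : nvidia-driver-396 - third-party free recommended
driver   : nvidia-driver-390 - third-party free'
pkg=$(printf '%s\n' "$list" | awk '/recommended/ {print $3}')
echo "$pkg"   # prints nvidia-driver-396
```

With driver 396 the install step then becomes `apt install -y "$pkg"` instead of the `nvidia-390` package used below.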

I used Ubuntu Budgie 17.10 as the host OS. You can use any flavor of Ubuntu you prefer; that will not change the instructions below.

Before plugging in the card, let's install the NVIDIA drivers. The easiest way to do so is to use the PPA repository; more details here.

I will just summarize below:

sudo -s
add-apt-repository ppa:graphics-drivers
apt update
apt install -y nvidia-390 tmux

Power off the rig and install the card. In my case xorg.conf was automatically populated with all the sections regarding nvidia; all I had to do was add the Coolbits option:

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0@0:2:0"
    Option "AccelMethod" "None"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1@0:0:0"
    Option "ConstrainCursor" "off"
    Option "Coolbits" "28"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "on"
    Option "IgnoreDisplayDevices" "CRT"
EndSection

Coolbits allows you to overclock/underclock the card and reduce its power consumption.
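Why "28"? Coolbits is a bitmask, and 28 is the sum of three feature bits; to my understanding of the NVIDIA driver README, they are manual fan control (4), clock offsets (8), and overvoltage (16):

```shell
# Coolbits "28" as a sum of feature bits (meanings per the NVIDIA driver README):
#   4  - manual fan control
#   8  - clock/memory offsets (what we need for over/underclocking)
#   16 - overvoltage
echo $((4 + 8 + 16))   # prints 28
```

If you only want the clock offsets, a smaller value such as 8 should also work.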

In case your xorg.conf is not configured automatically, you may use nvidia-xconfig. If you are planning to use a headless rig, make sure you have Option "AllowEmptyInitialConfiguration" "on" specified in the configuration.

nvidia-xconfig --enable-all-gpus --cool-bits=28 --allow-empty-initial-configuration
# if --enable-all-gpus gives an error use the command below
nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

I used the latest ethminer binary; grab it from the releases section.

The reason I didn't choose the latest stable build is that it has a nasty bug with DNS lookup failures, which is fixed in later versions.

sudo -s
mkdir -p /opt/miner
cd /opt/miner
tar xvf ethminer*

Let's create a miner service that will automatically start the miner on system boot and restart it in case it fails. I'm not going to use a watchdog here, as I haven't faced any stability issues or system hangs.

Create a file /lib/systemd/system/miner.service that contains:

[Unit]
Description=Miner Tmux Service

[Service]
ExecStart=/usr/bin/tmux new-session /opt/miner/
Restart=always

[Install]
WantedBy=multi-user.target


Use systemctl enable miner.service to register it with systemd. The start script should be executable: chmod +x ./



export GPU_FORCE_64BIT_PTR=0
export GPU_MAX_HEAP_SIZE=100

/opt/miner/bin/ethminer -RH -HWMON --api-port 8080 --farm-recheck 200 -U -S -FS -O <your_wallet_public_key>.cerberus

The new format is:

/opt/miner/bin/ethminer -RH -HWMON --api-port 8080 --farm-recheck 200 -U -P stratum+tcp:// -P stratum+tcp://

ethminer has a JSON API that is compatible with Claymore's. You can try something like echo '{"id":17,"jsonrpc":"2.0","method":"miner_getstat1"}' | timeout 1 nc localhost 8080 to read your stats, e.g. for Zabbix. You can get a description of the fields here.
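The reply is plain JSON, so it is easy to post-process. A minimal sketch, parsing a canned response shaped like the sample at the end of these notes (python3 is used only for the JSON parsing):

```shell
# parse a canned miner_getstat1 reply; in a live setup the assignment below
# would be replaced by the `nc localhost 8080` call from above
resp='{"id":17,"jsonrpc":"2.0","result":["32286;54;0","32286","56;33"]}'
total=$(printf '%s' "$resp" | python3 -c 'import json,sys; print(json.load(sys.stdin)["result"][1])')
echo "total: ${total} KH/s"
```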

You can also install an HTTP proxy to control your rig or read stats over HTTP; take a look at the ethminer-http-api project on GitHub.

Underclocking and reducing power consumption

By default the card consumes around 160 watts and produces around 26 MH/s, which is not a lot.

I have got a stable rate above 32 MH/s from this card with a 120 W power limit, and a little under 32 MH/s with 110 W. You need X11 running to be able to underclock NVIDIA cards. Run nvidia-smi to see which applications are currently using the card; chances are it is Xorg and your window manager.

ps -ef | grep Xorg
root       807   754  0 13:20 tty7     00:00:01 /usr/lib/xorg/Xorg -core :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch

Note the -auth path; we will need it when using a headless setup.
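If you want to script this, the -auth path can be pulled out of the process list automatically; a sketch against the sample Xorg command line above:

```shell
# extract the X authority file from the Xorg command line
# (the canned line is the ps output captured above)
line='/usr/lib/xorg/Xorg -core :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch'
auth=$(printf '%s\n' "$line" | sed -n 's/.*-auth \([^ ]*\).*/\1/p')
echo "$auth"   # prints /var/run/lightdm/root/:0
```

In live use you would feed `ps -ef | grep Xorg` into the same sed expression and export the result as XAUTHORITY.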

DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 /usr/bin/nvidia-smi -i 0 -pl 120
DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 /usr/bin/nvidia-settings --assign "[gpu:0]/GPUGraphicsClockOffset[3]=-200" --assign "[gpu:0]/GPUMemoryTransferRateOffset[3]=1400"

The above commands are for a single-card setup, where the card has index 0. The first line lowers the power limit from the default 180 W to 120 W; the second underclocks the GPU core and overclocks the memory, which is the crucial part for the ethash algorithm. Underclocking the core is what reduces the power consumption, to my understanding.
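The trade-off is easy to quantify with the figures measured above (roughly 26 MH/s at 160 W stock versus 32 MH/s at 120 W tuned):

```shell
# hashes-per-watt improvement of the tuned settings over stock,
# using the figures above (26 MH/s @ 160 W vs 32 MH/s @ 120 W)
echo "$(( (32 * 160 * 100) / (120 * 26) - 100 ))% more MH/s per watt"
# prints: 64% more MH/s per watt
```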

I used /etc/rc.local to have these settings applied on startup, as I didn't want them to be permanent. On Ubuntu 17.10 rc.local is not present by default.

Create rc.local and make it executable, so systemd will execute it via the rc-local service:

#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.

DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 /usr/bin/nvidia-smi -i 0 -pl 120
DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 /usr/bin/nvidia-settings --assign "[gpu:0]/GPUGraphicsClockOffset[3]=-200" --assign "[gpu:0]/GPUMemoryTransferRateOffset[3]=1400"

exit 0

That's basically it. I didn't find the need to control the GPU fans manually, but again, you can do that using nvidia-settings:

 sudo nvidia-settings -c :0 -a [gpu:0]/GPUFanControlState=1
 sudo nvidia-settings -c :0 -a [fan:0]/GPUTargetFanSpeed=60 


A sample output from a single card rig:

{
   "result" : [
      "32286;54;0",  <- KH/s for GPU0
      "32286",       <- KH/s total
      "56;33"        <- Temperature and Fan
   ],
   "jsonrpc" : "2.0",
   "id" : 17
}
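The third field packs temperature and fan speed into one string; splitting it needs no external tools, just parameter expansion (sketch using the sample value above):

```shell
# split the "temperature;fan" pair from the getstat1 reply
pair='56;33'
temp=${pair%;*}   # part before the semicolon
fan=${pair#*;}    # part after the semicolon
echo "temp=${temp}C fan=${fan}%"   # prints temp=56C fan=33%
```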

Update (Mar 21, 2018): options -S and -O are deprecated and replaced by -P.