@bsodmike
Last active March 13, 2023 05:04
OC Nvidia GTX1070s in Ubuntu 16.04LTS for Ethereum mining

The following findings are from mining on EVGA GeForce GTX 1070 SC GAMING Black Edition cards.

First run nvidia-xconfig --enable-all-gpus, then edit the xorg.conf file to set the Coolbits option correctly.

# /etc/X11/xorg.conf
Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:1:0:0"
    Option         "Coolbits" "28"
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:2:0:0"
    Option         "Coolbits" "28"
EndSection
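
Alternatively, nvidia-xconfig can write the Coolbits option for you; either way, restart the display manager afterwards so the new xorg.conf is picked up (a sketch, assuming Ubuntu 16.04's default lightdm):

# Set Coolbits on all GPUs in one step, then restart X.
sudo nvidia-xconfig --enable-all-gpus --cool-bits=28
sudo service lightdm restart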

Let's now apply a very light OC to the cards. First, enumerate the GPUs:

skylake:~# nvidia-settings -c :0 -q gpus

2 GPUs on skylake:0

    [0] skylake:0[gpu:0] (GeForce GTX 1070)

      Has the following names:
        GPU-0
        GPU-08ba492c-xxxx

    [1] skylake:0[gpu:1] (GeForce GTX 1070)

      Has the following names:
        GPU-1
        GPU-16e218e7-xxxx

# Apply a +1300 MHz memory clock offset and +100 MHz on the GPU clock.
# Found these were the most stable on my dual EVGA SC Black 1070s.
nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1300'
nvidia-settings -c :0 -a '[gpu:1]/GPUMemoryTransferRateOffset[3]=1300'
nvidia-settings -c :0 -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'
nvidia-settings -c :0 -a '[gpu:1]/GPUGraphicsClockOffset[3]=100'
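
On rigs with more cards, the same offsets can be applied in a loop; a small sketch, assuming the same :0 display and per-GPU indices as above:

# Apply identical offsets to every GPU index (here: two cards).
for i in 0 1; do
  nvidia-settings -c :0 -a "[gpu:${i}]/GPUMemoryTransferRateOffset[3]=1300"
  nvidia-settings -c :0 -a "[gpu:${i}]/GPUGraphicsClockOffset[3]=100"
done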

To check whether these have been applied, your X11 server needs to be running; on success you'll get a confirmation:

~⟫ nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1400'
Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused

  Attribute 'GPUMemoryTransferRateOffset' (skylake:0[gpu:0]) assigned value 1400.
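
You can also read an offset back directly; a query sketch using the same attribute names as above:

nvidia-settings -c :0 -q '[gpu:0]/GPUMemoryTransferRateOffset[3]'
nvidia-settings -c :0 -q '[gpu:0]/GPUGraphicsClockOffset[3]'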

Check the final config:

skylake:~# nvidia-smi
Sat Jun 17 03:31:57 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 0000:01:00.0      On |                  N/A |
| 60%   75C    P2   146W / 151W |   2553MiB /  8112MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1070    Off  | 0000:02:00.0     Off |                  N/A |
| 38%   66C    P2   149W / 151W |   2198MiB /  8114MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1267    G   /usr/lib/xorg/Xorg                             184MiB |
|    0      3457    G   compiz                                         170MiB |
|    0      4956    C   ./ethdcrminer64                               2195MiB |
|    1      4956    C   ./ethdcrminer64                               2195MiB |
+-----------------------------------------------------------------------------+

Finally, a helper script to lower the power limits on both cards:

#!/bin/bash
echo "Run as sudo to lower power-limits."
echo ""
# Cap each card's power limit at 100 W.
nvidia-smi -i 0 -pl 100
nvidia-smi -i 1 -pl 100
echo ""
echo ""
# Show the resulting state.
nvidia-smi
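
Note that power limits reset at reboot, and without a client attached the driver can unload and drop them too; enabling persistence mode (a standard nvidia-smi flag) helps them stick:

sudo nvidia-smi -pm 1   # enable persistence mode on all GPUs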
@SergeKrier

SergeKrier commented Jan 18, 2018

Thanks for sharing the config/tips.

FYI, achieving stable results with:

  • ubuntu 16.04.03
  • 4x INNO3D GEFORCE GTX 1070TI X2 8GB (2x on PCI 16x on MB, 2x on PCI 1x bridge with USB3 riser)
  • power limit at 120W
  • fan at 90%
  • GPUGraphicsClockOffset[3]=100
  • GPUMemoryTransferRateOffset[3]=1200

ethminer Speed 124.60 Mh/s gpu/0 31.23 46C 90% gpu/1 31.15 50C 90% gpu/2 31.15 55C 90% gpu/3 31.07 58C 90%

EDIT: NVIDIA-SMI Driver Version: 387.34
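
In script form, settings like these can be applied at boot; a sketch, assuming Coolbits includes the manual fan-control bit (4) and four GPUs/fans indexed 0-3:

#!/bin/bash
# 120 W power limit on all four cards, fans pinned at 90%.
for i in 0 1 2 3; do
  sudo nvidia-smi -i ${i} -pl 120
  nvidia-settings -c :0 -a "[gpu:${i}]/GPUFanControlState=1" \
                        -a "[fan:${i}]/GPUTargetFanSpeed=90"
done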

@OrkunKasapoglu

Thanks for sharing the tips.

Could you share which driver version you used?
I cannot overclock the memory or GPU core with 384.11; the hash rates stay the same as stock.

Thanks.

@TitanUranus

TitanUranus commented Jan 19, 2018

Hi! Thanks for doing this work and sharing it.
Was able to OC 5 cards using Xubuntu 16 with Nvidia 384.11 and now 390.12.

3x 1070 Ti MSI DUKE + 1x 1070 Ti Gigabyte stable at -100/+1204 but stuck in P2, so 8808 mem. PL 120 (66%) and dual mining ETH/PASC at 31 MH/s + 310 MH/s. Any lower on power or core clock and it drops hash, but any increase in power or core has no effect on hash.

1x 1070 EVGA FTW is able to hit the same mem OC, but it needs more power than the Ti's: -200/+1204 @ -pl 180 (it pulls 166 W). I set this card's -pl individually using sudo nvidia-smi -i 2 -pl 180
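
A per-card scheme like that in script form (a sketch; the indices and wattages below are illustrative, taken from this comment):

# Ti cards at 120 W, the EVGA FTW (index 2) needs more headroom.
sudo nvidia-smi -i 0 -pl 120
sudo nvidia-smi -i 1 -pl 120
sudo nvidia-smi -i 2 -pl 180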

Awaiting 2x rx 570 4gb to fill out rig and compare the competition.

@leoaltmann

leoaltmann commented Jan 22, 2018

Hi folks, really appreciate the info here. Having an issue getting the OC settings to actually stick on my pair of 1070 Ti's. I'm running Ubuntu 17.10 (server, headless), Nvidia 387.34. Cards are MSI DUKE and PNY 1070 Ti. Accessing server via SSH.

I was able to get lightdm running and use nvidia-settings with the explicit DISPLAY and XAUTHORITY settings. Running either ethminer or nheqminer, the cards never go above P2. Using nvidia-settings -q 'GPUCurrentClockFreqsString', the clocks I see match those listed in GPUPerfModes for P2.

The issue appears to be that the cards don't want to accept the OC settings. For either card, with a miner running, when I try to change the clocks explicitly with nvidia-settings it gives no output. Example command I tried:
sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=800'

Tried using nvidia-persistenced and nvidia-smi -pm ENABLED, no change.

I also tried setting the clocks manually using nvidia-smi, but was told the setting change wasn't allowed:

sudo nvidia-smi -ac 4004,1911
Setting applications clocks is not supported for GPU 00000000:01:00.0.
Treating as warning and moving on.
Setting applications clocks is not supported for GPU 00000000:04:00.0.
Treating as warning and moving on.
All done.

Any help would be appreciated. Thank you!

EDIT - Well I feel silly, a system restart cleared up the issue and OC settings are now sticking. Thanks again for the good info here!
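
For anyone following the same headless route, the applied clocks can be read back from the same kind of session; a sketch reusing the DISPLAY/XAUTHORITY values from this comment:

sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 \
  nvidia-settings -q '[gpu:0]/GPUCurrentClockFreqsString'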

@nachitox

@shaozi
About this:

If you have a monitor plugged in, the system error is:
ERROR: Error querying enabled displays on GPU 0 (Missing Extension).

I'm running Ubuntu with Intel power-saving mode, meaning the integrated GPU runs Xorg and the GTX 1070 is free. But when I run nvidia-settings -c :1 -q gpus the output is the one you mention. How can I solve it?


@stralex7

stralex7 commented Mar 9, 2018

Got inspired by your post and created my own notes for 1070ti
https://gist.github.com/stralex7/4e86d738beeb6c5d06fd1f1651644609

@jmsjr

jmsjr commented Mar 16, 2018

I am confused by the GTX 1070's max memory clock. According to my screenshot below (ASCII art really, as I can't find an easy way to attach an image here):

PowerMizer Information ------------------------------------

 Adaptive Clocking       : Enabled
 Graphics Clock          : 1594 MHz
 Memory Transfer Rate    : 7604 MHz

 Power Source            : AC

 Current PCIe Link Width : x1
 Current PCIe Link Speed : 2.5 GT/s

 Performance Level: 2

Performance Levels ----------------------------------------

       |  Graphics Clock       |  Memory Transfer Rate
Level  |  Min        Max       |  Min        Max   
0      |  139 MHz    307 MHz   |  810 MHz    810 MHz
1      |  139 MHz    1911 MHz  |  1620 MHz   1620 MHz
2 *    |  215 MHz    1987 MHz  |  7604 MHz   7604 MHz
3      |  215 MHz    1987 MHz  |  8008 MHz   8008 MHz

The above is when there are no offsets applied to either GPU or memory (i.e. stock values).
If I apply a memory offset of +450 MHz, the "Memory Transfer Rate" at the Performance level 2 goes up to 8054 MHz.

However, when I run nvidia-smi

$ nvidia-smi --format=csv --query-gpu=clocks.current.memory
clocks.current.memory [MHz]
4032 MHz
4032 MHz

... it shows my memory is only at 4032 MHz.

Thus, my questions are:

  1. Why is nvidia-smi only showing 4032 MHz? Which tool is showing the correct answer: nvidia-smi or nvidia-settings (via X)?
  2. Has anyone been able to set the performance level to '3' (highest performance)? Even if I select "Max Performance" instead of "Auto / Adaptive", it goes to level '3' but drops back to level '2' after a few seconds. When I start mining, it still stays at level '2'.
  3. Has anyone been able to set the memory clock to 8008 MHz, which appears to be the maximum allowed?
  4. Lastly, I read a lot of sites which state that P0 is the highest performance level, but the nvidia-settings output shows it the other way around. Which is it?
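
On question 1, the two tools likely report different notions of the same clock: nvidia-smi shows the memory clock, while nvidia-settings shows the effective (double data rate) transfer rate, so for GDDR5 the nvidia-settings figure is roughly twice the nvidia-smi one. A quick way to compare them side by side:

# Compare the two readings; for GDDR5 expect roughly a 2x ratio.
nvidia-smi --format=csv --query-gpu=clocks.current.memory
nvidia-settings -c :0 -q '[gpu:0]/GPUCurrentClockFreqsString'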

@RyanGosden

nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1300'
nvidia-settings -c :0 -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'

When executing the above, I do not get any errors. Where can I check to see if these have been set?

Thanks

@hadbabits

nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1300'
nvidia-settings -c :0 -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'

When executing the above, I do not get any errors. Where can I check to see if these have been set?

I was also having this issue of the command not working with my GTX 1660; the key was the '3' in brackets: it's the performance level. If you open nvidia-settings, go to the PowerMizer tab and check how many performance levels you have. For me that's 0-2, which is why using the command with 3 doesn't work. Change it to the highest level, 2 in my case, and it should work :)
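
The available performance levels can also be listed without opening the GUI; a sketch using the GPUPerfModes attribute:

nvidia-settings -c :0 -q '[gpu:0]/GPUPerfModes'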

@sursu

sursu commented Jun 17, 2020

Do the steps described here perform the same overclocking as discussed below, but on Linux?
