OC Nvidia GTX1070s in Ubuntu 16.04LTS for Ethereum mining

The following mining settings and findings are based on EVGA GeForce GTX 1070 SC GAMING Black Edition cards.

First run nvidia-xconfig --enable-all-gpus, then edit the xorg.conf file to set the Coolbits option on each card.

# /etc/X11/xorg.conf
Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:1:0:0"
    Option         "Coolbits" "28"
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:2:0:0"
    Option         "Coolbits" "28"
EndSection
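
Alternatively, nvidia-xconfig can write the Coolbits option for you while generating the config; this is a sketch based on the flags used later in this thread and may need adjusting for your layout:

# Regenerate /etc/X11/xorg.conf with Coolbits set on every GPU (overwrites the existing file)
sudo nvidia-xconfig --enable-all-gpus --cool-bits=28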

Let's now apply a very light OC to the cards,

skylake:~# nvidia-settings -c :0 -q gpus

2 GPUs on skylake:0

    [0] skylake:0[gpu:0] (GeForce GTX 1070)

      Has the following names:
        GPU-0
        GPU-08ba492c-xxxx

    [1] skylake:0[gpu:1] (GeForce GTX 1070)

      Has the following names:
        GPU-1
        GPU-16e218e7-xxxx

# Apply a +1300 MHz memory clock offset, and +100 MHz on the GPU clock
# Found these were the most stable on my Dual EVGA SC Black 1070s.
nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1300'
nvidia-settings -c :0 -a '[gpu:1]/GPUMemoryTransferRateOffset[3]=1300'
nvidia-settings -c :0 -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'
nvidia-settings -c :0 -a '[gpu:1]/GPUGraphicsClockOffset[3]=100'

To check that these have applied, your X11 server needs to be running; you'll get a confirmation like the one below (note the attribute is still assigned despite the Mir connection warnings).

~⟫ nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1400'
Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused

  Attribute 'GPUMemoryTransferRateOffset' (skylake:0[gpu:0]) assigned value 1400.
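
You can also read the values back with a query rather than re-assigning them; a sketch using the same attribute names (GPUCurrentClockFreqsString is queried again further down):

# Query the applied offset and the resulting clocks for GPU 0
nvidia-settings -c :0 -q '[gpu:0]/GPUMemoryTransferRateOffset'
nvidia-settings -c :0 -q '[gpu:0]/GPUCurrentClockFreqsString'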

Check the final config,

skylake:~# nvidia-smi
Sat Jun 17 03:31:57 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 0000:01:00.0      On |                  N/A |
| 60%   75C    P2   146W / 151W |   2553MiB /  8112MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1070    Off  | 0000:02:00.0     Off |                  N/A |
| 38%   66C    P2   149W / 151W |   2198MiB /  8114MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1267    G   /usr/lib/xorg/Xorg                             184MiB |
|    0      3457    G   compiz                                         170MiB |
|    0      4956    C   ./ethdcrminer64                               2195MiB |
|    1      4956    C   ./ethdcrminer64                               2195MiB |
+-----------------------------------------------------------------------------+

A small helper script to drop the power limit on both cards:

#!/bin/bash
echo "Run as sudo to lower power-limits."
echo ""
nvidia-smi -i 0 -pl 100
nvidia-smi -i 1 -pl 100
echo ""
echo ""
nvidia-smi
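
Assuming the script is saved as something like limit-power.sh (the filename here is illustrative), usage would be:

chmod +x limit-power.sh
sudo ./limit-power.sh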

bsodmike commented Jun 20, 2017

With the power limit dropped to 100W per card, here are my stats:

Tue Jun 20 17:31:17 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 0000:01:00.0      On |                  N/A |
| 43%   67C    P2   100W / 100W |   2458MiB /  8112MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1070    Off  | 0000:02:00.0     Off |                  N/A |
| 24%   59C    P2    97W / 100W |   2206MiB /  8114MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

ETH: 06/20/17-17:33:34 - SHARE FOUND - (GPU 1)
ETH: Share accepted (225 ms)!
ETH: 06/20/17-17:33:48 - SHARE FOUND - (GPU 1)
ETH: Share accepted (186 ms)!
GPU0 t=67C fan=43%, GPU1 t=59C fan=24%
ETH: 06/20/17-17:33:56 - New job from eth-eu1.nanopool.org:9999
ETH - Total Speed: 60.804 Mh/s, Total Shares: 62, Rejected: 0, Time: 01:41
ETH: GPU0 30.338 Mh/s, GPU1 30.466 Mh/s
ETH: 06/20/17-17:34:03 - New job from eth-eu1.nanopool.org:9999
ETH - Total Speed: 60.777 Mh/s, Total Shares: 62, Rejected: 0, Time: 01:41
ETH: GPU0 30.324 Mh/s, GPU1 30.454 Mh/s
ETH: 06/20/17-17:34:05 - New job from eth-eu1.nanopool.org:9999
ETH - Total Speed: 60.687 Mh/s, Total Shares: 62, Rejected: 0, Time: 01:41
ETH: GPU0 30.266 Mh/s, GPU1 30.421 Mh/s

Performance drops by only 1-1.5 Mh/s in exchange for a saving of roughly 100 W in total.


bsodmike commented Jun 21, 2017

Running nvidia-settings -c :0 -q 'GPUCurrentClockFreqsString'

  Attribute 'GPUCurrentClockFreqsString' (skylake:0[gpu:0]): nvclock=1592,
  nvclockmin=240, nvclockmax=2088, nvclockeditable=1, memclock=4452,
  memclockmin=4452, memclockmax=4452, memclockeditable=1, memTransferRate=8904,
  memTransferRatemin=8904, memTransferRatemax=8904, memTransferRateeditable=1

  Attribute 'GPUCurrentClockFreqsString' (skylake:0[gpu:1]): nvclock=1652,
  nvclockmin=240, nvclockmax=2088, nvclockeditable=1, memclock=4452,
  memclockmin=4452, memclockmax=4452, memclockeditable=1, memTransferRate=8904,
  memTransferRatemin=8904, memTransferRatemax=8904, memTransferRateeditable=1

Important point to note: the reported memTransferRate of 8,904 MHz already includes the +1,300 MHz offset on the 7,604 MHz P2 base rate, so both cards are running at an effective 8,904 MHz GDDR5 data rate.
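
As a cross-check, nvidia-smi reports the memory command clock rather than the transfer rate, so it should show roughly half that figure (a similar query appears further down this thread):

nvidia-smi --query-gpu=clocks.current.memory --format=csv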


bsodmike commented Jun 21, 2017

Interesting to note the amount of GPU memory used by each instance of Claymore:

Thu Jun 22 03:05:01 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 0000:01:00.0      On |                  N/A |
| 42%   67C    P2   101W / 100W |   2458MiB /  8112MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1070    Off  | 0000:02:00.0     Off |                  N/A |
| 23%   59C    P2   100W / 100W |   2206MiB /  8114MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1283    G   /usr/lib/xorg/Xorg                             161MiB |
|    0      5989    G   compiz                                          90MiB |
|    0     30123    C   ./ethdcrminer64                               2203MiB |
|    1     30123    C   ./ethdcrminer64                               2203MiB |
+-----------------------------------------------------------------------------+

bsodmike commented Jun 22, 2017

Off-topic: OC settings on my Zotac GTX 1080 Founders Edition running on Windows 10 64-bit.

GDDR5X doubles the data rate versus GDDR5 by doubling the prefetch (16n) and doubling the data per memory access (64 bytes). So, as before, quadruple the base memory clock and then double it.

Stock: 2*(1251 * 4) = 10,008 MHz
With +500 MHz OC applied: 2*(1376*4) = 11,008 MHz
Power limit: 50%
Core clock: +0 MHz

Hash rate in Claymore increased from 20-21 Mh/s to 23 Mh/s.


bsodmike commented Jun 22, 2017

Advantages of mining on ethermine.org http://imgur.com/a/vpvXD


bsodmike commented Jun 22, 2017

Benchmarking stats to get an idea of the impact of DAG size increases over time.
Credit to Lee for the approach taken here.

These stats were generated for 2x EVGA GeForce GTX 1070 SC GAMING Black Edition cards running at a +1300 MHz memory clock offset and a 65% power limit (~100 W per card).
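
One way to produce numbers like these is Claymore's benchmark mode, which takes a DAG epoch number to simulate; a sketch, assuming the same ethdcrminer64 binary used above (check your miner version's readme for the exact flag behaviour):

# Benchmark a future DAG epoch, e.g. epoch 180
./ethdcrminer64 -benchmark 180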

Now, Epoch 130 // 22/06/2017
ETH - Total Speed: 60.555 Mh/s, Total Shares: 314, Rejected: 0, Time: 06:13
ETH: GPU0 30.169 Mh/s, GPU1 30.386 Mh/s

Epoch 140, 45 days into the future
ETH - Total Speed: 60.622 Mh/s, Total Shares: 0, Rejected: 0, Time: 00:00
ETH: GPU0 30.326 Mh/s, GPU1 30.296 Mh/s
ETH - Total Speed: 59.147 Mh/s, Total Shares: 0, Rejected: 0, Time: 00:00
ETH: GPU0 28.797 Mh/s, GPU1 30.351 Mh/s

Epoch 160, 135 days into the future (~4.5 months out)
ETH - Total Speed: 59.200 Mh/s, Total Shares: 0, Rejected: 0, Time: 00:00
ETH: GPU0 30.139 Mh/s, GPU1 29.061 Mh/s
ETH - Total Speed: 60.477 Mh/s, Total Shares: 0, Rejected: 0, Time: 00:00
ETH: GPU0 30.146 Mh/s, GPU1 30.330 Mh/s
ETH - Total Speed: 60.493 Mh/s, Total Shares: 0, Rejected: 0, Time: 00:00
ETH: GPU0 30.198 Mh/s, GPU1 30.294 Mh/s

Epoch 180, 225 days into the future (~7.5 months out)
ETH - Total Speed: 60.396 Mh/s, Total Shares: 0, Rejected: 0, Time: 00:00
ETH: GPU0 30.144 Mh/s, GPU1 30.253 Mh/s
ETH - Total Speed: 60.387 Mh/s, Total Shares: 0, Rejected: 0, Time: 00:00
ETH: GPU0 30.168 Mh/s, GPU1 30.218 Mh/s

blacksausage commented Jul 3, 2017

I am getting an error when I enter:
nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1300'

"Unable to init server: Could not connect: Connection refused"

What is init server? What am I 'connecting' to?


imluisfigo commented Jul 4, 2017

@blacksausage are you connecting to a headless server? One of your VGA cards needs to be connected to a display, and a user must be logged in through lightdm - an SSH login doesn't count.

There are workarounds though. For 14.04 or lower, try this thread; that doesn't work on my 16.04, so I grabbed a dummy HDMI display plug on Amazon instead.

After applying a dummy display (software or hardware) you can then use VNC to log in; everything should work from there.

Good luck


johnstcn commented Jul 4, 2017

@blacksausage As counter-intuitive as it sounds, you need to have a display manager running in order to run nvidia-settings via CLI.
Additionally, if you are logged into the server via SSH you may need to follow f0k's approach here.


bsodmike commented Jul 14, 2017

Thanks @johnstcn, I can confirm DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -c :0 -q gpus works.

You can replace the path for XAUTHORITY with the path you get for ps aux | grep auth (as per the link you shared).
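
For reference, a sketch of the full sequence over SSH (the exact Xauthority location depends on your display manager; lightdm commonly uses /var/run/lightdm/root/:0):

# Find the -auth file the running X server was started with
ps aux | grep auth
# Point nvidia-settings at that display and authority file
DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -c :0 -q gpus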


bsodmike commented Jul 14, 2017

Here are some further findings for those running a single GTX 1070; with a pair you'd probably get more than 60 Mh/s using these settings (equivalent commands are sketched below):

Core offset: +0
Memory offset: +1400 MHz
Power limit: Set to 125W
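
A sketch of applying those three settings with the same tools used earlier (GPU index 0 assumed; adjust for your system):

nvidia-settings -c :0 -a '[gpu:0]/GPUGraphicsClockOffset[3]=0'
nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1400'
sudo nvidia-smi -i 0 -pl 125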

On a single EVGA SC Black GTX1070:

ETH: 07/14/17-15:18:30 - New job from eu1.ethermine.org:4444
ETH - Total Speed: 29.694 Mh/s, Total Shares: 1, Rejected: 0, Time: 00:01
ETH: GPU0 29.694 Mh/s
GPU0 t=62C fan=25%
GPU0 t=62C fan=27%
ETH: 07/14/17-15:19:14 - New job from eu1.ethermine.org:4444
ETH - Total Speed: 30.726 Mh/s, Total Shares: 1, Rejected: 0, Time: 00:01
ETH: GPU0 30.726 Mh/s

bsodmike commented Jul 14, 2017

For controlling fan speed,

# Check current fan speeds
nvidia-settings -c :0 -q 'GPUTargetFanSpeed'

# Force your own set fanspeed
nvidia-settings -c :0 -a 'GPUFanControlState=1' -a 'GPUTargetFanSpeed=80'

# Specify fan speed settings on a per-card basis
nvidia-settings -c :0 -a '[gpu:0]/GPUFanControlState=1' -a '[fan:0]/GPUTargetFanSpeed=80'
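
To hand control back to the driver afterwards, setting the control state back to 0 should restore automatic fan management (the same Coolbits requirement applies):

# Return fan control to automatic
nvidia-settings -c :0 -a '[gpu:0]/GPUFanControlState=0'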

sblmasta commented Jul 18, 2017

@bsodmike Hello. I have Asus ROG STRIX GTX 1070 OC 8GB.
I tried overclocking it, but the most I can get in stable mining is 30 MH/s at +100 GPU and +900 MEM with the Claymore miner.
Is it possible to get more than 30 MH/s out of this card, something like 35 MH/s? I tried setting +1200-1300 MEM but it is very unstable and the system freezes, so I have to reset.


december-soul commented Jul 21, 2017

My current settings

#!/bin/sh

sudo DISPLAY=:0 nvidia-settings --assign "[gpu:0]/GPUGraphicsClockOffset[3]=0" --assign "[gpu:0]/GPUMemoryTransferRateOffset[3]=1400"
sudo nvidia-smi -i 0 -pl 100

./ethdcrminer64 -epool eth-eu1.nanopool.org:9999 -ewal 0x0000000000000000000000000000000000000000/gtx1070/my@me.de -epsw x -mode 1 -ftime 10

my results

GPU #0: GeForce GTX 1070, 8112 MB available, 15 compute units, capability: 6.1

ETH - Total Speed: 31.265 Mh/s, Total Shares: 2, Rejected: 0, Time: 00:01
ETH: GPU0 31.265 Mh/s
Incorrect ETH shares: none
1 minute average ETH total speed: 30.720 Mh/s

blacksausage commented Jul 29, 2017

Thank you, everyone. My rig is screaming now!


joeySeal commented Aug 2, 2017

What do I need to change if I'm trying to do this on a headless machine with no desktop environment? For example:

sudo nvidia-settings -c :0 -q gpus
Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused

ERROR: Unable to find display on any available system

ERROR: Unable to find display on any available system

eliddell1 commented Sep 14, 2017

New to mining, and quite frankly it was a big headache installing the Nvidia drivers on my machine - I got the dreaded login loop and a million other issues. Finally up and running with CUDA 8.0 and the 375.66 drivers.

I have 2x GTX 1070 and 1x GTX 1080.

I'm mining bytecoin and litecoin with ccminer, but only getting around 2.1 kH/s

Before I attempt to follow your instructions on OCing, I was wondering which Ubuntu release, CUDA version, and drivers you are using. Last time I tried to turn on the Coolbits option (set to 28) I got locked out of the machine.


eliddell1 commented Sep 14, 2017

Sorry, I'm running Ubuntu 16.04LTS as well


MasoudEs48 commented Sep 23, 2017

When I run nvidia-xconfig --enable-all-gpus I get the following errors:

Using X configuration file: "/etc/X11/xorg.conf".
WARNING: Unable to find CorePointer in X configuration; attempting to add new
CorePointer section.
WARNING: The CorePointer device was not specified explicitly in the layout;
using the first mouse device.
WARNING: Unable to find CoreKeyboard in X configuration; attempting to add new
CoreKeyboard section.
WARNING: The CoreKeyboard device was not specified explicitly in the layout;
using the first keyboard device.
WARNING: Unable to parse X.Org version string.
WARNING: error opening libnvidia-cfg.so.1: libnvidia-cfg.so.1: cannot open
shared object file: No such file or directory.
ERROR: Unable to determine number of GPUs in system; cannot honor
'--enable-all-gpus' option.
ERROR: Unable to write to directory '/etc/X11'.

I use Ubuntu 16.04 and have 8 GTX 1070 FE GPUs.
Do I need to download anything? Do I need CUDA?


shaozi commented Oct 2, 2017

Very helpful. To add a little about the headless case: when running sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings, you need to:

  1. Make sure lightdm is running. (no need to log in)
  2. Make sure there is NO monitor plugged in.

If lightdm is not running, the system error is:

Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused

ERROR: The control display is undefined; please run `nvidia-settings --help` for usage information.

If you have a monitor plugged in, the system error is:

ERROR: Error querying enabled displays on GPU 0 (Missing Extension).
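
On point 1 above: if lightdm is not yet running you can usually start it over SSH without logging in at the greeter; on Ubuntu 16.04 something along these lines should do it:

# Start the display manager so nvidia-settings has an X server to talk to
sudo service lightdm start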

filmo commented Oct 4, 2017

I've got 2x EVGA GTX 1070 SC and an old 730 which I use only for the screen (not mining on it).

I'm using +1300 memory & +0 core

Seems to work for a bit then I'm getting:

miner  17:16:09.815|ethminer  Mining on PoWhash #9d6172fc… : 59425696 H/s = 32505856 hashes / 0.547 s
[OPENCL]:clEnqueueMapBuffer([OPENCL]:clEnqueueMapBuffer(-36)-36)

And then it dies. I'm assuming this indicates some instability but if anybody can confirm, please let me know. (Before overclocking, I never crashed the miner like this.)


Roaders commented Nov 6, 2017

Hi

This thread is really good as there doesn't seem to be a lot of info out there on this stuff.

However, I've not been able to get my cards above 27 MH/s.

I have a rig with 3 AMD 480 cards (all currently for sale) and 5 Nvidia 1070s.

I have a user automatically logged on so there is a desktop session, but I am accessing the box via SSH. The monitor is plugged into the on-motherboard GPU (not the AMD or Nvidia cards).

I have tried various ways of setting Coolbits - I've run
sudo nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration
and have also tried manually editing the config file. Whatever I do, when the machine reboots there is no sign of Coolbits in the Device sections:

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:8:0:0"
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1070"
    BusID          "PCI:11:0:0"
EndSection

SNIP

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "31"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "Device1"
    Monitor        "Monitor1"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "31"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

I have been able to run some commands successfully:

sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings --assign "[gpu:0]/GPUGraphicsClockOffset[3]=0" --assign "[gpu:0]/GPUMemoryTransferRateOffset[3]=1400"

but this does not seem to have any effect:

GPU #0: Ellesmere, 8165 MB available, 36 compute units
GPU #1: Ellesmere, 8165 MB available, 36 compute units
GPU #2: Ellesmere, 8165 MB available, 36 compute units
GPU #3: GeForce GTX 1070, 8114 MB available, 15 compute units, capability: 6.1
GPU #4: GeForce GTX 1070, 8114 MB available, 15 compute units, capability: 6.1
GPU #5: GeForce GTX 1070, 8114 MB available, 15 compute units, capability: 6.1
GPU #6: GeForce GTX 1070, 8114 MB available, 15 compute units, capability: 6.1
GPU #7: GeForce GTX 1070, 8114 MB available, 15 compute units, capability: 6.1

ETH - Total Speed: 196.977 Mh/s, Total Shares: 342(27+45+40+46+52+46+56+35), Rejected: 0, Time: 01:52
ETH: GPU0 21.094 Mh/s, GPU1 21.144 Mh/s, GPU2 21.140 Mh/s, GPU3 27.264 Mh/s, GPU4 26.413 Mh/s, GPU5 26.647 Mh/s, GPU6 26.637 Mh/s, GPU7 26.640 Mh/s
Incorrect ETH shares: none
1 minute average ETH total speed: 196.911 Mh/s
Pool switches: ETH - 0, DCR - 0
Current ETH share target: 0x0000000112e0be82 (diff: 4000MH), epoch 150(2.17GB)
GPU0 t=83C fan=44%, GPU1 t=82C fan=67%, GPU2 t=82C fan=22%; GPU3 t=68C fan=83%, GPU4 t=83C fan=59%, GPU5 t=70C fan=53%, GPU6 t=73C fan=64%, GPU7 t=62C fan=29%

Any help would be much appreciated, @bsodmike.


tlovie commented Dec 15, 2017

Did you ever make any progress here - I've tried many of the same things, but I've been unable to get the memory overclock to work.

$ sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings --assign "[gpu:0]/GPUGraphicsClockOffset[2]=0" --assign "[gpu:0]/GPUMemoryTransferRateOffset[2]=1300"

This seems to have no effect on hashrate. The [2] refers to the performance level of the card that you are editing (from what I can tell).

I am, however, able to get the power limit settings to work - those make the card run cooler and lower the hashrate.

$ sudo nvidia-smi -i 0 -pl 100

Here are my card's capabilities:

$ sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -c :0 -q 'GPUCurrentClockFreqsString'


  Attribute 'GPUCurrentClockFreqsString' (tlovie-TB250-BTC:0.0): nvclock=1873,
  nvclockmin=227, nvclockmax=1999, nvclockeditable=1, memclock=3802,
  memclockmin=3802, memclockmax=3802, memclockeditable=1, memTransferRate=7604,
  memTransferRatemin=7604, memTransferRatemax=7604, memTransferRateeditable=1
  Attribute 'GPUCurrentClockFreqsString' (tlovie-TB250-BTC:0[gpu:0]):
  nvclock=1873, nvclockmin=227, nvclockmax=1999, nvclockeditable=1,
  memclock=3802, memclockmin=3802, memclockmax=3802, memclockeditable=1,
  memTransferRate=7604, memTransferRatemin=7604, memTransferRatemax=7604,
  memTransferRateeditable=1


DhoTjai commented Dec 15, 2017

@tlovie
This works for me:
sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -c :0 -a GPUMemoryTransferRateOffset[3]=1300

Does anyone have the "illegal memory access was encountered" issue? I am reading on other sites that it's related to the P2 performance state. I have a 1070 card and am not able to switch to P0 (full performance). It was running fine for a few days with GPUMemoryTransferRateOffset = 1300, until yesterday...


azeefresh commented Dec 19, 2017

Is there any API to change the colour (the backlight on the card) on an MSI 1070, as there is in Windows?


SergeKrier commented Jan 18, 2018

Thanks for sharing the config/tips.

FYI, achieving stable results with:

  • ubuntu 16.04.03
  • 4x INNO3D GEFORCE GTX 1070TI X2 8GB (2x on PCI 16x on MB, 2x on PCI 1x bridge with USB3 riser)
  • power limit at 120W
  • fan at 90%
  • GPUGraphicsClockOffset[3]=100
  • GPUMemoryTransferRateOffset[3]=1200

ethminer Speed 124.60 Mh/s gpu/0 31.23 46C 90% gpu/1 31.15 50C 90% gpu/2 31.15 55C 90% gpu/3 31.07 58C 90%

EDIT: NVIDIA-SMI Driver Version: 387.34


OrkunKasapoglu commented Jan 18, 2018

Thanks for sharing the tips.

Can you tell us which driver version you are using?
I cannot overclock the memory or GPU core with 384.11; the hash rate stays the same as stock.

Thanks.


TitanUranus commented Jan 19, 2018

Hi! Thanks for doing this work and sharing it.
Was able to OC 5 cards using Xubuntu 16 with Nvidia 384.11 and now 390.12.

3x 1070 Ti MSI DUKE + 1x 1070 Ti Gigabyte are stable at -100/+1204 but stuck in P2, so 8808 mem. PL 120 (66%), dual mining ETH/PASC at 31 Mh/s + 310 Mh/s. Any lower on power or core clock and the hashrate drops, but any increase in power or core has no effect on it.

1x 1070 EVGA FTW is able to hit the same mem OC, but it needs more power than the Tis: -200/+1204 @ -pl 180 (it pulls 166 W). I set this card's power limit individually using sudo nvidia-smi -i 2 -pl 180.

Awaiting 2x RX 570 4GB to fill out the rig and compare the competition.


leoaltmann commented Jan 22, 2018

Hi folks, really appreciate the info here. Having an issue getting the OC settings to actually stick on my pair of 1070 Ti's. I'm running Ubuntu 17.10 (server, headless), Nvidia 387.34. Cards are MSI DUKE and PNY 1070 Ti. Accessing server via SSH.

I was able to get lightdm running and use nvidia-settings with the explicit DISPLAY and XAUTHORITY settings. Running either ethminer or nheqminer the cards never go above P2. Using nvidia-settings -q 'GPUCurrentClockFreqsString' the clocks I see match those listed in GPUPerfModes for P2.

The issue appears to be that the cards don't want to accept the OC settings. For either card, with a miner running, when I try to change the clocks explicitly with nvidia-settings it gives no output. Example command I tried:
sudo DISPLAY=:0 XAUTHORITY=/var/run/lightdm/root/:0 nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=800'

Tried using nvidia-persistenced and nvidia-smi -pm ENABLED, no change.

I also tried setting the clocks manually using nvidia-smi, but was told the setting change wasn't allowed:

sudo nvidia-smi -ac 4004,1911
Setting applications clocks is not supported for GPU 00000000:01:00.0.
Treating as warning and moving on.
Setting applications clocks is not supported for GPU 00000000:04:00.0.
Treating as warning and moving on.
All done.

Any help would be appreciated. Thank you!

EDIT - Well I feel silly, a system restart cleared up the issue and OC settings are now sticking. Thanks again for the good info here!


nachitox commented Jan 29, 2018

@shaozi
About this

If you have a monitor plugged in, the system error is:
ERROR: Error querying enabled displays on GPU 0 (Missing Extension).

I'm running Ubuntu with Intel power-saving mode, meaning the integrated GPU runs Xorg and the GTX 1070 is free.
But when I run nvidia-settings -c :1 -q gpus the output is the one you mention. How can I solve it?



stralex7 commented Mar 9, 2018

Got inspired by your post and created my own notes for 1070ti
https://gist.github.com/stralex7/4e86d738beeb6c5d06fd1f1651644609


jmsjr commented Mar 16, 2018

I am confused about the GTX 1070's maximum memory clock. According to my screenshot below (ASCII art really, as I can't find an easy way to attach an image here):

PowerMizer Information ------------------------------------

 Adaptive Clocking       : Enabled
 Graphics Clock          : 1594 MHz
 Memory Transfer Rate    : 7604 MHz

 Power Source            : AC

 Current PCIe Link Width : x1
 Current PCIe Link Speed : 2.5 GT/s

 Performance Level: 2

Performance Levels ----------------------------------------

       |  Graphics Clock       |  Memory Transfer Rate
Level  |  Min        Max       |  Min        Max   
0      |  139 MHz    307 MHz   |  810 MHz    810 MHz
1      |  139 MHz    1911 MHz  |  1620 MHz   1620 MHz
2 *    |  215 MHz    1987 MHz  |  7604 MHz   7604 MHz
3      |  215 MHz    1987 MHz  |  8008 MHz   8008 MHz

The above is with no offsets applied to either GPU or memory (i.e. stock values).
If I apply a memory offset of +450 MHz, the "Memory Transfer Rate" at performance level 2 goes up to 8054 MHz.

However, when I run nvidia-smi

$ nvidia-smi --format=csv --query-gpu=clocks.current.memory
clocks.current.memory [MHz]
4032 MHz
4032 MHz

... it shows my memory is only at 4032 MHz.

Thus, my questions are:

  1. Why is nvidia-smi only showing 4032 MHz? Which tool is showing the correct answer, nvidia-smi or nvidia-settings (via X)?
  2. Has anyone been able to set the performance level to '3' (highest performance)? Even if I select "Max Performance" instead of "Auto / Adaptive", it goes to level '3' but drops back to level '2' after a few seconds. When I start mining, it still stays at level '2'.
  3. Has anyone been able to set the memory clock to 8008 MHz, which appears to be the maximum allowed?
  4. Lastly, I have read on a lot of sites that P0 is the highest-performance state, but the nvidia-settings output suggests the other way around. Which is it?

RyanGosden commented Aug 31, 2018

nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1300'
nvidia-settings -c :0 -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'

When executing the above, I do not get any errors. Where can I check to see if these have been set?

Thanks


hadbabits commented Apr 14, 2019

nvidia-settings -c :0 -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=1300'
nvidia-settings -c :0 -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'

When executing the above, I do not get any errors. Where can I check to see if these have been set?

I was also having this issue of the command not working with my GTX 1660; the key is the '3' in brackets, which is the performance level. If you open nvidia-settings, go to the PowerMizer tab and check how many performance levels you have. For me that's 0-2, which is why using the command with [3] doesn't work. Change it to the highest level, [2] in my case, and it should work :)
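
If you'd rather check from the command line, the GPUPerfModes attribute mentioned earlier in this thread should list the available performance levels (a sketch):

# List the performance levels and their clock ranges for GPU 0
nvidia-settings -c :0 -q '[gpu:0]/GPUPerfModes'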
