GTX1070 Linux installation and mining clue (Goodbye AMD, Welcome NVidia for miners)

kruisdraad Member Posts: 61
edited June 2016 in Mining
Hi all,

Yeah, you read that correctly: there is a card from NVidia that outperforms the AMD cards in the same class. Yeah yeah, AMD is getting new cards, but they aren't here yet, are they (September perhaps, maybe even Q1/17, sooooo.....). Compared with an R9 390X it's less power, less heat and less stress on the PSUs (so cheaper PSUs will do), which means the AMD card is not that much cheaper up front, and after 1 yr (for me at least) the AMD is actually a lot more expensive. I am not the only one who has done the math on this.

As you might have read, I got 18 GTX1070s and posted some benchmark information earlier (http://forum.ethereum.org/discussion/comment/42663). @vaulter asked for some details (@vaulter, perhaps you can add your GTX1080 findings and settings?). In this topic I want to put up the more technical notes on how you can get this working. The short summary: yes, I managed to get 6x GTX1070 running at 218.11MH/s with heavy tuning / overclocking, but I have no idea how that would hold up long term. Currently I keep them at 192.88MH/s (x3 rigs), which seems to be the 'safe overclocking default' to me. Who knows how things progress with updates from @Genoil, and whether it runs stable and fast under a Windows driver. Safe to say that with lower power consumption AND more MH/s than a card like the R9 390X, this GTX1070 at its price is a very nice card to have (especially if you also run apps that only run well on Nvidia cards).

It took me quite a while to get this working and this document only contains 5% of my notes; it's the minimum to get you started, and you will need to do some tuning on your own to max your cards out. Some things aren't as good as I'd like yet (e.g. headless VNC access without the use of a monitor), but it works and, more importantly, it's stable. Thanks go out to @Genoil for his clue and his work on ethminer. This document is not entirely meant as a walk-through, as some knowledge of mining, Linux overclocking and common sense is still required ... So here goes.

I took the time to write & test this, so consider a donation to me or @Genoil (for his work keeping the ethminer project alive) :smile: I am not really a miner; I use the GPUs for other projects as well and mining is a means to an end. Earning back the hardware helps my project because I can get better hardware.

Me: ETH: 0xbF2d2D40caDf23f799B808D4A7Db72863f854c34
Genoil: ETH: 0xeb9310b185455f863f526dab3d245809f6854b4d

Disclaimer: stuff might not work for you; you will have to do some thinking of your own to get this working. Also, I have the Founders Edition and things might be very different on the modified PCBs from other vendors, so overclocking with nvidia-settings might need some tuning if you have a different card. Even two FE cards perform differently (silicon lottery ... google it). My guide aims at a stable machine overall, so the lowest stable card pulls down the higher-rated ones. You can tune that out per card.

Connecting hardware & BIOS

If you buy PCI-e risers, buy them from China and/or check the version/quality ... some French dude on eBay sold me badly performing ones and some were broken. Quite a few of my stability issues (and loads of time) came down to tracking down this problem.

Protip on assembly: the open air frames I got can be connected to each other ... a great way to rack them, but my 3 rigs weigh over 60KG now ... not really handy to move, so keep them separate and you only have to move 20KG 3 times.

Make sure your primary GPU in the BIOS is PCI-e and disable your onboard GPU, as it confuses X11. In addition, connect your monitor to the primary PCI-e graphics slot (the x16 one) during installation / BIOS setup (others won't work), but after installation you need to stick it in the FIRST GPU (PCI slot numbered 1) to get headless mode working correctly.

Get the basics going

Install Ubuntu 14.04 SERVER (do not try 16.04 or any desktop version unless you're fond of fixing a bunch of problems)

apt-get install -y opencl-headers build-essential protobuf-compiler libprotoc-dev libboost-all-dev libleveldb-dev hdf5-tools libhdf5-serial-dev libopencv-core-dev libopencv-highgui-dev libsnappy-dev libsnappy1 libatlas-base-dev cmake libstdc++6-4.8-dbg libgoogle-glog0 libgoogle-glog-dev libgflags-dev liblmdb-dev git python-pip gfortran python-twisted

Download Cuda8 from nvidia and install that

dpkg -i cuda-repo-ubuntu1404-8-0-rc_8.0.27-1_amd64.deb
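
If apt-get complains that it cannot find the cuda package after adding the repo, refresh the package lists first and retry:

apt-get update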

apt-get install -y cuda

REBOOT FIRST!

Next, download the latest driver *367.27* and install that too

bash NVIDIA-Linux-x86_64-367.27.run (say yes on everything, ignore the loading error)

REBOOT AGAIN

Now you should have the driver running; check that with nvidia-smi. If you see all your cards AND the driver version you installed, you're good.
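
If you want to check the same thing from a script instead of eyeballing the table, a standard nvidia-smi query does it:

nvidia-smi --query-gpu=index,name,driver_version --format=csv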


Fix the hardcoded Cuda8 361 driver

We're going for the CUDA miner from Genoil (1.1.5); however, somewhere in the CUDA framework there is a hardcoded detection of loaded drivers which isn't working (for us). There is an easy fix for this, and you want to do it because OpenCL mining with the GTX1070 is not very stable (I never got it running longer than 4 hours without the kernel dying on me). It's the brute-force solution, but it works for me:

cd /lib/modules/4.2.0-38-generic/updates/dkms/
mkdir old
mv nvidia_361* old/
ln -s nvidia.ko nvidia_361.ko
ln -s nvidia-modeset.ko nvidia_361_modeset.ko
ln -s nvidia-modeset.ko nvidia_361-modeset.ko
ln -s nvidia-drm.ko nvidia_361_drm.ko
ln -s nvidia-uvm.ko nvidia_361_uvm.ko
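
Note that the kernel version in that path is from my box; if yours differs, adjust the path and refresh the module dependency list afterwards, along these lines:

cd /lib/modules/$(uname -r)/updates/dkms/
depmod -a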

Building the miner

Now you can run with 1.0.8 from Genoil, but I found 1.1.4 (and now 1.1.5) to be a lot better, especially for the initial DAG loading (3-4 min vs seconds), so let's install that:

git clone https://github.com/Genoil/cpp-ethereum/
cd cpp-ethereum/
git checkout 110
mkdir build
cd build
cmake -DBUNDLE=cudaminer -DCOMPUTE=61 .. (you will get a warning that libOpenCL might be unsafe/hidden, but i just ignored that and it works fine for me)
make -j32
make install

The libs are now installed in /usr/local/lib. I am lazy (there is probably a cmake argument to fix that), but they are expected in /usr/lib, so I just move them:

cd /usr/local/lib/
mv lib* /usr/lib/
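
If you'd rather not move them, the usual alternative is to tell the dynamic loader about /usr/local/lib instead (pick one approach, not both):

echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf
ldconfig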

Now check your version:

ethminer --version

If it says 0.9.41-genoil-1.x.x TWICE (first and last lines) you're good; otherwise you will only see it ONCE on the first line, and perhaps an error.

Then go and list your devices:

[email protected]:/usr/local/lib# ethminer -U --list-devices
ethminer: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by ethminer)
ethminer: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by /usr/lib/libethcore.so)
ethminer: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by /usr/lib/libethash-cl.so)
ethminer: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by /usr/lib/libethash-cl.so)
Genoil's ethminer 0.9.41-genoil-1.1.5
=====================================================================
Forked from github.com/ethereum/cpp-ethereum
CUDA kernel ported from Tim Hughes' OpenCL kernel
With contributions from nicehash, nerdralph, RoBiK and sp_

Please consider a donation to:
ETH: 0xeb9310b185455f863f526dab3d245809f6854b4d

[CUDA]:
Listing CUDA devices.
FORMAT: [deviceID] deviceName
[0] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507162624
[1] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
[2] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
[3] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
[4] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
[5] GeForce GTX 1070
    Compute version: 6.1
    cudaDeviceProp::totalGlobalMem: 8507555840
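
From here you can start mining. The exact command depends on your pool; the host, port and address below are placeholders only, so check your pool's docs for the real format (-M is handy for a quick benchmark first):

ethminer -U -M (benchmark only)
ethminer -U -S yourpool.example.com:4444 -O 0xYourEthAddress.rigname (stratum mining)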

Overclocking to the max

Unless anyone can point me at a working tool to change clock speeds without a GUI, I (and thus you) am stuck with nvidia-settings. The tool is great, but it's graphical and you need a GUI. Running it in a virtual X with VNC will not work, as the NVidia driver is not loaded there, so you will need to connect a monitor to your first GPU. In addition, we need to make X believe each GPU has a monitor connected, or else you can't control them. Funnily enough, from within an xterm you can pass CLI commands to nvidia-settings, so you can script stuff, as long as you run it from within X.

apt-get install vnc4server gnome gnome-session gnome-session-flashback

Next, download edid.bin and place it in /etc/X11/ (google it and you'll find one ... I did).

Now create an xorg.conf with the following contents:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 367.27 ([email protected]) Thu Jun 9 19:24:36 PDT 2016

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0" 0 0
    Screen 1 "Screen1" 0 0
    Screen 2 "Screen2" 0 0
    Screen 3 "Screen3" 0 0
    Screen 4 "Screen4" 0 0
    Screen 5 "Screen5" 0 0
    InputDevice "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/psaux"
    Option "Emulate3Buttons" "no"
    Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Unknown"
    ModelName "Unknown"
    HorizSync 28.0 - 33.0
    VertRefresh 43.0 - 72.0
    Option "DPMS"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BusID "PCI:1:0:0"
    Option "Coolbits" "31"
EndSection

Section "Device"
    Identifier "Device1"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    Option "Coolbits" "31"
    BusID "PCI:2:0:0"
    Option "ConnectedMonitor" "DFP-0"
    Option "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Device"
    Identifier "Device2"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    Option "Coolbits" "31"
    BusID "PCI:3:0:0"
    Option "ConnectedMonitor" "DFP-0"
    Option "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Device"
    Identifier "Device3"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    Option "Coolbits" "31"
    BusID "PCI:4:0:0"
    Option "ConnectedMonitor" "DFP-0"
    Option "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Device"
    Identifier "Device4"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    Option "Coolbits" "31"
    BusID "PCI:5:0:0"
    Option "ConnectedMonitor" "DFP-0"
    Option "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Device"
    Identifier "Device5"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    Option "Coolbits" "31"
    BusID "PCI:6:0:0"
    Option "ConnectedMonitor" "DFP-0"
    Option "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    Option "Coolbits" "31"
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

Section "Screen"
    Identifier "Screen1"
    Device "Device1"
    Option "Coolbits" "31"
    Option "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier "Screen2"
    Device "Device2"
    Option "Coolbits" "31"
    Option "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier "Screen3"
    Device "Device3"
    Option "Coolbits" "31"
    Option "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier "Screen4"
    Device "Device4"
    Option "Coolbits" "31"
    Option "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier "Screen5"
    Device "Device5"
    Option "Coolbits" "31"
    Option "UseDisplayDevice" "none"
EndSection

The EDID options fake a connected monitor on every GPU except the one your real monitor is attached to. Furthermore, Coolbits 31 allows overclocking, fan control, etc. of all your GPUs.
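
I hand-edited my xorg.conf, but you can probably generate most of it with nvidia-xconfig and then just verify the BusIDs and EDID lines against the config above; roughly:

nvidia-xconfig -a --cool-bits=31 --allow-empty-initial-configuration --connected-monitor="DFP-0" --custom-edid="DFP-0:/etc/X11/edid.bin"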

After doing so, reboot. You can enable auto-login and autostart VNC to save you some time (more on that below).

Run 'vnc4server'; it will ask for a password. Set it, and then kill the server again with vnc4server -kill :1

Now you can run:

x0vncserver -display :0 -passwordfile /home/miner/.vnc/passwd

Now you will have VNC access and you can disconnect your monitor. However, if you reboot, you will need to connect a monitor until X11 has started / logged you in and VNC has started; otherwise you will not have GUI access and the ability to change the settings of your GPUs.
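
For the autostart part mentioned above, a simple desktop autostart entry should do it, assuming your GNOME flashback session reads ~/.config/autostart (the passwd path is whatever you set earlier):

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/x0vnc.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=x0vncserver
Exec=x0vncserver -display :0 -passwordfile /home/miner/.vnc/passwd
EOF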


Overclocking

So to summarize:

No overclocking gets you about 22MH/s
Overclocking with SMI (which does not need all the X11 crap) will get you up to 26MH/s
Overclocking with nvidia-settings gets you over 32MH/s easily, while still leaving 30 watts of headroom for tuning/optimizing
Overclocking with nvidia-settings gets you over 37MH/s running Cuda8

Those are benchmarks with a SINGLE card (less work on the PCI bus). Running 6 cards with limited overclocking, I am currently getting 192.88MH/s with CUDA. I did manage to run at 203.39MH/s for a while with OpenCL, but after 6/7 hours it would stop: everything drops to 0MH/s, yet the miner keeps running and needs a restart. I don't know if this is a driver or an ethminer (tuning) issue, but if you fancy an ugly daemon wrapper guarding the ethminer for this, you can get an extra 10MH/s from your rig. With CUDA I was able to run over 220MH/s for 12 hours before I managed to break the driver, because I started nvidia-smi while the cards were maxed out (and it doesn't seem to like that). If you tune it fully and shut down graphics afterwards, you might get 218.11MH/s (to be exact) too; I am still tuning it for stability, but I am nearly there.
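
Such an 'ugly daemon wrapper' is nothing fancy: just a loop that restarts ethminer when it exits, plus a crude timeout to catch the 0MH/s hang. A minimal sketch (pool/address are placeholders, and 6h is an arbitrary restart interval):

#!/bin/bash
while true
do
    timeout 6h ethminer -U -S yourpool.example.com:4444 -O 0xYourEthAddress.rigname
    sleep 30
done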

But how do you do this?

The nvidia-smi tool is well documented: just find your max clock and set it. If you're lazy you can copy/paste this:

nvidia-smi -ac 4004,1911

and you're set.
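
To see which application clocks your card actually supports before setting them, query them first:

nvidia-smi -q -d SUPPORTED_CLOCKS | more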

If you're going for the nvidia-settings clocks, I've created & use this script:

CLOCK=200
MEM=1500
CMD='/usr/bin/nvidia-settings'

echo "performance" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo "performance" > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo 2800000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
echo 2800000 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq

for i in {0..5}
do
    nvidia-smi -i ${i} -pm 0
    nvidia-smi -i ${i} -pl 170
    # nvidia-smi -i ${i} -ac 4004,1911
    ${CMD} -a [gpu:${i}]/GPUPowerMizerMode=1
    ${CMD} -a [gpu:${i}]/GPUFanControlState=1
    ${CMD} -a [fan:${i}]/GPUTargetFanSpeed=80
    for x in {3..3}
    do
        ${CMD} -a [gpu:${i}]/GPUGraphicsClockOffset[${x}]=${CLOCK}
        ${CMD} -a [gpu:${i}]/GPUMemoryTransferRateOffset[${x}]=${MEM}
    done
done

If you can read, you can figure this out. I am setting the fans to 80% here as that keeps cooling stable and is nice to the ears. If you don't mind the noise, I'd suggest setting them to max and you will be able to overclock more.
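
One gotcha: nvidia-settings only talks to a running X server, so if you start this script over SSH instead of from an xterm inside the session, point it at the local display first (assuming the default display number; you may also need the right XAUTHORITY):

export DISPLAY=:0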

As you can see, here I am doing +200 on the clock and +1500 on the memory. Some notes you might find handy:

single card: +285 & +1800 is the max; a single card pushes you to 37.19MH/s with CUDA or 32.66MH/s with OpenCL
multi card: +275 & +1775 is the max, but not stable (read: 6-hourly crashes), and pushes an extra 4MH/s per card
multi card: +200 & +1600 is semi-stable; needs more testing, but this might be better and pushes another 3MH/s per card

Again, these settings might not work for you, but they get you in the ballpark.
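
If you want to tune per card like I mentioned in the disclaimer (so the weakest card no longer drags the rest down), swap the two fixed values in the script above for per-GPU arrays, roughly like this (the numbers are made up, fill in what each of your cards can handle):

CLOCKS=(200 275 275 200 250 275)
MEMS=(1500 1775 1775 1600 1700 1775)
for i in {0..5}
do
    ${CMD} -a [gpu:${i}]/GPUGraphicsClockOffset[3]=${CLOCKS[$i]}
    ${CMD} -a [gpu:${i}]/GPUMemoryTransferRateOffset[3]=${MEMS[$i]}
done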

Comments

  • shutfu Member Posts: 320 ✭✭
    a 40% gain from overclocking and tuning is insane, great work. I would like to point out that with AMD cards it is basically plug and play though.

    Upfront cost of GTX 1070 is what makes it not worth it for me, as I can get 2x r9 290 for the same price. And they get 28mh/s+
  • kruisdraad Member Posts: 61
    True, but a lot more power & heat ... it depends where you live and such ;) still, the card has great potential
  • Maggx Member Posts: 5
    Buying a GTX 1070 just for mining is wasting money, but if you have two of them in SLI for gaming purposes and mine during the rest of the day, that's a good idea.
  • kps Member Posts: 23
    edited June 2016
    Hi, I've tried following your guide to get CUDA accelerated mining on GTX 1080s; however, I can't compile cudaminer in any way. I keep getting the error below, and I'm completely out of ideas.

    "[ 58%] Building NVCC (Device) object libethash-cuda/CMakeFiles/ethash-cuda.dir/ethash-cuda_generated_ethash_cuda_miner_kernel.cu.o
    nvcc fatal : redefinition of argument 'std'
    CMake Error at ethash-cuda_generated_ethash_cuda_miner_kernel.cu.o.cmake:207 (message):
    Error generating
    /src/cpp-ethereum/build/libethash-cuda/CMakeFiles/ethash-cuda.dir//./ethash-cuda_generated_ethash_cuda_miner_kernel.cu.o

    libethash-cuda/CMakeFiles/ethash-cuda.dir/build.make:63: recipe for target 'libethash-cuda/CMakeFiles/ethash-cuda.dir/ethash-cuda_generated_ethash_cuda_miner_kernel.cu.o' failed
    make[2]: *** [libethash-cuda/CMakeFiles/ethash-cuda.dir/ethash-cuda_generated_ethash_cuda_miner_kernel.cu.o] Error 1
    CMakeFiles/Makefile2:232: recipe for target 'libethash-cuda/CMakeFiles/ethash-cuda.dir/all' failed
    make[1]: *** [libethash-cuda/CMakeFiles/ethash-cuda.dir/all] Error 2
    Makefile:127: recipe for target 'all' failed
    make: *** [all] Error 2
    [email protected]:/src/cpp-ethereum/build#"

    Could you share your compiled CUDA build, upload it somewhere? Did you ever come across a similar-looking error? Thanks.

    Edit: I have managed to compile the ethminer with -DBUNDLE=cudaminer; however, when I try to use -U for CUDA mining I get another error

    Genoil's ethminer 0.9.41-genoil-1.1.6
    =====================================================================
    Forked from github.com/ethereum/cpp-ethereum
    CUDA kernel ported from Tim Hughes' OpenCL kernel
    With contributions from nicehash, nerdralph, RoBiK and sp_

    Please consider a donation to:
    ETH: 0xeb9310b185455f863f526dab3d245809f6854b4d

    modprobe: ERROR: could not insert 'nvidia_361_uvm': Unknown symbol in module, or unknown parameter (see dmesg)
    CUDA error in func 'getNumDevices' at line 112 : unknown error.
    terminate called after throwing an instance of 'std::runtime_error'
    what(): unknown error
    Aborted (core dumped)
  • kruisdraad Member Posts: 61
    This is a known issue if you use Ubuntu 16.04; look at the issue tracker, the solution is listed there.

    Also follow my instructions about the symlinks; that 2nd issue was explained there and a solution provided.

    I have a full Linux image I can give you for a small fee ;)
  • Wolf0 Member Posts: 329 ✭✭✭
    Dude, that's a $600 card - I do see how power consumption is a big deal, but you say you're currently running them at 32MH/s or so per card - I can get that with one 290/390. While they won't go to 37MH/s, I don't see a 15.63% increase justifying a 50%+ price bump, less power use or no.
  • kruisdraad Member Posts: 61
    Perhaps you pay $600; I didn't pay more than $450 (in euros). Perhaps you need to look better? I've seen custom PCBs at $399, so ...

    Also, this is a tech post, not really the place to drop a half-baked rant ;)
  • kps Member Posts: 23
    I already had the symlinks done; for some reason booting into recovery mode and reinstalling the driver fixed the problem. However, I'm not really getting any extra performance with CUDA over OpenCL, but this could be related to the 1080 and driver problems. The 1070 right now seems to be getting higher performance than the 1080 in mining, due to the 1080's GDDR5X memory, or so I've read; I can't be sure about that one. Do you use any special arguments for ethminer aside from -U to enable CUDA mode?
  • Wolf0 Member Posts: 329 ✭✭✭

    Perhaps you pay $600; I didn't pay more than $450 (in euros). Perhaps you need to look better? I've seen custom PCBs at $399, so ...

    Also, this is a tech post, not really the place to drop a half-baked rant ;)

    I checked NewEgg for prices - here in the US, it is indeed $600.
  • Jukebox Member Posts: 640 ✭✭✭
    edited June 2016
    We will see 28-30 MH on ETH for $200 soon.
    :p
  • kruisdraad Member Posts: 61
    @kps yeah, I've seen posts about the GTX1080 and the GDDR5X RAM that does not do well with Ethereum; the GTX1070 has normal GDDR5.

    @Wolf0 ouch, big bump then. Like I said, I managed to get them cheap and did the math. Over 1 yr the price is about the same ... 2 yrs and up the GTX is cheaper due to half the power. And I am not just looking at the cards but also PSUs etc., the whole picture.
  • kruisdraad Member Posts: 61
    Jukebox said:

    We will see 28-30 MH on ETH for $200 soon.
    :p

    Context? You mean those new AMD cards? If so ... there are already XFX benchmarks out showing they're not that good at all.
  • Jukebox Member Posts: 640 ✭✭✭
    kruisdraad said:

    Context? You mean those new AMD cards? If so ... there are already XFX benchmarks out showing they're not that good at all.

    Benchmarks on ETH?

    There are some leaks that reference the RX 480 OC'ed and tested at a 1600 MHz GPU clock @ stock voltage.
    It should be no slower than the 390X cards.

  • Amph Member Posts: 106
    shutfu said:

    a 40% gain from overclocking and tuning is insane, great work. I would like to point out that with AMD cards it is basically plug and play though.

    Upfront cost of GTX 1070 is what makes it not worth it for me, as I can get 2x r9 290 for the same price. And they get 28mh/s+

    Sure, 56MH for something like 450W, against 37MH (or 32) for something that consumes at worst 140W.

    To be honest, I prefer the second option.
  • Amph Member Posts: 106
    edited June 2016
    Wolf0 said:

    Perhaps you pay $600; I didn't pay more than $450 (in euros). Perhaps you need to look better? I've seen custom PCBs at $399, so ...

    Also, this is a tech post, not really the place to drop a half-baked rant ;)

    I checked NewEgg for prices - here in the US, it is indeed $600.
    no it is not

    http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007709 601204369&Tpk=gtx 1070&ignorear=1
  • Iliketurtles Member Posts: 18
    Those didn't last long.
  • Amph Member Posts: 106
    edited June 2016
    I'm very interested in this card, just waiting for the Gigabyte version, so I want to know more about those 37MH per card. Can you reach that with a six-GPU rig, and is it stable?
  • kruisdraad Member Posts: 61
    It's stable enough; I had 2 crashes in the last 7 days, but I am still fiddling with settings. If I set it to like 35MH/s it's stable so far.
  • Amph Member Posts: 106
    edited June 2016
    Good. Now what about the overall consumption, can you test it with a wattmeter? It should be around 140W per card, but I want to be sure.

    Those didn't last long.

    You can still buy the Founders Edition for 450 in the worst case.
  • thesmokingman Member Posts: 152 ✭✭
    Just purchased a Gigabyte Gaming 1070 card off the net to give this a go. Different PCB means different settings, so I'll post up my findings along with some pics once I have the card in hand and get the Linux environment setup. Looking to diversify my farm, and if these cards can truly hit 35+mh stable, they're worth the purchase to me...
  • Longsnowsm Member Posts: 18

    Just purchased a Gigabyte Gaming 1070 card off the net to give this a go. Different PCB means different settings, so I'll post up my findings along with some pics once I have the card in hand and get the Linux environment setup. Looking to diversify my farm, and if these cards can truly hit 35+mh stable, they're worth the purchase to me...

    Please keep us posted. I would love to see more data from other users getting 35-37mh on this card.
  • thesmokingman Member Posts: 152 ✭✭
    Well I've managed to get my card setup and running/hashing with a hashrate of 23~26MH at default clocks - core: 1911 mem: 7047

    I bypassed the VNC setup part as I'm just working with one GPU for now and the monitor is connected, so I didn't feel it was needed.

    My current issue is I can't seem to get the card to overclock at all using "nvidia-smi". I ran "nvidia-smi -q -d SUPPORTED_CLOCKS | more" and found that the top end of my card would be Mem 4004 Mhz and Core 1911~1999. So I issued the command "sudo nvidia-smi -ac 4004,1999" and received the "All done" message. However, once I go back over to my Genoil ethminer tab it still shows the same hashrate of 23~26Mhs.

    I launched the nvidia-settings GUI, but the option to manually change the core and memory clocks isn't present. I can only choose 1 of 3 modes: "Auto/Adaptive/Prefer Maximum Performance". I'm guessing there's a setting or tweak that I need to perform in order to get it working; I just need to do some more research into why the checkbox that allows me to manually set the clocks isn't there. Adaptive Clocking is enabled, so I'm wondering if my manual overclocks will work if I turn this off.

    Just throwing this information out there for anyone else who may be working on getting their Nvidia cards working and running into issues.
  • kruisdraad Member Posts: 61
    Dude, follow the docs and don't skip stuff; it's all explained.

    The problem is you skipped the docs and you're using SMI.
  • thesmokingman Member Posts: 152 ✭✭
    edited July 2016
    @kruisdraad - so even though I have a monitor attached and not trying to run it headless, I still need to setup the VNC/EDID part? That's the only part that I "skipped"

    I'm not sure how to use nvidia-settings, so will use that once I figure out what I'm doing. Trying to find documents now on how to use nvidia-settings so I can use the script that you posted..

    EDIT: "Further coolbits 31 allows you to overclock, fan control, etc of all your GPU's" - DOH! :s:D
  • kruisdraad Member Posts: 61
    Yes! Or read the manual if you're gonna go off-script ;)

    For 1 ETH I can tell you exactly what you need in that specific case, but if you follow the docs you can work with this ... others got it working ...
  • thesmokingman Member Posts: 152 ✭✭
    Finally have this all setup. Using the Nvidia-Settings GUI I'm able to achieve 31.46Mhs using Genoil/Cuda for a single Gigabyte Gaming G1 GTX 1070 card using +100/+1700.

    The system crashes immediately with a core clock offset higher than 170MHz, and doesn't run stable unless run below 150MHz. Memory, on the other hand, was run up to 1900MHz (at 150 core), but brought back down to 1700MHz as it had no noticeable effect on hash rates. 150/1800 did display 34Mhs from time to time, but 31~32Mhs was the most consistent, so final clock rates were brought down to +100/+1700 for a consistent 31.46Mhs.

    Looks like the FE cards may clock better than some of the custom cards. I'm thinking about picking up a FE card and testing it to compare.



  • algol68 Member Posts: 4
    I think you need over 85% ASIC quality for this overclock.
  • bbcoin Member Posts: 377 ✭✭✭
    algol68 said:

    I think you need over 85% ASIC quality for this overclock.

    High ASIC quality = less leak = less overclock
    Low ASIC quality = more leak = more overclock
  • Smokyish Member Posts: 203 ✭✭

    Finally have this all setup. Using the Nvidia-Settings GUI I'm able to achieve 31.46Mhs using Genoil/Cuda for a single Gigabyte Gaming G1 GTX 1070 card using +100/+1700.

    The system crashes immediately with a core clock offset higher than 170MHz, and doesn't run stable unless run below 150MHz. Memory, on the other hand, was run up to 1900MHz (at 150 core), but brought back down to 1700MHz as it had no noticeable effect on hash rates. 150/1800 did display 34Mhs from time to time, but 31~32Mhs was the most consistent, so final clock rates were brought down to +100/+1700 for a consistent 31.46Mhs.

    Looks like the FE cards may clock better than some of the custom cards. I'm thinking about picking up a FE card and testing it to compare.

    Also, one thing to consider is the base clocks of the cards.
    I've updated my cards' BIOS to always run in OC mode, so the cards have +114MHz core and +100MHz mem clocks compared to FE. So my +100/1600 is +214/1700 on an FE :)

    Otherwise your numbers sound very similar to mine.