@restless how do you see the cores are reversed? It's probably the result of the addition of the --opencl-devices flag. Why do you consider it a bug? Is it somehow illogical?
I don't know much about EXP. Do they use the same DAG files but only different epoch? I would recommend to store EXP DAGs in a different directory using the -R flag. Then you can use -E old on both ETH and EXP.
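For example, something along these lines keeps the two coins' DAGs apart (paths and pool URLs are placeholders, and this assumes -R and -E behave as described above; check -h on your build):
ethminer -G -R /path/to/dags/eth -E old -F http://eth-pool.example:8080/user/1
ethminer -G -R /path/to/dags/exp -E old -F http://exp-pool.example:8080/user/1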
Currently using this on ethermine, and it seems my results are lower when I don't use the proxy with it. Do we still need to run the proxy, or does cudaminer have that functionality included?
@rether stratum is natively supported since 1.0.5. I haven't been able to carry out real long term tests with it, so I can't guarantee 100% stability. I'd say try it out and keep a close eye.
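If you want to try it, a stratum invocation looks roughly like this (host, port and wallet are placeholders, and the -O login flag is an assumption on my part; check -h on your build, since it may differ between versions):
ethminer -G -S eth-eu.examplepool.com:3333 -O 0xYourWallet.rig1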
Long story, but on the 7990 there are 2 different ethminer instances running (using opencl-device 0 or 1, and -R).
Well, with all ethminer versions up to 1.0.4 there is no problem.
1.0.6b2 and 1.0.6 always run on the first "card" no matter whether I give opencl-device 0 or 1.
So... that IS a bug for me.
-G --list-devices correctly lists 2 Tahiti devices, but no matter how I start the thread it's always on the same card/core.
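For reference, the two-instance setup described above would be started something like this (pool URL and DAG paths are placeholders, flags as named in this thread):
ethminer -G --opencl-device 0 -R /path/to/dag0 -F http://pool.example:8080/user/1
ethminer -G --opencl-device 1 -R /path/to/dag1 -F http://pool.example:8080/user/2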
I have just tried the following build instructions on a vanilla Ubuntu 14.04. system. If I find some time I will include instructions for Ubuntu 15.04. and 15.10. as well.
Hey everyone, I'm new here and to mining in general...
I'm a Windows 10 + GTX 970 owner just like some of you guys, and as we know, the only workaround today for a reasonable hash rate is to use the 347.52 driver.
I had some trouble with overclocking my card using 347.52, so I found a way that gives me the option to use nvidia-smi to switch my GPU to the P0 state (couldn't do it with the 347 driver).
For this I'm using a newer driver, 350.12 (which was also mentioned here in the forum), and apparently 350.12 does support nvidia-smi!
Here's a short guide, which is a collection of the things I read in this forum topic and across the web:
1. Uninstall all your other NVIDIA display drivers, so Windows won't choose those instead.
2. Download v350.12: Geforce 350.12 Driver
3. Install the driver the usual way.
4. Make sure to disable Windows 10's annoying auto driver update functionality, or else it will eventually update this driver to a newer version. I followed this guide and used the Registry workaround: Take Back Control Over Driver Updates in Windows 10
5. You can now use nvidia-smi to switch to P-State P0 to gain maximum overclock capability. I followed this guide: How to Squeeze Some Extra Performance Mining Ethereum on Nvidia
6. Use your overclocking tool to increase your GPU memory clock (which wasn't really possible in the P2 state). I use NvidiaInspector so I can change my clock using a batch file.
7. Use Genoil's ethminer compiled with CUDA 6.5. I use version 1.0.5 which I found here in the forum, didn't have the patience to compile one myself: ethminer-0.9.41-genoil-1.0.5-cuda6.5.zip, credits to Mo35 for compiling.
8. Start mining as you would normally do!
Note that steps 5-8 will be required after every system restart.
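For convenience, steps 5-7 can go into a little startup batch file. This is only a sketch: the nvidia-smi application-clock values and the NVIDIA Inspector offset syntax are assumptions you would have to adapt to your own card and driver (not every GeForce driver accepts application clocks at all):
:: startup.bat - hypothetical example, adjust clocks, paths and pool URL for your setup
:: allow application clock changes, then request P0-level clocks (values are assumptions)
nvidia-smi -acp UNRESTRICTED
nvidia-smi -ac 3505,1455
:: apply a memory clock offset on GPU 0 (NVIDIA Inspector syntax is an assumption)
nvidiaInspector.exe -setMemoryClockOffset:0,0,400
:: start mining (pool URL is a placeholder)
ethminer -U -F http://pool.example:8080/user/1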
I'm running my GTX 970 at a consistent 19-20 MH/s in Windows 10 (vs. 16.5 without the nvidia-smi trick). The card is at 82% power and temperatures are not bad: 65C with the fan at 33% (EVGA 970 SSC). I can push it much further but wouldn't want to shorten its life too much.
Hope this helps!
Thanks for the detailed instructions. I was able to get it compiled on Ubuntu 15.10. I just had to use
sudo apt-get install libjsonrpccpp-dev
instead of libjson-rpc-cpp-dev
I think it has just been renamed in 15.10. Everything else is exactly the same. Thanks for the effort.
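In general, when a package seems to be missing on a newer release, apt can usually tell you what it is called now, e.g.:
apt-cache search jsonrpc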
I spoke too soon. I was only able to compile the OpenCL version. When I tried cmake -DBUNDLE={cuda}miner ..
It blows up here:
-- Found CUDA: /usr/local/cuda (found version "7.5")
- CUDA header: /usr/local/cuda/include
- CUDA lib: /usr/local/cuda/lib64/libcudart.so
- jsonrpcstub location: ETH_JSON_RPC_STUB-NOTFOUND
CMake Error at cmake/EthDependencies.cmake:144 (find_package):
By not providing "FindQt5Quick.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "Qt5Quick", but CMake did not find one.
Could not find a package configuration file provided by "Qt5Quick" (requested version 5.4) with any of the following names:
Qt5QuickConfig.cmake
qt5quick-config.cmake
Add the installation prefix of "Qt5Quick" to CMAKE_PREFIX_PATH or set "Qt5Quick_DIR" to a directory containing one of the above files. If "Qt5Quick" provides a separate development package or SDK, be sure it has been installed.
Call Stack (most recent call first):
CMakeLists.txt:246 (include)
-- Configuring incomplete, errors occurred! See also "/home/mine1/genoil/cpp-ethereum/build/CMakeFiles/CMakeOutput.log".
sudo apt-get install qtdeclarative5-dev
sudo apt-get install libqt5webengine5-dev
But now CUDA is upset with my GNU version:
Linking CXX shared library libethash-cl.so
[ 65%] Built target ethash-cl
[ 68%] Building NVCC (Device) object libethash-cuda/CMakeFiles/ethash-cuda.dir/ethash-cuda_generated_ethash_cuda_miner_kernel.cu.o
In file included from /usr/local/cuda/include/cuda_runtime.h:76:0:
/usr/local/cuda/include/host_config.h:115:2: error: #error -- unsupported GNU version! gcc versions later than 4.9 are not supported!
#error -- unsupported GNU version! gcc versions later than 4.9 are not supported!
^
CMake Error at ethash-cuda_generated_ethash_cuda_miner_kernel.cu.o.cmake:206 (message):
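The usual workaround for that nvcc error (not necessarily what was done here) is to install the 4.9 toolchain alongside the newer one and point CMake at it, keeping the same -DBUNDLE flag as before and assuming the gcc-4.9/g++-4.9 packages are still available on your release:
sudo apt-get install gcc-4.9 g++-4.9
cmake -DCMAKE_C_COMPILER=gcc-4.9 -DCMAKE_CXX_COMPILER=g++-4.9 ..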
@Genoil When is the --intensity flag going to land? I tried --cuda-extragpu-mem 512 --cuda-grid-size 4086, but when I start the miner, my screen still becomes 100% unresponsive.
@Genoil how hard would it be to make a feature, where it would calculate the params automatically, so you could say --intensity 50 (50% of GPU capacity) and it would only run the graphics card at 50%?
I'm trying to run this on my notebook, which has a built-in Intel (i7) video chip and an NVIDIA GeForce GTX 850M. If I run ethminer.exe -G -M it starts using the Intel video chip; if I try ethminer.exe -U -M I get this error: "CUDA error in func 'ethash_cuda_miner::getNumDevices' at line 112 : CUDA driver version is insufficient for CUDA runtime version."
Is there a way to make it use my CUDA chip? Or is there another miner I can try? Thank you! (P.S. the notebook is running Windows 8.1 64-bit.)
@Genoil When is the --intensity flag going to land. I tried --cuda-extragpu-mem 512 --cuda-grid-size 4086, but when I start the miner, my screen becomes 100% unresponsive still.
Just lower the grid size... use GPU-Z to monitor load with different settings. There is no real difference between grid size and intensity, other than that the scale (linear vs. logarithmic) is different.
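So, as a rough sketch (flag names as used earlier in this thread, values just examples you would tune while watching GPU-Z), step the grid size down until the desktop stays responsive:
ethminer -U -M --cuda-grid-size 2048 --cuda-block-size 128
ethminer -U -M --cuda-grid-size 1024 --cuda-block-size 128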
@ali if I'm correct, the 850M is a GM107 chip. The only platform that can still make a bit of hash out of GM107 is Linux. What GPU driver version do you have running?
for opencl, try ethminer.exe -G -M --opencl-platform 1
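On the CUDA side, that "driver version is insufficient" error usually just means the installed display driver is older than what the CUDA runtime the binary was built against requires (roughly speaking, a CUDA 6.5 build tolerates much older drivers than a 7.5 build). You can check the installed driver version with:
nvidia-smi --query-gpu=driver_version --format=csv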
Just found a way to achieve the same speed as CUDA using OpenCL. Very useful to avoid the build issues experienced in various Linuxes. This will go into 1.0.7
This would also mean that there should be some more room for optimization in CUDA, since every application I've seen that was optimized for CUDA ran faster than OpenCL on NVIDIA hardware.
@Genoil I see. I slashed the grid size in half, yet I got almost 0 difference. But when I slashed grid size by 100x, the GPU usage dropped to 80%.
Now I have 3 questions:
- does the same grid size on different GPUs result in the same, or varying, GPU loads (would it be the same if the GPUs have the same max grid size or number of CUDA cores)?
- is there a mathematical formula one could use to describe the relationship between grid size and GPU load?
- what ratio of --cuda-grid-size and --cuda-block-size would be ideal, to get the lowest possible TDP (power consumption, heat) with the highest possible hashrate?
If I keep my fans below 55%, I can mine and sleep at night (my pc is right next to my bed).
I'm sorry for stupid questions, I know nothing about GPUs.
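On the first two questions: as I understand it (an assumption about how the kernel is launched, not something confirmed in this thread), each launch works on roughly grid size x block size hashes, so the same grid size loads a small GPU relatively harder than a big one. For example:
4096 blocks x 128 threads/block = 524,288 hashes per launch
1024 blocks x 128 threads/block = 131,072 hashes per launch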
New to Eth and new to mining (well, played w/BTC a bit a long time ago...should have stuck with it). Been running a test for the last few hours on AWS G2.2x to see if it can be at all economical (obviously makes more sense to buy, but this will give me an idea of how effective working in a pool can be).
Not familiar with how pools work, so I'm not sure whether spinning up an AWS G2.8x with 4 GPUs would work better in a pool due to the higher MH/s, or whether 4 G2.2x's would be better (about the same price per hour). Anyway, hoping some of you with experience could give me some insight. Obviously building a rig is more economical, provided the cards purchased can last through the ROI period given the increase in size of the DAG.
Any thoughts or opinions would be much appreciated.
I have just tried the following build instructions on a vanilla Ubuntu 14.04. system. If I find some time I will include instructions for Ubuntu 15.04. and 15.10. as well.
You can then find the executable in the ethminer subfolder
I tried your guide, it's working! But somehow it uses older sources, because the built executable does not support the '-S' parameter, i.e. stratum mining. Is there a way to use the newer sources that the latest Windows build uses?
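Updating an existing checkout to the latest sources and rebuilding should pick up the -S/stratum support, assuming the Linux branch already contains it:
cd cpp-ethereum
git pull
cd build
cmake -DBUNDLE=miner ..
make -j8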
@Genoil 1.0.6b2 ethminer restarts every time a disconnection from the pool is triggered. It keeps running, but goes through the whole sequence again from the start, from loading the DAG to hashing. I am not sure if it's just my settings (overclocked) or whether others are experiencing it too. Maybe you can take a look at that for the next release. I did not experience it on previous releases. Thanks.
@newkidONdablock DAG size affects hashrate. The default bench uses a 1GB DAG while the current DAG is about 1.4GB. If you want a realistic bench, look up the approximate current block number and add it after -M, i.e. -M 1234567
I am using 1.0.6, the latest AMD drivers, and Windows 7 with two 7850s, a 270x and a 280x, all running happily at around 60 MH/s (just as an average when looking at the ethminer output). If it matters, Suprnova shows around 70 MH/s. I am looking for feedback on these results and whether I might be able to squeeze out more performance. All of the clocks are running at stock right now and my ethminer command line is:
ethminer -G --opencl-devices 0 1 2 3 -F http://eth-us.suprnova.cc:3000/user.eth/100 --cl-global-work 16384 --cl-local-work 128 --cl-extragpu-mem 0
Any ideas or suggestions for changes to the above? Thanks.
I don't know much about EXP. Do they use the same DAG files but only different epoch? I would recommend to store EXP DAGs in a different directory using the -R flag. Then you can use -E old on both ETH and EXP.
cudaminer works also with Krypton
Ubuntu 14.04. OpenCL only
- sudo apt-get update
- sudo apt-get -y install software-properties-common
- sudo add-apt-repository -y ppa:ethereum/ethereum
- sudo apt-get update
- sudo apt-get install git cmake libcryptopp-dev libleveldb-dev libjsoncpp-dev libjson-rpc-cpp-dev libboost-all-dev libgmp-dev libreadline-dev libcurl4-gnutls-dev ocl-icd-libopencl1 opencl-headers mesa-common-dev libmicrohttpd-dev build-essential -y
- git clone https://github.com/Genoil/cpp-ethereum/
- cd cpp-ethereum/
- mkdir build
- cd build
- cmake -DBUNDLE=miner ..
- make -j8
You can then find the executable in the ethminer subfolder.
Ubuntu 14.04. OpenCL + CUDA
- wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb
- sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
- sudo apt-get -y install software-properties-common
- sudo add-apt-repository -y ppa:ethereum/ethereum
- sudo apt-get update
- sudo apt-get install git cmake libcryptopp-dev libleveldb-dev libjsoncpp-dev libjson-rpc-cpp-dev libboost-all-dev libgmp-dev libreadline-dev libcurl4-gnutls-dev ocl-icd-libopencl1 opencl-headers mesa-common-dev libmicrohttpd-dev build-essential cuda -y
- git clone https://github.com/Genoil/cpp-ethereum/
- cd cpp-ethereum/
- mkdir build
- cd build
- cmake -DBUNDLE=miner ..
- make -j8
You can then find the executable in the ethminer subfolder.
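A quick way to sanity-check either build before pointing it at a pool is to list the detected devices and run the built-in benchmark from the ethminer subfolder:
./ethminer -G --list-devices
./ethminer -G -M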
1.0.7 will have pool failover. Maybe I should also look at this behavior and make the timeout before it stops mining a bit higher.
@newkidONdablock Tested with 15.12 and it's the same.