Hi all!
I was just curious to know what kind of performance people were getting on the testnet with their setups.
Personally I got ~88 kH/s with CPU mining on an Intel Xeon (4 cores) with 16 GB RAM, running Ubuntu 14.04.
It looks like @larz got ~130 kH/s with his setup (Ubuntu 14.04 on a Core i7 with 16 GB RAM).
I'd be curious to know what others are getting! Maybe even some GPU mining numbers, if anyone has managed to create such a miner already (expected to be ~100x better, I believe).
Thanks!
Comments
Curiously, my hash rate varies between 60 and 80 kH/s with the C++ implementation (eth -m on -f and neth -m on -f), versus the 130 kH/s I mentioned in the other thread that I get out of the Go implementation (ethereum --mine), on the exact same hardware.
Whilst the Go implementation seems to be happily mining away, updating the hash rate (which varies between 125 and 130 kH/s), I do get a peer-not-found error at the very beginning (as discussed in the other thread), so I'm wondering: is it actually mining at all, or is this just local?
So my stats might have to be taken with a grain of salt...
These screenshots were taken within a minute of one another. neth runs at 40 kH/s and ethereum seems to run at close to 130 kH/s.
I am running an Intel i7-4770K @ 3.5 GHz with 16 GB RAM. It's running Ubuntu 14.10, with no video card currently installed.
I'm getting a pretty steady 136 kH/s.
There is no GPU miner publicly available, is there? If there is, I would love to see some specs on the GPUs tested.
AFAIK it doesn't make use of the GPU at this point anyway, though it has been said on reddit that Frontier wouldn't launch without a GPU miner (for fairness).
Which code base did you use by the way?
// author Tim Hughes
// Tested on Radeon HD 7850
// Hashrate: 15940347 hashes/s
// Bandwidth: 124533 MB/s
(Reference: GitHub link)
I have not verified such performance, just reporting here what I read.
Keep in mind this seems to be a raw measurement from the ethash benchmark, not a measurement taken while integrated and running against the testnet... though it gives us a glimpse of what to expect.
What does bandwidth impact exactly?
In the latest code, I see that it is not an independent measurement; it is simply a function of the hashrate:
bandwidth = hashrate * (Hashimoto accesses per hash) * (bytes read per access) = hashrate * 64 * 128 bytes
This is likely calculated just to check whether we are near the hardware limit.
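Quick sanity check of that formula against the numbers quoted above, as a throwaway Go snippet. The only assumption on my part is that the benchmark counts 1 MB as 1024*1024 bytes, which is what reproduces the reported figure:

package main

import "fmt"

func main() {
	// Hashrate reported above for the Radeon HD 7850.
	const hashrate = 15940347 // hashes/s
	// Each hash touches the DAG 64 times, 128 bytes per access.
	const bytesPerHash = 64 * 128 // 8192 bytes
	// Integer division with 1 MB = 1024*1024 bytes (my assumption).
	fmt.Println("bandwidth:", hashrate*bytesPerHash/(1024*1024), "MB/s")
	// prints: bandwidth: 124533 MB/s, matching the quoted benchmark line
}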
From these numbers I assume the Go implementation is single-threaded (I don't know the language well enough to be sure).
Can someone familiar with the Go implementation confirm whether CPU mining is currently single-threaded or not?
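Not a definitive answer, but one thing worth checking: Go currently defaults GOMAXPROCS to 1, so unless the client raises it explicitly, the runtime will schedule everything onto a single core no matter how the mining loop is written. A quick probe you can run on the same box (whether ethereum actually touches this setting is something I haven't verified):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// GOMAXPROCS(0) queries the current setting without changing it.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
	fmt.Println("NumCPU:    ", runtime.NumCPU())
	// To use all cores a program has to opt in, e.g.:
	// runtime.GOMAXPROCS(runtime.NumCPU())
}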
You seem to be getting a similar hashrate to mine. Thanks for clarifying the bandwidth bit, btw! I'm not overly familiar with Go either; hopefully someone can jump in on that.
Using my modified Ethash benchmark for multithreaded tests shows the hashrate increasing by a bit less than a factor of the number of hardware cores (as expected).
I get ~700 kH/s with 16 threads running over 8 hardware cores. The setup is a dual-socket Xeon E5-2609 v2 @ 2.5 GHz.
To me, this further confirms that CPU mining will not be economically competitive compared to GPU mining rigs (as intended by the design).
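In case anyone wants to reproduce the measurement pattern without digging into the C++ benchmark, here is a minimal sketch in Go of the harness idea: N workers hammering a hash function while an atomic counter is sampled once per second. fakeHash is a dummy stand-in of my own invention, not real Ethash, so only the scaling behaviour is meaningful:

package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

// fakeHash stands in for one Hashimoto evaluation; the real work lives in
// the ethash library, this placeholder just burns a comparable shape of CPU.
func fakeHash(h uint64) uint64 {
	for i := 0; i < 64; i++ { // pretend: 64 mix rounds
		h = h*6364136223846793005 + 1442695040888963407
	}
	return h
}

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())
	const threads = 16 // two software threads per hardware core, as in my test
	var hashes uint64

	for t := 0; t < threads; t++ {
		go func(acc uint64) {
			for {
				acc = fakeHash(acc)
				atomic.AddUint64(&hashes, 1)
			}
		}(uint64(t))
	}

	// Sample the shared counter once per second to read off a hashrate.
	for i := 0; i < 5; i++ {
		before := atomic.LoadUint64(&hashes)
		time.Sleep(time.Second)
		fmt.Println(atomic.LoadUint64(&hashes)-before, "H/s")
	}
}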
1. How much better is it to mine with Ubuntu vs. Windows?
2. If I have a 2 MB up / 20 MB down connection, does it make sense to pay an extra 10 dollars a month for more bandwidth to be able to mine more?
3. Can you give me a side-by-side comparison of three systems: two medium-cost and one expensive?
1. There is no difference.
2. Your connection is just fine.
3. You basically just need a normal computer; you don't need a top-of-the-line CPU, RAM, or HDD. If you plan on running more than one GPU, you will need a motherboard that can handle that, plus USB riser cards. Your expense is going to come down to which GPUs you want to run and how many, and that will determine which power supplies you need, which are going to be expensive as well. If you are going to run multiple cards there are other expenses too: a structure to hold all the parts (I just use adjustable shelves from Home Depot), getting all that power to the cards, cooling, etc. So a one-card setup can cost you just the card if you can build a computer from spare parts, but a 'rig' with 6 top-of-the-line cards running off one motherboard can easily cost more than $4,000, and that's probably conservative. In general, Nvidia cards are more power-efficient than AMD's, but AMD's are faster; they could be on an equal playing field depending on how Dagger-Hashimoto turns out.
On my GTX 780: hashrate: 15094211 hashes/s, bandwidth: 117923 MB/s.
But that kind of points out the weak Nvidia OpenCL implementation, as in gaming scenarios the GTX 780 would blow the HD 7850 mentioned earlier out of the water.
It may be useful for people planning to purchase hardware to know that having plenty of VRAM is critical to GPU mining of ether, because the full Dagger dataset is loaded onto the GPU. For now that's 1 GB, but it will eventually grow to more GB than current cards have.
I also tried to build against Intel OpenCL, but it wouldn't let me allocate 1 GB of RAM for the DAG buffer.
Assuming your setup has fast memory, maybe the lower single-threaded CPU rate you got is related to the compiler not doing auto-vectorization. Just guessing.
I use the Intel compiler on Linux. Using their vector report option, I did verify that vectorization was done to match the hardware (AVX).
I will start working on GPU optimization in about one week, and will give Intel OpenCL a shot as well. I will post my findings here.
I did rewrite the mining code (for larger vector sizes and prefetching optimizations), and I can hardly break above 5 MH/s on the 57-core 31S1P model. Has anyone found ways to do better with Xeon Phis?