Core is 1200-1240, ambient is ~40 °C, voltage offset at -100 mV, which is a bit higher than it needs to be, but necessary to be sure any crashes are memory related while making custom straps.
Rig is just a tad over 1000 W atm for 6 GPUs. I doubt you're running those 6 GPUs at 600-700 W.
I broke 31 MH/s on the Hynix MJR today, for the most part anyway...
Too much. I draw 950-970 W dual mining on 6× RX 580: 1150/2125-2175 GPU/mem @ 0.85-0.9 V core.
I think you need to drop the GPU clock to 1150 for ETH-only mining. I found no effect on hashrate for clocks higher than 1150, but you can significantly decrease GPU voltage and power draw.
>1150 doesn't give you more hashrate because your memory can't handle the bandwidth required for 33.5 MH/s with your current mod.
And I don't get why you are arguing/comparing your rig, which has been optimized, with a rig that hasn't been. I posted this because I'm trying to reach the theoretical limit.
I don't see what your posts have to do with this.
I'm just trying to help you, and bring some scientific method to your theory.
Less GPU clock = less GPU voltage = less power draw = lower temps = less load on the VRM = higher stability. GPU5: 1000/2150 = 27.2 MH, 1050/2150 = 28.5 MH, 1100/2150 = 30.0 MH, 1125/2150 = 30.7 MH, 1150/2150 = 31.2 MH, 1175/2150 = 31.2 MH.
As we can see, each +50 MHz of GPU clock boosts the ETH hashrate by roughly 1.5 MH/s. So you need only 1200 MHz on the GPU to reach 33 MH/s, 1250 for 34.5 MH/s, and 1300 for 36 MH/s, provided the memory subsystem is not the bottleneck.
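The numbers above are close enough to linear that the extrapolation can be sketched in a few lines (a rough model only: the base point and the ~1.5 MH/s per +50 MHz slope come from the GPU5 data, and a real card flattens out once the memory subsystem becomes the bottleneck):

```python
# Rough linear model of ETH hashrate vs. GPU core clock, fitted by eye
# to the GPU5 numbers above. Hypothetical extrapolation only; it ignores
# the memory-bandwidth ceiling that caps real cards.
def est_hashrate(core_mhz, base_mhz=1000, base_mh=27.2, mh_per_50mhz=1.5):
    """Estimate ETH hashrate in MH/s from core clock in MHz."""
    return base_mh + (core_mhz - base_mhz) / 50 * mh_per_50mhz

for clk in (1100, 1150, 1200, 1250, 1300):
    print(f"{clk} MHz -> ~{est_hashrate(clk):.1f} MH/s")
```

At 1200 MHz the model gives ~33.2 MH/s, in line with the 33 MH/s figure quoted above.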
I'm fully aware; however, this still has nothing to do with finding the memory controller's limit. But since nobody is commenting on topic anyway:
In the case of this mod on 580s, ~1250 core is required for 32.5 MH/s on Linux. The cards do take -200 mV though, so it's really nice that they can do this at ~0.89 V.
And on my card doing 33.5 MH/s, a 470, I needed 1310 core. Definitely not as nice; requires almost [email protected] too, but takes 2330 mem clock. However, there is no gain from 2290 to 2330, so 33.5 MH/s may be the limit.
Did you check memory errors during mining with HWiNFO?
I may be wrong, but given that my 8× 580 GPU rig runs at 1065 W (including motherboard and monitor), are you certain the extra few MH/s is worth the risk and energy cost?
What risk do you speak of? Running 1200 core at -175 mV instead of 1150 core at -200 mV, for example?
And it's quite assuredly worth it. In the end, getting +2-3 MH/s costs only +5-15 W per card. For a medium farm, worst-case scenario, this is >$50,000 USD profit per year. For some of our bigger clients we're talking millions extra in profit for no additional infrastructure cost.
You may also have noticed the complete lack of stock of RX cards lately.
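To put numbers on that claim, here's a back-of-the-envelope sketch. Every input is a hypothetical placeholder (fleet size, per-card gain, $/MH/day yield, and electricity price are assumptions, not measurements):

```python
# Back-of-the-envelope: extra yearly profit from a per-card hashrate gain.
# All inputs are hypothetical placeholders; plug in your own numbers.
def extra_profit_per_year(cards, extra_mh_per_card, usd_per_mh_day,
                          extra_watts_per_card, usd_per_kwh):
    # Additional mining revenue from the extra hashrate over a year.
    revenue = cards * extra_mh_per_card * usd_per_mh_day * 365
    # Additional electricity cost from the extra per-card power draw.
    power_cost = cards * extra_watts_per_card / 1000 * 24 * 365 * usd_per_kwh
    return revenue - power_cost

# e.g. 1000 cards, +2.5 MH/s each, $0.07 per MH/s per day,
# +10 W per card, at $0.10/kWh:
print(round(extra_profit_per_year(1000, 2.5, 0.07, 10, 0.10)))  # ~55115
```

With those made-up inputs the extra power costs under $9k/year while the extra hashrate earns ~$64k/year, which is the shape of the ">$50,000 per year" argument above.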
I used the HW (hardware error) report from sgminer-gm; the rate of HW errors is actually lower than with the basic mod and does not go higher until >2340 mem. And by the only true test, pool-side hashrate went up 20 MH/s for the whole rig.
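Pool-side is indeed the only true test. For reference, the standard way to estimate effective hashrate from accepted shares (generic formula, not specific to any one pool; the example numbers are made up):

```python
# Effective hashrate from accepted shares over a time window:
#   hashrate (H/s) ~= accepted_shares * share_difficulty / elapsed_seconds
def effective_hashrate_mh(accepted_shares, share_difficulty, elapsed_s):
    """Return effective hashrate in MH/s."""
    return accepted_shares * share_difficulty / elapsed_s / 1e6

# e.g. 170 accepted shares at 4e9 share difficulty over one hour:
print(f"{effective_hashrate_mh(170, 4e9, 3600):.1f} MH/s")  # ~188.9 MH/s
```

Averaged over a long enough window, this is what separates real gains from hashrate the miner reports but the memory controller silently throws away as errors.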
I suppose, given the rate of difficulty increase, the GPUs will become useless well before their OC-shortened MTBF matters anyway.
I would think so; I'm not seeing many failures over the years. Also, I was told undervolting causes faster silicon degradation vs. running stock voltage, so in a way, pushing to the max may be better for its lifespan. Not sure if this can be compensated for with LLD calibration.
Anyway, that assumes you're cooling your GPUs properly in both cases. But I think either way, their service life is longer than we'll want to use them for.
By that logic, disconnecting from the power source must kill electronics immediately. By that logic, mobile CPUs and GPUs must die faster than desktop ones, since they're built on the same chips but use lower voltages and clocks to run longer on battery and fit notebook cooling constraints.
Even desktop CPUs and GPUs drop their voltages (and clocks) under light load.
The RX 580's lowest GPU voltage is 0.75 V, for example, but in fact that's a stock voltage, since it's set in the stock ROM.
You're wrong in that conclusion. Integrated circuits degrade faster with higher temperatures, higher voltages, and higher currents. Lower voltage can make an integrated circuit run unstably at the same frequencies, but not degrade faster.
Maybe it's just me, but I figure a top AMD engineer would know better?
As it was explained to me, it's not about one component's voltage; it's about the operating voltages specified for the whole circuit. It's not about heat directly; it's about optimizing component lifetime, which has an optimal voltage value.
The silicon will degrade faster, regardless of heat, the further the voltage is from that optimal setting. This goes both ways. Heat is just *another* factor.
When you lower VDD, do you think the only thing you're touching is the GPU core operating voltage? You're not...
So let me see if I can get this right... You don't like what someone posts, so you go argue it's "not good," regardless of context or intent.
Then you don't like the replies, so you argue hardware engineering, which you seem to know little about. Argue away about MOSFETs and buck converters, but without me.
It doesn't look like I will find people to cooperate with in this thread. Nothing constructive will be achieved by me arguing off-topic subjects.
Even he cannot change the laws of physics. Lower voltage cannot harm semiconductor chips; only heat or radiation can.
I like this...
@Virosa: please be aware that >50% of the postings in forums like this one are useless. The challenge is to build up the competency to identify the relevant <50%...
You're wrong on this. I appreciate anybody who tries to do something new. And if I say that somebody said something incorrect, it's only that: something incorrect and nothing more. I don't know you well enough to like or dislike you.
The thing you said, "low voltage degrades integrated circuits," is totally incorrect, and I said why. Instead of discussing it or posting some proof, you only mentioned some top engineer. In that, you're acting as a believer, not a knowledgeable person.
What clocks and GPU voltage? What total power draw?
An extra 300-400 watts isn't worth an extra 10-15 MH/s.
Just compare my temps and fan RPM on dual mining. This rig is placed in the living room, so room air temp is ~22 °C.
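One way to make this comparison concrete is hashrate per watt at the wall. A minimal sketch, with purely illustrative numbers for both tuning styles (neither set of figures is measured from the rigs in this thread):

```python
# Compare rigs by hashing efficiency: MH/s per watt at the wall.
def efficiency(total_mh, wall_watts):
    return total_mh / wall_watts

# Illustrative numbers only: a 6-GPU rig tuned for low power
# vs. one tuned for maximum hashrate.
low_power = efficiency(6 * 29.0, 960)    # 174 MH/s at 960 W
max_hash  = efficiency(6 * 32.5, 1010)   # 195 MH/s at 1010 W
print(f"{low_power:.3f} vs {max_hash:.3f} MH/s per watt")
```

Whichever rig wins depends entirely on the real numbers plugged in, which is exactly why total power draw has to be reported alongside hashrate.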
Good luck with that. I'm out.