The problem is not that the DAG file is larger than 2GB, because it's not.
The problem is that, for some reason, OpenCL reports CL_DEVICE_MAX_MEM_ALLOC_SIZE: 1399062528, and the DAG file is now larger than that.
The bigger issue here is: why does OpenCL think it can only allocate that much out of 2GB?
Need help ASAP!
Also, there are other reports of this issue popping up on Reddit and around the 'net.
Comments
What effect does

export GPU_MAX_HEAP_SIZE=100

have on things? Would it contend with or affect what ethminer does with the --cl-extragpu-mem flag?

Yes,

export GPU_SINGLE_ALLOC_PERCENT=100

fixed the problem.

Wow. What an unexpected fiasco. Thanks for the help guys -- it's greatly appreciated!
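For anyone hitting the same wall, the fix discussed in this thread comes down to setting AMD's OpenCL runtime environment variables before launching the miner. The exact miner invocation below is illustrative, not from the thread:

```shell
# AMD OpenCL runtime knobs: allow the driver heap and a single buffer
# allocation to use up to 100% of VRAM (values as used in this thread).
export GPU_MAX_HEAP_SIZE=100
export GPU_SINGLE_ALLOC_PERCENT=100

# Then restart the miner from the same shell so it inherits the variables.
# (Exact ethminer flags vary by version; this line is illustrative.)
# ethminer -G
```

Note these variables only affect AMD's OpenCL runtime; they are read at process startup, so a running miner must be restarted to pick them up.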
Spent the whole morning getting them to work again.
I guarantee you and a whole ton of other people woke up this morning to crapped out 2GB cards.
I was lucky and just happened to be at my desk assembling another rig when I noticed all three of my "big" rigs (each has a single 270 onboard) were down at the exact same time and with the exact same output from ethminer.
I knew immediately something was up...and then the panic ensued. lol
Global hashrate crashed 20% and Homestead's still trying to ramp back down to 12s blocks
Huh? There's still like 500-600MB left out of 2GB for the DAG. The DAG is only 1.4GB at the moment.
Well, I'm not running Windows, I'm running Ubuntu. But I get what you're saying. My original post is what OpenCL reported back (via ethminer) as max alloc. Is that set in stone by the device?
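It isn't entirely set in stone: the OpenCL spec only requires CL_DEVICE_MAX_MEM_ALLOC_SIZE to be at least a quarter of global memory, and AMD's runtime picks a driver-chosen fraction of VRAM that GPU_SINGLE_ALLOC_PERCENT can raise. As a sanity check on the number in the original post (assuming a 2 GiB card):

```shell
# Fraction of a 2 GiB card that the reported max alloc represents.
# 2147483648 = 2 GiB; 1399062528 = CL_DEVICE_MAX_MEM_ALLOC_SIZE from the post.
awk 'BEGIN { printf "%.1f%%\n", 100 * 1399062528 / 2147483648 }'
```

That works out to roughly 65% of VRAM, i.e. a driver default rather than a hardware limit.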
Are you sure about this being an issue again in ~5 days?
The max alloc value that I posted is 1,334.25 MB.
The DAG file is already 1409284744 bytes (1,343.99 MB), which is greater than the returned max alloc value.
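To make the failure concrete, here's the arithmetic from the two numbers quoted above (1048576 bytes per MiB):

```shell
max_alloc=1399062528   # CL_DEVICE_MAX_MEM_ALLOC_SIZE reported by the device
dag_size=1409284744    # current DAG file size in bytes

echo "max alloc: $((max_alloc / 1048576)) MiB"
echo "DAG size:  $((dag_size / 1048576)) MiB"
if [ "$dag_size" -gt "$max_alloc" ]; then
  echo "DAG no longer fits in a single OpenCL allocation"
fi
```

The DAG grows every epoch, so once it crossed the reported max alloc, every 2GB card failed at the same time.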
I don't think it will be an issue. It currently isn't, courtesy of
export GPU_SINGLE_ALLOC_PERCENT=100
Max memory allocation: 338165760
(322.5 MB?) That can't be right?