  • Re: First Build + Workstation/server card benchmark

    Like @Procrastitator said, this is just getting started and mostly for fun. With some voltage adjustments that actually take, I can profitably mine Zcash for now and focus on gathering resources for a more serious build. Less the cost of power, an extra $800/month is nothing to scoff at, especially considering how fuckin broke I am - I'm really fuckin broke. That's not accounting for how the market will change with respect to Zcash, but that's how the game is played, I guess. In any case, I have an enthusiasm and a preexisting skill set that I can put to use, and if that means resorting to lower-cost, higher-power cards until I get my shit going, then so be it. I got these for their high(er) VRAM bandwidth and their relatively low base price. The cost of energy will, as @Juggar pointed out, probably be a bit much. You're also very right to assert that I probably know just enough to get myself into trouble. The present goal, then, is to learn enough and do enough to get myself back out of trouble.

    Now, the latest Ubuntu drivers do support a compute workload for the S10000, which allows for some significant modification of their overall function. By knocking down the core voltages, I should be able to mitigate some of that power cost, among other parameter adjustments I can make to optimize their performance. The problem remains, though, that these cards I bought think they're Sky 900's. The Sky 900 is another dual-Tahiti card, but that doesn't make any damn sense considering all of the product and package labeling: the stickers, boxes, and S/N's all point to the 12 GB FirePro S10000 passive.

    I do have a water-cooling setup in the works as well. The power consumption of the water cooling won't be but another $2/month, so even if I get my money back and replace these cards - which is looking like a really good possibility about now - I can still run the water cooler to really push whatever cards I end up with and not worry about cooling.
    To clarify, I'm getting closer to 11 MH/s per core at a tolerable load, i.e. not overheating like all hell. How far can I push them? Can't say just yet. Will mining Ethereum be profitable? Maybe not as much as some other algos/coins, and that's fine by me. I can go post to the other boards specific to those coins and chat about it all there. I might well end up taking the advice from @Juggar, getting my money back, and maybe buying a couple of Vega 56's or 64's for the money. Just two of them would achieve about 80% of the hashing power at 1/3 the power consumption.

    The fact is, I'm wading into some murky water and uncharted territory here, at least for me. My presence on the forum is as much about calling on the experience of more seasoned miners as it is a learning experience for me as an enthusiast and general tech junkie. Given that these cards may not be what they were advertised to be, I definitely have a legitimate case to pursue action against the eBay seller who sold them to me and get my money back. All of this said, I'm still doing as much as I can while I wait for a bunch of damn packages to come in from freakin' Shenzhen/wherever else overseas.

    So, I get what you all are getting at here, and while it comes across as harsh in some cases, I do appreciate your concern and advice. I'm a let-the-chips-fall-where-they-may kind of guy, so that's how I'll move forward. At the very least, I have something that will work for now, and I don't stand to lose all that much. I'm in at about $2k for absolutely everything, so a two-month turnaround to break even is fine by me. Would I get that back as quickly mining Ethereum? No, but it is what it is, and I'm going to figure this out, because that's what I do. I'm a problem solver, and any advice you guys have for me will be given due consideration.
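For anyone who wants to sanity-check that break-even figure, here's a back-of-the-napkin sketch. The $2k outlay and $800/month gross are from the posts above; the 2000w whole-rig draw and $0.12/kWh electricity rate are assumptions I'm plugging in, so swap in your own numbers.

```python
# Break-even sketch. The $2k outlay and $800/month gross come from the
# thread; the 2 kW draw and $0.12/kWh rate are assumed placeholders.

def breakeven_months(hardware_cost, gross_per_month, power_cost_per_month):
    """Months to recoup hardware, ignoring difficulty and price swings."""
    net = gross_per_month - power_cost_per_month
    if net <= 0:
        return float("inf")  # never breaks even at these rates
    return hardware_cost / net

rig_kw = 2.0                            # assumed whole-rig draw in kW
power_cost = rig_kw * 24 * 30 * 0.12    # ~$172.80/month at $0.12/kWh

months = breakeven_months(2000, 800, power_cost)
print(round(months, 1))                 # a bit over 3 months with power counted
```

Point being: at a typical residential rate, the electric bill stretches a two-month payback closer to three, and at a high enough rate it never pays back at all.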
  • Re: First Build + Workstation/server card benchmark

    @Procrastitator I have a multimeter running around here somewhere, though I'd have to look up which pins to check. The max TDP of the rig technically exceeds the wattage limitations of most wall power monitors, but setting the voltages down ought to reduce that load to a level that wouldn't set one off. That said, I need to actually buy a wall power meter.

    In other news, I've just about got everything I need to start the final stage of the build, but man, that shipping time from Shenzhen is a real pain. The wait has been just killing me. While I set out to build on a budget, I'm also getting down to the last of the money I'd set aside for this. Between waiting on delivery for some of the components and running my budget dry, it might be a minute before I have all of my final benchmarking done and finish getting this thing off the ground.

    Right now, I'm relearning and learning a bunch of new stuff in Ubuntu to play around with optimizing the rig as a dual-boot system and comparing Ubuntu to Windows 10. In particular, the Windows drivers and FirePro control center have all kinds of quirks that make setting voltages and clock speeds basically impossible without having the S10000 set as my actual display output. What's more, since the MOBO recognizes each core of each individual card as a unique device, I can only ever adjust the settings for one of the cores from Windows, and I never know which one that is. Presumably, it's the master core. I'll keep you up to date as I get things wrapped up. I really hope that's soon, because I've got a ton on my plate right now.
  • Re: Data Centers super GPUs

    So, sorry to necro this post. I actually just got some of the S10000's in. Everyone's been scalping the crap out of the Vega's and 1080-series cards and reselling at double and triple the retail price. Anyway, I found a few of the S10000's dirt cheap. I'll start benchmarking later tonight, and I'll let you know how it goes. I'm really new to the boards, so I haven't looked through all the link-posting rules just yet, but if it's allowed, I'll leave a link to the supplier if anyone's interested after I establish the benchmarks, get a feel for how close they get to max TDP, etc.
  • Re: First Build + Workstation/server card benchmark

    Update #2! The Equihash benchmark is in at 535 Sol/s. I also figured out the crashing issue. The PSU is just fine for the one card; the damn thing overheats, and that's what shuts everything down. Cooling for the Tahiti cores is all over, since they're shared across so many Radeon cards. Being that I'm insane, though, I'm going to run the craziest liquid-cooling loop I've ever done. Again, I've scavenged for dirt-cheap parts. I haven't really said it explicitly, but there's a heavy emphasis on making this a budget rig with high profitability. Anyway, I know other Radeon cards with the Tahiti core have been used to mine in the past; as to why the passive cooling is such a killer in my case, I don't know. I guess the active heat sinks really do make that big of a difference. Even with the added expense of the liquid cooling, I'm still in on the S10000's for under $350 a piece, with a hashing power that still rivals that of cards that cost two or three times as much, depending on the algo and miner. My big remaining concern is seeing how all these cheap Chinese parts are going to hold up under the pressure of mining.
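To put that "rivals cards that cost two or three times as much" claim in rough numbers, here's an upfront-cost-per-hashrate sketch. The $350 price and ~11.4 MH/s per core come from this thread; the Vega 56 price and hashrate are ballpark assumptions for a scalped card, not quotes.

```python
# Upfront dollars per MH/s, as a rough comparison. The S10000 figures
# ($350, 2 cores x 11.4 MH/s at 50% intensity) are from this thread;
# the Vega 56 numbers ($800 scalped, ~36 MH/s) are assumptions.

def dollars_per_mhs(price_usd, mhs):
    return price_usd / mhs

s10000 = dollars_per_mhs(350, 2 * 11.4)  # dual-Tahiti card, benchmark rate
vega56 = dollars_per_mhs(800, 36)        # hypothetical scalped-market price

print(round(s10000, 2), round(vega56, 2))
```

Of course, upfront cost is only half the picture; the power bill is where the S10000's give that advantage back.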
  • First Build + Workstation/server card benchmark

    Well, I've done it. I've seen a few posts running around about it, but no real info to date. So - in the name of science, of course - I picked up a few FirePro S10000 12 GB cards and I've started mining with one of them. Why only one? Because I'm still waiting for FedEx to bring me my beefier PSU that can handle all of them. For those of you who have wondered, here's what I'm running and what I've found so far:
    ASUS M5A97, AMD chipset with FX-4350 CPU.
    16GB RAM
    120 GB SSD
    GT 710 for a simple energy-efficient VGA output (no video output direct from the motherboard)
    Windows 10 Professional, but I might get set up on Ubuntu. Then again, my already limited Linux skills are pretty rusty.
    Currently running Claymore's Dual Miner, since I'll be running both AMD and NVIDIA cards.
    The motherboard recognizes the two Tahiti cores of each S10000 as individual GPUs. That's probably normal, but this is my first build, so how would I know? I'm just hoping my CPU can handle enough PCIe lanes, etc. to manage all five S10000's that I have awaiting installation and tuning, in addition to the five 1050 Ti's that are also on the way. Not to mention, I've yet to test all of my risers, splitters, etc.
    For now, I just have a couple of 92mm fans, salvaged from the case my brother-in-law gave me with the motherboard, aimed right at the S10000 I've installed... which is sitting right on the itty-bitty motherboard while I wait on the rest of my components to finish the build. This thing is monstrous, both in size and power.

    So, the max TDP of the S10000 is rated at 375w, though I'm finding mixed information on whether that's per core or for the whole card. I really, REALLY hope it's the whole card. I'm currently running an EVGA 850BQ PSU that I got to fire up and check out the motherboard, since the motherboard was a random Christmas gift/hand-me-down from my brother-in-law. That said, I haven't really been able to push the card that far. It's already set off the over-wattage protection on the PSU a couple of times during benchmarking, along with a couple of random display issues. I'm not sure if my motherboard is trying to switch the display output from the 710 to the S10000 as the latter starts pulling some power, or what. Still troubleshooting that.

    As for hashing... HOLY COW. Each core can pull about 20 MH/s, at least until it crashes the PSU. So far, I've managed one stable benchmark at ~50% intensity, and each core averages 11.4 MH/s there. I know, I know: the jump from 50% to 100% intensity isn't necessarily linear in terms of a jump from 22 MH/s to ~40 MH/s. However, AwesomeMiner approximates its configuration as "Dual 20.4 MH/s," so I'll have to see how that pans out given adequate power to the cards and some fine-tuning in Afterburner.
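On that nonlinearity point: what actually matters for profit is MH/s per watt, and that ratio usually gets worse as a card closes in on max TDP. A tiny sketch, where the hashrates are from the benchmark above but both wattages are placeholders I made up, since I can't meter the wall draw yet:

```python
# MH/s-per-watt sketch showing why 50% -> 100% intensity isn't a free
# doubling. Hashrates are from the benchmark in this thread; both
# wattages are made-up placeholders until I can meter actual wall draw.

def efficiency(mhs, watts):
    return mhs / watts

half_intensity = efficiency(11.4, 120)  # per core at ~50% (assumed 120 W)
full_intensity = efficiency(20.4, 260)  # per core flat out (assumed 260 W)

print(half_intensity > full_intensity)  # the last few MH/s cost the most power
```

If real measurements look anything like this, the sweet spot for these cards may well be somewhere below full intensity.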

    Now to the rest of the build:
    All of the above, quite obviously
    5x FirePro S10000
    5x GTX 1050 Ti
    Dual PSUs: the aforementioned 850BQ for the motherboard, three of the 1050 Ti's, and the fans, plus a (supposedly) 2350w monstrosity I came across on Wish for dirt cheap (likewise with the 1050 Ti's; I hope to God they're the real deal)
    10x 40mm fans in a custom open-air frame I'll be constructing myself, and I might yet find a use for the 92mm fans I salvaged from the old tower case. I like the idea of each GPU having a fan to itself, but I'm expecting some diminishing returns on CFM airflow to cooling efficiency.
    All in, given some tuning, etc., I should be looking at about 250 MH/s and around 2000w power draw. It'll take some trial and error to optimize the power-draw-to-hashing-power ratio, but I think I'm headed in the right direction so far.
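Since I keep weighing this build against the Vega option from the other thread, here's the efficiency math on those targets. The 250 MH/s / 2000w figures are my own estimates above, and the "80% of the hashing power at 1/3 the power" Vega ratio is the rough comparison I quoted earlier, so treat all of it as napkin numbers.

```python
# Whole-rig efficiency sketch using the targets above (~250 MH/s at
# ~2000 W) versus the rough Vega comparison quoted earlier in the thread
# (~80% of the hashrate at ~1/3 of the power). Napkin numbers only.

rig_mhs_per_watt = 250 / 2000                 # 0.125 MH/s per watt
vega_mhs_per_watt = (250 * 0.8) / (2000 / 3)  # 0.3 MH/s per watt

print(vega_mhs_per_watt / rig_mhs_per_watt)   # Vegas come out ~2.4x as efficient
```

Which is exactly why the "get your money back and buy Vegas" advice keeps nagging at me: the upfront cost favors the S10000's, but the per-watt math doesn't.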

    And in case anyone was wondering how I ever afforded five S10000's: they're not $3500 anymore. You can find 'em on eBay for about $300. Far less expensive than the Vega and 1080-series cards in terms of upfront cost, but at nearly double the power draw. Theoretically, though, they should offer an extra 10-ish MH/s over their gaming-oriented counterparts. Whether that's worth it, I've yet to see. Anyway, thanks for reading, and I'll keep you posted on build progress and final benchmarks as I finish up.