I wouldn't recommend it: each x1 slot will need a separate riser to fit each of your x16 cards, and I wouldn't recommend daisy-chaining risers either (line loss, fire risk, etc.). Ultimately your mobo's chipset can only recognise so many cards, so it's pretty much useless IMO.
Looking at the specs on that chip, we have a fully-fledged 4-lane PCIe 2.0 packet switch, so line loss shouldn't be a problem.
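For a rough sense of what one of those lanes carries, here's a back-of-the-envelope calc. The 5 GT/s rate and 8b/10b encoding are standard PCIe 2.0 figures; the result is nominal payload bandwidth, not a measurement:

```shell
# PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding,
# so only 8 of every 10 bits on the wire are payload.
gt_per_s=5
mb_per_s=$((gt_per_s * 1000 * 8 / 10 / 8))  # MB/s of payload per lane
echo "PCIe 2.0 x1 ~ ${mb_per_s} MB/s each way"  # ~ 500 MB/s each way
```

Plenty for mining traffic, which is tiny compared to graphics workloads.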
The rig parts I've ordered include an i3 2100 for an H61 Pro BTC mobo. The i3 has 16 PCIe lanes and so (if I understand lanes properly) should be able to talk to 16 x1 devices.
However, looking at the H61 chipset on the mobo, it can only do 6 lanes.
But I also have a mobo with an AMD 770, which can support up to 22 lanes! It has 5 slots on board, and with the two 1-to-3 risers it should be able to run 9 cards. Hell, I could order another three and run 15 cards..... Anyone want to donate some GPUs to science?
Here's a great AMD thread from 2012 on some prior efforts toward high-count GPU nodes. The cards mentioned are still current, e.g. the HD series.
It seems there is a BIOS maximum of 8 GPUs, given that each GPU requires 256MB of address space. This was reportedly bypassed by a kernel hack to 'correct' the BIOS information. There are some broken-link references to 13 and 15 GPU nodes.
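That 8-card ceiling lines up with simple 32-bit address-space arithmetic. The 256MB-per-GPU figure is from the reports above; everything else here is just multiplication, assuming the BIOS has to fit all the GPU windows under the 4GB line:

```shell
# Each GPU reportedly needs a 256MB MMIO window, and a 32-bit BIOS
# must map all of them (plus everything else) below 4GB.
gpus=8
bar_mb=256
total_mb=$((gpus * bar_mb))
echo "${gpus} GPUs need ${total_mb} MB of 32-bit MMIO space"  # 2048 MB
```

At 8 cards that's already 2GB of the sub-4GB window gone, which would explain why going past 8 needs a hack rather than just more slots.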
I'm not understanding Intel's PCIe architecture very well. Their CPUs can have 20+ PCIe lanes, but most of their chipsets (e.g. Z97) have only 8. The Z97 is a common gamer motherboard chipset, so it makes no sense that 2 or 3 x16 GPUs would be pushing data through only 8 lanes....
My initial understanding was that PCIe lanes were CPU ports, but Intel chipsets show otherwise. I wonder if the chipsets are acting as lane expanders?
Additionally, AMD CPUs don't have PCIe lanes at all and instead rely on the chipset to manage them.
So my guess ATM is that it's a bit of both for Intel: lanes direct to the CPU, with auxiliary chipset lanes for....?
Anyway, it makes enough sense to get my hopes back up about >6 GPUs on my H61 BTC Pro when it comes in.
Until then, more homework.... 'PCIe architecture'
For mining you don't need x16 because there's no graphics processing, just calculations. I have an efficient cooling system, so I'd like to know whether 3 HD 7950s would work. My setup: H61 mobo, Core i3 2100, 4GB DDR3 RAM. Do I need more memory?
OK, I have a better understanding of Intel's PCIe setup now, and even a bit specific to LGA1155 CPUs like the i3 2100... It's best summed up in this pic.
Basically, the Northbridge was integrated into the CPU itself as Sandy Bridge, while the H61 chipset is the remnant Southbridge with 'additional' PCIe lanes. (Sorry, showing my age, needing to study Sandy Bridge architecture now.)
The H61 BTC Pro manual doesn't make clear whether any PCIe slots come off the chipset, but I'm assuming ATM they don't, and again hope these 1-to-3 switches work.
It'll be a couple of weeks as they're coming from Hong Kong.
So far I'm not turning up any research that says they consume more than one PCIe lane; they appear to act more like additional lanes, in a similar way to how chipsets manage lanes additional to the CPU's lanes. It also seems typical design practice on Intel mobos to route the CPU lanes out to the x16 slots and route the additional chipset lanes to the x1 slots. That explains my confusion with the H61 chipset having only 6 lanes.
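One way to check what width a slot actually negotiated is the LnkSta line that `sudo lspci -vv` prints for each device. Here's a sketch that parses a made-up sample line; on a real box you'd pipe the live lspci output through the same grep (the device values here are hypothetical):

```shell
# Sample LnkSta line in the format lspci -vv uses (values invented);
# a chipset-fed x1 slot or riser should negotiate Width x1.
sample='LnkSta: Speed 5GT/s, Width x1, TrErr- Train- SlotClk+'
width=$(echo "$sample" | grep -o 'Width x[0-9]*')
echo "Negotiated link: $width"  # Negotiated link: Width x1
```

Comparing LnkSta against LnkCap also shows when a x16 card has trained down to x1 on a riser.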
AMD boards, in contrast, have all their lanes routed out from the chipset, which is why their chipsets have so many more lanes.
@adaseb, yes, the 6 lanes are from the chipset and only serve the x1 slots or other embedded PCIe devices. The rest are on the Intel CPUs themselves. It doesn't really matter though, as the PCIe switches should still only consume 1 lane from the mobo.
@o0ragman0o, where did you insert the PCIe multiplier? In the x16 slot? If you have 2 of them running at PCIe 2.0 x16, that means an additional 6 slots on the mobo. This is exciting. What PSU can support 16 non-370 cards?
@oslak, the switch itself is a 4-lane device: 1 lane connects to an x1 slot on the mobo while the other 3 feed x1 expansion slots. Given that they don't consume lanes from the CPU or chipset, there's effectively no limit to fan-out or daisy-chaining except whatever the PCIe standard can enumerate (at a guess) and OS limits. So it's imaginable that one could load up 6 of these switches and run 18 GPUs, but in previous investigations 8 GPUs seems to be the limit for Linux without some kernel hack, of which I know nothing.
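For what it's worth, each downstream port of such a switch should show up as its own PCI bridge in `lspci -t`, something like this mock-up (all bus numbers and addresses are invented for illustration):

```shell
# Mock lspci -t style tree: one GPU on the CPU x16 slot, three more
# behind a 1-to-3 packet switch sitting in a chipset x1 slot.
topology='-[0000:00]-+-01.0-[01]----00.0  GPU in CPU x16 slot
           +-1c.0-[02-05]--+-04.0-[03]----00.0  GPU behind switch port 1
           |               +-05.0-[04]----00.0  GPU behind switch port 2
           |               \-06.0-[05]----00.0  GPU behind switch port 3'
# The switch costs one mobo lane, but each GPU behind it still gets
# its own bus number during enumeration.
echo "$topology" | grep -c 'behind switch'  # prints 3
```

If a switch enumerates correctly, all three downstream GPUs appear in `lspci` like any other device; if the tree stops at the switch, the problem is upstream of the cards.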
It's referring to PCIe 2.0, not PCIe 3.0; those are separate.
Most PCIe 3.0 setups share 16 lanes across 2 slots, so if both are used they run at x8/x8, or one slot runs at x16.
The PCIe 2.0 lanes are separate.
Photos here
I have 2 and I can't get them to work on my MSI motherboard.
Can you see your switch and your cards with lspci?