Riser 1 to 3 PCI-e 1x?

Rayhan Posts: 111 Member
Does someone here use this riser?


I'm thinking of buying this riser, but I'm in doubt whether it supports 3-GPU mining.

Comments

  • akropiss Posts: 64 Member
    I wouldn't recommend it. Each x1 would need a separate riser to fit each of your x16 cards, and I wouldn't recommend daisy-chaining risers: line loss, fire risk, etc. Ultimately your mobo's chipset can only recognise so many cards, so it's pretty much useless IMO.
  • o0ragman0o Posts: 1,252 Member, Moderator
    I've just ordered a couple to experiment with.

    Looking at the specs on that chip, we have a fully fledged 4-lane PCIe 2.0 packet switch, so line loss shouldn't be a problem.

    The rig parts I've ordered include an i3 2100 for an H61 Pro BTC mobo. The i3 has 16 PCIe lanes and so (if I understand lanes properly) should be able to talk to 16 x1 devices.

    However in looking at the H61 chipset on the mobo, it can only do 6 lanes. :(

    But I also have a mobo with an AMD 770, which can support up to 22 lanes! It has 5 slots on board, and with two 1-to-3 risers it should be able to run 9 cards. Hell, I could order another three and run 15 cards..... Anyone want to donate some GPUs to science?
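    A quick sanity check on that slot arithmetic (just a sketch, assuming each 1-to-3 switch trades one mobo slot for three x1 slots; the function name is mine):

    ```python
    # Each 1-to-3 switch occupies one motherboard slot and exposes
    # three x1 slots, so every switch adds a net two card positions.
    def max_cards(board_slots: int, switches: int) -> int:
        """Cards mountable when `switches` of the board's slots hold 1-to-3 risers."""
        assert switches <= board_slots
        return (board_slots - switches) + 3 * switches

    print(max_cards(5, 2))  # -> 9 cards with 2 switches
    print(max_cards(5, 5))  # -> 15 cards with 5 switches
    ```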
  • patrik2 Posts: 156 Member ✭✭
    Please keep us informed!
  • o0ragman0o Posts: 1,252 Member, Moderator
    Here's a great AMD thread from 2012 on some prior efforts toward high-count GPU nodes. The cards mentioned are still current, e.g. the HDs.

    It seems there is a BIOS maximum of 8 GPUs, given that each GPU requires 256MB of memory. This was reportedly worked around by a kernel hack to 'correct' the BIOS information. There are some broken link references to 13- and 15-GPU nodes.
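    Back-of-envelope on that 8-GPU / 256MB figure (a sketch; the 2GB of free MMIO space is my assumption, real BIOSes vary):

    ```python
    # Sketch of the BIOS address-space limit mentioned above.
    # Assumption: the BIOS maps device memory below 4GB and only
    # about 2GB of that 32-bit window is free for GPU apertures.
    MMIO_WINDOW_MB = 2048   # assumed free 32-bit MMIO space (varies by board)
    BAR_PER_GPU_MB = 256    # per-GPU aperture, per the thread

    print(MMIO_WINDOW_MB // BAR_PER_GPU_MB)  # -> 8 GPUs before the window is exhausted
    ```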
  • o0ragman0o Posts: 1,252 Member, Moderator
    Any chipset gurus out there?

    I'm not understanding Intel's PCIe architecture very well. Their CPUs can have 20+ PCIe lanes, but most of their chipsets (e.g. Z97) have only 8. The Z97 is a common gamer motherboard chipset, so it makes no sense that 2 or 3 x16 GPUs would be pushing data through only 8 lanes....

    My initial understanding was that PCIe lanes were CPU ports; however, Intel chipsets show otherwise. I wonder if the chipsets are acting as expansion lanes?

    Additionally, AMD CPUs don't have PCIe lanes at all and instead rely on the chipset to manage lanes.

    So my guess ATM is that it's a bit of both for Intel: lanes direct to the CPU, with auxiliary chipset lanes for....?

    Anyway, it makes enough sense to get my hopes back up about >6 GPUs on my H61 BTC Pro when it comes in.

    Until then, more homework.... 'PCIe architecture'
  • Rayhan Posts: 111 Member
    For mining you don't need x16, because there's no graphics processing, just calculations.
    I have an efficient cooling system.
    So I'd like to know whether it works with 3x HD 7950.
    My setup: H61 mobo, Core i3 2100, 4GB DDR3 RAM.
    Do I need more memory?
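    The per-lane numbers back this up (a sketch: PCIe 2.0 signals at 5 GT/s with 8b/10b encoding, so an x1 link carries roughly 500 MB/s each way):

    ```python
    # Rough usable throughput of one PCIe 2.0 lane, to sanity-check
    # that x1 is enough when the card mostly just receives work units.
    GT_PER_S = 5.0                 # PCIe 2.0 signalling rate
    ENCODING = 8 / 10              # 8b/10b line-code overhead
    gbit_per_lane = GT_PER_S * ENCODING        # 4.0 Gbit/s usable
    mb_per_lane = gbit_per_lane * 1000 / 8     # MB/s per direction

    print(mb_per_lane)        # -> 500.0 MB/s on an x1 link
    print(mb_per_lane * 16)   # -> 8000.0 MB/s on x16, needed only for graphics workloads
    ```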
  • o0ragman0o Posts: 1,252 Member, Moderator
    Bought just the same today: H61, i3 2100 and 4GB. 4GB's fine.
  • Rayhan Posts: 111 Member
    hahaha, and also the riser?
  • o0ragman0o Posts: 1,252 Member, Moderator
    OK, I have a better understanding of Intel's PCIe setup now, and even a bit specific to LGA1155 CPUs like the i3 2100... It's best summed up in this pic.

    Basically the Northbridge was integrated into the CPU itself with Sandy Bridge, while the H61 chipset is the remnant Southbridge with 'additional' PCIe lanes. (Sorry, showing my age, needing to study Sandy Bridge architecture now.)

    The H61 BTC Pro manual doesn't make clear whether any PCIe slots run off the chipset, but I'm assuming ATM they don't, and again hope these 1-to-3 switches work.
  • Rayhan Posts: 111 Member
    Then come back here and tell us about the result, OK?
  • JBaETH Posts: 40 Member
    @o0ragman0o Looking forward to your results.
  • o0ragman0o Posts: 1,252 Member, Moderator
    It'll be a couple of weeks as they're coming from Hong Kong.

    So far I'm not turning up any research saying they consume more than one PCIe lane; they appear to act as additional lanes, much as chipsets provide lanes in addition to the CPU's. It also seems typical design practice on Intel mobos to route the CPU lanes out to the x16 slots and route the additional chipset lanes to the x1 slots. That explains my confusion about the H61 chipset having only 6 lanes.

    AMD boards, in contrast, have all lanes routed out from the chipset, which is why their chipsets have so many more lanes.
  • Rayhan Posts: 111 Member
    Does using three GPUs on a single PCIe x1 slot work?
  • Rayhan Posts: 111 Member
    Hey man! Tell us about it, please.
  • o0ragman0o Posts: 1,252 Member, Moderator
    International shipment. Still waiting for it...
  • adaseb Posts: 1,043 Member ✭✭✭
    Where it says 'Max # of PCI Express Lanes: 6', it's referring to PCIe 2.0, not PCIe 3.0; those are separate.

    Most PCIe 3.0 setups have 16 lanes across 2 PCIe ports, so if both are used they run at x8 and x8, or one runs at x16.

    PCIe 2.0 is separate.
  • Rayhan Posts: 111 Member
    Does it work or not?
  • o0ragman0o Posts: 1,252 Member, Moderator
    @adaseb, yes, the 6 lanes are from the chipset and only serve x1 slots or other embedded PCIe devices. The rest are on the Intel CPUs themselves. It doesn't really matter, though, as the PCIe switches should still only consume 1 lane from the mobo.
  • Rayhan Posts: 111 Member
    I want to know if it works or not, people! OMG, someone tell me, plz
  • o0ragman0o Posts: 1,252 Member, Moderator
    edited May 2016
    @Rayhan Finally got to do this. Working perfectly with 8x Asus R7 370 Strix.
    Photos here
  • LP12 Posts: 40 Member
    Link to the site to purchase the card, please?
  • o0ragman0o Posts: 1,252 Member, Moderator
    edited May 2016
    @lp12 I just got mine off eBay.
  • oslak Posts: 191 Member
    @o0ragman0o Where did you insert the PCIe multiplier? On an x16 slot? If you have two 2.0 slots running at x16, that means an additional 6 slots on the mobo. This is exciting. What PSU can support 16 non-370 cards? :)
  • o0ragman0o Posts: 1,252 Member, Moderator
    edited May 2016
    @oslak, the switch itself is a 4-lane device. 1 lane is used to connect to an x1 slot on the mobo, while the other 3 are x1 expansion slots. Given that they don't consume lanes from the CPU or chipset, there's effectively no limit to fanout or daisy-chaining except for whatever the PCIe standard can enumerate (at a guess) and OS limits. So it's imaginable that one could load up 6 of these switches and run 18 GPUs, but in previous investigations 8 GPUs seemed to be the limit for Linux without some kernel hack, about which I know nothing.
  • LP12 Posts: 40 Member
    @o0ragman0o Thanks for the info; keep us posted on how you get on. Wow! That would be something: 18 GPUs off one mobo. Yee gods...
  • Hans Posts: 34 Member
    @o0ragman0o Do you use Molex and SATA-Molex to power the extender? Because I see in your rig pic you only connect USB to it, without power.

    I have 2 and I can't get them to work on my MSI motherboard.
  • o0ragman0o Posts: 1,252 Member, Moderator
    @Hans, using SATA. The cable ends up lying along the back of the USBs, so it's occluded by timber and other cables.

    Can you see your switch with lsusb and the cards with lspci?
  • Hans Posts: 34 Member
    @o0ragman0o The board has 2 power inputs; did you only use 1?