(1x) x1-PCIe to (3x) x1-PCIe riser board - thoughts?

JBaETH Member Posts: 40
edited March 2016 in Mining
Morning!

Just stumbled upon this x1 to 3 x1 circuit on AliExpress.

x1 to 3 x1 circuit board

I think it's technically not possible to simultaneously run 3 GPUs on one x1 slot. What do you guys think?

CC: @Jukebox

Comments

  • happytreefriends Member Posts: 537 ✭✭✭
    Dunno. I looked at other companies' similar boards, but they were not cost effective and not proven.
    If it's cheap, try it out and let us know. :smiley:
  • o0ragman0o Member, Moderator Posts: 1,291 mod
    On Alibaba? That's new to me; I've only seen them on some other website that didn't seem to have a competent web store. They seem a bit expensive for what they are...


    Couldn't find much discussion about them except for this similar product, which seems positive. Anyway, I think I'll get one just to try it out.

    I believe the only heavy traffic to the GPUs is loading up the DAGs when mining starts, so that might take longer
    NOTICE: Please don't use it for the NVIDIA GeForce GTX 750 Ti because it isn't supported.
    Well, thank god I had to sell all mine! ;) Seems a bit specific. Perhaps not enough on the power rails, but I wouldn't have thought so?
  • dlehenky Member Posts: 2,249 ✭✭✭✭
    @JBaETH The drivers won't recognize more than 6 GPUs, so I would see limited situations where this might be useful. I see a chip on there, so if that is actually a bus extender interface, as used on Extended ATX MBs, it might actually work well. I have no experience with it, myself. I did purchase my x1-x16 risers from AliExpress, and it all went well, although it depends on the particular supplier, of course. YMMV.
  • JBaETH Member Posts: 40
    dlehenky said:

    @JBaETH The drivers won't recognize more than 6 GPUs, so I would see limited situations where this might be useful.

    A use-case would be a mainboard with only one x1 slot.
    dlehenky said:

    I see a chip on there, so if that is actually a bus extender interface, as used on Extended ATX MBs, it might actually work well.

    You see, this is the part where I struggle. How can you put 3 litres of soda in a 1 litre bottle?
  • Rayhan Member Posts: 111
    @JBaETH did you buy this riser?
    I'm thinking of buying one, but I don't know whether the motherboard or the riser itself supports 3 GPUs on a PCI-e x1 slot.
  • Jukebox Member Posts: 640 ✭✭✭
    edited March 2016
    JBaETH said:


    I think it's technically not possible to simultaneously run 3 GPUs on one x1 slot. What do you guys think?

    LOL. A USB 3.0 hub is technically possible, but a PCI-E hub isn't? Same speeds, same serial interface - no problem at all.

    The only question is video drivers, but that's not a question for this PCI-E hub.
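
    A quick sketch of the raw numbers behind that comparison (nominal spec figures only, assuming 8b/10b encoding on both links; an illustration, not a measurement):

    ```python
    # Nominal usable bandwidth of a USB 3.0 port vs. a single PCIe 1.x lane.
    # Both links use 8b/10b encoding, so 80% of the raw bit rate carries data.
    def usable_mb_per_s(gigatransfers_per_s: float, efficiency: float = 8 / 10) -> float:
        bits_per_s = gigatransfers_per_s * 1e9 * efficiency
        return bits_per_s / 8 / 1e6  # bits -> bytes -> MB

    print(f"PCIe 1.x x1 (2.5 GT/s): ~{usable_mb_per_s(2.5):.0f} MB/s")  # ~250 MB/s
    print(f"USB 3.0     (5.0 Gb/s): ~{usable_mb_per_s(5.0):.0f} MB/s")  # ~500 MB/s
    ```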
  • Jukebox Member Posts: 640 ✭✭✭


    I believe the only heavy traffic to the GPUs is loading up the DAGs when mining starts, so that might take longer

    One PCI-E (ver. 1) lane has 250 MB/sec of bandwidth. 2 GB - up to 10 seconds. Faster than I type this message. :)
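
    To put rough numbers on that (a back-of-the-envelope sketch; the 2 GB DAG size and 250 MB/s lane figure are taken straight from the comment above, and the serialised worst case for three cards is my own assumption):

    ```python
    # How long loading the DAG over one shared PCIe 1.x x1 lane might take.
    dag_size_mb = 2 * 1024        # ~2 GB DAG, per the figure quoted above
    lane_bandwidth_mb_s = 250     # PCIe 1.x, single x1 lane
    gpus_on_hub = 3               # cards hanging off the one upstream lane

    per_gpu_s = dag_size_mb / lane_bandwidth_mb_s
    worst_case_s = per_gpu_s * gpus_on_hub  # if the DAGs load one after another

    print(f"One GPU: ~{per_gpu_s:.0f} s; three GPUs back-to-back: ~{worst_case_s:.0f} s")
    ```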

  • o0ragman0o Member, Moderator Posts: 1,291 mod
    Did a bit of research on this parallel thread regarding these 1>3 cards
  • Jukebox Member Posts: 640 ✭✭✭
    This hub is useful when you have only a small number of PCI-e slots onboard.

    No need to buy new mo-bo.
  • JBaETH Member Posts: 40
    Jukebox said:

    This hub is useful when you have only a small number of PCI-e slots onboard.

    No need to buy new mo-bo.

    Hey Jukebox, thanks for your comments.
    Do you really think it's worth buying for a Gigabyte P35-DS3P mobo?
    I have this board with 4 cards running smoothly, I want to add 2 more.
  • ilia7777 Member Posts: 113
    I don't know about the drivers, but in theory a mobo should work with any number of peripherals, provided there are enough chip-select lines. Any motherboard has a bus; for the sake of simplification let's assume it's shared by all. The way it works is the following: all devices (say graphics cards) sit and listen to the bus. Each of them has a unique code assigned, called a chip select. When the CPU wants to address a particular peripheral it issues that chip select, and only the peripheral with the matching chip select will respond. The limitation could be the number of chip-select lines. For example, if there are 4 lines the maximum chip-select code is 1111 binary, so 4 lines would support 16 peripherals. This is what I remember from the old days when I was on the software team at Orckit Communications. It was actually our team that built the first DSL modem in the world :)
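
    As a toy illustration of that encoded chip-select idea (the line counts are arbitrary examples, not tied to any real board):

    ```python
    # With n chip-select lines driven as a binary code, up to 2**n peripherals
    # can be addressed; 4 lines give the 1111 (16-device) example above.
    def max_peripherals(select_lines: int) -> int:
        return 2 ** select_lines

    for n in (1, 2, 3, 4):
        print(f"{n} line(s) -> up to {max_peripherals(n)} peripherals")
    ```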
  • ilia7777 Member Posts: 113
    If there is a limitation on the number of graphics cards it's most likely an OS limitation and not hardware. While it's not possible to build an ASIC to mine Ethereum, it's definitely possible to build a board which would support 20 GPUs.
  • o0ragman0o Member, Moderator Posts: 1,291 mod
    ilia7777 said:

    This is what I remember from the old days when I was on the software team at Orckit Communications.

    @ilia7777, Ahh the good ol' days.... err in these here new days things are a whole lot more complex.

    I think calling PCIe a 'bus' is a misnomer. A bus is a set of data, address and control lines shared by a number of devices, each of which can be individually selected to listen to or control the bus lines. This is distinct from an interface, which is point to point between a device and a controller, or between two devices. So think of the old 25-pin parallel bus: it shared data and address lines with other internal devices like sound cards, video cards, etc. In contrast, RS-232 is a serial interface that requires a UART between the bus and the serial device.

    PCIe is a collection of point-to-point serial interface wire pairs called lanes, each having an endpoint and a controller, in a hub-and-spoke topology. The controller might be an Intel CPU, chipset or (as I've just learnt) a PCIe switch. The bus'ishness of PCIe is that lanes can be bonded into x1, x4, x8 or x16 links.

    So for our study, GPUs exclusively use PCIe, where there isn't anything like chip select, and we are limited by the number of lanes routed out to our board.

    The OP's device, of which I have two on order, is a PCIe switch and, as best as I've been able to tell, furnishes two additional PCIe x1 slots (as one on the mobo had to be sacrificed to connect it).

    Daisy chaining or fanning out these might remove the theoretical limit on how many lanes you can have, but no doubt you'll quickly hit some deeper barrier to GPU count. I've read that one such barrier is the BIOS, which can only see 2 GB, and that each GPU requires 256 MB, which limits out at 8 GPUs. I'll have a total of 10 slots and so will be looking to test this limit, though I have read it was broken by a kernel hack to correct the BIOS data. Above that would be OS and driver limitations.
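
    A small sketch of that arithmetic and a way to check what actually gets enumerated (the 2 GB / 256 MB figures are the ones quoted above, not verified; the sysfs scan assumes the rig runs Linux):

    ```python
    # The quoted BIOS limit: 2 GB of mappable PCI address space at 256 MB per GPU.
    bios_mappable_mb = 2 * 1024
    per_gpu_bar_mb = 256
    print("Max GPUs by this estimate:", bios_mappable_mb // per_gpu_bar_mb)  # -> 8

    # Count how many display-class PCI devices the system actually enumerated
    # (PCI class code 0x03xxxx = display controller).
    import glob

    gpus = 0
    for class_file in glob.glob("/sys/bus/pci/devices/*/class"):
        with open(class_file) as f:
            if f.read().strip().startswith("0x03"):
                gpus += 1
    print("Display controllers enumerated:", gpus)
    ```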


  • ethfan Member Posts: 458 ✭✭✭
    If the adapter actually manages active multiplexing there is no reason to see any limit. I await your test results with bated breath... Almost placed an order myself!
  • dlehenky Member Posts: 2,249 ✭✭✭✭
    @o0ragman0o @ethfan Isn't anyone concerned about the power requirements? Monolithic power supplies beyond 1300w get pricey. Tying multiple PSUs together is problematic. The system cost per GPU on a 6 GPU rig is low, especially the way most miners build them, so getting a higher GPU density per system isn't going to amount to much. You'll still have the increased cost/complexity of the PSU, regardless. I guess I don't understand what problem you're trying to solve, although sheer curiosity is always reason enough to do it.
  • o0ragman0o Member, Moderator Posts: 1,291 mod
    @dlehenky, Certainly the curiosity of exploring technical and economic horizons is a big one. Even with 2nd hand gear a system is still going to cost $100 ~ $200, but obviously if someone just trying to get a look in already has a slot-limited mobo, then $30 could let them realise a 6 GPU rig.

    Depending on what limits turn up, such as the 8 GPU BIOS limit, I'll be running six 370s on a 1200 W (Silverstone Strider Gold) and the system with another 2 GPUs on an 850 W Gamemax.
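
    For what it's worth, a very rough power-budget sketch for that split (the ~150 W per-card draw is an assumption for a 370 under an Ethash load, not a measured figure, and the 100 W system overhead is likewise a guess):

    ```python
    # Rough PSU loading estimate; tweak the assumed per-card and system draw.
    def psu_load(psu_watts, gpu_count, watts_per_gpu=150, system_watts=0):
        load = gpu_count * watts_per_gpu + system_watts
        return load, load / psu_watts

    load, util = psu_load(1200, gpu_count=6)                   # six 370s on the 1200 W Strider
    print(f"1200 W PSU: {load} W drawn ({util:.0%} loaded)")

    load, util = psu_load(850, gpu_count=2, system_watts=100)  # system + two more cards on the 850 W
    print(f" 850 W PSU: {load} W drawn ({util:.0%} loaded)")
    ```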

    If I find what the actual kernel hack was, or some other method (which I have low confidence of doing), then I'll try 10 GPUs with 2 of the Silverstones (and then I'm well on my way to concentrating a centralised point of failure ;) )

    I wasn't aware of any problems binding PSUs. What's your experience?
  • dlehenky Member Posts: 2,249 ✭✭✭✭
    @o0ragman0o I have no personal experience, because I won't do it. From what I've been told by electrical types, it's an iffy practice, at best. You can develop feedback between the two supplies, and it's difficult to get a constant output from them. Electrical fires and equipment damage are also possible. It certainly isn't UL tested :) I'm not saying you *will* have a problem, but the potential is there.
  • o0ragman0o Member, Moderator Posts: 1,291 mod
    @dlehenky, and yet it's such common practice.

    The most significant danger would be from unbonded earths. This can mean the ground potentials differ between PSUs, leading to current shunting and differing regulated voltages being pushed into the same system circuitry.

    The PSU couplers bind the ground planes, which leaves only the hazard of voltage regulation quality from each PSU. If both PSUs are the same model, and of decent quality, I don't see why there wouldn't be a tolerable/safe match (meaning I need to rethink bonding my Silverstone and Gamemax).
  • ethfan Member Posts: 458 ✭✭✭
    I have been thinking about the possible bios/OS limits. It may be that the system "sees" all three GPUs in the extension as just one GPU. As I said this is possible if they share just one lane in a round-robin manner. So GPU1 gets so many microseconds of bandwidth, followed by GPU2, then GPU3 and back to GPU1, etc.

    Yes, plain monkey curiosity. :smiley:
  • Cryptux Member Posts: 118 ✭✭
    I just ordered one to test, will report back if it works.
  • ilia7777 Member Posts: 113

    o0ragman0o said:

    I think calling PCIe a 'bus' is a misnomer. [...] So for our study, GPUs exclusively use PCIe, where there isn't anything like chip select, and we are limited by the number of lanes routed out to our board.


    Read this https://en.wikipedia.org/wiki/PCI_configuration_space

    It talks about a maximum of 32 devices for the 8-bit PCI bus, so there is a bus and it works pretty much like I described. Point to point, one at a time, is a typical bus operation :smile: You're just naming the same thing in a different way.
  • patrik2 Member Posts: 156 ✭✭
    Cryptux said:

    I just ordered one to test, will report back if it works.

    @Cryptux Please keep us informed if it works. I ordered one too, to try to build rigs with 7-8 GPUs
  • o0ragman0o Member, Moderator Posts: 1,291 mod
    ilia7777 said:


    Read this https://en.wikipedia.org/wiki/PCI_configuration_space

    It talks about a maximum of 32 devices for the 8-bit PCI bus, so there is a bus and it works pretty much like I described. Point to point, one at a time, is a typical bus operation :smile: You're just naming the same thing in a different way.

    I thought we were talking more about comparing hardware architectures...
    Conventional_PCI
    ... It is a parallel bus, synchronous to a single bus clock.
    So with conventional PCI we have many devices sharing the same parallel bus wiring and clock, and subject to clock skew limiting throughput. So it's a 'bus' in the traditional sense of the word.

    With PCIe we have many individual point-to-point serial ports. No bus skew and no shared wiring (except perhaps for two x16 ports which get dropped to x8 if both are used; I imagine the lower x8 lanes are switched off on one slot while the upper x8 are switched off on the other).

    To make things more muddy, there is the concept of 'multi-drop' on a serial line. This is where multiple devices share a serial line and are addressed by a host controller. I've designed and built such a multi-drop arrangement for RS-232 control of up to 42 RC servos.
  • Cryptux Member Posts: 118 ✭✭
    @patrik2 will do , have not received it yet
  • adaseb Member Posts: 1,043 ✭✭✭
    Feedback is good. Only 2 comments
  • ImAMiner? Member Posts: 208 ✭✭
    This guy used 2 four-way splitters for 8 GPUs for cracking passwords

    https://hashcat.net/forum/thread-2622-post-16597.html#pid16597
  • o0ragman0o Member, Moderator Posts: 1,291 mod
    @SteelWeaver, nice. I've bought 2 of these 1>3 switches, which all adds up to 4 extra slots (as you lose 2 from the motherboard), for cheaper than this 1-to-4 which only gives you 3 extra slots. But it's nice to know it exists.

    Still waiting for them to come in. Next week hopefully.
  • patrik2 Member Posts: 156 ✭✭
    I am still waiting for my hubs too!
  • bpvarsity Member Posts: 92
    Eager to hear if anyone has success with these. Would love to add a few more GPUs to my miner.