On Alibaba? That's new to me, I've only seen them on some other website that didn't seem to have a competent web store. They seem a bit expensive for what they are....
Couldn't find much discussion about them but for this similar product which seems positive. Anyway, I think I'll get one just to try it out.
I believe the only heavy traffic to the GPUs is loading up the DAGs when mining starts, so that might take longer
NOTICE: Please don't use it with the NVIDIA GeForce GTX 750 Ti, because that card isn't supported.
Well, thank god I had to sell all mine! Seems a bit specific. Perhaps not enough on the power rails, but I wouldn't have thought so?
@JBaETH The drivers won't recognize more than 6 GPUs, so I would see limited situations where this might be useful. I see a chip on there, so if that is actually a bus extender interface, as used on Extended ATX MBs, it might actually work well. I have no experience with it, myself. I did purchase my x1-x16 risers from AliExpress, and it all went well, although it depends on the particular supplier, of course. YMMV.
This hub is useful when you only have a small number of PCIe slots on board.
No need to buy new mo-bo.
Hey Jukebox, thanks for your comments. Do you really think it's worth buying for Gigabyte P35-DS3P mobo? I have this board with 4 cards running smoothly, I want to add 2 more.
I don't know about the drivers, but in theory a mobo should work with any number of peripherals, provided there are enough chip select lines. Any motherboard has buses; for the sake of simplification, let's assume one shared by all. It works as follows: all devices (say, graphics cards) sit and listen on the bus. Each of them has a unique code assigned, called a chip select. When the CPU wants to address a particular peripheral, it issues that chip select, and only the peripheral with the matching chip select responds. The limitation could be the number of chip select lines. For example, if there are 4 lines, the maximum chip select code would be 1111 binary, so 4 lines would support 16 peripherals. This is what I remember from the old days when I was on the software team at Orckit Communications. It was actually our team that built the first DSL modem in the world.
If there is a limitation on the number of graphics cards, it's most likely an OS limitation and not hardware. While it's not possible to build an ASIC to mine Ethereum, it's definitely possible to build a board which would support 20 GPUs.
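The chip-select arithmetic above is easy to check; a quick sketch (illustrative only, this is not how PCIe itself addresses devices, as later comments point out):

```python
# Each chip-select line is one address bit, so n lines can encode
# 2**n distinct device codes (4 lines -> codes 0000..1111 binary).
def max_peripherals(select_lines: int) -> int:
    return 2 ** select_lines

print(max_peripherals(4))  # -> 16
```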
I think calling PCIe a 'bus' is a misnomer. A bus is a set of data, address and control lines which are shared by a number of devices, each of which can be individually selected to listen to or control the bus lines. This is distinct from an interface, which is point to point between a device and a controller, or between two devices. So think of the old 25-pin parallel bus: it shared data and address lines with other internal devices like sound cards, video cards, etc. In contrast, RS-232 is a serial interface that requires a UART between the bus and the serial device.
PCIe is a collection of point-to-point serial wire pairs called lanes, each having an endpoint and a controller, in a hub-and-spoke topology. The controller might be an Intel CPU, a chipset or (as I've just learnt) a PCIe switch. The 'bus'-ishness of PCIe is that lanes can be bonded into x1, x4, x8 or x16 links.
So for our purposes, GPUs exclusively use PCIe, where there isn't anything like chip select, and we are limited by the number of lanes routed out to the board.
The OP's device, of which I have two on order, is a PCIe switch and as best as I've been able to tell, furnishes two additional PCIe x1 slots (as one on the mobo had to be sacrificed to connect it).
Daisy chaining or fanning out these might remove the theoretical limit on how many lanes you can have, but no doubt you'll quickly hit some deeper barrier to GPU count. I've read that one such barrier is the BIOS, which can only see 2 GB of address space, and that each GPU requires 256 MB, which limits you to 8 GPUs. I'll have a total of 10 slots and so will be looking to test this limit, though I have read it was broken by a kernel hack to correct the BIOS data. Above that would be OS and driver limitations.
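The 8-GPU figure follows directly from that arithmetic; a quick sketch using the quoted numbers (a 2 GB BIOS-visible window and 256 MB per card, both as reported above, not measured):

```python
# Rough BIOS address-space budget. Both figures are the ones quoted
# in the thread, assumed for illustration rather than verified.
BIOS_WINDOW_MB = 2048   # 2 GB visible to the BIOS
PER_GPU_MB = 256        # MMIO aperture claimed per GPU

max_gpus = BIOS_WINDOW_MB // PER_GPU_MB
print(max_gpus)  # -> 8
```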
If the adapter actually manages active multiplexing there is no reason to see any limit. I await your test results with bated breath... Almost placed an order myself!
@o0ragman0o@ethfan Isn't anyone concerned about the power requirements? Monolithic power supplies beyond 1300w get pricey. Tying multiple PSUs together is problematic. The system cost per GPU on a 6 GPU rig is low, especially the way most miners build them, so getting a higher GPU density per system isn't going to amount to much. You'll still have the increased cost/complexity of the PSU, regardless. I guess I don't understand what problem you're trying to solve, although sheer curiosity is always reason enough to do it.
@dlehenky, Certainly the curiosity of exploring technical and economic horizons is a big one. Even with 2nd hand system gear, a system is still going to cost $100 ~ $200, but obviously if someone just trying to get a look in already has a slot-limited mobo, then $30 could let them realise a 6 GPU rig.
Depending on what limits turn up, such as the 8 GPU BIOS limit, I'll be running 6 370s on a 1200 W Silverstone Strider Gold, and the system with another 2 GPUs on an 850 W Gamemax.
If I find what the actual kernel hack was, or some other method (which I have low confidence of doing), then I'll try 10 GPUs with 2 of the Silverstones (and then I'm well on my way to concentrating a centralised point of failure).
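As a sanity check on those PSU sizes (the ~150 W per-GPU draw and ~100 W of system overhead below are my own ballpark assumptions for illustration, not measured figures):

```python
# Ballpark PSU loading for a mining rig. All wattage inputs here
# are illustrative assumptions, not datasheet values.
def psu_load_pct(gpus: int, watts_per_gpu: float,
                 overhead_w: float, psu_w: float) -> float:
    """Percentage of PSU capacity used by the rig."""
    return 100 * (gpus * watts_per_gpu + overhead_w) / psu_w

# 6 GPUs at ~150 W each plus ~100 W overhead on a 1200 W supply.
print(round(psu_load_pct(6, 150, 100, 1200)))  # -> 83
```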
I wasn't aware of any problems binding PSUs. What's your experience?
@o0ragman0o I have no personal experience, because I won't do it. From what I've been told by electrical types, it's an iffy practice, at best. You can develop feedback between the two supplies, and it's difficult to get a constant output from them. Electrical fires and equipment damage are also possible. It certainly isn't UL tested. I'm not saying you *will* have a problem, but the potential is there.
The most significant danger would be from unbonded earths. This can mean the ground potentials differ between PSUs, leading to current shunting and differently regulated voltages being pushed into the same system circuitry.
The PSU couplers bind the ground planes, which leaves only the hazard of voltage regulation quality from each PSU. If both PSUs are the same model, and of decent quality, I don't see why there wouldn't be a tolerable/safe match (meaning I need to rethink bonding my Silverstone and Gamemax).
I have been thinking about the possible bios/OS limits. It may be that the system "sees" all three GPUs in the extension as just one GPU. As I said this is possible if they share just one lane in a round-robin manner. So GPU1 gets so many microseconds of bandwidth, followed by GPU2, then GPU3 and back to GPU1, etc.
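That round-robin sharing idea can be sketched as a toy schedule (purely illustrative; a real PCIe switch arbitrates per packet, not in fixed time slices):

```python
from itertools import cycle

# Toy round-robin arbitration of one upstream lane among three GPUs:
# each device gets a fixed time slice in turn, then the cycle repeats.
gpus = ["GPU1", "GPU2", "GPU3"]
schedule = cycle(gpus)

# First six time slices of lane ownership.
slots = [next(schedule) for _ in range(6)]
print(slots)  # -> ['GPU1', 'GPU2', 'GPU3', 'GPU1', 'GPU2', 'GPU3']
```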
It talks about a 32-device maximum for the 8-bit PCI bus; there is a bus, and it works pretty much like I described. Point to point at one time is typical bus operation. You're just naming the same thing in a different way.
Thought we were talking more of comparing hardware architectures... Conventional_PCI
... It is a parallel bus, synchronous to a single bus clock.
So with conventional PCI we have many devices sharing the same parallel bus wiring and clock, subject to clock skew limiting throughput. So a 'bus' in the traditional sense of the word.
With PCIe we have many individual point-to-point serial ports. No bus skew and no shared wiring (except perhaps for two x16 ports which get dropped to x8 if both are used; I imagine the lower x8 lanes are switched off on one slot while the upper x8 are switched off on the other).
To make things more muddy, there is the concept of 'multi-drop' on a serial line. This is where multiple devices share a serial line and are addressed by a host controller. I've designed and built such a multi-drop arrangement for RS-232 control of up to 42 RC servos.
@SteelWeaver, nice. I've bought two 1-to-3 switches, which all adds up to 4 extra slots (as you lose 2 from the motherboard), for cheaper than this 1-to-4, which only gives you 3 extra slots. But it's nice to know it exists.
Still waiting for them to come in. Next week hopefully.
If it's cheap, try it out and let us know.
I think I'll buy one, but I don't know if the motherboard or the riser itself supports 3 GPUs on a single PCIe x1 slot.
The only open question is the video drivers, but that's not an issue with this PCIe hub itself.
Yes, plain monkey curiosity.
https://hashcat.net/forum/thread-2622-post-16597.html#pid16597
http://www.aliexpress.com/item/2015-new-PCI-e-PCIe-Express-1X-to-4port-1X-to-4X-multiplier-switch-riser-card/1955206310.html