HardwareBanter forum » General Hardware & Peripherals » Homebuilt PC's
PCI express graphics in a 1 lane slot?



 
 
#1  May 25th 18, 12:45 PM, posted to alt.comp.hardware.pc-homebuilt,alt.comp.hardware
Jimmy Wilkinson Knife

PCI express graphics in a 1 lane slot? Yes, I know I can fiddle with extension ribbons, but why have they designed the 1 lane slots to physically not allow longer cards to go into them? Is it to stop the user squashing nearby motherboard components? Obviously it would run slower, but that depends on what you're using the card for. Only games need fast data transfer rates. If you're going to use the extra cards for more displays, scientific computing, or bitcoin mining, they just don't need more than 1 lane, as evidenced by the plethora of adapters available. Hell, you can even split a single lane socket into 4!

--
Attila the Hun died during a bout of rough sex where his partner broke his nose causing a haemorrhage.
#2  May 25th 18, 05:34 PM, posted to alt.comp.hardware.pc-homebuilt,alt.comp.hardware
Paul[_28_]

Jimmy Wilkinson Knife wrote:
PCI express graphics in a 1 lane slot? Yes I know I can fiddle with
extension ribbons, but why have they designed the 1 lane slots to
physically not allow longer cards to go into them? Is it to stop the
user squashing motherboard components which may be nearby? Obviously it
would run slower, but that depends on what you're using the card for.
Only games need fast data transfer rates. If you're going to use the
extra cards for more displays, scientific computing, or bitcoin mining,
they just don't need more than 1 lane, as evidenced by the plethora of
adapters available. Hell you can even split a single lane socket into 4!


They make connectors which are open on one end, and allow
cards larger than x1 to fit in an x1 slot.

https://en.wikipedia.org/wiki/File:P...d_IMG_1820.JPG

MSI had some motherboards with x4 slots that did this.
The slot was yellow in color. Likely the same color as the
one in that picture.

What's missing is a heel clamp for a 10" long card.
It's easy to keep a HHHL (half-height, half-length) card
in place using only a faceplate screw. What will keep a
10" card secure when the only "foot" it's got is an x1
connector body?

Nothing prevents x1 wiring being used behind an x16 slot,
so they could use that technique. It's just a material
cost issue (the larger connector costs more).

As for PCIe switching, muxing and routing, the company
that used to make those at reasonable cost got bought
out. I gather the price of the components went up,
because "bifurcated" configurations seem to have
disappeared from motherboards. Finding four chips next
to an x16 slot is now "less common". Those chips were used
to route x16,x0 or x8,x8 lane patterns, based on the
card presence signal.

Paul
#3  May 25th 18, 06:01 PM, posted to alt.comp.hardware.pc-homebuilt,alt.comp.hardware
Jimmy Wilkinson Knife

On Fri, 25 May 2018 17:34:43 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
PCI express graphics in a 1 lane slot? Yes I know I can fiddle with
extension ribbons, but why have they designed the 1 lane slots to
physically not allow longer cards to go into them? Is it to stop the
user squashing motherboard components which may be nearby? Obviously it
would run slower, but that depends on what you're using the card for.
Only games need fast data transfer rates. If you're going to use the
extra cards for more displays, scientific computing, or bitcoin mining,
they just don't need more than 1 lane, as evidenced by the plethora of
adapters available. Hell you can even split a single lane socket into 4!


They make connectors which are open on one end, and allow
cards larger than x1 to fit in an x1 slot.

https://en.wikipedia.org/wiki/File:P...d_IMG_1820.JPG

MSI had some motherboards with x4 slots that did this.
The slot was yellow in color. Likely the same color as the
one in that picture.


That would be useful. Better to have a card that's held a bit loosely than one that won't fit at all. And I don't fancy sawing off part of a connector while it's on the board! I'll stick to using adapters and extension ribbons.

What particularly interests me is something I saw on eBay from China which claims to plug into an x1 slot and provide four x1 slots. Can you actually multiplex these things? I thought a lane was a lane.
This suggests you can, just like with a network switch:
https://superuser.com/questions/8949...google_rich_qa
I wonder how many GPUs could work at once. Giving them the physical space and the PCI express connectors isn't a problem with those extensions and adapters, but I wonder if the drivers would get confused, or Windows, or I'd run out of BIOS address space? The most I can find is a bitcoin mining rig with 19 cards, on a special motherboard by Asus I think.
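(As an aside, on a Linux box you can at least see how many display controllers actually enumerated behind all the switches by walking sysfs. A rough sketch, assuming the standard /sys/bus/pci layout; 0x03 is the PCI display-controller base class:)

import os

PCI_DEVICES = "/sys/bus/pci/devices"

def list_gpus():
    # Collect every PCI function whose base class is 0x03 (display controller).
    gpus = []
    for dev in sorted(os.listdir(PCI_DEVICES)):
        with open(os.path.join(PCI_DEVICES, dev, "class")) as f:
            dev_class = int(f.read().strip(), 16)
        if (dev_class >> 16) == 0x03:
            gpus.append(dev)
    return gpus

if __name__ == "__main__":
    gpus = list_gpus()
    print(len(gpus), "display controller(s) enumerated:")
    for g in gpus:
        print(" ", g)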

What's missing, is a heel clamp for a 10" long card.
It's easy to keep a HHHL card in place, using only
a faceplate screw. What will keep a 10" card secure
when the only "foot" it's got is an x1 connector body ?


I remember when AGP graphics cards first came out. At that time most computers were desktops rather than towers, so gravity kept the cards in place, although a few did pop out before they added the retention heel to the design, usually when the user had been clumsy.

One case I remember: my boss at the time was about to make an exceedingly nasty complaint about one of the IT guys because, while installing some software or fixing some minor problem or other, he'd left the whole machine unable to start, and my boss thought he'd lost some important files. When I switched it on it beeped or did something I immediately recognised. I took the lid off, pushed the graphics card back down a bit, and started it up. The guy was lucky he was on a lunch break, or he might have been physically abused by my angry boss, especially as I don't think he was bright enough to have figured out what had happened.

Nothing prevents x1 wiring being used on an x16 slot,
so they could use that technique. That's a material
cost issue.


And more likely there's no room, as there are components where the x16 slot would go.

As for PCIe switching, muxing and routing, the company
that used to make those at reasonable cost, got bought
out. I gather the price of the components went up,
because "bifurcated" configurations seem to have
disappeared from motherboards. Finding four chips next
to a x16 slot is now "less common". Those chips were used
to route x16,x0 or x8,x8 lane patterns, based on the
card presence signal.


Another thing that surprises me is that this brand-new board I just bought has a PCI slot!
https://www.gigabyte.com/Motherboard...HD3P-rev-10#kf
Does anyone really use those now?
It also has headers (but no rear connectors) for serial and parallel ports (COM and LPT)!
I guess this board is also aimed at electronics engineers who need to connect simple circuitry?

--
If a person with multiple personalities threatens suicide, is that person considered a hostage situation?
#4  May 25th 18, 06:21 PM, posted to alt.comp.hardware.pc-homebuilt,alt.comp.hardware
Jimmy Wilkinson Knife

On Fri, 25 May 2018 18:01:47 +0100, Jimmy Wilkinson Knife wrote:

On Fri, 25 May 2018 17:34:43 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
PCI express graphics in a 1 lane slot? Yes I know I can fiddle with
extension ribbons, but why have they designed the 1 lane slots to
physically not allow longer cards to go into them? Is it to stop the
user squashing motherboard components which may be nearby? Obviously it
would run slower, but that depends on what you're using the card for.
Only games need fast data transfer rates. If you're going to use the
extra cards for more displays, scientific computing, or bitcoin mining,
they just don't need more than 1 lane, as evidenced by the plethora of
adapters available. Hell you can even split a single lane socket into 4!


They make connectors which are open on one end, and allow
cards larger than x1 to fit in an x1 slot.

https://en.wikipedia.org/wiki/File:P...d_IMG_1820.JPG

MSI had some motherboards with x4 slots that did this.
The slot was yellow in color. Likely the same color as the
one in that picture.


That would be useful. Better to have a card that's not quite tight than one that won't fit. And I don't fancy sawing off part of a connector while it's on the board! I'll stick to using adapters and extension ribbons.

What particularly interests me is something I saw on Ebay from China which claims to connect to an x1 slot and produce four x1 slots. Can you actually multiplex these things? I thought a lane was a lane.
This suggests you can, just like with a network switch:
https://superuser.com/questions/8949...google_rich_qa
I wonder how many GPUs could work at once - giving them the physical space and the PCI express connectors isn't a problem with those extensions and adapters, but I wonder if the drivers would get confused, or Windows, or I'd run out of BIOS address space? The most I can find is a bitcoin mining rig with 19 cards, and a special motherboard by Asus I think.


I've just done a calculation: I could run 35 Radeon HD 7970 cards on an i7-8700K before running out of CPU power to feed them on Einstein@Home. I can't find anything on address space though. Does every device nowadays just get memory-mapped into the address space? We don't have limits on interrupts and stuff nowadays, do we?
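(For what it's worth, that back-of-envelope works out something like the sketch below. The CPU share per GPU task is purely an assumed number to plug your own figure into, not anything from Einstein@Home documentation:)

cpu_threads = 12          # i7-8700K: 6 cores / 12 threads
cpu_share_per_gpu = 0.34  # assumed fraction of one thread needed to feed each GPU task

max_gpus = int(cpu_threads / cpu_share_per_gpu)
print(max_gpus)           # about 35 with these guessed inputs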

--
The reason people sweat is so that they won't catch fire when having sex.
#5  May 25th 18, 06:57 PM, posted to alt.comp.hardware.pc-homebuilt,alt.comp.hardware
Paul[_28_]

Jimmy Wilkinson Knife wrote:
On Fri, 25 May 2018 18:01:47 +0100, Jimmy Wilkinson Knife
wrote:

On Fri, 25 May 2018 17:34:43 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
PCI express graphics in a 1 lane slot? Yes I know I can fiddle with
extension ribbons, but why have they designed the 1 lane slots to
physically not allow longer cards to go into them? Is it to stop the
user squashing motherboard components which may be nearby?
Obviously it
would run slower, but that depends on what you're using the card for.
Only games need fast data transfer rates. If you're going to use the
extra cards for more displays, scientific computing, or bitcoin mining,
they just don't need more than 1 lane, as evidenced by the plethora of
adapters available. Hell you can even split a single lane socket
into 4!

They make connectors which are open on one end, and allow
cards larger than x1 to fit in an x1 slot.

https://en.wikipedia.org/wiki/File:P...d_IMG_1820.JPG


MSI had some motherboards with x4 slots that did this.
The slot was yellow in color. Likely the same color as the
one in that picture.


That would be useful. Better to have a card that's not quite tight
than one that won't fit. And I don't fancy sawing off part of a
connector while it's on the board! I'll stick to using adapters and
extension ribbons.

What particularly interests me is something I saw on Ebay from China
which claims to connect to an x1 slot and produce four x1 slots. Can
you actually multiplex these things? I thought a lane was a lane.
This suggests you can, just like with a network switch:
https://superuser.com/questions/8949...google_rich_qa

I wonder how many GPUs could work at once - giving them the physical
space and the PCI express connectors isn't a problem with those
extensions and adapters, but I wonder if the drivers would get
confused, or Windows, or I'd run out of BIOS address space? The most
I can find is a bitcoin mining rig with 19 cards, and a special
motherboard by Asus I think.


Just done a calculation. I could run 35 Radeon HD 7970 cards on an
i7-8700K before I ran out of CPU power to assist them on Einstein@Home.
I can't find anything on address space though. Does every device
nowadays just get mapped to main memory? We don't have limits on
interrupts and stuff nowadays do we?


The PCIe MSI (Message Signaled Interrupt) scheme should
have plenty of number space for this. Interrupts are
delivered in-band, as packets, rather than on dedicated pins.

But I have seen some suggestions of driver-level limits
on the maximum number of GPUs per system. I'm not a coin
miner, so I don't know the details. Since the maximum GPU
count is really just an addressing issue, you might expect
to see some differences between the various OSes on the topic.

And the other chip function you were describing,
converting one incoming x1 lane, into four
outgoing x1 lanes, that's a "PCIe Switch chip".
It routes the packets to the appropriate port,
according to the address bits in the packet and
the mapping at setup. Switches have already been
used on motherboards in the past, for fixing up
small issues with x1 routing (more slots than
lanes, that sort of thing).

Larger switch chips allow x16 in, with
x8 out, x4 out, and 4 by x1 out.

And nothing prevents oversubscription either. NVidia,
for example, did an x16 in with two x16 out, and the
x16 in might have been overclocked.

In the picture here, the x16 link between the 780a and
the NForce200 is overclocked, so that the x32 of lanes
on the right actually deliver a bit more bandwidth.
The right-hand side of the NForce200 could be connected
to two x16 or four x8, as examples of what that switch
could be used for. Since the PCIe bus between the 780a
and the NForce200 is "private", they can run it at any
clock rate the two pieces of hardware can tolerate. As
long as the switch has some small buffers to handle the
clock rate difference, the thing can be made to work.
The NForce200 can't actually keep the links filled when
driving to the right - there might be x24 worth of
bandwidth spread over x32 of lanes, at a guess.

https://www.bit-tech.net/reviews/tec...sli_preview/6/

Muxes are used for bifurcation and signal regeneration.

Switches are used for the other cases.

And a switch can go from one x1 in, to four x1 out,
but obviously on average, each outgoing link can only
be 1/4 full.
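
(Putting rough numbers on that, using the usual approximate PCIe 2.0 per-lane rate after 8b/10b encoding overhead:)

# A 1-in, 4-out switch shares the upstream link's bandwidth across its ports.
upstream_mb_s = 500.0      # ~PCIe 2.0 x1 per direction, after encoding overhead
downstream_ports = 4
print(upstream_mb_s / downstream_ports)   # ~125 MB/s per port if all four are busy at once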

Switches are also used in cases where "gear changing"
is required. If you have a modern PCIe Rev.3 to USB 3.1
10 Gbit/sec chip which uses x2 lanes, you could use a
PCIe Rev.2 x4 to PCIe Rev.3 x2 switch chip to allow
full-speed operation on a Rev.2 motherboard. This
compensates for the fact that the USB 3.1 chip maker was
too cheap to put x4 worth of lanes on the chip, to help
older motherboard owners.
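
(The arithmetic behind that "gear change", using the usual approximate per-lane rates after encoding overhead:)

gen2_per_lane = 500.0   # MB/s per direction, PCIe 2.0 (5 GT/s, 8b/10b)
gen3_per_lane = 985.0   # MB/s per direction, PCIe 3.0 (8 GT/s, 128b/130b)
print(4 * gen2_per_lane)   # ~2000 MB/s on the Rev.2 x4 side
print(2 * gen3_per_lane)   # ~1970 MB/s on the Rev.3 x2 side - close enough to be a fair trade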

But with Broadcom acquiring the company that made a lot
of that stuff, expect a shakeout and a reduction in
innovation. The price will go up, and some neat ideas
will go out the window. (There's no reason to buy a
company unless you have a "plan" for how to milk more
money from it. Whether it's for the patents or for the
people, who knows. The company wasn't bought so we could
have "even cheaper PCIe switches".)

Paul
#6  May 25th 18, 07:30 PM, posted to alt.comp.hardware.pc-homebuilt,alt.comp.hardware
Jimmy Wilkinson Knife

On Fri, 25 May 2018 18:57:27 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
On Fri, 25 May 2018 18:01:47 +0100, Jimmy Wilkinson Knife
wrote:

On Fri, 25 May 2018 17:34:43 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
PCI express graphics in a 1 lane slot? Yes I know I can fiddle with
extension ribbons, but why have they designed the 1 lane slots to
physically not allow longer cards to go into them? Is it to stop the
user squashing motherboard components which may be nearby?
Obviously it
would run slower, but that depends on what you're using the card for.
Only games need fast data transfer rates. If you're going to use the
extra cards for more displays, scientific computing, or bitcoin mining,
they just don't need more than 1 lane, as evidenced by the plethora of
adapters available. Hell you can even split a single lane socket
into 4!

They make connectors which are open on one end, and allow
cards larger than x1 to fit in an x1 slot.

https://en.wikipedia.org/wiki/File:P...d_IMG_1820.JPG


MSI had some motherboards with x4 slots that did this.
The slot was yellow in color. Likely the same color as the
one in that picture.

That would be useful. Better to have a card that's not quite tight
than one that won't fit. And I don't fancy sawing off part of a
connector while it's on the board! I'll stick to using adapters and
extension ribbons.

What particularly interests me is something I saw on Ebay from China
which claims to connect to an x1 slot and produce four x1 slots. Can
you actually multiplex these things? I thought a lane was a lane.
This suggests you can, just like with a network switch:
https://superuser.com/questions/8949...google_rich_qa

I wonder how many GPUs could work at once - giving them the physical
space and the PCI express connectors isn't a problem with those
extensions and adapters, but I wonder if the drivers would get
confused, or Windows, or I'd run out of BIOS address space? The most
I can find is a bitcoin mining rig with 19 cards, and a special
motherboard by Asus I think.


Just done a calculation. I could run 35 Radeon HD 7970 cards on an
i7-8700K before I ran out of CPU power to assist them on Einstein@Home.
I can't find anything on address space though. Does every device
nowadays just get mapped to main memory? We don't have limits on
interrupts and stuff nowadays do we?


The PCIe MSI in-band interrupts should have plenty
of number space for this. Interrupts can be sent
as packets.

But I have seen some suggestions of limitations
at the driver level for max_GPU per system. I'm
not a coin miner, so I don't know the details of this.
Since max_GPU is just an addressing issue, you might
expect to see some differences between the various
OSes on the topic.


DirectX has a limit of four. But OpenCL and CUDA compute functions don't have a limit that I know of.
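
(Easy enough to check what a compute runtime will actually expose; a minimal sketch using the pyopencl bindings, assuming they're installed, which just prints every GPU each OpenCL platform enumerates:)

import pyopencl as cl

# Print every GPU device each installed OpenCL platform reports.
for platform in cl.get_platforms():
    gpus = platform.get_devices(device_type=cl.device_type.GPU)
    print(platform.name, "->", len(gpus), "GPU(s)")
    for dev in gpus:
        print("  ", dev.name)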

And the other chip function you were describing,
converting one incoming x1 lane, into four
outgoing x1 lanes, that's a "PCIe Switch chip".
It routes the packets to the appropriate port,
according to the address bits in the packet and
the mapping at setup. Switches have already been
used on motherboards in the past, for fixing up
small issues with x1 routing (more slots than
lanes, that sort of thing).

Larger switch chips allow x16 in, with
x8 out, x4 out, and 4 by x1 out.


Jolly good. Will have a play at some point when I get too many cards for the number of computers I have :-)

And nothing prevents oversubscription either. NVidia
for example, did an x16 in with two x16 out.
And the x16 in, might have been overclocked.

In the picture here, the x16 between the 780a and
the NForce200 is overclocked, so that the x32 lanes
on the right actually deliver a bit more bandwidth
on the x32 side. The right hand side of NForce200
could be connected to two x16 or four x8, as examples
of what that switch could be used for. Since the
PCIe bus between the 780a and the NForce200 is
"private", they can run it at any clock rate
the two hardwares can "tolerate". As long as the
switch has some small buffers to handle the clock rate
difference, the thing can be made to work. The Nforce200
can't actually keep the links filled when driving to
the right - there might be x24 of bandwidth spread
over x32 of lanes at a guess.

https://www.bit-tech.net/reviews/tec...sli_preview/6/

Muxes are used for bifurcation and signal regeneration.

Switches are used for the other cases.

And a switch can go from one x1 in, to four x1 out,
but obviously on average, each outgoing link can only
be 1/4 full.

Switches are also used in cases where "gear changing"
is required. If you have a modern PCIe rev3 to USB3.1 10Gbit/sec
chip, which uses x2 lanes, you could use a PCIe Rev.2 x4
to PCIe Rev.3 x2 chip to allow full speed operation on
a Rev.2 motherboard. This compensates for the fact
that the USB 3.1 company was too cheap to put x4 worth
of lanes on the chip, to help older motherboard owners.

But with Broadcom acquiring the company that made a lot
of that stuff, expect a shakeout and a reduction in
innovation. The price will go up, and some neat ideas
will go out the window. (There's no reason to buy a
company, unless you have a "plan" how to milk more
money from it. Whether it's for the patents, or
for the people, who knows. The company wasn't bought
so we could have "even cheaper PCIe switches".)


I hate it when things get in the way of innovation. Modern GPUs, for example, have been trimmed down so they're really fast for games using single-precision floating point and **** at everything else. Anyone wanting to use double precision is better off buying no-longer-produced second-hand cards, like the Radeon HD 7970 for £70 in the UK: three times faster at double precision than a brand-new RX 580 for £200, and it only uses about 35% more electricity as well.
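
(That gap comes from the FP64:FP32 ratio the vendor picks per chip. Roughly, with approximate published peak figures that may be a little off:)

# Older Tahiti cards run FP64 at 1/4 of FP32; newer Polaris cards at only 1/16.
# The FP32 figures below are approximate published peaks, for illustration only.
hd7970_fp32, hd7970_fp64_ratio = 3.8e12, 1 / 4
rx580_fp32,  rx580_fp64_ratio  = 6.2e12, 1 / 16
print(hd7970_fp32 * hd7970_fp64_ratio)   # ~0.95 TFLOPS FP64
print(rx580_fp32  * rx580_fp64_ratio)    # ~0.39 TFLOPS FP64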

--
The success of the "Wonder Bra" for under-endowed women has encouraged the designers to come out with a bra for over-endowed women.
It's called the "Sheep Dog Bra"- it rounds them up and points them in the right direction.
 



