Wires wires wires
#11
RE: Wires wires wires
(05-08-2020, 08:12 PM)jan-jaap Wrote:  It's just about impossible to find SC-MIC fibers for halfway reasonable prices.

If 10 feet (3 meters) is long enough, there are some (mis-labelled) ones on eBay for $9.95 USD:  item 222839306054

If you need longer, 15-meter ones are available for $25 USD:  item 293015574554

(Playing with FDDI is on my "to do" list, too!)

SGI:  Indigo, Indigo2, Octane, Origin 300
Sun:  SPARCstation 20 (x4), Ultra 2, Blade 2500, T5240
HP:  9000/380, 425e, C8000
Digital: DECstation 5000/125, PWS 600au
jpstewart
Developer

Trade Count: (1)
Posts: 444
Threads: 6
Joined: May 2018
Location: SW Ontario, CA
05-08-2020, 10:59 PM
#12
RE: Wires wires wires
JJ, thanks for posting more!

Can I ask you why you chose to run FDDI? Did you have a whole bunch of interface cards for various systems and wanted to make use of them for fun? Or is there some other reason you chose to use it?

“The future is already here – it's just not evenly distributed.”
    The Economist, December 4, 2003
            ―William Gibson

Onyx2 Octane Octane O2 Indigo2 R10000/IMPACT Indy Indy
ghost180sx
Now-MIPS-Powered

Trade Count: (0)
Posts: 110
Threads: 6
Joined: Dec 2018
Location: The Great White North
05-09-2020, 05:17 AM
#13
RE: Wires wires wires
(05-09-2020, 05:17 AM)ghost180sx Wrote:  JJ, thanks for posting more!

Can I ask you why you chose to run FDDI? Did you have a whole bunch of interface cards for various systems and wanted to make use of them for fun? Or is there some other reason you chose to use it?

Many of the older systems have only 10Mb/s ethernet. You can get a 100Mb/s card for the Onyx, but everybody knows it's crap. The 3Com card for the Indigo2 is crap (definition of crap: 7MB/s with the system unresponsive due to near-100% INTR load). At the same time, decent FDDI options exist for these systems, and they generally run at wire speed with near-zero load on the system. It's simply better (for the old systems).

Several of my old PowerSeries had FDDI cards installed, so when I had the opportunity to get a concentrator (FDDI lingo for a switch) plus a shopping bag full of fibre cables for a small fee, it was the logical thing to do.

The tricky part of FDDI isn't the FDDI network itself, but how to interface it to the rest of your LAN. You need something with at least a 100Mb/s ethernet interface plus an FDDI interface, of course. Apparently some 3Com core switches qualified, but core switches are nasty, loud, power-hungry buggers that have no place in a home network. I'm using a tiny ITX Linux PC with a PCI slot for the FDDI card as a dedicated router. PCI slots in PCs are disappearing, so this dedicated box is reasonably future-proof. The inevitable result is a segmented LAN with multiple internal IP ranges, though. I already had this because I use VLANs and IP ranges dedicated to classes of trust (IoT, SGI, LAN, ...).
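
For reference, a dedicated Linux router like this boils down to surprisingly little configuration: give each interface an address and enable IP forwarding. The following is a rough sketch only, not jan-jaap's actual setup; the interface names (eth0, fddi0) and the 192.168.x.x subnets are illustrative assumptions.

    #!/usr/bin/env python3
    # Minimal sketch of a two-interface Linux router (ethernet <-> FDDI).
    # Interface names and addresses are illustrative assumptions; run as root.
    import subprocess

    def run(cmd):
        # Run one configuration command; raise if it fails.
        subprocess.run(cmd, shell=True, check=True)

    # One subnet per segment, one interface per segment.
    run("ip addr add 192.168.1.1/24 dev eth0")   # ethernet side
    run("ip addr add 192.168.2.1/24 dev fddi0")  # FDDI side
    run("ip link set dev eth0 up")
    run("ip link set dev fddi0 up")

    # Tell the kernel to forward IPv4 packets between the interfaces.
    with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
        f.write("1\n")

Hosts on the FDDI segment then point their default route at 192.168.2.1 (and the rest of the LAN needs a route back to 192.168.2.0/24), which is where the segmented LAN with multiple internal IP ranges comes from.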


The earlier messages in this thread refer to pictures, but without the file names it's not possible to restore them. Here are some new ones.

I started out in a bedroom with a dozen or so systems, then added some more big iron in the garage. I had the usual bundles of cable on the floor, with some ad-hoc switches sprinkled in corners. Systems sharing screens, keyboards and mice. All well and good, but when I moved into my current room I wanted a more definitive solution. First of all, I now had more systems. In fact, my collection is complete. I expect no more new systems.

I wanted proper serial console access and FDDI and FC. Unlike ethernet, this is not the sort of thing you 'solve' with a switch here and there. I already had a 19" rack so it made sense to concentrate all "support" equipment (switches, concentrators, port servers, file server, disk arrays, ...) there. This means every system has a bundle of wires going to the equipment rack. Times 25 or so, makes a *lot* of wires. As in: some 700m of twisted pair (ethernet, serial), plus some 300m of fibre (FDDI, FC).

It is important to realize that this is not "how to wire a business network". This is more an exercise in "how to make a kilometer of wire in one room disappear". Like I said, I do not expect new systems. I value neatness over ease of maintenance. I would *never* wire a business network like I did my computer room -- the density and inflexibility would drive you insane.

Anyway, when the room was constructed, I had gutters installed in the floor, along the wall. This makes it possible to hide vast amounts of cables, but still, things have to be carefully planned. I do not have enough space to leave an 'aisle' behind the 19" rack, so it must be possible to pull the rack from its place without detaching it. A massive "umbilical cord" to a patch panel on the wall is necessary. Then there's the question of what to do with extra lengths of cable. You can't hide them where all the cables come together (the patch panel), so I designed the cables to have some extra length at the system's end. The same for cables going from the patch panel to the rack: extra length hidden behind/in between the switches.

I assembled the bundles of cables outside in the garden in order to be able to pass them as a whole underneath cupboards I didn't feel like moving. I used a 3D printed cable comb -- this is a brilliant device.

[Image: DSC_1186_01.JPG]
Patch panel under construction

[Image: DSC_1477.JPG]
Roughly half a 48-port 1U patch strip filled. Now you know why I assembled this outside.

[Image: DSC_1478.JPG]
Normally you wire a building or office with solid-core wire, terminated at a patch panel with LSA strips. But in this case the cables are plugged into the systems on the other end, so they must be flexible, stranded wire. That's why the patch strip cannot use LSA strips and is built like this: plugs and keystones.

[Image: DSC_1479.JPG]
The cable comb.

[Image: DSC_1674.JPG]
Ethernet and serial: 2x 48 ports, roughly 80 wires.

[Image: DSC_2797.JPG]
Cable bundles appearing from underneath a cupboard.

[Image: DSC_3246.JPG]
Behind the deskside systems; extra lengths of cable can easily be hidden here. Note the PDU for remote-controlled power to everything.

[Image: DSC_3247.JPG]

[Image: DSC_3248.JPG]
Tada! Houdini trick, almost everything is gone.
(This post was last modified: 05-10-2020, 03:16 PM by jan-jaap.)
jan-jaap
SGI Collector

Trade Count: (0)
Posts: 1,048
Threads: 37
Joined: Jun 2018
Location: Netherlands
05-10-2020, 12:41 PM
#14
RE: Wires wires wires
Wonderful work.

There are tons of cheap PCIe-to-PCI adapters on eBay and Amazon. I wonder what compatibility is like (surely not perfect). Apparently UEFI BIOSes limit compatibility with many legacy PCI cards that had their own BIOS anyway (SCSI adapters and presumably most network adapters).
callahan
Octane

Trade Count: (0)
Posts: 147
Threads: 20
Joined: Dec 2018
Location: East Coast, USA
05-10-2020, 03:39 PM
#15
RE: Wires wires wires
Absolutely lovely work, Jan-Jaap! So clean and very professional looking.
Gamefan
Octane

Trade Count: (0)
Posts: 140
Threads: 2
Joined: Jul 2018
Location: Denver, Colorado
05-10-2020, 06:13 PM
#16
RE: Wires wires wires
Beautifully done! Smile
Irinikus
Hardware Connoisseur

Trade Count: (0)
Posts: 3,475
Threads: 319
Joined: Dec 2017
Location: South Africa
05-11-2020, 08:40 AM
#17
RE: Wires wires wires
(05-10-2020, 03:39 PM)callahan Wrote:  There are tons of cheap PCIe-to-PCI adapters on eBay and Amazon. I wonder what compatibility is like (surely not perfect). Apparently UEFI BIOSes limit compatibility with many legacy PCI cards that had their own BIOS anyway (SCSI adapters and presumably most network adapters).

An adapter would be a possibility, but the FDDI card is regular (full) height. These adapters are good for HH cards, but an FH card plus adapter wouldn't fit in the case anymore.

In the past I've used my file server to route the FDDI segment, but my current server is a 2U rack mount with only low-profile PCIe slots. By turning the FDDI router into a self-contained appliance I'm not limiting my choice of server hardware. Also, it's not unthinkable that FDDI support in Linux will be removed at some point. Then I will simply freeze the FDDI appliance. Can't do that with my server.

Some old SCSI and network cards have boot ROMs, so if your UEFI firmware can't deal with legacy BIOS option ROMs you won't be able to boot from such a card. I have no idea whether my FDDI card has a ROM, but I'm not netbooting my FDDI router anyway.
jan-jaap
SGI Collector

Trade Count: (0)
Posts: 1,048
Threads: 37
Joined: Jun 2018
Location: Netherlands
05-11-2020, 09:22 AM
#18
RE: Wires wires wires
Very smart work!

I'm shocked that the 100Mb/s Ethernet cards perform poorly. However, I recall that I had to force my Sun SPARCstation IPX's port on my switch into 10Mb/s half-duplex mode in order to reduce the interrupt load on the system. It was a real performance killer before I did that, with the CPU getting pegged with interrupts and terrible effective throughput (~600 kb/s one way over FTP). I'm just very surprised that the driver or the design of the hardware and software would have been so poor for such high-end SGI systems "back in the day," but I suppose then as now you really are at the mercy of vendors to provide good solutions.

Can you get behind those Desksides and racks to dust once in a while? They look heavy to move!

“The future is already here – it's just not evenly distributed.”
    The Economist, December 4, 2003
            ―William Gibson

Onyx2 Octane Octane O2 Indigo2 R10000/IMPACT Indy Indy
ghost180sx
Now-MIPS-Powered

Trade Count: (0)
Posts: 110
Threads: 6
Joined: Dec 2018
Location: The Great White North
05-11-2020, 03:06 PM
#19
RE: Wires wires wires
(05-11-2020, 03:06 PM)ghost180sx Wrote:  I'm shocked that the 100Mb/s Ethernet cards perform poorly. However, I recall that I had to force my Sun SPARCstation IPX's port on my switch into 10Mb/s half-duplex mode in order to reduce the interrupt load on the system. It was a real performance killer before I did that, with the CPU getting pegged with interrupts and terrible effective throughput (~600 kb/s one way over FTP). I'm just very surprised that the driver or the design of the hardware and software would have been so poor for such high-end SGI systems "back in the day," but I suppose then as now you really are at the mercy of vendors to provide good solutions.

100Base-T didn't exist when these were new. Early 100Base-T products also had compatibility problems with auto-negotiation, or were only half duplex, etc.

The Onyx 100Mb/s ethernet card is a VME-to-PCI bridge with a pair of PMC 100Mb ethernet daughter cards. It was manufactured by a third party. The same goes for the 100Mb cards for the Indigo2: there's a rebranded 3Com EISA card and there's the GIO64 Phobos G160. That one works really well, btw.

(05-11-2020, 03:06 PM)ghost180sx Wrote:  Can you get behind those Desksides and racks to dust once in a while? They look heavy to move!

Easy! They all have wheels.  Biggrin  Which is good, because the desksides weigh 70 to 100 kg each.

Here's a picture of my FDDI router appliance:

[Image: IMG_3499_sm.JPG]

As you can see, finding a smaller system with space for a normal-size PCI card won't be easy. All parts were scavenged left and right, so it didn't cost much. The uplink is gigabit ethernet; the RJ45 adapter is to connect my Cyclades port server (serial console).

The FDDI card I use in it is a SysKonnect SK-NET FDDI-LP DAS (SK5544):

[Image: IMG_3496_sm.JPG]

DEC made PCI FDDI cards too, but there was a good reason I went with the SysKonnect cards: I think the DEC cards weren't supported by Linux.
jan-jaap
SGI Collector

Trade Count: (0)
Posts: 1,048
Threads: 37
Joined: Jun 2018
Location: Netherlands
05-12-2020, 12:44 PM
#20
RE: Wires wires wires
JJ,
That's a fantastic little bridge. I've never played around with FDDI, of course, but I've always seen the drivers and modules in the kernel config.

How easy is it to set up bridging? Is it a very simple option that you set through the /proc interface, or do you use something like ifconfig or route?

Have you set up a ring topology?
My understanding was that if a single host in your ring topology goes down, packets could be dropped. Is there a built-in bypass on the PCI card to prevent that? Maybe I should just read books on the subject... I have them somewhere from my network study courses.

“The future is already here – it's just not evenly distributed.”
    The Economist, December 4, 2003
            ―William Gibson

Onyx2 Octane Octane O2 Indigo2 R10000/IMPACT Indy Indy
ghost180sx
Now-MIPS-Powered

Trade Count: (0)
Posts: 110
Threads: 6
Joined: Dec 2018
Location: The Great White North
05-13-2020, 02:46 AM

