Onyx, IO4 and Graphics -
mosiniak - 11-22-2018
Hey,
I have a question about the IO4 board and the relation between the IO4 and the graphics subsystem. There were several types of IO4 boards, for example:
- 030-0815-00x
- 030-0646-107
On this site it is written that only the 030-0815-00x supports IR graphics.
Does anyone know what the connection between the IO4 board and the graphics is, and why the other IO4 boards don't support IR graphics?
Does anyone know if any system configuration data and/or a serial number is stored on the IO4 boards?
RE: Onyx, IO4 and Graphics -
jan-jaap - 11-22-2018
(11-22-2018, 06:50 PM)mosiniak Wrote: Hey,
I have a question about the IO4 board and the relation between the IO4 and the graphics subsystem. There were several types of IO4 boards, for example:
- 030-0815-00x
- 030-0646-107
On this site it is written that only the 030-0815-00x supports IR graphics.
Does anyone know what the connection between the IO4 board and the graphics is, and why the other IO4 boards don't support IR graphics?
Does anyone know if any system configuration data and/or a serial number is stored on the IO4 boards?
You should read the Challenge/Onyx Diagnostics Roadmap:
http://sgidepot.co.uk/chalonyxdiag/
There are many subtle incompatibilities in the Challenge/Onyx series, mostly because they started with R4x00 CPUs and RE2 graphics, and added R8000 and R10K CPUs and IR graphics later. So you need a certain revision of the IO4 to support IP21/IP25, and you need the latest VCAM to support IR. The VCAM is that daughtercard on the IO4 with the 2nd backplane connector. Basically, the system bus (PowerPath II) connects the CPU, memory and IO cards, and the graphics are on the VME bus. The VCAM acts like a bridge between them.
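In other words, very roughly (just a sketch, board names from memory, not a complete list):
Code:
CPU boards (IP19/IP21/IP25) <-> PowerPath II bus <-> IO4 + VCAM -> VME backplane -> graphics pipe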
The PROM environment variables are stored in a Dallas chip on the IO4.
The system serial number is stored in two places: the system controller and the IO4. The system controller has a Dallas chip too. If they contradict each other, the system controller wins (and I think it will overwrite the IO4 system serial#). If the batteries in both run out, the serial number is lost; by means of a special sequence documented in the Diagnostics Roadmap you can set the serial number again once a new, blank Dallas has been installed. I've also modded them with CR2032 cells.
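So if your PROM settings keep disappearing, that Dallas on the IO4 is the first thing to suspect. From the PROM Command Monitor you can see what's stored there with something like this (from memory; the exact variables differ per system and the device path is just an example):
Code:
>> printenv
   ...dumps all variables kept in the IO4's Dallas NVRAM...
>> setenv OSLoadPartition dksc(0,1,0)
   ...written straight back into the Dallas...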
RE: Onyx, IO4 and Graphics -
mosiniak - 11-22-2018
(11-22-2018, 08:22 PM)jan-jaap Wrote: There are many subtle incompatibilities in the Challenge/Onyx series, mostly because they started with R4x00 CPUs and RE2 graphics, and added R8000 and R10K CPUs and IR graphics later. So you need a certain revision of the IO4 to support IP21/IP25, and you need the latest VCAM to support IR. The VCAM is that daughtercard on the IO4 with the 2nd backplane connector. Basically, the system bus (PowerPath II) connects the CPU, memory and IO cards, and the graphics are on the VME bus. The VCAM acts like a bridge between them.
I know the basic connection scheme, but I would like to know what kind of "subtle incompatibilities" you are talking about. Do you know where I can find such information?
(11-22-2018, 08:22 PM)jan-jaap Wrote: The PROM environment variables are stored in a Dallas chip on the IO4.
The system serial number is stored in two places: the system controller and the IO4. The system controller has a Dallas chip too. If they contradict each other, the system controller wins (and I think it will overwrite the IO4 system serial#). If the batteries in both run out, the serial number is lost; by means of a special sequence documented in the Diagnostics Roadmap you can set the serial number again once a new, blank Dallas has been installed. I've also modded them with CR2032 cells.
Thanks
RE: Onyx, IO4 and Graphics -
mosiniak - 11-24-2018
After reading some technical information and analyzing system logs I figured out several things. Can anyone confirm that:
1) the connection between the IO4 and the graphics subsystem is in fact an FCI connection routed on the backplane alongside the VME bus,
2) the graphics subsystem does not use the VME bus at all,
3) the VCAM acts as a "bridge" between PowerPath II and VME, and as a "bridge" between PowerPath II and the graphics subsystem, using FCI as the interface?
RE: Onyx, IO4 and Graphics -
jan-jaap - 11-25-2018
(11-24-2018, 11:41 PM)mosiniak Wrote: After reading some technical information and analyzing system logs I figured out several things. Can anyone confirm that:
1) the connection between the IO4 and the graphics subsystem is in fact an FCI connection routed on the backplane alongside the VME bus,
2) the graphics subsystem does not use the VME bus at all,
3) the VCAM acts as a "bridge" between PowerPath II and VME, and as a "bridge" between PowerPath II and the graphics subsystem, using FCI as the interface?
IIRC, FCI is only used in tall rack Onyxes, to hook up additional graphics pipes.
It works roughly like this:
Code:
IO4 -> 'F'-mezz card -> FCI interconnect -> remote VCAM -> extra RE2 pipe
The remote VCAM and the extra RE2 pipe are in a different card cage than the IO4 and the rest of the system.
It is very well possible that the RE2 doesn't use the VME64 protocol; VME makes it possible to bridge (part of) the connector between a couple of slots and do user-defined things, and it's quite likely they do that. But you cannot have graphics without a VCAM, and the VCAM is what bridges VME to the system bus.
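So, as far as I understand it, the two cases look roughly like this (again just a sketch, not a wiring diagram):
Code:
single pipe (deskside):  IO4 + VCAM -> backplane -> RE2 pipe (same cardcage)
extra pipes (rack):      IO4 -> 'F'-mezz card -> FCI cable -> remote VCAM -> extra RE2 pipe (separate cardcage)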
RE: Onyx, IO4 and Graphics -
mosiniak - 11-26-2018
(11-25-2018, 11:33 AM)jan-jaap Wrote: (11-24-2018, 11:41 PM)mosiniak Wrote: After reading some technical information and analyzing system logs I figured out several things. Can anyone confirm that:
1) the connection between the IO4 and the graphics subsystem is in fact an FCI connection routed on the backplane alongside the VME bus,
2) the graphics subsystem does not use the VME bus at all,
3) the VCAM acts as a "bridge" between PowerPath II and VME, and as a "bridge" between PowerPath II and the graphics subsystem, using FCI as the interface?
IIRC, FCI is only used in tall rack Onyxes, to hook up additional graphics pipes.
It works roughly like this:
Code:
IO4 -> 'F'-mezz card -> FCI interconnect -> remote VCAM -> extra RE2 pipe
The remote VCAM and the extra RE2 pipe are in a different card cage than the IO4 and the rest of the system.
It is very well possible that the RE2 doesn't use the VME64 protocol; VME makes it possible to bridge (part of) the connector between a couple of slots and do user-defined things, and it's quite likely they do that. But you cannot have graphics without a VCAM, and the VCAM is what bridges VME to the system bus.
Please look at the output of hinv -b -v from the PROM:
Slot 3: IO4 I/O peripheral controller board (Enabled)
Adapter 1: EPC Peripheral controller (Enabled)
Adapter 2: FCI Graphics adapter (Enabled)
Adapter 3: VME adapter (Enabled)
Adapter 4: S1 SCSI controller (Enabled)
Adapter 5: SCIP SCSI controller (Enabled)
Adapter 6: HIPPI adapter (Enabled)
After I removed the HIPPI interface (connected via FCI) and the graphics boards I got:
Checking hardware inventory...
*** IOA 2 on the IO4 in slot 3 has changed
*** IOA used to be Flat cable interconnect and now is Empty
*** IOA 6 on the IO4 in slot 3 has changed
*** IOA used to be Flat cable interconnect and now is Empty
and
Slot 3: IO4 I/O peripheral controller board (Enabled)
Adapter 1: EPC Peripheral controller (Enabled)
Adapter 3: VME adapter (Enabled)
Adapter 4: S1 SCSI controller (Enabled)
Adapter 5: SCIP SCSI controller (Enabled)
From this and from the Challenge/Onyx Diagnostics Roadmap I figured out that the graphics is connected via FCI.