XFS filesystem in SCSI array: Flame / Stone+Wire
#1
XFS filesystem in SCSI array: Flame / Stone+Wire
Hi all,

I have a mountain of SCSI discs mounted in two disc arrays. Each array holds 3 columns and 6 rows of HVD 9 GB discs. The back of each array has three terminated SCSI ports, marked DF SCSI 2, 3 and 4, which correspond to external ports on an Onyx rack.

Unfortunately I cannot find any information on these arrays, which are marked Stone + Wire 3150, but I tend to think they are not RAID arrays, and that each simply provides three strings of 6 SCSI discs.

One disc from an array has a single SGI partition on it, of an unknown type (i.e. not XFS).

Can anyone shed light on what this setup is likely to be, and how best to proceed with data recovery? Ideally I would image everything using dd or similar and reconstruct the filesystems on a single high-capacity drive, but I'm wondering whether that is possible. From what I now know of XFS, it seems likely that each disc may be configured as an allocation group of a larger filesystem, and I'm guessing there is one XFS filesystem per string of SCSI discs. Can an XFS filesystem span multiple discs and/or SCSI channels? Is there any hope of imaging these separately and then rebuilding the XFS on a single disc?
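For the imaging step, something along these lines should work. This is only a sketch: the device name /dev/sdb and the output file name are assumptions, so check dmesg or lsscsi for the real device first.

```shell
# Take a raw image of each disc before experimenting with the originals.
# /dev/sdb and the output name are placeholders for your actual setup.
dd if=/dev/sdb of=array1_col1_row1.img bs=64k conv=noerror,sync status=progress
# conv=noerror,sync keeps going past read errors, padding unreadable
# blocks with zeros so offsets in the image stay aligned.

# GNU ddrescue copes better with failing media, if it's available:
#   ddrescue -d /dev/sdb array1_col1_row1.img array1_col1_row1.map
```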

Update: it looks like this might not be XFS after all... the boxes have "flame 1" and "flame 2" written on them, and searching for Flame IRIX brought up references to StoneFS.

Is there any hope of, or even a point in, trying to access this data from Linux?
(This post was last modified: 11-14-2019, 12:38 PM by Noris.)
Noris
O2

Trade Count: (0)
Posts: 18
Threads: 3
Joined: Nov 2019
11-14-2019, 11:28 AM
#2
RE: XFS filesystem in SCSI array
If the markings mean anything, it just might be a frame store from an old Discreet Logic setup. Discreet uses its own "home grown" file system, Stone+Wire (not XFS). I believe it was essentially a software RAID3. The discs probably have special firmware and identify themselves as "STON+WIR" or similar to the OS. The Discreet software would check for this and enforce the use of these "special" hard disks.

To reconstruct it, you'd need the Discreet Stone+Wire software, and possibly its configuration. And then *maybe* you'd see some old project data from a Discreet application. Honestly, I wouldn't waste too much time on it.

If the arrays go with the Onyx, and the Onyx still has all the bells and whistles of a proper Discreet station (video I/O options, audio, Wacom tablet, dongle, ...) and you manage to collect the other associated bits (decks, sync, monitors, cables), then you've got yourself a vintage editing suite.
jan-jaap
SGI Collector

Trade Count: (0)
Posts: 1,048
Threads: 37
Joined: Jun 2018
Location: Netherlands
11-14-2019, 12:38 PM
#3
RE: XFS filesystem in SCSI array
(11-14-2019, 12:38 PM)jan-jaap Wrote:  If the markings mean anything, it just might be a frame store from an old Discreet Logic setup. Discreet uses its own "home grown" file system, Stone+Wire (not XFS). [...] Honestly, I wouldn't waste too much time on it.

Hi again Jan.

Thanks, I just found out a little more... I believe it is for Flame (they have "flame 1" and "flame 2" written on them).

So if I connect one of the SCSI strings to Linux, I'm not likely to be able to pull anything interesting from it?

Well, there is video I/O and audio. No tablet, and no dongle as far as I'm aware. Is that for the software license? There was a load of industrial-looking monitors in flight cases which could have been from the same setup.
(This post was last modified: 11-14-2019, 12:55 PM by Noris.)
Noris
O2

11-14-2019, 12:45 PM
#4
RE: XFS filesystem in SCSI array
(11-14-2019, 12:45 PM)Noris Wrote:  So if I connect one of the SCSI strings to Linux, I'm not likely to be able to pull anything interesting from it?

Well, there is video I/O and audio. No tablet, and no dongle as far as I'm aware. Is that for the software license? There was a load of industrial-looking monitors in flight cases which could have been from the same setup.

First of all, HVD SCSI is incompatible with regular SE or LVD SCSI. As in: magic smoke will escape. You must keep HVD SCSI separate from anything else. But it could also be that the disks are regular SCSI, and only the external interface of the array is HVD. I used to own a Sun D1000 JBOD: externally HVD, internally regular SCA disks.

Second: as with any disks from a RAID set, an individual disk isn't going to give you anything interesting. You have to reconstruct the RAID set(s), and use the original software (Stone+Wire) to read the data. It exists for Linux too, but that's a much newer version than the IRIX version and I have no idea whether it reads the IRIX stones.

Discreet software is normally licensed with a nodelocked FLEXlm license and a dongle. If the Onyx wasn't wiped, the license might still be there. If it was wiped, the frame store was probably wiped as well.
jan-jaap
SGI Collector

11-14-2019, 01:37 PM
#5
RE: XFS filesystem in SCSI array
(11-14-2019, 01:37 PM)jan-jaap Wrote:  First of all, HVD SCSI is incompatible with regular SE or LVD SCSI. [...] You have to reconstruct the RAID set(s), and use the original software (Stone+Wire) to read the data. [...] If the Onyx wasn't wiped, the license might still be there. If it was wiped, the frame store was probably wiped as well.

Cool Jan, they are definitely HVD discs and I have just got hold of an old HVD card, so I should be able to connect some of them. The Stone boxes just comprise three power supplies and three strings of drives, with a SCSI port connected to each string; there is no controller. Unfortunately I only have 2 channels on my SCSI card, so I would need 2 more cards to connect all the discs to the PC, but I might be able to partially reconstruct it. I'll investigate the Stone software for Linux. It might be possible to image them all, then set them up as a bunch of loop devices and reconstruct it.
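A rough sketch of that loop-device idea follows. This would only be valid if a string turns out to be a plain concatenation; a RAID3-style stripe, as suggested above, would need de-interleaving instead. The file names are assumptions.

```shell
# Join the six per-disc images of one string in disc order
# (the ?-glob sorts disc1..disc6 lexically, which is the right order here).
cat string1_disc?.img > string1.img

# Then expose the result read-only as a block device for recovery tools:
#   sudo losetup -f --show -r string1.img    # prints e.g. /dev/loop0
```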

I don't think they were wiped, but most of the system discs seem corrupted, though I am probably using the wrong block size or something stupid, as I can see some XFS structures in UFS Explorer.

I'm a bit stuck, because I don't want to clobber a new install over any of the existing discs, and I couldn't find a live IRIX distribution, so I will need to get hold of a decent HVD drive. It would be easier if IRIX could be installed on the SE channel.
Noris
O2

11-14-2019, 02:42 PM
#6
RE: XFS filesystem in SCSI array
(11-14-2019, 02:42 PM)Noris Wrote:  It would be easier if IRIX could be installed on the SE channel.

This is no problem. I'm only familiar with the Onyx deskside, but I think the rack isn't any different. Basically, inspect the IO4. By default, it should have a little green and a little red daughtercard, where the SCSI cabling to the drive bays is attached.

RED = HVD SCSI
GREEN = SE SCSI

Both cables route to the drive backplane. A matching terminator (one HVD, one SE) and a set of configuration jumpers for each channel are on the drive backplane as well. The jumpers must be set to follow the chosen configuration: they don't define it. Only the adapter cards on the IO4 define which signals are present; the rest (jumpers, terminators) must follow and match.

Each sled attaches to both SCSI channels. You may have noticed the dual SCSI connectors on the drive sled, while only one is connected to the disk / CDROM / tape? Each sled has a set of jumpers as well, and they too must match (follow) the configuration.

Fortunately, in most cases, none of this (jumpers, terminators) needs to be touched.

You need to confirm that one channel is HVD (with disks) and one is SE (with CDROM, tape). If you have a green daughter card on the IO4 and a CDROM drive in the drive bays, it should have an SE channel. You may find that the connectors on the sleds differ between a CDROM sled (50-pin) and a hard disk sled (68-pin), but this doesn't matter. Find an Ultra-Wide SE or LVD hard disk, preferably with a 68-pin connector. Replace the HVD system disk with the SE disk and make sure to attach it to the other connector on the sled (which routes to the SE channel). NB: if you set the device ID of the new system disk to 1, make sure no other CDROM, tape, etc. on the SE channel is already taking that ID.

The only consequence of this is that the system disk is now on the other SCSI channel. Normally the system disk is ID #1 on channel #1 on an Onyx, and it will be ID #1 on channel #0 now (like on almost every other SGI, btw). I think if you proceed with a fresh IRIX install it will set the correct variables in NVRAM, otherwise you may have to do this manually. If your Dallas batteries are shot you may have to repeat this every time the system is powered on.

Only if your IO4 has two RED daughter cards are you stuck with HVD SCSI.

Pictures attached:
1. I converted my Challenge L to SE-only operation: I replaced the RED cards with GREEN, replaced the HVD terminator on the drive backplane and re-jumpered everything (drive backplane and sleds) for dual-SE operation. The system disk is a nice, modern Cheetah 15K.4 LVD disk. No more fear of destroying something. The "ID" wheel is not attached to the disk and doesn't function. Jumpers on the disk define the correct device ID.

2. The IO4 of my Onyx IR, which still has the original configuration and a HVD system disk.

PS: regarding reading the XFS system disk with Linux: the IRIX XFS had two on-disk format generations, v1 and v2. Since IRIX 6.5.14 all new filesystems are created as v2, before that as v1. Upgrading an older IRIX past 6.5.14 doesn't change this. Linux only supports the v2 on-disk format. It is very likely that the Onyx originally came with IRIX 6.2 so unless it was re-formatted and re-installed with IRIX 6.5.14+ years later, it will have the v1 on-disk format and Linux will be useless.
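A quick way to check which on-disk generation an image carries, without mounting it. This is a sketch: the field offset comes from the published XFS superblock layout, and the image name is an assumption.

```shell
# Peek at the XFS superblock of a raw image without mounting it.
IMG=system_disc.img      # placeholder name

# Magic number: the first 4 bytes of the filesystem should read "XFSB".
dd if="$IMG" bs=1 count=4 2>/dev/null

# sb_versionnum: big-endian 16-bit word at byte offset 100.
# Its low 4 bits give the superblock version; the 0x2000 feature bit
# ("v2 directories") is what Linux requires, matching the v1/v2 split
# described above.
dd if="$IMG" bs=1 skip=100 count=2 2>/dev/null | od -An -tx1

# With xfsprogs installed, xfs_db prints the same field symbolically:
#   xfs_db -r -c 'sb 0' -c 'p versionnum' "$IMG"
```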


(This post was last modified: 11-14-2019, 04:05 PM by jan-jaap.)
jan-jaap
SGI Collector

11-14-2019, 03:53 PM
#7
RE: XFS filesystem in SCSI array
(11-14-2019, 03:53 PM)jan-jaap Wrote:  This is no problem. [...] You need to confirm that one channel is HVD (with disks) and one is SE (with CDROM, tape). [...]

PS: regarding reading the XFS system disk with Linux: the IRIX XFS had two on-disk format generations, v1 and v2. [...] Linux only supports the v2 on-disk format.

Great, thanks for the info. I haven't checked the channel colours, but the CD-ROM and tape drives are definitely single-ended. I'll try installing a new system disc on this channel.

Ah, that's handy to know and explains the problem... Half of the HVD discs imaged OK, but none would mount; I just got a bad superblock error. However UFS Explorer, a data recovery tool, could see a file structure there. Sounds like they are v1 then.
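Before blaming the block size, it may be worth scanning the images for superblock magic. A sketch; the image name is an assumption.

```shell
# Scan a raw disc image for XFS superblock magic, printing byte offsets.
grep -aob 'XFSB' disc1.img | head

# XFS repeats the superblock once per allocation group, so several
# evenly spaced hits suggest an intact AG layout (the spacing gives the
# AG size). No hits at all means it's probably not XFS (StoneFS?).
```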

It looks like the Stone and Wire stuff is all Autodesk now, and there is no mention of SCSI in a document I found from 2008 relating to S+W on Linux. Do you know if the old StoneFS software for IRIX is available anywhere?
Noris
O2

11-14-2019, 04:32 PM

