That's a good question. So what you need to understand is that UNIX was initially a brand name from AT&T Bell Labs for their operating system. It was written in the C programming language and was actually a big reason the C programming language was developed alongside it, so the two have an incredibly close relationship. Every UNIX that has come afterward and actually carries the brand name has to be derived from the original source through licensing. It isn't a carbon copy; it just has source components in common that are licensed from the original trademark holder. So when you look at HP-UX, Irix, Tru64, Sun Solaris (SunOS), IBM AIX, etc., they share lineage. That lineage is not only shared source but also the shared POSIX API standard.
Linux is a UNIX clone and carries no original UNIX source code; however, it still subscribes to the UNIX tenets of file system layout and POSIX compatibility, and a lot of the tools and shells written for UNIX were open sourced and ported to Linux a very long time ago.
So a lot of people cut their teeth on Linux because it's more user-friendly and has a bunch of very cool shortcuts and it's very easy to install. But every UNIX has its own quirks and its own time in history.
Irix spans basically the mid 80s all the way to 2003 or so, so various standards were added to it over time, and SGI Irix was the pioneering platform for OpenGL... That's right, the same standard that 3D video games used to commonly be written in until DirectX, macOS Metal, and Vulkan came along, alongside game engines like Unity and Unreal.
So every one of them is going to have its quirks, and every one of them is going to have different ways of naming the hard drives, but the device files are all going to be in the /dev folder. All of them may have different programs, but almost all the global configuration will be in the /etc folder. Logs and printer queues are commonly in the /var folder.
Read more about that here:
https://en.wikipedia.org/wiki/Filesystem...y_Standard
UNIX has a multitude of shells. A shell is just what runs inside a terminal and responds to your commands. In Windows you're used to command.com and later the PowerShell terminal and the commands they support. But UNIX has many more shells from many more creators. Each shell has slightly different features and commands, though the base commands for listing directories and manipulating files and directories tend to be exactly the same, because they're programs under UNIX and not shell features. Linux commonly uses the Bash shell, macOS recently switched to the Z shell (zsh), there's also the C shell (whose scripting resembles the C programming language), etc.
UNIX being so old and having so many cousins and nephews means there are many, many tools, and that's really where the entire open source movement came from. UNIX by its nature tries to be source compatible. That is to say, setting aside the graphics system a UNIX station might have, most people talk about the terminal when they talk about UNIX programs. The POSIX API allowed people to port software from one vendor's UNIX to another's when they bought a new computer with a different commercial version, and it didn't take very long to make the minor changes needed to get your software to run.
Windows is about binary compatibility: the fact that you can take an older .exe binary from an older version of Windows and run it on a newer version without recompilation is a feature. UNIX and macOS are similar to each other in that they don't make that promise at all. Unless you buy third-party software for your exact version of UNIX, don't expect it to run. The idea was that developers would be able to port their source code easily to another UNIX, taking advantage of the various system services through the POSIX API. So for businesses this meant they could buy new machines every so often with a different version of UNIX and port the business software they made themselves, as long as they had the source code and followed best practices by using the core services provided by the UNIX POSIX API.
Now you have to understand this is all terminal stuff, command prompt in Windows parlance.
There have been several attempts to make a standard graphical environment for UNIX operating systems. CDE was the biggest attempt, supported by several vendors including SGI. SGI also chose a very common widget toolkit called Motif, which is both a rich library with a set of controls and a look and feel for applications. Linux actually has free open source versions of a Motif-compatible library as well; it's easily recognizable because it has its own very specific look & feel.
From your standpoint it will look incredibly dated, like you're staring at something from the 1980s and early 90s. But that's exactly what you are doing. Even the most basic Linux window managers look incredibly primitive; others look like macOS and do a lot of 3-D eye candy. Windows has its own look and feel as well.
What you need to understand is that graphical interfaces came along after the UNIX operating systems already existed. So they started to bolt them on, and while there are some very interesting standards, the actual end result in look & feel and API was never standardized. The UNIX/Linux windowing mechanism, called the X Window System, was fairly well standardized, but not for 3-D acceleration. For that you were expected to use an OpenGL frame/window.
So UNIX and graphical environments tend to be different animals. The graphics are normally just there to let the person launch whatever application they need to run. There aren't a lot of provided graphical applications compared to, say, macOS.
Back when the Internet started, Solaris, now owned by Oracle, was "the" operating system to run your web servers on. So for a while that was what the Internet pretty much ran on. Other companies like SGI also tried to get into the mix with their own web technologies of the day, but that never really went anywhere. You'll see vestiges of all of it while using Linux. And yes, there were Apache Web Server builds for all major UNIX OSes in recent history.
The point of a lot of UNIX systems is heavy processing and often light graphical interaction. It's not meant to be a gaming platform or a bunch of multimedia eye candy, normally. The systems were terminal based and meant to crunch numbers at various speeds depending on how huge/powerful your system was. Whether you were dealing with an IBM AIX mainframe or a Motorola UNIX station at home, you'd understand the common commands and file system structure, and you'd be familiar with general compilation of software as well as the user tools that are provided with every UNIX as a bare minimum.
So UNIX is very terminal-based because that's its lineage. Yes, there are some graphical applications to help you do some common tasks, but realistically most of what these systems do is backend work: databases, file storage, processing special workflows for business applications, as well as scientific applications, given the malleability of the system.
It's also important to understand that the majority of UNIX usage tends to be dedicated to one specific purpose. Certainly it's a general-purpose operating system standard. But what I mean is that normally companies or individuals purchase UNIX stations for a specific application, almost like an appliance. That might be as a web server, or as a 3-D modeler, or as a database machine, or as a computer to run a third-party company's product for a special machine or workflow.
Silicon Graphics Inc. was one of the UNIX vendors that took graphics very seriously. But that didn't translate into a lush desktop experience. What it translated into was being at the forefront of 2-D and 3-D graphics acceleration for the specific purpose of rendering, whether that's scientific data or Hollywood movies.
There was a time when Silicon Graphics stations were used at the other end of MRI machines in hospitals to reassemble your medical scans, because they were better at doing it than personal computers of the time! Many famous Hollywood movies like Terminator 2 and Jurassic Park were made on Silicon Graphics hardware using famous 3D software products of the time!
There was nearly a 10-year span of history where Silicon Graphics stations were incredibly expensive and were the fastest 3D graphics you could get in the industry, period. It was like the Ferrari or Lamborghini of the computer world, when it came to graphics.
Please note that the SGI Indy was meant as a cheap/affordable station to compete in different sectors, so it's not known for its 3D acceleration. It is known for its basic PAL/NTSC video I/O and multimedia input/output capability.
SGI did have system offerings with RAID arrays and huge cluster systems, and was the temporary owner of Cray supercomputers, which fed a lot of technology into those avenues.
Eventually Nvidia was formed, full of ex-SGI engineers who had become disillusioned with SGI management. They designed video cards for cheaper Intel PCs that overtook SGI's graphics dominance by the early 2000s.
So what you need to understand is that you're learning a flavor of UNIX called Irix that was derived from the original AT&T Bell Labs UNIX but heavily altered and customized for SGI's needs. It shares all the UNIX commonalities, but it has its own quirks that no other UNIX will have.
So the long answer to your question is that most of us either have industry experience or got into a different UNIX or Linux OS and later found out about SGI stations, because the average person couldn't afford one, and so you learn this operating system's quirks based on common knowledge of how UNIX itself works.
Here's some stuff I found briefly that might be of interest to get started.
https://users.cs.duke.edu/~alvy/courses/unixtut/
https://people.ischool.berkeley.edu/~kev...l/toc.html
https://grimoire.carcano.ch/blog/posix-c...lesystems/
A lot of my personal specialty is in UNIX, as that's pretty much what I started using, by way of Linux and Irix, around 1996, after I got bored with Windows 95 and Windows NT 4.0. I was just starting high school then.
So I got into UNIX by way of Linux. I don't use Linux professionally anymore; I'm mainly a macOS user who also develops some software on Windows. But all three have common basic services, and there's a lot that translates between all three operating systems, that is, files and drives and networking, just not graphical user interfaces.
So the way most people cut their teeth on UNIX is basically playing around with Linux. The community is many hundreds of times larger than ours and a lot of things are a lot easier to do and there's a lot of nifty time saving things that are great for beginners on Linux.
Linux has strayed off the UNIX beaten path with the introduction of a new startup mechanism (systemd) that is very different from the traditional UNIX service startup system (SysV init), but other than that I'd say the skills are directly transferable.
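To give a feel for the difference: under SysV init a service is a shell script in /etc/init.d with start/stop cases, while under systemd it's a declarative unit file. A minimal sketch of such a unit (the service name and binary path here are invented for illustration):

```ini
# /etc/systemd/system/mydaemon.service  (hypothetical example service)
[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/local/bin/mydaemon

[Install]
WantedBy=multi-user.target
```

Traditional UNIXes like Irix instead run numbered rc scripts at boot, which is part of why startup administration doesn't carry over one-to-one from a modern Linux.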
Beyond that bare minimum, the configuration of each UNIX or UNIX clone will be very different. For example, the command to see what's going on with your network cards and interfaces is mostly the same on common UNIX stations, but the commands and files to configure those properties are radically different, even if they all live in the /etc directory. So while there may be common commands to check on interfaces and alter them temporarily, the steps to change those settings permanently are very different on each system, and that's part of the variance you'll see in UNIX.
SGI intended their users to use the station as an appliance. That's why a lot of these aspects are not user-friendly. You were supposed to spend $20,000 or $30,000 on an SGI Octane or something like that, then spend another $10,000 or so on Alias|Wavefront Maya or LightWave or something, then have someone sit in front of the station who knew that application really well and just use it all day. Yes there was a Netscape browser, and yes there was an email client, and that was about it. You can almost consider it a single-use station, because it was bought for one very specific use: to run that specific 3-D program or something of that nature.
The SGI Indy was an economy/cheaper machine built to penetrate the market for office environments and web usage. So don't expect it to be a 3-D powerhouse; it wasn't advertised as that to begin with. But it was also a quarter or less of the cost of a high-powered SGI 3D workstation of the day.
So that's the environment you're sort of asking about. On the one hand you have yourself a machine that has a very specific flavor of UNIX, on the other hand UNIX in general constitutes many vendors and many different variations all of which have both similarities as well as stark differences. Learning about the similarities is probably the strong point of it but I wouldn't call Irix particularly user-friendly, not compared to the Linux terminal.
If you're serious about learning all that, I would say get a virtual machine with a modern Linux distribution running on your main desktop/laptop computer. Get comfortable with that, then start using it to SSH/Telnet into Irix on your Indy, and start trying to do similar things such as set up compilers, compile sample programs, load pre-compiled freeware from our various archives, that kind of thing.
Please note that you're talking about a system that's from basically 1993. It has a web browser, but it can't surf what you consider to be the modern web. It'll not only be incredibly slow, but it will break on pretty much every page except maybe the Google homepage. It doesn't understand modern encryption, modern scripting, or modern HTML. A more modern browser like Firefox has been ported before, and there is a slightly more up-to-date Firefox available, but it doesn't really run acceptably well on an Indy, and even so it's still so old that the vast majority of websites will be broken when using it.
This is my way of saying: you can do all the networking you want, you can do all the file sharing via NFS and old SMB shares and all that stuff you want, but don't for a moment mistake the Indy for a portal to the Internet. That ship sailed a long time ago. So unfortunately, without the Internet you may find it rather boring.
But depending on what aspect of computing you're interested in you may find something about it that you can learn from or enjoy.
Keep the proper mindset: understand what a Windows/DOS 386 or 486 PC was doing in 1993 and 1994 compared to what this thing is doing in front of your face. Compare apples to apples in your expectations of how it functions and what it's for.
You're not gonna make it into something that can watch anything other than low-resolution MPEG-1 and MPEG-2 video. It's not gonna play DivX or MP4, it can play MP3s, and it's not gonna play DVDs, though you can hook a DVD player's S-Video output to the S-Video input port of the Indy and watch it live on screen; that works. Email would be a challenge unless it's basic authentication without encryption. Websites are a challenge, pretty much not gonna happen. It does image file viewing OK for most standard formats.