A Deep Dive into revWhiteShadow’s Operating System Journey: From ZX Spectrum to Arch Linux

As a long-time enthusiast and proponent of the Linux ecosystem, with a significant recent migration to the Arch Linux distribution in 2019, we at revWhiteShadow are eager to share a comprehensive overview of our personal history with operating systems. This journey spans decades, from the early days of home computing to the sophisticated environments of modern workstations. Our experiences have shaped our understanding of software development, system administration, and the very nature of user interaction with computing technology. We aim to provide an in-depth, detailed account that resonates with anyone who has navigated the ever-evolving landscape of operating systems.

The Dawn of Computing: Sinclair BASIC and the ZX Spectrum Era (1986–1991)

Our earliest foray into the world of computing began in 1986 with the acquisition of a ZX Spectrum 128K. At the tender age of seven, the primary allure of this machine was its capacity for entertainment. We spent countless hours immersed in playing 48K, and occasionally 128K, games. The ritual of loading these games was an exercise in patience, often involving a lengthy 15-minute wait for the data to stream in from cassette tape. This early exposure, while focused on consumption rather than creation, laid a foundational understanding of how software interacted with hardware, even if the underlying mechanisms were largely abstract at that age. The simple yet effective Sinclair BASIC interpreter, which the machine dropped straight into at power-on, provided a tantalizing glimpse into the potential for programming, though our youthful endeavors were mostly limited to exploring pre-written game code. The tactile experience of typing commands and seeing the colorful, albeit blocky, graphics appear on the screen created a lasting impression of the magic inherent in computing. This era, characterized by limited resources and extended loading times, fostered a deep appreciation for efficiency and a fundamental understanding of data transfer, albeit through the slow medium of magnetic tape. The vibrant community surrounding the ZX Spectrum, sharing custom software and programming tips through magazines and user groups, also instilled an early sense of collaborative learning.

The Educational Computing Nexus: ACORN MOS and the BBC Micro (1991–1993)

Following the retirement of our personal ZX Spectrum, a period ensued without direct computer ownership. However, our high school provided a crucial gateway into a more advanced computing environment. The institution was equipped with four BBC Microcomputers, running the ACORN MOS operating system. Access was granted to students in the ninth grade, opening up new avenues for exploration. It was on these machines that we began to actively engage in software development, albeit at a rudimentary level. We developed a typing tutor program, a project that, in retrospect, was notably riddled with spelling errors, a testament to our nascent programming skills and the debugging challenges of the time. The BBC Micro, with its robust design and dedicated educational software, offered a more structured learning experience compared to the Spectrum. The ACORN MOS operating system, while proprietary, provided a stable platform for learning and experimentation. The sheer availability of multiple machines allowed for collaborative projects and friendly competitions, fostering a sense of shared discovery within the student body. The BBC Micro’s keyboard and display quality were also a significant step up, making longer coding sessions more comfortable and productive. The experience with ACORN MOS solidified our understanding of operating system concepts such as file management, memory allocation, and program execution within a controlled educational setting.

The Dawn of the PC Era: MS-DOS and Early Windows Encounters (1993–1996)

As we progressed into the eleventh grade, our high school upgraded its computing facilities, introducing four 8088 PC machines. These systems, notably devoid of network connectivity or hard disk drives, necessitated the use of 5 ¼ inch floppy disks for both software and project storage. Our academic tasks were predominantly centered around writing programs in GW-BASIC, an interpreted, line-numbered BASIC dialect that served as an accessible introduction to programming. The reliance on floppy disks meant meticulous file management and a constant awareness of disk space limitations. We also gained occasional access to a 286 processor system, which featured the graphical user interface of Windows 3.1. This was our first substantial exposure to a graphical environment, a stark contrast to the command-line interfaces of MS-DOS. The allure of the point-and-click paradigm was immediate, even if the functionality was limited by today’s standards. Playing Solitaire on Windows 3.1 provided a novel and engaging way to interact with the computer, demonstrating the potential for user-friendly interfaces. The MS-DOS environment, while utilitarian, instilled a deep understanding of command-line operations, file system navigation, and system configuration through text-based commands. This period was instrumental in developing a solid foundation in how operating systems manage resources and execute applications, paving the way for more complex software interactions in the future.

The Windows Revolution: Gaming, Emulation, and Early C Programming (1996–1999)

Upon commencing college, we acquired our own personal computer: a 386 processor machine with 33MHz clock speed and a modest 2MB of RAM. During this phase, regular internet access remained elusive, and the concept of Linux was still unknown to us. The primary use of our computer was, unequivocally, gaming. Titles such as “The Secret of Monkey Island,” its sequel, and “Full Throttle” captivated our attention, offering rich narratives and engaging gameplay. We also enjoyed arcade-style games like Prince, Prince II, and Mortal Kombat 4. A significant portion of our computing time was dedicated to running “Speccy,” a ZX Spectrum emulator, which allowed us to relive our earlier gaming experiences on the new hardware. This emulator provided a bridge between our past and present computing environments, demonstrating the power of software to replicate hardware functionality. Our programming interests also evolved. We utilized the DOS-based “Turbo C Compiler” to write an Othello game in C. This experience, however, was short-lived as our curiosity shifted towards lower-level programming. We delved into assembly language with the goal of creating a “proof of concept” virus designed to evade contemporary virus scanners. While this project was never released, it provided invaluable insights into system architecture, memory manipulation, and the intricacies of executable code. A recurring and often frustrating aspect of this era was the frequent need to re-install Windows following system crashes. This process, often performed multiple times a day on college computers, honed our troubleshooting and system recovery skills to an exceptionally high degree. We recall a particular incident where a friend performed a Windows re-installation on a system actively in use during a live event, highlighting the unpredictable nature of early personal computing and the sometimes-improvisational solutions employed.

The Linux Revelation: Red Hat, Fedora, and the Stallman Influence (1999–2005)

The transition to graduate studies in the United States marked a pivotal moment in our operating system journey. The department provided access to computers running Red Hat Linux, which were subsequently upgraded to Fedora Core. Having grown weary of the notorious “blue screen of death” and the perceived clunkiness of the Windows graphical user interface, the prospect of a vanilla Linux console login was immensely appealing. After an initial period of RTFM’ing (Read The F***ing Manual), we developed a profound affinity for Linux, largely eschewing Windows for all but the most essential tasks. A significant turning point occurred in 2003 when we attended two influential talks at the university. The first, delivered by Richard Stallman, was both amusing and illuminating. His advocacy for free software deeply resonated with us, leading to a firm commitment to never again use Windows. The second talk, by Stephen Wolfram, had an entirely contrasting effect. His presentation on Mathematica, far from inspiring us, left us repelled by the project’s direction, and we subsequently ceased all use of the software. This period solidified our understanding of the GNU/Linux philosophy, the importance of open-source development, and the power of community-driven innovation. The reliability and flexibility of Red Hat and Fedora provided a robust platform for academic research and personal development, far surpassing the stability of our previous Windows experiences. The command-line interface, once a barrier, became an empowering tool for system management and software development.

The Pinnacle of Customization: Gentoo Linux and the Compilation Challenge (2005–2009)

In 2004, we assembled a custom computing rig using spare parts from the department, featuring an AMD K6 processor running at 500MHz with 256MB of RAM. We immediately installed Gentoo Linux onto this machine. Gentoo’s defining characteristic was its unparalleled configurability and flexibility. The ability to tailor the operating system by editing configuration files in /etc, often with beautiful syntax highlighting, was deeply satisfying. While we doubted that compiling the entire operating system from source would yield significant performance gains, the primary motivation was the tweakability and the desire to gain a deep understanding of Linux’s inner workings. Our initial years with Gentoo were highly enjoyable. However, as our free time became increasingly scarce, maintaining a source-based distribution presented a growing challenge. The migration from our AMD K6 desktop to a 1.6GHz Intel Pentium M laptop meant that the initial OS installation alone consumed three days. In 2007, we purchased an HP 2710p Tablet PC with an Intel Core 2 U7600 (1.2GHz) processor. This device required the latest kernel and Xorg versions to ensure full hardware support, particularly for wireless functionality. Gentoo’s advantage here was its use of gentoo-sources, allowing for seamless compilation of the latest upstream vanilla kernel sources. We meticulously integrated the latest Git patches for the X server and wrote a patch for the Wacom drivers. After considerable effort, we achieved full hardware functionality on our Tablet PC.
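
To give a flavour of that tweakability, the heart of a Gentoo system is a handful of files under /etc. The snippet below is a hedged sketch rather than a record of our actual configuration: the compiler flags, USE flags, and job count are illustrative values for a Pentium M-class machine of that era.

    # /etc/make.conf (today /etc/portage/make.conf); values are illustrative
    CFLAGS="-O2 -pipe -march=pentium-m"   # tune code generation for the actual CPU
    CXXFLAGS="${CFLAGS}"
    MAKEOPTS="-j2"                        # number of parallel compile jobs
    USE="X alsa -gnome -kde"              # global feature flags, trimmed to taste

    # Building a current kernel from the Gentoo-patched sources:
    emerge sys-kernel/gentoo-sources
    cd /usr/src/linux
    make menuconfig && make && make modules_install && make install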

The eventual “downfall” of Gentoo for us stemmed from the increasing demands of maintaining it. Regular updates became a logistical hurdle. While compilation could be scheduled overnight, the subsequent manual management of configuration files using dispatch-conf, often requiring extensive RTFM’ing, proved time-consuming. Updates, typically performed every 4–6 months, would require at least a day or two of compilation. Larger updates were expected to fail, and often did, necessitating further troubleshooting and community assistance via IRC. Compounding this, critical library updates often required a subsequent run of revdep-rebuild, demanding hours of recompilation for dependent packages. This entire process, followed by meticulous configuration file management, transformed a desired level of customization into a significant time sink. Despite these challenges, the inherent configurability and the elegance of Gentoo’s interface remained compelling. We decided to limit updates to critical security advisories and sought a new distribution for our home server. After considering Slackware and Ubuntu, we ultimately chose Debian.
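
For readers who never ran a source-based distribution, a typical update round looked roughly like the following. Treat it as a sketch of the workflow rather than a recipe; the exact emerge flags varied over the years.

    emerge --sync          # refresh the portage tree
    emerge -avuDN @world   # ask/verbose/update/deep/newuse: rebuild everything that changed
    dispatch-conf          # merge updated configuration files by hand
    revdep-rebuild         # recompile packages broken by library upgrades (from gentoolkit)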

Stability and Pragmatism: Debian’s Reign (2009–2019)

Our journey with Debian began with its installation on our headless home server. While the initial setup presented minor hurdles, particularly regarding USB booting and online documentation clarity, the installation process itself was remarkably smooth, and we were operational within approximately an hour. Subsequent system configuration, including firewalls, Wi-Fi hotspots, and Time Machine backups for a family member’s computer, proved straightforward. We found ourselves significantly more satisfied with Debian than we had been with Gentoo. The apt package manager offered a stark contrast to Gentoo’s portage, with dependency calculations being remarkably swift. After a package tree sync in Gentoo, portage could take minutes to calculate the available upgrades, whereas apt produced the same answer almost instantly. The utility of dpkg-reconfigure was also a revelation, a functionality we found difficult to replicate effectively in Gentoo without resorting to virtual machines.
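
By contrast, the routine Debian maintenance on that server reduced to a couple of commands. The tzdata example below is simply the classic demonstration of dpkg-reconfigure, not a record of our actual setup:

    apt-get update && apt-get upgrade   # refresh package lists and apply pending upgrades
    dpkg-reconfigure tzdata             # re-run a package's interactive debconf questions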

Debian’s init script system also presented a noteworthy difference. Gentoo’s init scripts performed runtime dependency checking, automatically shutting down dependent services like VPN, SSH, and OpenAFS when the network was brought down. While this could be beneficial in some scenarios (e.g., for specific network time protocols), it often meant that services we wished to remain dormant until network availability were unnecessarily terminated. Manually editing these scripts was necessary to achieve the desired behavior. Furthermore, Gentoo’s elaborate dependency checking contributed to slower boot times compared to Debian.
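
The editing in question usually came down to the depend() block of an OpenRC init script. A minimal sketch of the change, using a generic service name rather than any particular script we touched:

    # /etc/init.d/myservice (hypothetical OpenRC init script)
    depend() {
        # "need net" makes this service stop whenever the network is brought down;
        # relaxing it to "use net" keeps the service running and merely orders it
        # after the network when both happen to start.
        use net
    }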

Conversely, Gentoo’s emerge package manager offered a more refined command-line interface compared to Debian’s apt-get. We rarely needed a graphical interface for package management with emerge, whereas we occasionally found ourselves using graphical tools like synaptic in Debian to manage dependencies or mark packages as automatically installed. Tools like dselect and aptitude provided alternatives, but their interfaces were less intuitive. Gentoo’s USE=kerberos flag was particularly powerful, offering seamless Kerberos authentication for applications like mutt, enabling both email sending and IMAP access. Replicating this functionality in Debian required manual installation of packages like libsasl2-modules-gssapi-mit to enable SMTP Kerberos authentication. We also held a preference for Gentoo’s default syslog-ng over Debian’s rsyslog, finding syslog-ng’s configuration for ignoring verbose log messages more straightforward. However, this was easily rectified by installing syslog-ng via aptitude.
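
To make that Kerberos comparison concrete, the two approaches looked roughly as follows; the package names are the standard ones, but consider the commands a sketch rather than a verified recipe:

    # Gentoo: add kerberos to the global USE flags in /etc/make.conf,
    # e.g. USE="... kerberos", then rebuild whatever is affected:
    emerge -uDN @world

    # Debian: GSSAPI SMTP authentication needs the SASL module installed explicitly:
    apt-get install libsasl2-modules-gssapi-mit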

Overall, our decade-long tenure with Debian was highly positive. In 2016, we acquired a new laptop, a Lenovo X1 Yoga (i5-6300U, 8GB RAM), and Debian installed without any issues. We believed this commitment to Debian would be permanent.

The Cutting Edge: Arch Linux and the Pursuit of the Latest (2019–Present)

Our conviction regarding Debian’s permanence was, however, short-lived. After a decade of loyal service, the inherent limitations of Debian’s release cycle – specifically, the long release times and the delay in incorporating upstream changes into the stable branch – prompted a re-evaluation. While attempting to mitigate this by using the Testing branch for critical packages, we found its stability and usability to be less than ideal. This hybrid approach, while functional, increased system maintenance overhead.

The definitive catalyst for our transition to Arch Linux was a kernel bug that adversely affected our laptop’s sleeping functionality. The issue was resolved upstream but had not yet been integrated into the Debian kernel. Our kernel compilation days were largely behind us, necessitating a distribution that provided readily available, up-to-date kernels. This led us to explore alternative distributions, and after considering Slackware and Ubuntu, we ultimately selected Arch Linux.

The Dual Nature of Arch: Pros and Cons

The appeal of Arch Linux lies in its inherent minimalism and its commitment to providing the latest software versions. This rolling release model means users are consistently at the forefront of technological advancements, eliminating the apprehension associated with large, infrequent release upgrades. However, this also means that the latest bugs are equally accessible, and the constant flow of updates can induce a perpetual state of “nervousness.”
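
In practice, the rolling model boils down to one habitual command, run far more often than a Debian dist-upgrade ever was:

    pacman -Syu   # sync the package databases and upgrade every installed package
    # (skimming the news on archlinux.org before a large upgrade is the customary precaution)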

The migration process from Debian was relatively smooth, with the most significant effort dedicated to transitioning from Debian’s networking scripts to systemd-networkd and NetworkManager. Our Docker containers and other critical services were migrated seamlessly.
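
On the systemd-networkd side, the replacement for Debian's /etc/network/interfaces stanzas was a small declarative unit along these lines. The file name and interface name are placeholders; the real interface name comes from `ip link`:

    # /etc/systemd/network/20-wired.network (illustrative name)
    [Match]
    Name=enp0s31f6        # placeholder; substitute the actual interface

    [Network]
    DHCP=yes

    # then enable the services:
    #   systemctl enable --now systemd-networkd systemd-resolved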

After several years of using Arch, our primary observation revolves around its minimalist and neutral approach. For most major software categories, Arch does not enforce a default. Instead, it presents a curated list of choices, accompanied by excellent documentation, leaving the final decision to the user. This neutrality, while empowering, can also lead to moments of indecision.

The Decision Fatigue of Arch

  • Desktop Environment: After two decades of using fvwm as our window manager, we were compelled to switch due to evolving display requirements. Specifically, the need to scale two monitors independently proved challenging for Xorg. Arch offers a selection of approximately fifteen officially supported desktop environments. Our initial inclination was to choose a default if one existed, but faced with this extensive list, we ultimately narrowed our choice to Gnome or KDE, opting for Gnome due to its prevalence as a default on many other distributions. Regrettably, after a month, we found Gnome unsatisfactory and transitioned to KDE, which has proven to be a much better fit.

  • Networking: Arch provides a beautifully color-coded table detailing various network manager choices without a definitive recommendation. After careful consideration, we settled on using NetworkManager on our laptop and systemd-networkd on our desktop. This decision was guided by the specific needs and usage patterns of each machine.

  • Boot Loaders: Similar to networking, Arch presents another color-coded table for boot loaders without a default. At this juncture, systemd had largely shed its controversial reputation and had been adopted by the vast majority of distributions as the default init system. Given this widespread acceptance and its integrated boot manager functionality, we selected systemd-boot (a minimal configuration sketch follows this list).
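
The systemd-boot setup itself is pleasantly small. A minimal sketch, with the root UUID left as a placeholder to be filled in from blkid:

    bootctl install                  # install the boot manager into the EFI system partition

    # /boot/loader/loader.conf
    default  arch.conf
    timeout  3

    # /boot/loader/entries/arch.conf
    title    Arch Linux
    linux    /vmlinuz-linux
    initrd   /initramfs-linux.img
    options  root=UUID=<root-partition-uuid> rw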

Interestingly, the one area where Arch did make a decisive default choice was its init system. Arch adopted systemd back in 2012, a decision the developers explained thoroughly and articulately on the project’s forums at the time.

In summary, our experience with Arch Linux has been overwhelmingly positive, marked by its cutting-edge nature. When we acquired a new computer, another Lenovo X1 Yoga (i5-1135G7, 16GB RAM), having the latest drivers was paramount. A simple switch from the linux-lts kernel to the standard linux kernel was sufficient to ensure all hardware functioned flawlessly. While the “cutting edge” does translate into more frequent kernel upgrades, even on the LTS kernel, and therefore more frequent reboots, our fear of critical system breakage during upgrades has gradually diminished. Arch Linux, for us, represents the ideal balance of leading-edge technology and user control, providing a dynamic and powerful computing environment.
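
The switch itself was a matter of minutes. Roughly, and assuming a systemd-boot entry like the one sketched earlier:

    pacman -S linux      # install the mainline kernel alongside (or in place of) linux-lts
    # point the boot entry at the new images:
    #   linux  /vmlinuz-linux
    #   initrd /initramfs-linux.img
    # reboot; linux-lts can be removed afterwards with `pacman -Rns linux-lts`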