"why does software require maintenance, it's not like it wears down"

Because software is not generally useful for any inherent good; what matters is its relationship with the surrounding context, and that context is in perpetual change.

Public

@chris__martin hmm, I think it's mostly because even when we have something called "Software Engineering" and we call it a science, making sure that some program is bug-free is terribly expensive, so maintenance exists even if we only think in terms of bugs. Also, software is so malleable that we can adapt it and evolve it faster than anything else. You could draw a parallel between a software release and a new model of an existing product. And then you have 'rolling releases' of things like sites.

Quiet public

@mdione @chris__martin

Marcos, I understand the argument you're making here, but there's a grander context around the software beyond just "bugs" and features.

If I took a perfect piece of bug-free software that required no new features and then I left it in the GitHub repo, how long do I have before it stops working for people?
2 years?
20 years?
1000 years?

At some point the software will need care, even if just to ensure the code still builds.

Quiet public

@gatesvp @chris__martin as it is, assuming you can still find all the deps all the way down to the kernel if needed, it should run. Now, whether it would still be useful because the context changed, like you say, yes, it probably won't be. But as a software developer (yes, it could be a narrow mindset) I think of changing requirements as meaning new features.

And I think the same could be said of physical stuff; in my lifetime we have had changes in plugs, going from two types of screw heads to N, etc.

Quiet public

@mdione @chris__martin

"Assuming you can find all of the deps..."

This is the macro point that Chris is trying to make. Marcos, we both know that the assumption you're making is unsafe.

Finding all the dependencies is not guaranteed. Having a stable compilation target is not guaranteed. And any dependency can be subject to its own instabilities and security issues.

Software is not just source code; it includes the environment in which that source code exists. And that environment is not stable.

Quiet public

@gatesvp @chris__martin well, for instance, for the Debian Linux distribution, you can find the source code and binaries for all ~8000 (then) and ~72000 (now) packages all the way back to March 2005. Granted, you will hit issues where architectural changes will prevent you from running binary code, but hardware emulators exist and are more and more common (because we keep creating new architectures); and yes, you can count those as 'maintenance' or dependencies.

1/N

Quiet public

@gatesvp @chris__martin All I am saying is that this is not exclusive to software. When I was a kid we had to start replacing wall sockets because my country decided to introduce grounded plugs for big appliances. This maintenance is also needed in other places. It's just that software is so malleable that instead of "throwing it away and buying a new one" like we do with physical objects, we have the tendency to change the one we have (mostly because it's seemingly cheaper).

Public

@chris__martin sadly we often grow thinking it's isolated.

also every node in the software graph is fuzzy; unless you're djb (slightly kidding), your code has bad interfaces and bad implementations... rot can come from within

Public

@chris__martin And if there's no other context, there's at least the context that operating systems keep changing. And they need to, if for no other reason than that hardware keeps changing!

Public

@chris__martin Also it was worn down when you deployed it, you just didn’t know that.

I mean sometimes it’s 100% rust!

Quiet public

@chris__martin that's part of why I'm into retrotech.
DOS hasn't changed much since 1993.

Public

@chris__martin Part of design is choosing the scope of that context, thereby limiting or blowing up the degree to which it needs maintenance. Most people overlook this.

Public

@chris__martin This is a really solid quote

Public

@chris__martin "Bitrot" may be a metaphorical term but it describes very real processes.

Public

@chris__martin the bits, they rot

Public
@chris__martin @dalias > "why does software require maintenance"

Improperly defined interfaces between systems & libraries and software, most of the time.

Then there's the odd occasion (often with decades-long spacing) when one needs to change something to support an entirely new system.

> what matters is its relationship with the surrounding context, and that context is in perpetual change.

That's mostly true for software with unlimited scope. It is perfectly possible to have software that is /complete/ and needs no more maintenance than the second example I gave above.
Public

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io Free and open source software by its very definition encourages people to make frequent changes to every part of the system; world-breaking changes that are unthinkable elsewhere are welcomed on a daily basis, and the community members in general are proud that they're so innovative (e.g. one can port an entire OS to a new CPU within a year). The consequence of this system is that most software projects are not a product, there's no such thing as "the software", but a human process: reporting issues, creating breakages, writing patches, doing CI/CD, packaging for distros, all in constant motion. If the motion stops, the software will stop working very soon.

If you look at a Win32 app, it's exactly the opposite: it's a product, not a process. Once it's completed it's "set in stone", and some people will still use the same binary 20 years later; sometimes they spend great effort to keep the mysterious binary running, even when very little is known about it. The latter "minimum maintenance" approach is historically rarely used by the free software community. Perhaps some projects should try it seriously.

Public
@niconiconi @chris__martin @dalias None of that prevents stable versioned APIs & protocols. And yes, external behavior can be stable (like a lot of POSIX-standardized UNIX tools').

Maintaining internal version-upgrade code is generally fairly simple. (It's absolutely feasible in classes/types to represent protocol versioning without it being otherwise obnoxious to use.)
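
A rough sketch of what I mean, in C, with made-up names (just one way to do it):

/* Hypothetical sketch: the wire formats are versioned, but the rest of the
   program only ever sees the newest layout; the upgrade path lives in one
   place, so old peers keep working. */
#include <stdint.h>
#include <string.h>

struct msg_v1 { uint32_t id; };                     /* original protocol */
struct msg_v2 { uint32_t id; uint64_t timestamp; }; /* current protocol  */

/* internal upgrade: fill in defaults for fields the old version lacked */
static void msg_upgrade_v1(const struct msg_v1 *old, struct msg_v2 *out)
{
    memset(out, 0, sizeof *out);
    out->id = old->id;
    out->timestamp = 0; /* unknown for v1 senders */
}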

There's no reason Free Software has to be worse; the (false) dichotomy you present doesn't exist to the exclusion of other things.

Furthermore, you realize that the "20+ year old binary still runs" example is a statement frequently made about POSIX (among other standards frequently implemented in UNIX-like Free Software)?
Quiet public

@lispi314 @chris__martin @dalias @niconiconi This may be a misunderstanding of the point: this isn't a technical issue, it's a social one based on what the prevailing expectations are for compat. Aside from glibc and a few others, very few projects see themselves as *system* components which need strong ABI guarantees; since source is the preferred format and everyone has to support some kind of CI process to rebuild routinely, source-level API compat is all that's usually needed or provided.

Quiet public

@lispi314 @chris__martin @dalias @niconiconi and unlike the few other systems which provide long-term binary compat, most Linux FOSS is built by combining a large number of totally independent projects with _no_ shared governance, so the mechanisms to *enforce* compat guarantees don't exist; they can only arise organically from the social consensus of the disparate development teams having the will and resources to do so. Very different from Windows, which is all in one source repo, or even BSD.

Quiet public
@raven667 @niconiconi @dalias @chris__martin It is a technical problem; the social/communication option has been selected (ad hoc) to haphazardly convey those specifications & limitations rather than having some other method.

Such an alternative option or mechanism /won't/ arise, or at least probably won't see wide adoption, because that would break compatibility with a lot of the UNIX-like world and its cultural & historical assumptions (someone willing to disregard wide adoption could absolutely still do it, with the caveat that they'd have to design it to be retrocompatible with UNIX-likes that disregard the mechanism).
Quiet public

@raven667@hachyderm.io @lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io There's a difference between what's theoretically possible and what is the normalized practice. Sure, for source-level compatibility, your POSIX-compliant C89 code from the year 2000 will still run in 2024, in theory. For binary compatibility, it has even been demonstrated that it's NOT that bad at all on GNU/Linux: you can still run the original GIMP 1 binary, if everything's packaged carefully with the correct tricks.

It misses the point, which is the cultural preference I was talking about; I don't even care about binary or source compatibility, as they're simply technical means to an end. If you really want a "maintenance mode" project, you don't even need that: you can just write a self-contained C89 project with strict standard conformance and use a small set of carefully selected stable dependencies (ncurses or Tcl/Tk will work until the end of time). As long as you treat it as a "finished" project and don't keep adding features to it, the maintenance required is likely minimal; you probably only need to update it once a year with a small patch to keep it running.
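
Something in that spirit (a hypothetical, trivial example; the flags shown in the comment are just one common way to enforce conformance):

/* hello.c: strictly conforming C89, no extensions, no external dependencies.
   Build with something like: cc -std=c89 -pedantic -Wall hello.c
   Nothing here depends on this year's toolchain or desktop stack. */
#include <stdio.h>

int main(void)
{
    printf("still building decades from now\n");
    return 0;
}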

But this is not what's going on. The reality of FOSS is that a project's devteam and goals will probably change several times over its lifetime, and the people, driven by technical challenges and the norm of expanding a project to fit your own needs, will keep throwing dozens of extra features into the codebase, rewriting the business logic many times over for an increasing number of niche uses. Within a few years it will be a completely different project, and another few years later there will be 3 competing forks, and 1 year later all will be obsolete as someone wrote something else.

Unless it's a low-level or niche tool, it will be extremely rare for the devs to declare that "The project is officially complete and is in maintenance mode; only security and bug fixes will be added after this point, and all feature requests and patches will be rejected!" If someone does it, it will immediately be seen as dead and be forked by the community. In this grand scheme of things, few care about compatibility.

The free software community culture expects a project to be constantly in development, because people know they have the power to change it for the (subjective) "better". The mode of thinking is much different in the proprietary Win32 software world, where users have a different set of expectations: the software is frozen upon release. Thus I said that perhaps "minimum maintenance" projects are something the community should explore.

Quiet public
@niconiconi @dalias @chris__martin @raven667 @lispi314 Sounds like you're assuming that a project being FOSS means it's led by a community (or even lacks a leader entirely).

Which is blatantly false: a lot of FOSS projects have one leader. For example Linux, OpenBSD, musl, s6 and other skarnet software, … have only one person truly calling the shots.
And Debian devs elect a project leader.

And non-FOSS is also really bad at having a defined scope and saying no; enterprise software is riddled with bloat, just a somewhat differently flavored kind of bloat that comes from managers/marketing instead of developers/users.
Public
@niconiconi @dalias @chris__martin @lispi314 Well you can try to do the same kind of thing as the few ancient enterprise binaries which still work on current Windows, with static Linux binaries (Mosaic can be run that way, for example) or by shipping the few libraries without guarantees on ABI (Opera 12 still runs).
But it'll also mean just throwing out security entirely, which enterprise software does all the time.

Meanwhile source code usually works for quite a while as well, with fewer problems when it comes to security, but you do need to be careful about your dependencies, which seems to be a lost ideal in many ecosystems today.
Public
@niconiconi @chris__martin @dalias @lispi314 At least for me it's a much more widespread culture problem than free software: browsers, JS included, barely ever break APIs, and NodeJS also has a stable API, yet web frontends, proprietary or not, which aren't like a tamagotchi are serious outliers.
Public
@lanodan @niconiconi @dalias @chris__martin > with static linux binaries (like Mosaic can be ran that way)

This is an issue mostly because the wrong layer is being distributed. The Operating System APIs for Linux are specified and standard at the ABI level, so particular cached computations (a binary) targeting that ABI will continue working in environments supporting it (different computer architectures being different environments). The stable/standard compatibility layer for most Linux binaries, however, is typically at the source level, inherited from libraries that only provide compatibility & reliability guarantees at the source level.

It makes no sense to distribute the program at a layer that is not subject to the reliability guarantees mentioned above. (related: https://gbracha.blogspot.com/2020/01/the-build-is-always-broken.html)

(I don't think GNU LibC makes any claims of ABI stability across versions, so a given cached version of it cannot be used as a reliable primary artifact.)

> But it'll also mean just throwing out security entirely, which enterprise software does all the time.

Unless cryptography is involved, that shouldn't be the case and suggests there's something wrong with the running environment rather than the program.

For cryptography there are ways of handling that, mainly presenting a stable API at some layer or another. Programs using SSH don't need to know more than its command line interface (which SSH itself has somewhat standardized I think), programs using I2P (as clients) need nothing more than standard HTTP or TCP support (through reverse-proxy tunnels) & do not care about the I2P version used. And of course, libraries presenting a stable API (whether type/function-call based or protocol-based, with appropriate upgrade support in the background), which I think is the standard way to go now, will simply keep working.
Quiet public
@niconiconi @dalias @chris__martin @lanodan > programs using I2P (as clients)

Well, as servers too, it works on both ends. And SAM's protocol handles retrocompatibility too.

Some of the other libraries for I2P do not provide compatibility guarantees across versions, but I think they're also all deprecated for that same reason.
Quiet public
@lispi314 @chris__martin @dalias @niconiconi Well cryptography is used by a *lot* of software, sometimes just to check integrity (although this shouldn't need updates), but also because TLS is used for the vast majority of networking these days.
And ABI guarantees for libraries in this area are pretty rare.

Same kind of deal with multimedia (which frequently needs security updates), especially if SDL isn't an option.
Quiet public
@lanodan @niconiconi @dalias @chris__martin Indeed, it is rare among those libraries for a number of reasons (very often performance).

For graphics display, that is part of why some programs used to talk X11 directly with the display server to render themselves from scratch.

For some negligible performance loss, it would be possible for a graphics library to use its own internal protocol with cross-version compatibility to provide a stable ABI. It's somewhat comparable to what many wrapper libraries do with Tk (although there's no need for multiprocessing to replicate that).
Quiet public

@lispi314 @chris__martin @dalias @niconiconi @lanodan
> I don't think GNU LibC makes any claims of ABI stability across versions

Slight confusion, as glibc uses versioned symbols extensively to provide different variants of its functions that transparently support all older ABIs when they change; they only add new versions and never remove old ones, so old binaries run just fine. This feature exists in the linker, but few libraries have the discipline to make guarantees using it.
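
For anyone unfamiliar, a rough sketch of the mechanism with a hypothetical "libfoo" (the version script defining the FOO_1.0/FOO_2.0 nodes and the -Wl,--version-script linker flag are elided):

/* old behavior, kept so binaries linked against FOO_1.0 keep running */
int foo_compat(int x) { return x + 1; }

/* current behavior; newly linked programs get this one */
int foo_current(int x) { return x * 2; }

/* foo@FOO_1.0 stays available forever; foo@@FOO_2.0 is the default
   version that new links bind to */
__asm__(".symver foo_compat, foo@FOO_1.0");
__asm__(".symver foo_current, foo@@FOO_2.0");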

Quiet public

@raven667 @lispi314 @chris__martin @niconiconi @lanodan It's a very bad mechanism, because symbol versioning binds at link time, but dependency on a particular version is determined at compile time.

The right way to do this is to throw away symbol versioning and do versioned interfaces with the preprocessor in the library's public header file. Bonus: it's portable, not dependent on a GNU tooling extension.
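
Roughly like this (a made-up header, not any particular library's; just a sketch of the idea):

/* libfoo.h: every real symbol keeps its name forever; the header maps the
   friendly name to whichever interface the consumer asked for at compile
   time. No linker extensions involved, so it's portable. */
#ifndef LIBFOO_H
#define LIBFOO_H

#ifndef LIBFOO_API
#define LIBFOO_API 2            /* newest interface by default */
#endif

int foo_v1(int x);              /* original semantics, never removed */
int foo_v2(int x, int flags);   /* extended semantics */

#if LIBFOO_API >= 2
#define foo(x) foo_v2((x), 0)
#else
#define foo(x) foo_v1(x)
#endif

#endif /* LIBFOO_H */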

Public

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io that doesn't cover things like supporting new hardware.

"Complete" software from 20 years ago will not work with hidpi screens or touchscreens, will run slowly because it's using software rendering and what was acceptable on 640x480 will be dog slow on 3840x2160 (and it will do it on a single thread too because multithreading wasn't a thing yet).

It will also probably not build to begin with now, because it was written in C and 1000 more things have since become fatal-by-default warnings.

Bonus point if it relies on a network service and/or protocol that doesn't exist anymore.

Sure, OSes could do a better job with backwards compatibility (this is a thing that e.g. flatpak is made to solve), but there are a lot of factors outside that too.

Public

@alice @lispi314 @chris__martin NES games that were complete 35 years ago still run exactly right now, because you run them in an environment that does not have changing surrounding context.

There's no reason that can't be the same for all sorts of other software.

Public

@dalias@hachyderm.io @lispi314@udongein.xyz @chris__martin@functional.cafe oh, you can, in a VM. You can run literally anything like that. But that's not what the reply was about, is it?

Public

@alice @lispi314 @chris__martin I mean.. it can be? That's one viable way to make software. It's a really annoying way to make gigantic software with sprawling dependencies that needs all sorts of hardware, because the interface surface you need to provide to host it is huge (see also: Docker 🤮), but for many things it's rather reasonable..?

Public

@dalias@hachyderm.io @lispi314@udongein.xyz @chris__martin@functional.cafe sure. Again, you can run anything like that, but then there's no point in this entire thread. What's the point of talking about how systems are bad at backwards compatibility when you can emulate anything?

Also I mean the interface surface for NES is huge too... Because you don't just need to emulate NES itself, but also every mapper chip from every cartridge ever, AND every peripheral ever. Nestopia supports more than 200 different boards, for example.

And then there are things like palette - with how NES outputs its graphics it's impossible to recreate accurate colors - because they differ for every TV.

Also, well, most emulators have game-specific hacks. In case of NES you need to know what mapper each game uses, so it has a game database. Otherwise at least in Nestopia I'm not aware of anything, but e.g. bsnes has plenty (it's SNES tho, not NES).

And of course emulation in general is a compromise. You have to choose between preserving every flaw as is or not to do that and accept that some games will break. For example, lots of LCD games rely on screen ghosting for transparency effects via flickering sprites/backgrounds in and out, e.g. on Game Boy. This means that I have to emulate really strong (unpleasantly so on large screens) optional ghosting for Game Boy games in Highscore, just to prevent them from being flickery... So, you have a choice of flicker or ghosting. Or... someone fixing the game, via a romhack.

Public
@alice @chris__martin @dalias > Also, well, most emulators have game-specific hacks.

They used to, and those prevented accurate emulation leading to other bugs instead. Mapping of coprocessors used by the game so they can be emulated is a consequence of ROM dumps not including the coprocessor whereas a game cartridge /would/.

It's the difference between zsnes (which had /very/ low spec requirements as a result) and higan/bsnes (which cannot work on weak machines).

> This means that I have to emulate really strong (unpleasantly so on large screens) optional ghosting for Game Boy games in Highscore

Would it not be an option to lower those screens' resolution?
Public

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io > Would it not be an option to lower those screens' resolution?

Why would that matter here?

Public
@alice @chris__martin @dalias It should enable displaying the output at the original resolution without flickering and have it be playable.
Public

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io who was even talking about resolution?

Flickering removal is ofc achievable - by adding ghosting. That is exactly what I was saying.

Public
@alice @chris__martin @dalias I misunderstood the matter as being broken rendering from resolution mismatch & upscaling hacks.

The solution to which is to not do those things. I did not know of the transparency hardware-dependent hack.
Public

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io nono, I didn't talk about any upscaling.

If you want another hack like that: on consoles with 320px wide output + NTSC + S-Video, dithering becomes translucency. A lot of mega drive and saturn games do that, for example.

Public
@alice @chris__martin @dalias I'm not familiar enough with the architecture of the Game Boy.

If those are things a cartridge could've controlled/toggled, then these are not hacks per se and are accurate emulation.

If those are not things a cartridge could've controlled, then bsnes isn't fully accurate because those shouldn't be necessary. Which is a bug, at least if bsnes aims for full accuracy.
Public

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io I'm not talking about game boy in context of bsnes.

Game Boy games just flicker to emulate transparency. Like this one:
https://www.youtube.com/watch?v=AyWvVeiidwM&t=1359s

It's not an emulator bug. They do it intentionally. But you see, they were meant to be played on a sucky passive-matrix LCD screen with horrific ghosting, where this flicker becomes a really nice looking transparency; this way games can have more than 4 shades.

Public
@alice @chris__martin @dalias Ah, I see. So one would need to emulate such a display to be able to portably reproduce the behavior, since the games assume it as part of the environment they can rely on.

That would require particular support for the variety of monitors & displays now common, and wouldn't necessarily be possible (accurately) on every display.

Is that the way the emulators typically go?
Public

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io well, this one is mostly fine; the only problem is if your screen's refresh rate is not a multiple of 60 Hz and it doesn't support VRR (then the ghosting also becomes jerky). It's more that you have a wonderful choice of flickering or ghosting, and the only way to not have either is to make a romhack that removes the flicker.

Public
@alice @chris__martin @dalias > and the only way to not have either is to make a romhack that removes the flicker.

Couldn't the emulator tune its display of an emulated display accordingly (at the cost of some accuracy)?
Public

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io how would it know what the intent is tho?

it's not even always just plain transparency! The title screen of that same game uses the same effect to make a gradient.

Public
@alice @chris__martin @dalias There would be some loss of accuracy on nonideal frequency mismatches, though to some degree a user providing hints/configuration could potentially mitigate that (with tradeoffs).
Public

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io right, it's all about tradeoffs

but back to the original point - that's exactly the kind of stuff that requires maintenance. Either the game itself is fixed, or everyone has to work around it, forever.

For retro games this is widely accepted, and after all we're emulating different hardware entirely, so fair enough.

For old PC stuff? It gets more questionable. And perfect backwards compat won't save you from software targeting defunct hardware. They obviously didn't have a time machine to know how people would run it in 30 years, but it doesn't make it any easier ^^

At least on PCs we don't need to deal with stuff like lightguns and various 3D glasses systems not working with LCDs because they expect the near-instant latency of a CRT...

Public

@dalias@hachyderm.io @alice@mk.nyaa.place @lispi314@udongein.xyz @chris__martin@functional.cafe Personally I believe open source software (yes, even source-available) will last for a lot longer than proprietary equivalents. A company may go under, and without the code I can't cross-compile to a different architecture. A lot of software written in higher level languages can be easily adapted to run on a different architecture (including C, assuming nothing crazy with inline assembly or specific compiler extensions). I get it if you're making software that is accelerated by SIMD instruction sets (simdjson comes to mind; that's not exactly "portable", but also that's an edge case, right?). The problem with emulated console games is that they rely too much on the behavior of FPUs (they expect a certain level of precision for floating point arithmetic; this was an issue with Dolphin because you can't exactly just translate to x86 instructions, you have to fuzz precision for some of these older games).

Public

@dalias@hachyderm.io @alice@mk.nyaa.place @lispi314@udongein.xyz @chris__martin@functional.cafe shader pipelines may not be exactly a universal thing or a portable thing, but if we abstract away that for a second… there’s a reason web assembly is picking up popularity, and that docker provides running things in web assembly environments. Java programs are incredibly portable (even old ones!) due to the nature of bytecode & the JVM. I mean this sort of ecosystem still exists within c but with a lot of caveats.

C needs to be compiled for a specific architecture. The JVMs themselves also need to be, mind you; they're made to be platform dependent in implementation but they abstract that away through bytecode. If I do a certain operation in Java, the JVM reads the bytecode and performs the same behavior, just does it differently. Say I'm on a big endian system vs little endian… I typically don't worry about such things; the JVM hides it from me. It maintains a consistent behavior.

If you're building something that directly relies on syscalls, sure, musl isn't exactly 100% portable (but that makes sense considering how a variety of kernels handle virtual memory management, and the fact that C relies on malloc() and several other common functions to manipulate memory which need a platform-specific implementation). If you're using a library that requires a libc's malloc you just care that it uh… works? C can go either way. If I'm including libcurl instead of interacting with a kernel's network stack directly… etc.

Therein lies the problem, though: the maintenance is compiling new targets (architectures come and go; remember when MIPS32 and MIPS64 were common targets?) and testing code on a new platform (what if something differs from how I expect it to behave?). I don't think Java or web assembly is the answer, but rather changing our approach to programming. People should still be empowered to make platform-dependent implementations (nothing would work without them), but people outside of that use case should focus on making their code as platform-agnostic as possible: reducing the usage of compiler-specific extensions, libraries that are only available on certain platforms, etc. I think that "POSIX" was an attempt at this (although it fails in several areas, but I don't think there's anything I could say on that you haven't heard already lol).

tl;dr - we need more emphasis on portability via platform agnostic libraries, and platform specific implementations that focus on compatibility with other platforms. A good example of this is when it comes to audio (OSS, and various other attempts at abstracting audio subsystems. audio scares the shit out of me tbh)

Public

@dalias@hachyderm.io @alice@mk.nyaa.place @lispi314@udongein.xyz @chris__martin@functional.cafe side note: I really like the inclusion of audio in my tldr. Java and other bytecode-interpreted languages (well, you get into a mix of JIT and some other funnies) struggle with audio and GUI. Sure, Swing exists and is pretty good, but it's not a universal solution. The problem lies in how different operating systems approach the audio subsystem, from Apple's Core Audio (https://developer.apple.com/documentation/coreaudio), to Linux's OSS, to sndio and other interfaces. That is a mess. A big one. It's not always feasible to abstract this away, especially because hardware-based audio mixing is completely different from software-based. You can't just get 1:1 here.

Quiet public

@dalias@hachyderm.io @alice@mk.nyaa.place @lispi314@udongein.xyz @chris__martin@functional.cafe @puppygirlhornypost2@transfem.social Portability is the primary reason why I avoid cgo (FFI for Go; it makes everything a lot worse due to Go-specific reasons) so much in my projects, namely my Fedi server Linstrom. Example: I deliberately chose to use a pure Go SQLite driver instead of the more commonly used cgo one, to ensure that as few external dependencies exist as possible. And now I'm looking to copy @gotosocial@gts.superseriousbusiness.org [woem.men]'s inclusion of ffmpeg via wasm, because I know that there is a pure Go runtime for wasm.
And due to all this, Linstrom should be able to be compiled for every target someone has made a Go compiler for, behaving the same way everywhere. In theory, you'd even be able to run Linstrom as a plugin for Linstrom in the future.

Quiet public
@mstar @puppygirlhornypost2 @dalias @chris__martin @alice > I deliberately chose to use a pure Go Sqlite driver

Wait, how does that work exactly? As far as I'm aware SQLite logic is entirely implemented as a C library. Has someone actually gone and reimplemented that in Golang?
Public
@alice @chris__martin @dalias It'll work quite fine with hidpi screens or touchscreens.

Click emulation software exists and slowness isn't nonfunctionality unless it is cripplingly slow. And usually systems with such absurd monitor specs tend to have decent CPUs.

The runtime-environment point I make below also applies here, for functionality without these issues.

> It will also probably not build to begin with now because it was written in C and 1000 more things have now become fatal by default warnings.

Building it in C was a bad choice to start with. Common Lisp would've been better.

> Bonus point if it relies on a network service and/or protocol that doesn't exist anymore.

Completeness isn't mutually exclusive with obsolescence. A renewal fork would be an option, among others.

> Sure, OS could do a better job with backwards compatibility (this is a thing that e.g. flatpak is made to solve)

That actually could cover the protocol mention above. Why /does/ the program need to know what protocol it's using to contact a destination and send messages (reliable, unreliable, ordered, etc) or "continuous" datastreams? The runtime environment (whether that's the operating system, a bytecode VM or something else) could /absolutely/ provide an (object-oriented) API that handles those concerns which are not directly relevant to the program.

At which point so long as the environment requirements are fulfilled, it will just work.
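
Concretely, something like this hypothetical interface (all names made up) is all the program would need to see:

/* The program states its requirements (reliable? ordered?) and never learns
   which protocol or transport the environment picked to satisfy them. */
struct env_stream;  /* opaque handle owned by the runtime */

struct env_stream *env_open(const char *destination, int reliable, int ordered);
long env_send(struct env_stream *s, const void *buf, long len);
long env_recv(struct env_stream *s, void *buf, long len);
void env_close(struct env_stream *s);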
Quiet public

@chris__martin because capitalism

Public

@chris__martin I once saw software described as performance art, and I wish I could find it again