"why does software require maintenance, it's not like it wears down"
Because software is not generally useful for any inherent good; what matters is its relationship with the surrounding context, and that context is in perpetual change.
@chris__martin hmm, I think it's mostly because even though we have something called "Software Engineering" and we call it a science, making sure that some program is bug-free is terribly expensive, so maintenance exists even if we only think in terms of bugs. Also, software is so malleable that we can adapt it and evolve it faster than anything else. You could draw a parallel between a software release and a new model of an existing product. And then you have 'rolling releases' of things like sites.
Marcos I understand the argument you're making here, but there's a grander context around the software beyond just "bugs" and features.
If I took a perfect piece of bug-free software that required no new features and then I left it in the GitHub repo, how long do I have before it stops working for people?
2 years?
20 years?
1000 years?
At some point the software will need care, if even just to ensure the code still builds.
@gatesvp @chris__martin as it is, assuming you can still find all the deps, all the way down to the kernel if needed, it should run. Now, whether it would still be useful once the context changed, like you say: yes, it probably won't be. But as a software developer (yes, it could be a narrow mindset) I think of changing requirements as meaning new features.
And I think the same could be said of physical stuff; in my lifetime we have had changes in plugs, gone from two types of screw heads to N, etc.
"Assuming you can find all of the deps..."
This is the macro point that Chris is trying to make. Marco, we both know that the assumption you're making is unsafe.
Finding all the dependencies is not guaranteed. Having a stable compilation target is not guaranteed. And any dependency can be subject to its own instabilities and security issues.
Software is not just source code. It's also the environment in which that source code exists. And that environment is not stable.
@gatesvp @chris__martin well, for instance, for the Debian Linux distribution, you can find the source code and binaries for all ~8000 (then) and ~72000 (now) packages all the way back to March 2005. Granted, you will hit issues where architectural changes prevent you from running binary code, but hardware emulators exist and are more and more common (because we keep creating new architectures); and yes, you can count those as 'maintenance' or dependencies.
1/N
@gatesvp @chris__martin All I am saying is that this is not exclusive to software. When I was a kid we had to start replacing wall sockets because my country decided to introduce plugs with ground for big appliances. This maintenance is also needed in other places. It's just that software is so malleable that instead of "throwing it away and buying a new one", like we do with physical objects, we have the tendency to change the one we have (mostly because it's seemingly cheaper).
@chris__martin sadly we often grow thinking it's isolated.
also every node in the software graph is fuzzy; unless you're djb (slightly kidding), your code has bad interfaces and bad implementations... rot can come from within
@chris__martin And if there's no other context, there's at least the context that operating systems keep changing. And they need to, if for no other reason than that hardware keeps changing!
@chris__martin Also it was worn down when you deployed it, you just didn’t know that.
I mean sometimes it’s 100% rust!
@chris__martin that's part of why I'm in to retrotech.
DOS hasn't changed much since 1993
@chris__martin Part of design is choosing the scope of that context, thereby limiting or blowing up the degree to which it needs maintenance. Most people overlook this.
@chris__martin This is a really solid quote
@chris__martin "Bitrot" may be a metaphorical term but it describes very real processes.
@chris__martin the bits, they rot
@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io Free and open source software by its very definition encourages people to make frequent changes to every part of the system; world-breaking changes that would be unthinkable elsewhere are welcomed on a daily basis, and community members in general are proud of how innovative they are (e.g. one can port an entire OS to a new CPU within a year). The consequence of this system is that most software projects are not a product, there's no such thing as "the software", but a human process: reporting issues, creating breakages, writing patches, doing CI/CD, packaging for distros, all in constant motion. If the motion stops, the software will stop working very soon.
If you look at a Win32 app, it's exactly the opposite - it's a product, not a process. Once it's completed it's "set in stone", and some people will still use the same binary 20 years later, sometimes spending great effort to keep the mysterious binary running even when very little is known about it. The latter "minimum maintenance" approach has historically rarely been used by the free software community. Perhaps some projects should try it seriously.
@lispi314 @chris__martin @dalias @niconiconi This may be a misunderstanding of the point: this isn't a technical issue, it's a social one based on what the prevailing expectations are for compat. Aside from glibc and a few others, very few projects see themselves as *system* components which need strong ABI guarantees; since source is the preferred format and everyone has to support some kind of CI process to rebuild routinely, source-level API compat is all that's usually needed or provided
@lispi314 @chris__martin @dalias @niconiconi and unlike the few other systems which provide long-term binary compat, most Linux FOSS is built by combining a large number of totally independent projects with _no_ shared governance, so the mechanisms to *enforce* compat guarantees don't exist; compat can only arise organically from the social consensus of the disparate development teams having the will and resources to provide it. Very different from Windows, which is all in one source repo, or even the BSDs
@raven667@hachyderm.io @lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io There's a difference between what's theoretically possible and what is the normalized practice. Sure, for source-level compatibility, your POSIX-compliant C89 code from the year 2000 will in theory still run in 2024. For binary compatibility, it has even been demonstrated that it's NOT that bad at all on GNU/Linux: you can still run the original GIMP 1 binary, if everything's packaged carefully with the correct tricks.
It misses the point, which is the cultural preference I was talking about; I don't even care about binary or source compatibility, as they're simply technical means to an end. If you really want a "maintenance mode" project, you don't even need that - you can just write a self-contained C89 project with strict standard conformance and use a small set of carefully selected stable dependencies (ncurses or Tcl/Tk will work until the end of time; see the sketch after this post). As long as you treat it as a "finished" project, and don't keep adding features to it, the maintenance required is likely minimal: you probably only need to update it once a year with a small patch to keep it running.
But this is not what's going on. The reality of FOSS is that a project's devteam and goals will probably change several times over its lifetime, and the people, driven by technical challenges and the norm of expanding a project to fit your own needs, will keep throwing dozens of extra features into the codebase, rewriting the business logic many times over, for an increasing number of niche uses. Within a few years it will be a completely different project, another few years later there will be 3 competing forks, and a year after that all of them will be obsolete because someone wrote something else.
Unless it's a low-level or niche tool, it is extremely rare for the devs to declare "The project is officially complete and is in maintenance mode, only security and bug fixes will be added after this point, all feature requests and patches will be rejected!" If someone does, the project will immediately be seen as dead and forked by the community. In this grand scheme of things, few care about compatibility.
The free software community culture expects a project to be constantly in development, because people know they have the power to change it for the (subjective) "better". The mode of thinking is much different in the proprietary Win32 software world - those users have a different set of expectations, of software that is frozen upon release. Thus I said that perhaps "minimum maintenance" projects are something the community should explore.
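A minimal sketch of what such a "finished", strictly conforming C89 program could look like. Everything here is illustrative: the file name, the build line (which assumes a GCC- or Clang-style compiler), and the choice of curses as the one stable dependency.

/* hello.c - a hypothetical "maintenance mode" C89 program: strict
 * conformance, one stable dependency (curses), no feature roadmap.
 * Build, assuming a curses library is installed:
 *   cc -std=c89 -pedantic -Wall -Werror hello.c -lcurses -o hello
 */
#include <curses.h>

int main(void)
{
    initscr();                  /* enter curses mode */
    mvaddstr(0, 0, "Done. No further features planned. Press any key.");
    refresh();
    getch();                    /* wait for a key press */
    endwin();                   /* leave curses mode */
    return 0;
}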
@lispi314 @chris__martin @dalias @niconiconi @lanodan
> I don't think GNU LibC makes any claims of ABI stability across versions
Slight confusion: glibc uses versioned symbols extensively to provide different variants of its functions and transparently support all older ABIs when they change; it only adds new versions and never removes old ones, so old binaries run just fine. This feature exists in the linker, but few libraries have the discipline to make guarantees using it.
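A minimal sketch of the mechanism being described, with made-up names (libdemo, demo_open, DEMO_1.0/DEMO_2.0); only the .symver directive and the version-script syntax are the actual GNU tooling features.

/* libdemo.c - hypothetical shared library exporting two ABI variants of
 * demo_open(). Binaries linked against DEMO_1.0 keep resolving to the old
 * function; newly built programs get DEMO_2.0. Requires GNU binutils and a
 * version script such as:
 *
 *   DEMO_1.0 { global: demo_open; local: *; };
 *   DEMO_2.0 { global: demo_open; } DEMO_1.0;
 *
 * built with something like:
 *   cc -shared -fPIC -Wl,--version-script=demo.map libdemo.c -o libdemo.so.1
 */

int demo_open_v1(const char *path)              /* old ABI: path only */
{
    (void)path;
    return 0;
}

int demo_open_v2(const char *path, int flags)   /* new ABI: adds flags */
{
    (void)path;
    (void)flags;
    return 0;
}

/* Bind each implementation to a versioned symbol (GNU assembler extension);
 * "@@" marks the default version that new links will pick up. */
__asm__(".symver demo_open_v1, demo_open@DEMO_1.0");
__asm__(".symver demo_open_v2, demo_open@@DEMO_2.0");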
@raven667 @lispi314 @chris__martin @niconiconi @lanodan It's a very bad mechanism, because symbol versioning binds at link time, but dependency on a particular version is determined at compile time.
The right way to do this is to throw away symbol versioning and do versioned interfaces with the preprocessor in the library's public header file. Bonus: it's portable, not dependent on a GNU tooling extension.
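A minimal sketch of the header-level alternative being proposed, again with hypothetical names. The redirection happens in plain ISO C at compile time, so the variant a binary depends on is exactly the one it was compiled against.

/* demo.h - hypothetical public header that versions its interface with the
 * preprocessor instead of linker symbol versioning. */
#ifndef DEMO_H
#define DEMO_H

int demo_open_v1(const char *path);             /* original interface */
int demo_open_v2(const char *path, int flags);  /* current interface  */

/* Callers that predate the flags argument can keep the old interface by
 * defining DEMO_ABI_V1 before including this header; everyone else gets
 * the current one. */
#ifdef DEMO_ABI_V1
#define demo_open(path)        demo_open_v1(path)
#else
#define demo_open(path, flags) demo_open_v2((path), (flags))
#endif

#endif /* DEMO_H */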
@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io that doesn't cover things like supporting new hardware.
"Complete" software from 20 years ago will not work with hidpi screens or touchscreens, will run slowly because it's using software rendering and what was acceptable on 640x480 will be dog slow on 3840x2160 (and it will do it on a single thread too because multithreading wasn't a thing yet).
It will also probably not even build now, because it was written in C and 1000 more things have since become fatal-by-default warnings (see the sketch after this post).
Bonus point if it relies on a network service and/or protocol that doesn't exist anymore.
Sure, OSes could do a better job with backwards compatibility (this is the kind of thing that e.g. Flatpak is made to solve), but there are a lot of factors outside that too.
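As an illustration of that last point, here is a sketch of once-ordinary C that recent GCC and Clang releases reject at default settings: implicit int and implicit function declarations are now treated as errors, and current glibc headers no longer declare gets() at all (it was removed from the standard).

/* legacy.c - the kind of C that built cleanly for decades but typically
 * fails today with a recent compiler and libc at default settings. */
#include <stdio.h>

main()                  /* implicit int return type: now an error by default */
{
    char buf[16];
    gets(buf);          /* no longer declared by modern headers, so this is an
                           implicit function declaration: also an error */
    printf("%s\n", buf);
    return 0;
}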
@alice @lispi314 @chris__martin NES games that were complete 35 years ago still run exactly right now, because you run them in an environment that does not have changing surrounding context.
There's no reason that can't be the same for all sorts of other software.
@dalias@hachyderm.io @lispi314@udongein.xyz @chris__martin@functional.cafe oh, you can, in a VM. You can run literally anything like that. But that's not what the reply was about, is it?
@alice @lispi314 @chris__martin I mean... it can be? That's one viable way to make software. It's a really annoying way to make gigantic software with sprawling dependencies that needs all sorts of hardware, because the interface surface you need to provide to host it is huge (see also: Docker), but for many things it's rather reasonable..?
@dalias@hachyderm.io @lispi314@udongein.xyz @chris__martin@functional.cafe sure. Again, you can run anything like that, but then there's no point in this entire thread. What's the point of talking about how systems are bad at backwards compatibility when you can emulate anything?
Also I mean the interface surface for NES is huge too... Because you don't just need to emulate NES itself, but also every mapper chip from every cartridge ever, AND every peripheral ever. Nestopia supports more than 200 different boards, for example.
And then there are things like palette - with how NES outputs its graphics it's impossible to recreate accurate colors - because they differ for every TV.
Also, well, most emulators have game-specific hacks. In the case of the NES you need to know what mapper each game uses, so the emulator has a game database. Beyond that, at least in Nestopia, I'm not aware of any, but e.g. bsnes has plenty (that's SNES tho, not NES).
And of course emulation in general is a compromise. You have to choose between preserving every flaw as-is, or not doing that and accepting that some games will break. For example, lots of LCD games rely on screen ghosting for transparency effects, flickering sprites/backgrounds in and out, e.g. on Game Boy. This means that I have to emulate really strong (unpleasantly so on large screens) optional ghosting for Game Boy games in Highscore, just to prevent them from being flickery... So, you have a choice of flicker or ghosting. Or... someone fixing the game, via a romhack.
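For what it's worth, the ghosting being described boils down to blending each new frame with a decayed copy of the previous output, so content flickered on alternate frames averages into a stable half-transparent image. A rough grayscale sketch of that general technique (hypothetical function, not how any particular emulator implements it):

#include <stddef.h>
#include <stdint.h>

/* blend_frame - mix the freshly rendered frame into the persistent output
 * buffer. persistence = 0 gives no ghosting; values toward 1 give stronger,
 * longer-lasting ghosting. */
void blend_frame(uint8_t *out, const uint8_t *new_frame,
                 size_t n_pixels, float persistence)
{
    size_t i;
    for (i = 0; i < n_pixels; i++) {
        out[i] = (uint8_t)(persistence * out[i]
                           + (1.0f - persistence) * new_frame[i]);
    }
}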
@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io > Would it not be an option to lower those screens' resolution?
Why would that matter here?
@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io who was even talking about resolution?
Flickering removal is ofc achievable - by adding ghosting. That is exactly what I was saying.
@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io nono, I didn't talk about any upscaling.
If you want another hack like that: on consoles with 320px wide output + NTSC + S-Video, dithering becomes translucency. A lot of Mega Drive and Saturn games do that, for example.
@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io I'm not talking about game boy in context of bsnes.
Game Boy games just flicker to emulate transparency. Like this one: https://www.youtube.com/watch?v=AyWvVeiidwM&t=1359s
It's not an emulator bug. They do it intentionally. But you see, they were meant to be played on a sucky passive matrix lcd screen with horrific ghosting - there this flicker becomes a really nice looking transparency - this way games can have more than 4 shades.
@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io well, this one is mostly fine; the only problem is if your screen's refresh rate is not a multiple of 60 Hz and it doesn't support VRR - then the ghosting also becomes jerky. It's more that you have a wonderful choice of flickering or ghosting, and the only way to not have either is to make a romhack that removes the flicker.
@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io how would it know what the intent is tho?
it's not even always just plain transparency! The title screen of that same game uses the same effect to make a gradient.
@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io right, it's all about tradeoffs
but back to the original point - that's exactly the kind of stuff that requires maintenance. Either the game itself is fixed, or everyone has to work around it, forever.
For retro games this is widely accepted, and after all we're emulating different hardware entirely, so fair enough.
For old PC stuff? It gets more questionable. And perfect backwards compat won't save you from software targeting defunct hardware. The developers obviously didn't have a time machine to know how people would run it 30 years later, but that doesn't make it any easier ^^
At least on PCs we don't need to deal with stuff like lightguns and various 3D glasses systems not working with LCDs because they expect the near-instant latency of a CRT...
@dalias@hachyderm.io @alice@mk.nyaa.place @lispi314@udongein.xyz @chris__martin@functional.cafe Personally I believe open source software (yes, even source-available) will last a lot longer than proprietary equivalents. A company may go under, and without the code I can't cross-compile to a different architecture. A lot of software written in higher-level languages can be easily adapted to run on a different architecture (including C, assuming nothing crazy with inline assembly or compiler-specific extensions). I get it if you're making software that is accelerated by SIMD instruction sets (simdjson comes to mind, that's not exactly "portable", but also that's an edge case right?). The problem with some console games is that they rely too much on the behavior of FPUs (they expect a certain level of precision for floating-point arithmetic; this was an issue with Dolphin, because you can't exactly just translate to x86 instructions, you have to fuzz precision for some of these older games)
@dalias@hachyderm.io @alice@mk.nyaa.place @lispi314@udongein.xyz @chris__martin@functional.cafe shader pipelines may not exactly be a universal or portable thing, but if we abstract that away for a second… there's a reason WebAssembly is picking up popularity, and that Docker supports running things in WebAssembly environments. Java programs are incredibly portable (even old ones!) due to the nature of bytecode & the JVM. I mean, this sort of ecosystem still exists within C, but with a lot of caveats.
C needs to be compiled for a specific architecture. The JVMs themselves also need to be, mind you; they're platform-dependent in implementation, but they abstract that away through bytecode. If I do a certain operation in Java, the JVM reads the bytecode and performs the same behavior, it just does it differently under the hood. Say I'm on a big-endian system vs little-endian… I typically don't worry about such things, the JVM hides it from me. It maintains consistent behavior.
If you're building something that directly relies on syscalls, sure, musl isn't exactly 100% portable (but that makes sense considering how differently various kernels handle virtual memory management, and the fact that C relies on malloc() and several other common functions to manipulate memory which need a platform-specific implementation). If you're using a library that requires a libc's malloc, you just care that it uh… works? C can go either way. If I'm including libcurl instead of interacting with a kernel's network stack directly… etc.
Therein lies the problem, though: the maintenance is compiling for new targets (architectures come and go, remember when MIPS32 and MIPS64 were common targets?) and testing code on a new platform (what if something behaves differently from how I expect?). I don't think Java or WebAssembly is the answer, but rather changing our approach to programming. People should still be empowered to make platform-dependent implementations (nothing would work without them), but people outside of that use case should focus on making their code as platform agnostic as possible (a small sketch follows this post): reducing the use of compiler-specific extensions, of libraries that are only available on certain platforms, etc. I think "POSIX" was an attempt at this (although it fails in several areas, but I don't think there's anything I could say on that you haven't heard already lol).
tl;dr - we need more emphasis on portability via platform agnostic libraries, and platform specific implementations that focus on compatibility with other platforms. A good example of this is when it comes to audio (OSS, and various other attempts at abstracting audio subsystems. audio scares the shit out of me tbh)
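One small, concrete instance of the platform-agnostic style argued for above (hypothetical helper): decoding a little-endian field with shifts gives the same result on every host, unlike casting the buffer to the host's integer type, which bakes in the machine's byte order and alignment rules.

#include <stdint.h>

/* read_u32_le - decode a little-endian 32-bit value from a byte buffer
 * without depending on the host's endianness or alignment requirements. */
uint32_t read_u32_le(const unsigned char *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}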
@dalias@hachyderm.io @alice@mk.nyaa.place @lispi314@udongein.xyz @chris__martin@functional.cafe side note: I really like the inclusion of audio in my tl;dr. Java and other bytecode-interpreted languages (well, you get into a mix of JIT and some other funnies) struggle with audio and GUI. Sure, Swing exists and is pretty good, but it's not a universal solution. The problem lies in how differently operating systems approach the audio subsystem, from Apple's Core Audio (https://developer.apple.com/documentation/coreaudio), to Linux's OSS, to sndio and other interfaces. That is a mess. A big one. It's not always feasible to abstract this away, especially because hardware-based audio mixing is completely different from software-based. You can't just get 1:1 here.
@dalias@hachyderm.io @alice@mk.nyaa.place @lispi314@udongein.xyz @chris__martin@functional.cafe @puppygirlhornypost2@transfem.social Portability is the primary reason why I avoid cgo (FFI for Go; it makes everything a lot worse due to Go-specific reasons) so much in my projects, namely my Fedi server Linstrom. Example: I deliberately chose a pure Go SQLite driver instead of the more commonly used cgo one, to ensure that as few external dependencies exist as possible. And now I'm looking to copy @gotosocial@gts.superseriousbusiness.org [woem.men]'s inclusion of ffmpeg via wasm, because I know that there is a pure Go runtime for wasm.
And due to all this, Linstrom should be able to be compiled for every target someone made a Go compiler for, behaving in the same way everywhere. In theory, you'd even be able to run Linstrom as a plugin for Linstrom in the future
@puppygirlhornypost2@transfem.social @dalias@hachyderm.io @chris__martin@functional.cafe @alice@mk.nyaa.place @lispi314@udongein.xyz yup. Hence the slight performance drop
@chris__martin because capitalism
@chris__martin I once saw software described as performance art, and I wish I could find it again