"How do I get started contributing to open source? What are some good entry-level tasks to work on?"
These are questions I am often asked, so here's the answer for everyone to read:
Scratch your own itches. Find bugs that are causing you problems and conspicuously missing features you would find useful, then fix and implement them - in literally any free/open-source software you're using. Don't worry about being unfamiliar with the codebase or programming language or whatever; just solve one problem at a time.
Try this: next time you go to report a bug, report it, and immediately start working on a patch which fixes the problem.
Scratching your own itches is the best source of motivation and maximizes your productivity.
Often that means not contributing to my projects at all, if you're asking how to get started with a specific project. Maybe you like it because it's flawless 😉 (hah!), in which case it wouldn't need your help anyway. Go fix something which is bugging you in another project. Spread the contributor wealth around and eventually it'll come back to my projects, too.
I wonder if there is a correlation between the side someone takes in "tabs vs. spaces" and the side they take in "dynamic linking vs. static linking." The results of the following poll ought to give some answer. Boost if you're curious to get an accurate answer.
Which pair of sides are you on? If you don't feel strongly about one of the wars, just go with what you're leaning toward.
When #vim golfing (or golfing with any other editor) I think shift ought to count as at least half a keystroke. shift-j is harder than fj.
Right now, the Type type in my type inferencer is a tree, so function and struct types contain pointers to other Types. Since the algorithm currently does a lot of copying, I'd rather not waste space on pointers the way linked structures do. Of course, the Type type cannot contain itself directly, since then its size on the stack couldn't be known. To work around this, I'm thinking a Type will be an array of bytes in a special format. Atomic types are represented by a single byte, and function types will include the number of parameters in the first byte so that you know how many parameter types to read.
Is there a name for that kind of data structure - a would-be linked data structure that doesn't have pointers? (i.e. it juxtaposes in memory the things that would be linked together)
Not sure if any of that made sense.
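For what it's worth, this reads like a prefix (Polish-notation) serialization of the type tree; I've seen "flattened" used for similar pointer-free AST layouts. A minimal sketch of the scheme, with invented tag bytes:

```python
# Sketch of the flat encoding described above (tag bytes are made up):
# a Type is a bytes object; atomic types take one byte, and a function
# type's header carries the parameter count so the reader knows how many
# juxtaposed parameter types follow, then the return type.
INT, BOOL, FN = 0x01, 0x02, 0xF0

def fn_type(params, ret):
    """Build a function type from already-encoded parameter/return types."""
    return bytes([FN, len(params)]) + b"".join(params) + ret

def decode(buf, i=0):
    """Decode the type starting at buf[i]; return (tree, next_index)."""
    tag = buf[i]
    if tag == FN:
        n, i = buf[i + 1], i + 2
        params = []
        for _ in range(n):
            p, i = decode(buf, i)
            params.append(p)
        ret, i = decode(buf, i)
        return ("fn", params, ret), i
    return {INT: "int", BOOL: "bool"}[tag], i + 1
```

`fn_type([bytes([INT])], bytes([BOOL]))` encodes int -> bool in four contiguous bytes, no pointers; nested function types just nest in place.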
I made a mostly-working compositional type inference engine. The core concept, as I understand it, is that each sub-expression is analyzed in isolation and the results are combined. The upshot is that errors are more useful, because all locations involved in a potential conflict are reported, rather than just the place where the conflict was discovered.
However, it does not seem to be the speediest algorithm ever. Part of that is because I haven't put much effort into being clever, but I think a large part is also due to the nature of the algorithm. There's just so much memory copying going on...
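To illustrate the error-reporting payoff, here's a drastically simplified sketch of the compositional idea - nothing like the real engine (monomorphic, no unification, operators just demand concrete operand types; all names invented):

```python
# Each subexpression is checked in isolation against a demanded type and
# yields an "assumptions" map: free variable -> list of (type, location)
# uses. Combining subresults merely merges the maps, so when a variable
# ends up demanded at two types, *every* use site is on hand for the error.
def infer(expr, want):
    kind = expr[0]
    if kind == "var":                       # ("var", name, loc)
        _, name, loc = expr
        return {name: [(want, loc)]}
    if kind == "int":                       # ("int", value, loc)
        assert want == "Int"
        return {}
    if kind == "add":                       # ("add", lhs, rhs)
        return merge(infer(expr[1], "Int"), infer(expr[2], "Int"))
    if kind == "if":                        # ("if", cond, then, else)
        return merge(infer(expr[1], "Bool"),
                     merge(infer(expr[2], want), infer(expr[3], want)))
    raise ValueError(kind)

def merge(a, b):
    out = {k: list(v) for k, v in a.items()}
    for k, v in b.items():
        out.setdefault(k, []).extend(v)
    return out

def conflicts(assumptions):
    # report every variable used at more than one type, with all locations
    return [(name, uses) for name, uses in assumptions.items()
            if len({t for t, _ in uses}) > 1]
```

For `x + (if x then 1 else 2)` this reports that x is demanded as Int at one location and Bool at another, rather than blaming only whichever use happened to be checked last.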
Found this short article about compositional type checking - https://gergo.erdi.hu/blog/2010-10-23-the_case_for_compositional_type_checking/. The writing is refreshingly easy to follow compared to other type inference papers I've tried to read. The notation that papers tend to use always seems to have subtle differences, so it feels like I have to re-learn the language each time. However, I feel like type checking rules are pretty intuitive for the most part, and there are probably just one or two clever details that make the algorithm correct or decidable. I wish writers would put more effort into being easy to understand.
I made a parser. Then I made a pretty printer for the AST. Then I made a function that calls parse and then calls print. Is there any literature on how to preserve things like comments and line breaks?
One thought I had is to make the printer output a stream of (token, following-whitespace) pairs. The token stream ought to match the original tokenizer stream. If I have an option in the tokenizer to include whitespace and comments as tokens, then I could even compare the two streams and ensure the output preserves the important parts of the input stream.
This is more complicated than I would like, but it seems preferable to modifying the AST nodes to carry extra info, since ordinary compilation doesn't need it.
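Here's roughly how I'd prototype that stream comparison, with a toy tokenizer standing in for the real one (all names invented):

```python
# Tokenize both the original source and the pretty-printed output with
# trivia (whitespace/comments) kept as tokens, then check that the
# non-trivia token streams match exactly and that every comment survives.
import re

TOKEN = re.compile(r"(?P<comment>#[^\n]*)|(?P<ws>\s+)|(?P<tok>\w+|[^\w\s])")

def lex(src):
    """Return (kind, text) pairs; kind is 'comment', 'ws', or 'tok'."""
    return [(m.lastgroup, m.group()) for m in TOKEN.finditer(src)]

def printer_preserves(original, printed):
    orig, new = lex(original), lex(printed)
    same_tokens = ([t for k, t in orig if k == "tok"] ==
                   [t for k, t in new if k == "tok"])
    same_comments = ([t for k, t in orig if k == "comment"] ==
                     [t for k, t in new if k == "comment"])
    return same_tokens and same_comments
```

The whitespace tokens are kept separate so the check can be as strict or as lax about them as you like; here only tokens and comments must survive the round trip.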
Any thoughts? @loke
I had an idea to improve the Zig tokenizer, and it turned out to be a good one!
throughput: 279 MiB/s => 347 MiB/s
Check out the diff - it's a great demo of Zig's comptime feature.
dd if=pmos.img of=/dev/sdb
via an eMMC-to-USB adapter. Put the module back into the #PinebookPro and held the power button. A bit later, I got this screen, followed quickly by the pretty #PostmarketOS login screen. The problem is that neither the trackpad nor the keyboard works. Any ideas? @martijnbraam
The backstory is that I ran out of battery in the middle of a previous overwrite of the eMMC module from the SD Card, so it's possible that things are more borked than usual.
This week I finally switched away from Gmail to a real email provider. I was very successfully using the strategy described here - https://xph.us/2013/01/22/inbox-zero-for-life.html
Although I'm glad to be away from it, I'll admit both the mobile app and the web interface were extremely pleasant to use. Plus, that strategy I linked to is surprisingly Gmail-centric.
At the moment, I probably don't get enough email to need anything more than "move email to Archive when I'm done with an email".
My contention is that folders are inferior to tags in all situations, including filesystems, function namespaces, and email. Tags can do anything folders can, but not vice versa.
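One way to make that concrete (a toy sketch; the files and tags are invented): model each item's folder path as a set of tags, and folder lookup falls out as tag intersection, while cross-cutting queries come along for free.

```python
# A "folder" is recovered from tags by intersecting its path components;
# meanwhile tags answer queries no single hierarchy can express without
# duplicating files.
items = {
    "invoice.pdf": {"work", "finance", "2023"},
    "photo.jpg":   {"personal", "2023"},
    "notes.txt":   {"work", "2023"},
}

def in_folder(path):
    """Emulate /work/finance as: everything tagged both work and finance."""
    want = set(path.strip("/").split("/"))
    return {name for name, tags in items.items() if want <= tags}
```

`in_folder("/work/finance")` behaves like an ordinary folder listing, but `in_folder("/2023")` also works - a view a fixed hierarchy could only offer by copying files into a second tree. The reverse embedding doesn't exist, because a folder pins each item to exactly one path.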
Just found https://crontab.guru/