From thought to finger to keys,
The body makes it a breeze;
The slowest part is the mind,
Which usually runs behind.
I just recently started using Yash instead of Fish as my default shell. It offers a similar set of goodies to Fish, but it is also POSIX compliant.
A good skill to have is being able to describe the general instance from a single example. For example, if all you know about is Zoom, you should still be able to talk about "video conference software". Recognize the essential value that something provides, rather than always viewing it as an atomic unit.
I've been experimenting with static compilation of a #Haskell program.
First try: 33M
Stripped of symbols: 21M
Compile with "-split-sections": 8.5M
Stripped of symbols: 5M
That's a huge difference from "-split-sections"! If you don't know what that flag does: as I understand it, it splits each function into its own section so that the linker can drop unused sections, meaning only code that is actually used ends up in the output.
If you're wondering why it isn't enabled by default, you're apparently not alone. It looks like it is in GHC 8.8; I was compiling with 8.6.
Can someone help me with what I think is a networking problem?
While trying to install Alpine Linux on an ethernet-only machine, the setup script fails during the wget request for the mirror list (the request times out).
I tried the same request on my laptop, which is connected to the same switch as the other machine, and it also timed out. However, if I use wget's -4 option on my laptop to force IPv4, the request succeeds.
Thus, it seems that IPv6 requests are being held up somehow. Where would that be happening? On my router? On the Alpine server?
#Haskell point-free vs. point-ful. Note that fromField is a method from the FromField typeclass in the sqlite-simple package. Would you rather:
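The two candidate definitions aren't included above, but the general trade-off can be sketched with hypothetical stand-ins (a UserId wrapper and a plain String parser standing in for sqlite-simple's Field/Ok machinery):

```haskell
-- Hypothetical stand-ins, just to show the two styles side by side.
newtype UserId = UserId Int deriving (Show, Eq)

-- stand-in for a field parser like sqlite-simple's fromField
parseInt :: String -> Maybe Int
parseInt s = case reads s of
  [(n, "")] -> Just n
  _         -> Nothing

-- point-ful: the field argument is named explicitly
userIdPointful :: String -> Maybe UserId
userIdPointful f = fmap UserId (parseInt f)

-- point-free: the argument is left implicit, built by composition
userIdPointfree :: String -> Maybe UserId
userIdPointfree = fmap UserId . parseInt
```

Both compile to the same thing; the question is purely which one reads better.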
That also probably means that I have to have endpoints for both the webpage and the data populating the page.
I just published my first project on sourcehut - https://sr.ht/~philipwhite/webpad/.
My goal is for it to be "notepad.exe for the internet". Not that I necessarily want it to get hugely popular. I just want something extremely simple that provides some of the benefits of Google Docs without all the downsides and bloat that come with it.
The server is written in Haskell, which may shrink the pool of potential contributors, but please don't be shy. I would enjoy it if more people than just me were interested in the project.
Does anyone know of literature on how to synchronize text between a browser and a server with as little bandwidth as possible?
The particular scenario I have in mind is that the browser knows the text currently on the server, as well as the text that the server should be updated with.
The easiest thing to do is just send the whole text every time there is an update, but my goal is to transfer less data over the network (for small mobile data plans).
The next thing I could do is send only the lines that have changed, but that assumes that the text is broken into lines.
Does anyone know of papers that address problems similar to this?
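The line-based idea from the previous paragraph can be sketched as follows. This is a toy, assuming (unrealistically) that old and new text have the same number of lines, so only in-place edits are captured; a real solution would need a proper diff or insertion/deletion handling:

```haskell
-- A delta is a list of (line index, replacement line) pairs.
type Delta = [(Int, String)]

-- Collect only the lines that differ between old and new.
mkDelta :: [String] -> [String] -> Delta
mkDelta old new = [ (i, n) | (i, o, n) <- zip3 [0 ..] old new, o /= n ]

-- Apply a delta to the old lines to reconstruct the new lines.
applyDelta :: Delta -> [String] -> [String]
applyDelta d = zipWith patch [0 ..]
  where
    patch i line = maybe line id (lookup i d)
```

Only the changed lines cross the network, at the cost of the same-line-count assumption.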
If I'm going to store in my database a user-entered string that will eventually make it into a webpage, should I html-escape it first?
I think the answer is no. Databases have no trouble storing arbitrary strings of data. Security problems occur only when we try to put that arbitrary string into an HTML page, so that is where the risk should be mitigated.
However, for some reason, I have the impression that the "best practice" is to escape everything *before* putting it into the database.
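A minimal sketch of the escape-on-output approach (not a vetted sanitizer — just the five characters that matter most): store the raw user string in the database, and call something like this only at the point where the string is interpolated into HTML.

```haskell
-- Escape the characters that are significant in HTML text and
-- attribute contexts. Everything else passes through unchanged.
escapeHtml :: String -> String
escapeHtml = concatMap esc
  where
    esc '&'  = "&amp;"
    esc '<'  = "&lt;"
    esc '>'  = "&gt;"
    esc '"'  = "&quot;"
    esc '\'' = "&#39;"
    esc c    = [c]
```

Keeping the stored string raw also means the same data can later be rendered into non-HTML contexts (JSON, plain text) without first un-escaping it.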
Distro hunting: I am trying to choose a good first Linux distro for a friend. I'm planning to set it up for them, so I'm not worried about having "batteries included".
I would like to go with #Alpine, since it seems to be small and simple, and I might eventually switch to it myself. However, I'm concerned that it might be too idealistic for someone who just wants to use a computer. Does anyone have an experience report from that perspective?
An example of what I'm talking about is Wayland vs. X. I'm used to fiddling around with newer technology until I get things to work, so I don't mind dealing with the current shortcomings of the Wayland experience. However, I'm more wary of putting a first-timer on Wayland, since occasionally weird things happen.
I've been working on the login flow for a simple web program. In the process, I learned about two things:
1. Forms cannot submit DELETE or PUT requests.
2. How Cross-Site Request Forgery works - https://owasp.org/www-community/attacks/csrf
Synthesizing these two pieces of knowledge, would it not be true that making your whole API use DELETE and PUT would obviate the need for CSRF tokens? The same-origin policy would prevent anyone but the page itself from making DELETE and PUT requests via JS, which is the only way to use those two methods.
To be clear, I think this is an entirely stupid idea, but it's intriguing nonetheless.
Note that I am not saying BARE is pointless, since not every message is a token. However, it does seem to me that the motivating example in the blog post does not compel the use of a *standardized* format (although it does a good job of advocating a *small* format).
I'm also aware of the BARE encoding, which could be used to make a much smaller and simpler token format - https://drewdevault.com/2020/06/21/BARE-message-encoding.html
Here's the problem/caveat I have with any standardized format:
Assuming I don't really care about letting the client read the metadata in the token, the only program creating and using these tokens is the server, which means all that matters is that the server is consistent with itself. That is, standardizing the token format is not important when the token contents are not meant to be public (and why should they be?).
Thus, I am inclined to use the most convenient binary encoding/decoding library that my language provides (like the Haskell "binary" package).
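With the "binary" package, that amounts to something like the following sketch. The Token type and its fields are made up for illustration; the point is only that the server needs encode/decode to round-trip, not to match any external standard:

```haskell
{-# LANGUAGE DeriveGeneric #-}
import Data.Binary (Binary, decode, encode)
import GHC.Generics (Generic)

-- Hypothetical token payload. Only the server ever produces or
-- consumes this, so the wire format just has to round-trip.
data Token = Token
  { tokenUser   :: String
  , tokenExpiry :: Int  -- e.g. expiry as POSIX seconds
  } deriving (Show, Eq, Generic)

-- The instance is derived generically; no hand-written layout.
instance Binary Token

-- The only property the server actually relies on.
roundTrips :: Token -> Bool
roundTrips t = decode (encode t) == t
```

If the token format ever changes, old tokens simply fail to validate, which for session tokens is usually acceptable.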