Somebody asked me for a font. I don't know much about fonts. I do know a whole lot about making fonts. I also know a whole lot about surfing the web.
Sometimes, you choose to buy a pastry or a loaf from your local bakery instead of your nearly-local supermarket. In the same way, instead of loading up Google Fonts, you could consider picking a font from one of the lists on this page.
I also searched the web for "fonts for the web" - which didn't turn up anything that good, even with Kagi's "small web" toggled - and then the same but with "site:neocities.org". From that, I found these lists, which look pretty great:
I write these notes in Obsidian. To upload them, I could visit https://github.com/alifeee/blog/tree/main/notes, click "add file", and copy and paste the file contents. I probably should do that.
But, instead, I wrote a shell script to upload them. Now, I can press "CTRL+P" to open the Obsidian command palette, type "lint" (to lint the note), then open it again, type "upload", and upload the note. At this point, I could walk away and assume everything went fine, but what I normally do is open the GitHub Actions tab to check that it worked properly.
The process the script undertakes is:
check user inputs are good (all variables exist, file is declared)
check if file exists or not already in GitHub with a curl request
generate a JSON payload for the upload request, including:
commit message
commit author & email
file contents as a base64 encoded string
(if file exists already) sha1 hash of existing file
make a curl request to upload/update the file!
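As a rough sketch (not the script verbatim; the repository path, token variable, author details, and filename here are stand-ins), the interesting part looks something like this, using the GitHub contents API:

```bash
#!/bin/bash
# sketch: upload/update one file via the GitHub contents API
# assumes GITHUB_TOKEN is set; repo, path, and author are illustrative stand-ins
FILE="note.md"
URL="https://api.github.com/repos/alifeee/blog/contents/notes/${FILE}"

# does the file already exist? if so, grab its blob sha
SHA=$(curl -s -H "Authorization: Bearer ${GITHUB_TOKEN}" "${URL}" | jq -r '.sha // empty')

# build the JSON payload: commit message, author, base64 contents, and (if updating) the sha
PAYLOAD=$(jq -n \
  --arg msg "upload ${FILE}" \
  --arg name "alifeee" --arg email "me@example.com" \
  --arg content "$(base64 -w 0 "${FILE}")" \
  --arg sha "${SHA}" \
  '{message: $msg, committer: {name: $name, email: $email}, content: $content}
   + (if $sha != "" then {sha: $sha} else {} end)')

# make the request to create/update the file
curl -s -X PUT -H "Authorization: Bearer ${GITHUB_TOKEN}" -d "${PAYLOAD}" "${URL}"
```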
To use it from inside Obsidian, I use an extension called Obsidian shellcommands, which lets you specify several commands. For this, I specify:
…and when run with a file open, it will upload/update that file to my notes folder on GitHub.
This is maybe a strange way of doing it, as the "source of truth" is now "my Obsidian", and GitHub is really just a place for the files to live. However, I enjoy it.
I've made the script quite generic: you supply most information via environment variables. You can use it to upload an arbitrary file to a specific folder in a specific GitHub repository. Or… you can modify it and do what you want with it!
I've wondered about the answer to this question for a while.
Today, I figured I'd find out a nice way, as I wanted to store an SSH private key on a server (so I can access it from different computers in different locations). I could also store it on my phone, as I (mostly) have that on me.
The idea could be that you would have a local file, encrypt it, then store it on a server (or any file sharing service like Dropbox/Google Drive/etc). Then, from a different device, you could download it and decrypt it.
set aliases
I found this answer on the internet, and I set some aliases so I can easily arbitrarily password-encrypt/decrypt files. I set the aliases with atuin, so they sync across all my devices, but you could also stick this in ~/.bashrc or elsewhere. The aliases are:
alias decrypt='openssl enc -d -aes-256-cbc -pbkdf2 -in'
alias encrypt='openssl enc -aes-256-cbc -pbkdf2 -in'
encrypt
Then, I can use them by supplying a file, and I get a bunch of jumbled characters, which I certainly couldn't crack.
$ echo 'this is totally an SSH key' > non_encrypted_file.txt
$ encrypt non_encrypted_file.txt | tee encrypted_file.txt
enter AES-256-CBC encryption password: ********
Verifying - enter AES-256-CBC encryption password: ********
=���?���C�9���_Z���>E
decrypt
To decrypt, I put in the same password I used to encrypt:
$ decrypt encrypted_file.txt
enter AES-256-CBC decryption password: ********
this is totally an SSH key
using pipes
I can also use pipes!
$ cat non_encrypted_file.txt | encrypt - > encrypted_file.txt
enter AES-256-CBC encryption password: ********
Verifying - enter AES-256-CBC encryption password: ********
$ cat encrypted_file.txt | decrypt -
enter AES-256-CBC decryption password: ********
this is totally an SSH key
notes
how good is the encryption
I'm not sure how "good" aes-256-cbc is as an encryption algorithm(?). I'll ignore this fact.
how to expand aliases
In future, I may want to know what type of encryption I used. I could go and look at my aliases file, but I discovered that you can also type ALT+CTRL+E (or ESC+CTRL+E) to expand aliases inline, turning the first line below into the second:
encrypt
openssl enc -aes-256-cbc -pbkdf2 -in
how to encrypt a folder/multiple files
To encrypt a folder, you could use tar to turn the folder into a single .tar file, encrypt that, and then use tar to unpack it again later. A bit like this.
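Something like this, using the aliases from above (the folder and file names are just placeholders):

```bash
# pack the folder into one file, then encrypt that file
tar -czf my_folder.tar.gz my_folder/
encrypt my_folder.tar.gz > my_folder.tar.gz.enc
# later, on another machine: decrypt, then unpack
decrypt my_folder.tar.gz.enc > my_folder.tar.gz
tar -xzf my_folder.tar.gz
```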
Given a username, e.g., @thentrythis@thentrythis.org, find the format of the "webfinger request" (which allows you to request data about a user), which should be on /.well-known/host-meta. The key here is that the original site (thentrythis.org) redirects to the "social site" (social.thentrythis.org).
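A sketch of what that looks like with curl (assuming the usual host-meta/webfinger layout; the exact template comes from the host-meta response):

```bash
# 1. ask the original domain where webfinger lives
curl -sL "https://thentrythis.org/.well-known/host-meta"
# ...the <Link rel="lrdd"> template in the response points at the social site,
#    something like: https://social.thentrythis.org/.well-known/webfinger?resource={uri}

# 2. fill the template in with the account to get the user's data
curl -sL "https://social.thentrythis.org/.well-known/webfinger?resource=acct:thentrythis@thentrythis.org"
```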
It's always nice to know that I could use Mastodon by reaaaallllyyy slowly issuing my own curl requests (or, what this really means, build my own client).
a very broad-strokes definition of the word "hacking" I spurted out in a text conversation.
when people say hacking they mean one of several things
the (positive) sense is that used by hackspaces, to hack is to make something do something beyond its initial purposes
technologically, a lot of the time, that means taking apart an old TV and reusing parts of it to make a lightning rod, or replacing a phone battery by yourself (the phone companies do not desire this), or adding a circuit board to your cat flap that uses the chip inside the cat to detect if it's your cat and, if not, lock the flap
more "software based", it can be like scraping a government website to collect documents into a more readable format, turning back on trains that were disabled via software by their manufacturer as a money-grabbing gambit, or getting access to academic papers that are unreasonably locked behind expensive paywalls
If someone says 'my Facebook got hacked', what does that mean?
usually what they mean is that someone has logged into it without their permission
and most (all) of the time, that person has guessed their password because they said it out loud, they watched them put it in, they guessed it randomly (probs rare), or (rarest) they found the password in a password leak for a different website and tried it on Facebook (because the person uses the same password on multiple accounts)
I'd call that a second thing people say hacking for
and a third is the money extorting hackers, who hack into [the British library] and lock all their documents unless they pay [a ransom]
I've thought about installing a VPN on my server for a few months. It wouldn't allow the perhaps-more-common VPN use of getting past region-locked content (as I can't change the region of my server), but as an academic exercise, and for other reasons, I gave it a try.
I accepted all the default settings (IP / UDP / port / DNS servers) apart from username (alifeee), allowed the port through my firewall (which uses Uncomplicated FireWall (ufw)) with sudo ufw allow 1194 (the default port), and a file alifeee.ovpn was created. That file was pretty simple, and basically just a few keys, and looked a bit like this:
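Roughly this shape, with everything sensitive replaced by placeholders (the real file just has the actual keys and the server's IP in it):

```
client
dev tun
proto udp
remote <server-ip> 1194
# ...a handful of other options, then the keys/certificates inline:
<ca>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</ca>
<cert>
...
</cert>
<key>
...
</key>
<tls-crypt>
...
</tls-crypt>
```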
This file was small enough that I was able to copy it in only two screens through ConnectBot on my phone. To install it, I:
opened the VPN settings on Linux, where I could import a .ovpn file by default
installed the OpenVPN app on Android which let me import the file
I haven't installed it on Windows but I'm presuming it's as easy as installing some OpenVPN app
Since installing it, it's actually been pretty useful. I've used it:
on trains or in cafés to hide my traffic (I think)
to download PlatformIO libraries as my ISP blocked the library hosting website inexplicably
to access https://sci-hub.se/, which my ISP also blocks (this post brought to you by: my ISP being super annoying)
So, if you want to get round blocks, hide your traffic, or other VPN shenanigans, you could create a VPS (Virtual Private Server) and install OpenVPN to it pretty easily. Perhaps you could even get around region locks if you picked a server location in a region you wanted.
I'm writing a blog about hitchhiking, which involves a load of .geojson files, which look a bit like this:
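Roughly like this (the coordinates and property below are made up and heavily shortened; the real traces have many more points):

```json
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": { "name": "2.1 Tamworth -> Tibshelf Northbound" },
      "geometry": {
        "type": "LineString",
        "coordinates": [[-1.69, 52.63], [-1.41, 52.89], [-1.33, 53.14]]
      }
    }
  ]
}
```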
The .geojson files are generated from .gpx traces that I exported from OSRM's (Open Source Routing Machine) demo (which, at time of writing, seems to be offline, but I believe it's on https://map.project-osrm.org/), one of the routing engines on OpenStreetMap.
I put in a start and end point, exported the .gpx trace, and then converted it to .geojson with, e.g., ogr2ogr "2.1 Tamworth -> Tibshelf Northbound.geojson" "2.1 Tamworth -> Tibshelf Northbound.gpx" tracks, where ogr2ogr is a command-line tool from sudo apt install gdal-bin which converts geographic data between many formats (I like it a lot, it feels nicer than searching the web for "errr, kml to gpx converter?"). I also then semi-manually added some properties (see how).
Originally, I was combining them into one .geojson file using https://github.com/mapbox/geojson-merge, which is a binary to merge .geojson files, but I decided to use jq because I wanted to do something a bit more complex, which was to create a structure like
FeatureCollection
  Features:
    FeatureCollection
      Features (1.1 Tamworth -> Woodall Northbound, 1.2 Woodall Northbound -> Hull)
    FeatureCollection
      Features (2.1 Tamworth -> Tibshelf Northbound, 2.2 Tibshelf Northbound -> Leeds)
    FeatureCollection
      Features (3.1 Frankley Northbound -> Hilton Northbound, 3.2 Hilton Northbound -> Keele Northbound, 3.3 Keele Northbound -> Liverpool)
I spent a while making a quite-complicated jq query, using variables (an "advanced feature"!) and a reduce statement, but when I completed it, I found out that the above structure is not valid .geojson, so I went back to just having:
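a single flat FeatureCollection containing all the features. That flat merge is a much simpler jq filter, something like this (a sketch, not necessarily the exact command I used):

```bash
# slurp every file, then concatenate all their features into one FeatureCollection
jq -s '{type: "FeatureCollection", features: map(.features) | add}' *.geojson > combined.geojson
```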
I moved to Linux [time ago]. One thing I miss from the Windows file explorer is how easy it was to create text files.
With Nautilus (Pop!_OS' default file browser), you can create templates which appear when you right click in an empty folder (I don't remember where the templates file is and I can't find an obvious way to find out, so... search it yourself), but this doesn't work if you're using nested folders.
I.e., I use this view a lot in Nautilus (the file explorer), which is a tree view that lets you expand folders instead of opening them (similar to most code editors).
But in this view, you can't "right click on empty space inside a folder" to create a new template file, you can only "right click the folder" (or if it's empty, "right click a strange fake-file called (Empty)").
So, I created a script in /home/alifeee/.local/share/nautilus/scripts called new file (folder script) with this content:
#!/bin/bash
# create new file within folder (only works if used on folder)
# notify-send requires libnotify-bin -> `sudo apt install libnotify-bin`
if [ -z "${1}" ]; then
  notify-send "did not get folder name. use script on folder!"
  exit 1
fi
file="${1}/new_file"
i=0
while [ -f "${file}" ]; do
  i=$(($i+1))
  file="${1}/new_file${i}"
done
touch "${file}"
if [ ! -f "${file}" ]; then
  notify-send "tried to create a new file but it doesn't seem to exist"
else
  notify-send "I think I created file all well! it's ${file}"
fi
Now I can right click on a folder, click "scripts > new file" and have a new file that I can subsequently rename. Sweet.
I sure hope that in future I don't want anything slightly more complicated like creating multiple new files at once...
I was given an old computer. I'd quite like to make a computer to use in my studio, and take my tower PC home to play video games (mainly/only local coop games like Wilmot's Warehouse, Towerfall Ascension, or Unrailed, and occasionally Gloomhaven).
It's not the best, and I'd like to know what parts I would want to replace to make it suit my needs (which are vaguely "can use a modern web browser" without being slow).
By searching the web, I found these commands to collect hardware information for a computer:
uname -a # vague computer information
lscpu # cpu information
df -h # hard drive information
sudo dmidecode -t bios # bios information
free -h # memory (RAM) info
lspci -v | grep VGA -A11 # GPU info (1)
sudo lshw -numeric -C display # GPU info (2)
I also found these commands to benchmark some things:
sudo apt install sysbench glmark2
# benchmark CPU
sysbench --test=cpu run
# benchmark memory
sysbench --test=memory run
# benchmark graphics
glmark2
I put the output of all of these commands into text files for each computer, into a directory that looks like:
I wanted to make a local archive of personal websites. This is because in the past I have searched my bookmarks for things like fonts to see how many of them mention, talk about, or link to things about fonts. When I did this, I only looked at the homepages, so since then I've been wondering about a way to search a list of entire sites.
I came up with the idea of downloading the HTML files for my bookmarked sites, and using grep and...
Also, `lua_search` doesn't support case-insensitivity yet. `search` tries to be smart: if you pass in a pattern with any uppercase letters it's treated as case-sensitive, but if it's all lowercase it's treated as case-insensitive. `lua_search` doesn't have these smarts yet, and all patterns are currently case-sensitive.
search
#!/usr/bin/zsh
# Search a directory for files containing all of the given keywords.
DIR=`mktemp -d`
ROOT=${ROOT:-.}
# generate a list of files on stdout
echo find `eval echo $ROOT` -type f -print0 \> $DIR/1 >&2
find `eval echo $ROOT` -type f -print0 > $DIR/1
INFILE=1
for term in $*
do
  # filter file list for one term
  OUTFILE=$(($INFILE+1))
  if echo $term |grep -q '[A-Z]'
  then
    echo cat $DIR/$INFILE \|xargs -0 grep -lZ "$term" \> $DIR/$OUTFILE >&2
    cat $DIR/$INFILE |xargs -0 grep -lZ "$term" > $DIR/$OUTFILE
  else
    echo cat $DIR/$INFILE \|xargs -0 grep -ilZ "$term" \> $DIR/$OUTFILE >&2
    cat $DIR/$INFILE |xargs -0 grep -ilZ "$term" > $DIR/$OUTFILE
  fi
  INFILE=$OUTFILE
done
# get rid of nulls in the outermost call, and sort for consistency
cat $DIR/$INFILE |xargs -n 1 -0 echo |sort
#!/usr/bin/lua
local input = io.popen('find . -type f')

-- will scan each file to the end at most once
function match(filename, patterns)
  local file = io.open(filename)
  for _, pattern in ipairs(patterns) do
    if not search(file, pattern) then
      return false
    end
  end
  file:close()
  return true
end

function search(file, pattern)
  if file:seek('set') == nil then error('seek') end
  for line in file:lines() do
    if line:match(pattern) then
      return true
    end
  end
  return false
end

for filename in input:lines() do
  filename = filename:sub(3) -- drop the './'
  if match(filename, arg) then
    print(filename)
  end
end
...to search the sites.
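For example, something like this (guessing at typical usage; "sites/" is a placeholder for wherever the downloaded HTML ended up):

```bash
# list downloaded pages that mention both terms
ROOT=sites/ search font typeface
```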
initial attempt
I found you can use wget to do exactly this (download an entire site), using a cacophony of arguments. I put them into a script that looks a bit like:
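Something in this direction (a sketch of the kind of invocation, not the exact script):

```bash
# mirror one site into a local folder
wget \
  --recursive --level=5 --no-parent \
  --page-requisites --convert-links --adjust-extension \
  --wait=1 --random-wait \
  --directory-prefix="sites/" \
  "https://example.org/"
```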
...and set it off. I found several things, which made me modify the script in several ways (mainly I saw these by watching one specific URL take a lot of time to scrape):
some sites had a lot of media, so I searched the web and added a file exclude
some sites just had so many pages, like thousands of fonts, or daily puzzles, or just a very detailed personal wiki, so I added a site disable so I could skip specific URLs
as I was re-running the script a lot, and didn't want to cause excess traffic (it's worth noting that wget respects robots.txt), I added -N to wget so it didn't re-download things that hadn't been modified (per the Last-Modified: ... header)
a lot of sites didn't have a Last-Modified: ... header, so I made the script skip sites that had a directory already created for them (which I then had to manually delete if the site had only half-downloaded)
some sites had very slow connections (not https://blinry.org/ (and others), which was blazingly fast) which made the crawl seem like it would take ages
At this point (after not much effort, to be honest), I gave up. My final script was:
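Roughly this, with the tweaks from the list above folded in (again a sketch; the URL list, excluded extensions, and paths are illustrative):

```bash
#!/bin/bash
# mirror each site in urls.txt, skipping heavy media and already-downloaded sites
while read -r url; do
  domain=$(echo "${url}" | sed -E 's|https?://([^/]+).*|\1|')
  # skip sites that already have a directory (half-downloads get deleted manually)
  if [ -d "sites/${domain}" ]; then
    echo "skipping ${domain}" >&2
    continue
  fi
  wget \
    --recursive --level=5 --no-parent \
    --page-requisites --convert-links --adjust-extension \
    --reject "*.jpg,*.jpeg,*.png,*.gif,*.mp4,*.zip,*.pdf" \
    --timestamping \
    --wait=1 --random-wait \
    --directory-prefix="sites/" \
    "${url}"
done < urls.txt
```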
If I want to continue in future (searching a personal list of sites), I may find another way to do it, perhaps something similar to Google's search syntax `potato site:http://site1.org site:http://site2.org`, or perhaps I can create a custom search engine filter with DuckDuckGo/Kagi/etc that lets me put a custom list of URLs in. Who's to say. Otherwise, I'll also just continue sticking search queries in the various alternative/indie search engines like those on https://proto.garden/blog/search_engines.html.