notes by alifeee, tagged scripting (17)


here I may post some short, text-only notes, mostly about programming. source code.


getting my wifi name and password from the terminal

2025-05-25 • tags: scripting, wifi, aliases • 265 'words', 80 secs @ 200wpm

I often want to get my current WiFi name (SSID) and password.

How to get name/password manually

Sometimes, it's for a microcontroller. Sometimes, to share it. This time, it's for setting up an info-beamer device with WiFi.

Before today, I would usually open my phone, go to "share" under the WiFi settings, and copy the password and the SSID manually.

It's finally time to write a way to do it with bash!

How to get name/password with bash

After some web-searching, these commands do what I want:

alias wifi='iwgetid -r'
alias wifipw='sudo cat "/etc/NetworkManager/system-connections/$(wifi).nmconnection" | pcregrep -o1 "^psk=(.*)"'

How to use

…and I can use them like:

$ wifi
the wood raft (2.4G)
$ wifipw
[sudo] password for alifeee: 
**************

Neat!
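(An aside: on systems using NetworkManager, you can also ask nmcli for the secret directly, instead of sudo-ing into the connection file. This is just a sketch of an alternative, assuming nmcli is installed, you may be prompted for authentication, and it assumes the connection profile is named after the SSID, like the alias above does.)

# alternative: ask NetworkManager for the PSK of the current connection
nmcli -s -g 802-11-wireless-security.psk connection show "$(iwgetid -r)"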

Using Atuin aliases

Finally, above I suggested I was using Bash aliases, but I actually created them using Atuin, specifically Atuin dotfile aliases, like:

atuin dotfiles alias set wifi 'iwgetid -r'
atuin dotfiles alias set wifipw 'sudo cat "/etc/NetworkManager/system-connections/$(wifi).nmconnection" | pcregrep -o1 "^psk=(.*)"'

Now, they will automatically be enabled on all my computers that use Atuin. This is actually not… amazingly helpful as my other computers all use ethernet, not WiFi, but… it's mainly about having the aliases all in the same place (and "backed up", if you will).


Getting hackspace Mastodon instances from SpaceAPI

2025-05-22 • tags: scripting, spaceapi, mastodon, hackspaces, json • 622 'words', 187 secs @ 200wpm

We're back on the SpaceAPI grind.

This time, I wanted to see what Mastodon instances different hackspaces used.

The "contact" field in SpaceAPI

SpaceAPI has a "contact" object, which is used for this kind of thing. For example, for Sheffield Hackspace, this is:

$ curl -s "https://www.sheffieldhackspace.org.uk/spaceapi.json" | jq '.contact'
{
  "email": "trustees@sheffieldhackspace.org.uk",
  "twitter": "@shhmakers",
  "facebook": "SHHMakers"
}

Downloading all the SpaceAPI files

Once again, I start by downloading the JSON files, so that (in theory) I can make only one request to each SpaceAPI endpoint, and then work with the data locally (instead of requesting the JSON from the web every time I interact with it).

This script is modified from last time I did it, adding some better feedback about why some endpoints fail.

# download spaces
tot=0; got=0
echo "code,url" > failed.txt
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[0;33m'; NC='\033[0m'
while read double; do
  tot=$(($tot+1))
  name=$(echo "${double}" | awk -F';' '{print $1}');
  url=$(echo "${double}" | awk -F';' '{print $2}');
  fn=$(echo "${name}" | sed 's+/+-+g')
  echo "saving '${name}' - <${url}> to ./spaces/${fn}.json";

  # skip unless manually deleted  
  if [ -f "./spaces/${fn}.json" ]; then
    echo -e "  ${YELLOW}already saved${NC} this URL!" >> /dev/stderr
    got=$(($got+1))
    continue
  fi
  
  # get, skipping if HTTP status >= 400
  code=$(curl -L -s --fail --max-time 5 -o "./spaces/${fn}.json" --write-out "%{http_code}" "${url}")
  if [[ "${?}" -ne 0 ]] || [[ "${code}" -ne 200 ]]; then
    echo "${code},${url}" >> failed.txt
    echo -e "  ${RED}bad${NC} status code (${code}) for this url!"  >> /dev/stderr
    continue
  fi
  
  echo -e "  ${GREEN}fetched${NC}! maybe it's bad :S" >> /dev/stderr
  got=$(($got+1))
done <<<$(cat directory.json | jq -r 'to_entries | .[] | (.key + ";" + .value)')
echo "done, got ${got} of ${tot} files, $(($tot-$got)) failed with HTTP status >= 400"
echo "codes from failed.txt:"
cat failed.txt | awk -F',' 'NR>1{a[$1]+=1} END{printf "  "; for (i in a) {printf "%s (%i) ", i, a[i]}; printf "\n"}'

# some JSON files are malformed (i.e., not JSON) - just remove them
rem=0
for file in spaces/*.json; do
  cat "${file}" | jq > /dev/null
  if [[ "${?}" -ne 0 ]]; then
    echo "=== ${file} does not parse as JSON... removing it... ==="
    rm -v "${file}"
    rem=$(( $rem + 1 ))
  fi
done
echo "removed ${rem} malformed json files"

Extracting contact information

This is basically copied from last time I did it, changing membership_plans? to contact?, and changing the jq format afterwards.

# parse contact info
for file in spaces/*.json; do
  plans=$(cat "${file}" | jq '.contact?')
  [[ "${plans}" == "null" ]] && continue
  echo "${file}"
  echo "${plans}" | jq -r 'to_entries | .[] | (.key + ": " + (.value|tostring) )'
  echo ""
done > contact.txt

It outputs something like:

$ cat contact.txt | tail -n20 | head -n13
spaces/Westwoodlabs.json
twitter: @Westwoodlabs
irc: ircs://irc.hackint.org:6697/westwoodlabs
email: vorstand@westwoodlabs.de

spaces/xHain.json
phone: +493057714272
email: info@x-hain.de
matrix: #general:x-hain.de
mastodon: @xHain_hackspace@chaos.social

spaces/Zeus WPI.json
email: bestuur@zeus.ugent.be

Calculating Mastodon averages

We can filter this file to only the "mastodon:" lines, and then extract the server with a funky regex, and get a list of which instances are most common.

$ cat contact.txt | grep '^[^:]*mastodon' | pcregrep -o1 '([^:\.@\/]*\.[^\/@]*).*' | sort | uniq -c | sort -n
      1 c3d2.social
      1 caos.social
      1 hachyderm.io
      1 hackerspace.pl
      1 mas.to
      1 social.bau-ha.us
      1 social.flipdot.org
      1 social.okoyono.de
      1 social.saarland
      1 social.schaffenburg.org
      1 telefant.net
      2 social.c3l.lu
      3 mastodon.social
      4 hsnl.social
     39 chaos.social

So… it's mostly chaos.social. Neat.
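If you wanted to skip the intermediate contact.txt file, a jq-only version of the same count might look like this (a sketch; it assumes the handles are in the @user@instance form, so URL-style values would still want the regex above):

# count Mastodon instances straight from the saved SpaceAPI files
jq -r '.contact? // {} | .mastodon? // empty | split("@") | last' spaces/*.json \
  | sort | uniq -c | sort -n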


comparing historical HMO licence data in Sheffield

2025-05-14 • tags: scripting, hmos, open-data • 1321 'words', 396 secs @ 200wpm

What is an HMO licence

Sheffield city council publishes a list of HMO (House in Multiple Occupation) licences on their HMO page, along with other information about HMOs (in brief, an HMO is a shared house/flat with more than 3 non-family members, and it must be licenced if this number is 5 or more).

How accessible is the data on HMO licences

They provide a list of licences as an Excel spreadsheet (.xlsx). I've asked them before if they could (also) provide a CSV, but they told me that was technically impossible. I also asked if they had historical data (i.e., previous spreadsheets), but they said they deleted it every time they uploaded a new one.

Therefore, as I'm interested in private renting in Sheffield, I've been archiving the data in a GitHub repository, as CSVs. I also add additional data like lat/long coordinates (via geocoding), and parse the data into geographical formats like .geojson, .gpx, and .kml (which can be viewed on a map!).

Calculating statistics from the data

What I hadn't done yet was any statistics on the data (I'd only been interested in visualising it on a map) so that's what I've done now.

I spent the afternoon writing some scripts to parse CSV data and calculate things like mean occupants, most common postcodes, number of expiring licences by date, et cetera.
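To give a flavour of what those scripts do (without pasting them whole), a couple of the simpler calculations could look like this sketch, where the column numbers are hypothetical stand-ins for wherever the occupants and postcode columns actually live:

# sketch: mean occupants and top postcode districts from one CSV
# (assumes occupants in column 5, postcode in column 3, no quoted commas)
file="hmos_2025-03-03.csv"
awk -F',' 'NR>1 {sum+=$5; n+=1} END {printf "%.2f mean occupants\n", sum/n}' "${file}"
awk -F',' 'NR>1 {print $3}' "${file}" | grep -o '^S[0-9]*' | sort | uniq -c | sort -rn | head -n5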

General Statistics

I find shell scripting interesting, but I'm not so sure everyone else does (the script for the interested). So I'm not going to put the scripts here, but I will say that I used these command line tools (CLI tools) this many times:

Anyway, here are the statistics from the script (in text form, as is most shareable):

hmos_2024-09-09.csv
  total licences: 1745
  6.29 mean occupants (IQR 2 [5 - 7]) (median 6)
  amount by postcode:
    S1 (60), S2 (214), S3 (100), S4 (12), S5 (18), S6 (90), S7 (62), 
    S8 (10), S9 (5), S10 (742), S11 (425), S12 (1), S13 (2), S14 (1), S20 (1), S35 (1), S36 (1), 
  streets with most licences: Crookesmoor Road (78), Norfolk Park Road (72), Ecclesall Road (48), Harcourt Road (38), School Road (29), 
  
hmos_2025-01-28.csv
  total licences: 1459
  6.35 mean occupants (IQR 2 [5 - 7]) (median 6)
  amount by postcode:
    S1 (50), S2 (199), S3 (94), S4 (9), S5 (17), S6 (78), S7 (57), 
    S8 (10), S9 (4), S10 (614), S11 (321), S12 (1), S13 (2), S20 (1), S35 (1), S36 (1), 
  streets with most licences: Norfolk Park Road (73), Crookesmoor Road (57), Ecclesall Road (43), Harcourt Road (28), School Road (26), 
  
hmos_2025-03-03.csv
  total licences: 1315
  6.37 mean occupants (IQR 2 [5 - 7]) (median 6)
  amount by postcode:
    S1 (48), S2 (161), S3 (92), S4 (8), S5 (13), S6 (70), S7 (55), 
    S8 (9), S9 (3), S10 (560), S11 (290), S12 (1), S13 (2), S20 (1), S35 (1), S36 (1), 
  streets with most licences: Crookesmoor Road (54), Norfolk Park Road (41), Ecclesall Road (38), Harcourt Road (27), Whitham Road (24), 

Potential Conclusions

Draw your own conclusions there, but some could be that:

Statistics on issuing and expiry dates

I also did some statistics on the licence issue and expiry dates with a second stats script, which – as it parses nearly 5,000 dates – takes longer than "almost instantly" to run. As above, this used:

The script outputs:

hmos_2024-09-09.csv
  1745 dates in 1745 lines (627 unique issuing dates)
    637 expired
    1108 active
  Licence Issue Dates:
    Sun 06 Jan 2019, Sun 06 Jan 2019, … … … Wed 12 Jun 2024, Tue 09 Jul 2024, 
    Monday (275), Tuesday (440), Wednesday (405), Thursday (352), Friday (256), Saturday (5), Sunday (12), 
    2019 (84), 2020 (311), 2021 (588), 2022 (422), 2023 (183), 2024 (157), 
  Licence Expiry Dates:
    Mon 09 Sep 2024, Mon 09 Sep 2024, … … … Mon 11 Jun 2029, Sun 08 Jul 2029, 
    2024 (159), 2025 (824), 2026 (263), 2027 (225), 2028 (185), 2029 (89), 
    
hmos_2025-01-28.csv
  1459 dates in 1459 lines (561 unique issuing dates)
    334 expired
    1125 active
  Licence Issue Dates:
    Mon 28 Oct 2019, Mon 04 Nov 2019, … … … Mon 06 Jan 2025, Tue 14 Jan 2025, 
    Monday (243), Tuesday (380), Wednesday (338), Thursday (272), Friday (211), Saturday (6), Sunday (9), 
    2019 (2), 2020 (130), 2021 (567), 2022 (406), 2023 (181), 2024 (170), 2025 (3), 
  Licence Expiry Dates:
    Thu 30 Jan 2025, Fri 31 Jan 2025, … … … Mon 22 Oct 2029, Wed 28 Nov 2029, 
    2025 (681), 2026 (264), 2027 (225), 2028 (184), 2029 (105), 
    
hmos_2025-03-03.csv
  1315 dates in 1315 lines (523 unique issuing dates)
    189 expired
    1126 active
  Licence Issue Dates:
    Mon 28 Oct 2019, Mon 04 Nov 2019, … … … Wed 05 Mar 2025, Wed 05 Mar 2025, 
    Monday (217), Tuesday (339), Wednesday (314), Thursday (244), Friday (189), Saturday (4), Sunday (8), 
    2019 (2), 2020 (64), 2021 (494), 2022 (399), 2023 (177), 2024 (170), 2025 (9), 
  Licence Expiry Dates:
    Fri 07 Mar 2025, Fri 07 Mar 2025, … … … Mon 22 Oct 2029, Wed 28 Nov 2029, 
    2025 (533), 2026 (262), 2027 (225), 2028 (184), 2029 (111),

Potential conclusions on dates

Again, draw your own conclusions (homework!), but some could be:

Why is this interesting

I started collecting HMO data originally because I wanted to visualise the licences on a map. Over a short time, I have created my own archive of licence history (as the council do not provide such).

Since I had multiple months of data, I could make some comparison, so I made these statistics. I don't find them incredibly useful, but there could be people who do.

Perhaps as time goes on, the long-term comparison (over years) could be interesting. I think the above data might not be greatly useful as it seems that Sheffield council are experiencing delays over licensing at the moment, so the decline in licences probably doesn't reflect general housing trends.

Plus, I just wanted to do some shell-scripting ;]


taking a small bunch of census data from FindMyPast

2025-04-29 • tags: jq, scripting, web-scraping, census, data • 507 'words', 152 secs @ 200wpm

The 1939 Register was basically a census, taken in 1939. On the National Archives page, it says that it is entirely available online.

However, further down, it lists how to access it, which says:

You can search for and view open records on our partner site Findmypast.co.uk (charges apply). A version of the 1939 Register is also available at Ancestry.co.uk (charges apply), and transcriptions without images are on MyHeritage.com (charges apply). It is free to search for these records, but there is a charge to view full transcriptions and download images of documents. Please note that you can view these records online free of charge in the reading rooms at The National Archives in Kew.

So… charges apply.

Anyway, for a while in April 2025 (until May 8th), FindMyPast is giving free access to the 1939 data.

Of course, family history is hard, and what's much easier is "who lived in my house in 1939". For that you can use:

I created an account with a bogus email address (you're not collecting THIS guy's data) and took a look around at some houses.

Then, I figured I could export my entire street, so I did.

The code and more context is in a GitHub Repository, but in brief, I:

Now it looks like:

AddressStreet,Address,Inhabited,LatLon,FirstName,LastName,BirthDate,ApproxAge,OccupationText,Gender,MaritalStatus,Relationship,Schedule,ScheduleSubNumber,Id
Khartoum Road,"1 Khartoum Road, Sheffield",Y,"53.3701,-1.4943",Constance A,Latch,31 Aug 1904,35,Manageress Restaurant & Canteen,Female,Married,Unknown,172,3,TNA/R39/3506/3506E/003/17
Khartoum Road,"4 Khartoum Road, Sheffield",Y,"53.3701,-1.4943",Catherine,Power,? Feb 1897,42,Music Hall Artists,Female,Married,Unknown,171,8,TNA/R39/3506/3506D/015/37
Khartoum Road,"4 Khartoum Road, Sheffield",Y,"53.3701,-1.4943",Charles F R,Kirby,? Nov 1886,53,Newsagent Canvasser,Male,Married,Head,172,1,TNA/R39/3506/3506D/015/39
Khartoum Road,"4 Khartoum Road, Sheffield",Y,"53.3701,-1.4943",Constance A,Latch,31 Aug 1912,27,Manageress Restairant & Cante,Female,Married,Unknown,172,3,TNA/R39/3506/3506D/015/41

Neat!
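Having it as a CSV means all the usual tools work on it; for example, to see everyone recorded at one address (a sketch, with street.csv as a hypothetical filename for the export above):

# who was recorded at number 4?
grep '"4 Khartoum Road' street.csv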

Some of my favourite jobs in the sheet of streets I collected are:


comparing EPC certificates with git-diff

2025-02-28 • tags: git-diff, scripting, housing • 484 'words', 145 secs @ 200wpm

Our house just got a new EPC certificate. You can (maybe) check yours on https://www.gov.uk/find-energy-certificate.

I'm interested in easy ways to see change. Trying to compare the old and new webpages by eye is hard, which leads me to text-diffing. I can copy the contents of the website to a file and compare them that way. Let's. I did a similar thing a while ago with computer benchmarks.

I manually create two files by copying the interesting bits of the webpage, called 1 and 2 (because who has time for .txt extensions). Then, I can run:

git diff --no-index -U1000 ~/1 ~/2 > diff.txt
cat diff.txt | sed -E 's#^\+(.*)#<ins>\1</ins>#' | sed -E 's#^-(.*)#<del>\1</del>#' | sed 's/^ //'

The latter command turns the diff into HTML by turning + lines into <ins> ("insert"), - lines into <del> ("delete"), and removing leading spaces on other lines. Then, I can whack the output into a simple HTML template:

<!DOCTYPE html>
<html>
  <head>
    <style>
    body { background: black; color: white; }
    pre { padding: 1rem; }
    del { text-decoration: none; color: red; }
    ins { text-decoration: none; color: green; }
    </style>
  </head>
  <body>
<pre>
diff goes here...
<del>del lines will be red</del>
<ins>ins lines will be green</ins>
</pre>
  </body>
</html>
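You could also glue the two steps together, so the diff lands straight inside the template (a sketch; template_head.html and template_foot.html are hypothetical files holding everything before and after the "diff goes here..." line):

{
  cat template_head.html   # everything up to and including <pre>
  git diff --no-index -U1000 ~/1 ~/2 \
    | sed -E 's#^\+(.*)#<ins>\1</ins>#' \
    | sed -E 's#^-(.*)#<del>\1</del>#' \
    | sed 's/^ //'
  cat template_foot.html   # </pre> and the closing tags
} > epc-diff.html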

The final output is something like this (personal information removed. don't doxx me.)

Energy rating
D

Valid until 05 February 2025 05 February 2035

Property type Mid-terrace house Total floor area 130 square metres 123 square metres

This property’s energy rating is D. It has the potential to be C. This property’s energy rating is D. It has the potential to be B.

Features in this property

Window Fully double glazed Good Roof Pitched, no insulation (assumed) Very poor Roof Roof room(s), no insulation (assumed) Very poor Roof Roof room(s), insulated (assumed) Good Lighting Low energy lighting in 64% of fixed outlets Good Lighting Low energy lighting in all fixed outlets Very good Secondary heating None N/A

Primary energy use

The primary energy use for this property per year is 303 kilowatt hours per square metre (kWh/m2). The primary energy use for this property per year is 252 kilowatt hours per square metre (kWh/m2).

Good job on us for having 100% low energy lighting fixtures, I guess...

Really, this is a complicated way to simplify something. I like simple things, so I like this.


getting hackspace membership prices from SpaceAPI

2025-02-21 • tags: spaceapi, scripting, hackspaces, json • 1100 'words', 330 secs @ 200wpm

SpaceAPI is a project to convince hackspaces to maintain a simple JSON file describing themselves.

For example, see Sheffield Hackspace's on https://www.sheffieldhackspace.org.uk/spaceapi.json. It's currently a static file.

I wanted to know which hackspaces published their membership prices using SpaceAPI, and what those rates were. Here are a few bash scripts to do just that:

There is an updated version of this script in the newer note about SpaceAPI.

# get the directory of SpaceAPIs
mkdir -p ~/temp/spaceapi/spaces
cd ~/temp/spaceapi
curl "https://directory.spaceapi.io/" | jq > directory.json

# save (as many as possible of) SpaceAPIs to local computer
tot=0; got=0
while read double; do
  tot=$(($tot+1))
  name=$(echo "${double}" | awk -F';' '{print $1}');
  url=$(echo "${double}" | awk -F';' '{print $2}');
  fn=$(echo "${name}" | sed 's+/+-+g')
  echo "saving '${name}' - <${url}> to ./spaces/${fn}.json";

  # skip unless manually deleted  
  if [ -f "./spaces/${fn}.json" ]; then
    echo "already saved!"
    got=$(($got+1))
    continue
  fi
  
  # get, skipping if HTTP status >= 400
  curl -L -s --fail --max-time 5 "${url}" -o "./spaces/${fn}.json" || continue
  echo "fetched! maybe it's bad :S"
  got=$(($got+1))
done <<<$(cat directory.json | jq -r 'to_entries | .[] | (.key + ";" + .value)')
echo "done, got ${got} of ${tot} files, $(($tot-$got)) failed with HTTP status >= 400"

# some JSON files are malformed (i.e., not JSON) - just remove them
for file in spaces/*.json; do
  cat "${file}" | jq > /dev/null
  if [[ "${?}" -ne 0 ]]; then
    echo "${file} does not parse as JSON... removing it..."
    rm "${file}"
  fi
done

# loop every JSON file, and nicely output any that have a .membership_plans object
for file in spaces/*.json; do
  plans=$(cat "${file}" | jq '.membership_plans?')
  [[ "${plans}" == "null" ]] && continue
  echo "${file}"
#  echo "${plans}" | jq -c
  echo "${plans}" | jq -r '.[] | (.currency_symbol + (.value|tostring) + " " + .currency + " " + .billing_interval + " for " + .name + " (" + .description + ")")'
  echo ""
done

The output of this final loop looks like:

...

spaces/CCC Basel.json
20 CHF monthly for Minimal ()
40 CHF monthly for Recommended ()
60 CHF monthly for Root ()

...

spaces/RevSpace.json
32 EUR monthly for regular ()
20 EUR monthly for junior ()
19.84 EUR monthly for multi2 ()
13.37 EUR monthly for multi3 ()

...

spaces/Sheffield Hackspace.json
£6 GBP monthly for normal membership (regularly attend any of the several open evenings a week)
£21 GBP monthly for keyholder membership (come and go as you please)

...
The full output:
spaces/CCC Basel.json
20 CHF monthly for Minimal ()
40 CHF monthly for Recommended ()
60 CHF monthly for Root ()

spaces/ChaosStuff.json
120 EUR yearly for Regular Membership (For people with a regular income)
40 EUR yearly for Student Membership (For pupils and students)
40 EUR yearly for Supporting Membership (For people who want to use the space to work on projects, but don't want to have voting rights an a general assembly.)
1 EUR yearly for Starving Hacker (For people, who cannot afford the membership. Please get in touch with us, before applying.)

spaces/dezentrale.json
16 EUR monthly for Reduced membership ()
32 EUR monthly for Regular membership ()
42 EUR monthly for Nerd membership ()
64 EUR monthly for Nerd membership ()
128 EUR monthly for Nerd membership ()

spaces/Entropia.json
25 EUR yearly for Regular Members (Normale Mitglieder gem. https://entropia.de/Satzung_des_Vereins_Entropia_e.V.#Beitragsordnung)
19 EUR yearly for Members of CCC e.V. (Mitglieder des CCC e.V. gem. https://entropia.de/Satzung_des_Vereins_Entropia_e.V.#Beitragsordnung)
15 EUR yearly for Reduced Fee Members (Schüler, Studenten, Auszubildende und Menschen mit geringem Einkommen gem. https://entropia.de/Satzung_des_Vereins_Entropia_e.V.#Beitragsordnung)
6 EUR yearly for Sustaining Membership (Fördermitglieder gem. https://entropia.de/Satzung_des_Vereins_Entropia_e.V.#Beitragsordnung)

spaces/Hacker Embassy.json
100 USD monthly for Membership ()

spaces/Hackerspace.Gent.json
25 EUR monthly for regular (discount rates and yearly invoice also available)

spaces/Hack Manhattan.json
110 USD monthly for Normal Membership (Membership dues go directly to rent, utilities, and the occasional equipment purchase.)
55 USD monthly for Starving Hacker Membership (Membership dues go directly to rent, utilities, and the occasional equipment purchase. This plan is intended for student/unemployed hackers.)

spaces/Hal9k.json
450 DKK other for Normal membership (Billing is once per quarter)
225 DKK other for Student membership (Billing is once per quarter)

spaces/Leigh Hackspace.json
24 GBP monthly for Member (Our standard membership that allows usage of the hackspace facilities.)
30 GBP monthly for Member+ (Standard membership with an additional donation.)
18 GBP monthly for Concession (A subsidised membership for pensioners, students, and low income earners.)
40 GBP monthly for Family (A discounted family membership for two adults and two children.)
5 GBP daily for Day Pass (Access to the hackspace's facilities for a day.)
5 GBP monthly for Patron (Support the hackspace without being a member.)

spaces/LeineLab.json
120 EUR yearly for Ordentliche Mitgliedschaft ()
30 EUR yearly for Ermäßigte Mitgliedschaft ()
336 EUR yearly for Ordentliche Mitgliedschaft + Werkstatt ()
120 EUR yearly for Ermäßigte Mitgliedschaft + Werkstatt ()

spaces/<name>space Gera.json

spaces/Nerdberg.json
35 EUR monthly for Vollmitgliedschaft (Normal fee, if it is to much for you, contact the leading board, we'll find a solution.)
15 EUR monthly for Fördermitgliedschaft ()

spaces/NYC Resistor.json
115 USD monthly for standard ()
75 USD monthly for teaching ()

spaces/Odenwilusenz.json
0 CHF yearly for Besucher ()
120 CHF yearly for Mitglied ()
480 CHF yearly for Superuser ()
1200 CHF yearly for Co-Worker ()

spaces/RevSpace.json
32 EUR monthly for regular ()
20 EUR monthly for junior ()
19.84 EUR monthly for multi2 ()
13.37 EUR monthly for multi3 ()

spaces/Sheffield Hackspace.json
£6 GBP monthly for normal membership (regularly attend any of the several open evenings a week)
£21 GBP monthly for keyholder membership (come and go as you please)

spaces/TkkrLab.json
30 EUR monthly for Normal member (Member of TkkrLab (https://tkkrlab.nl/deelnemer-worden/))
15 EUR monthly for Student member (Member of TkkrLab, discount for students (https://tkkrlab.nl/deelnemer-worden/))
15 EUR monthly for Student member (Junior member of TkkrLab, discount for people aged 16 or 17 (https://tkkrlab.nl/deelnemer-worden/))

spaces/-usr-space.json

I think a couple have weird names like <name>space or /dev/tal which screw with my script. Oh well, it's for you to improve.

Overall, not that many spaces have published their prices to SpaceAPI. Also, the ones in the US look really expensive. As ever, a good price probably depends on context (size/city/location/etc).
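Out of curiosity, you can put a rough number on that with the same files. This sketch only looks at plans billed monthly and ignores everything else:

# rough average monthly price per currency across the saved SpaceAPI files
for file in spaces/*.json; do
  cat "${file}" | jq -r '.membership_plans? // [] | .[]
    | select(.billing_interval == "monthly")
    | "\(.currency) \(.value)"'
done | awk '{sum[$1]+=$2; n[$1]++} END {
  for (c in n) printf "%s: %.2f (from %d plans)\n", c, sum[c]/n[c], n[c]
}'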

Perhaps I can convince some other spaces to put their membership prices in their SpaceAPI...


uploading files to a GitHub repository with a bash script

2025-02-02 • tags: obsidian, github, scripting • 364 'words', 109 secs @ 200wpm

I write these notes in Obsidian. To upload them, I could visit https://github.com/alifeee/blog/tree/main/notes, click "add file", and copy and paste the file contents. I probably should do that.

But, instead, I wrote a shell script to upload them. Now, I can press "CTRL+P" to open the Obsidian command palette, type "lint" (to lint the note), then open it again and type "upload" and upload the note. At this point, I could walk away and assume everything went fine, but what I normally do is open the GitHub Actions tab to check that it worked properly.

The process the script undertakes is:

  1. check user inputs are good (all variables exist, file is declared)
  2. check if file exists or not already in GitHub with a curl request
  3. generate a JSON payload for the upload request, including:
    1. commit message
    2. commit author & email
    3. file contents as a base64 encoded string
    4. (if file exists already) sha1 hash of existing file
  4. make a curl request to upload/update the file!
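In curl terms, steps 2 to 4 boil down to something like this condensed sketch against the GitHub contents API (the real script linked below does more checking; my-note.md stands in for whichever file is being uploaded):

# 2. does the file already exist? if so, grab its blob sha
sha=$(curl -s -H "Authorization: Bearer ${GITHUB_TOKEN}" \
  "https://api.github.com/repos/${org}/${repo}/contents/${fpath}my-note.md" \
  | jq -r '.sha // empty')
# 3. build the JSON payload (sha only included when updating an existing file)
content=$(base64 -w0 "my-note.md")
payload=$(jq -n --arg msg "upload my-note.md" --arg name "${git_name}" \
  --arg email "${git_email}" --arg content "${content}" --arg sha "${sha}" \
  '{message: $msg, committer: {name: $name, email: $email}, content: $content}
   + (if $sha != "" then {sha: $sha} else {} end)')
# 4. upload/update the file
curl -s -X PUT -H "Authorization: Bearer ${GITHUB_TOKEN}" \
  -d "${payload}" "https://api.github.com/repos/${org}/${repo}/contents/${fpath}my-note.md"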

As I use it from inside Obsidian, I use an extension called Obsidian shellcommands, which lets you specify several commands. For this, I specify:

export org="alifeee"
export repo="blog"
export fpath="notes/"
export git_name="alifeee"
export git_email="alifeee@alifeee.net"
export GITHUB_TOKEN="github_pat_3890qwug8f989wu89gu98w43ujg98j8wjgj4wjg9j83wjq9gfj38w90jg903wj"
{{vault_path}}/scripts/upload_to_github.sh {{file_path:absolute}}

…and when run with a file open, it will upload/update that file to my notes folder on GitHub.

This is maybe a strange way of doing it, as the "source of truth" is now "my Obsidian", and the GitHub is really just a place for the files to live. However, I enjoy it.

I've made the script quite generic as you have to supply most information via environment variables. You can use it to upload an arbitrary file to a specific folder in a specific GitHub repository. Or… you can modify it and do what you want with it!

It's here: https://gist.github.com/alifeee/d711370698f18851f1927f284fb8eaa8


combining geojson files with jq

2024-12-13 • tags: geojson, jq, scripting • 520 'words', 156 secs @ 200wpm

I'm writing a blog about hitchhiking, which involves a load of .geojson files, which look a bit like this:

The .geojson files are generated from .gpx traces that I exported from OSRM's (Open Source Routing Machine) demo (which, at time of writing, seems to be offline, but I believe it's on https://map.project-osrm.org/), one of the routing engines on OpenStreetMap.

I put in a start and end point, exported the .gpx trace, and then converted it to .geojson with, e.g., ogr2ogr "2.1 Tamworth -> Tibshelf Northbound.geojson" "2.1 Tamworth -> Tibshelf Northbound.gpx" tracks, where ogr2ogr is a command-line tool from sudo apt install gdal-bin which converts geographic data between many formats (I like it a lot, it feels nicer than searching the web for "errr, kml to gpx converter?"). I also then semi-manually added some properties (see how).

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "label": "2.1 Tamworth -> Tibshelf Northbound",
        "trip": "2"
      },
      "geometry": {
        "type": "MultiLineString",
        "coordinates": [
          [
            [-1.64045, 52.60606],
            [-1.64067, 52.6058],
            [-1.64069, 52.60579],
            ...
          ]
        ]
      }
    }
  ]
}

I then had a load of files that looked a bit like

$ tree -f geojson/
geojson
├── geojson/1.1 Tamworth -> Woodall Northbound.geojson
├── geojson/1.2 Woodall Northbound -> Hull.geojson
├── geojson/2.1 Tamworth -> Tibshelf Northbound.geojson
├── geojson/2.2 Tibshelf Northbound -> Leeds.geojson
├── geojson/3.1 Frankley Northbound -> Hilton Northbound.geojson
├── geojson/3.2 Hilton Northbound -> Keele Northbound.geojson
└── geojson/3.3 Keele Northbound -> Liverpool.geojson

Originally, I was combining them into one .geojson file using https://github.com/mapbox/geojson-merge, which is a binary to merge .geojson files, but I decided to use jq because I wanted to do something a bit more complex, which was to create a structure like

FeatureCollection
  Features:
    FeatureCollection
      Features (1.1 Tamworth -> Woodall Northbound, 1.2 Woodall Northbound -> Hull)
    FeatureCollection
      Features (2.1 Tamworth -> Tibshelf Northbound, 2.2 Tibshelf Northbound -> Leeds)
    FeatureCollection
      Features (3.1 Frankley Northbound -> Hilton Northbound, 3.2 Hilton Northbound -> Keele Northbound, 3.3 Keele Northbound -> Liverpool)

I spent a while making a quite-complicated jq query, using variables (an "advanced feature"!) and a reduce statement, but when I completed it, I found out that the above structure is not valid .geojson, so I went back to just having:

FeatureCollection
  Features (1.1 Tamworth -> Woodall Northbound, 1.2 Woodall Northbound -> Hull, 2.1 Tamworth -> Tibshelf Northbound, 2.2 Tibshelf Northbound -> Leeds, 3.1 Frankley Northbound -> Hilton Northbound, 3.2 Hilton Northbound -> Keele Northbound, 3.3 Keele Northbound -> Liverpool)

...which is... a lot simpler to make.

A query which combines the files above is (the sort exists to sort the files so they are in numerical order downwards in the resulting .geojson):

while read file; do cat "${file}"; done <<< $(find geojson/ -type f | sort -t / -k 2 -n) | jq --slurp '{
    "type": "FeatureCollection",
    "name": "hitchhikes",
    "features": ([.[] | .features[0]])
}' > hitching.geojson
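A quick sanity check on the result, with jq again:

# how many routes ended up in the merged file, and in what order?
jq '.features | length' hitching.geojson
jq -r '.features[].properties.label' hitching.geojson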

While geojson-merge was cool, it feels nice to have a more "raw" command to do what I want.


a Nautilus script to create blank files in a folder

2024-12-13 • tags: nautilus, scripting • 330 'words', 99 secs @ 200wpm

I moved to Linux [time ago]. One thing I miss from the Windows file explorer is how easy it was to create text files.

With Nautilus (Pop!_OS' default file browser), you can create templates which appear when you right click in an empty folder (I don't remember where the templates file is and I can't find an obvious way to find out, so... search it yourself), but this doesn't work if you're using nested folders.

i.e., I use this view a lot in Nautilus: a tree view that lets you expand folders instead of opening them (similar to most code editors).

.
├── ./5.3.2
│   └── ./5.3.2/new_file
├── ./6.1.4
├── ./get_5.3.2.py
└── ./get_6.1.4.py

But in this view, you can't "right click on empty space inside a folder" to create a new template file, you can only "right click the folder" (or if it's empty, "right click a strange fake-file called (Empty)").

So, I created a script in /home/alifeee/.local/share/nautilus/scripts called new file (folder script) with this content:

#!/bin/bash
# create new file within folder (only works if used on folder)
# notify-send requires libnotify-bin -> `sudo apt install libnotify-bin`

if [ -z "${1}" ]; then
  notify-send "did not get folder name. use script on folder!"
  exit 1
fi

file="${1}/new_file"

i=0
while [ -f "${file}" ]; do
  i=$(($i+1))
  file="${1}/new_file${i}"
done

touch "${file}"

if [ ! -f "${file}" ]; then
  notify-send "tried to create a new file but it doesn't seem to exist"
else
  notify-send "I think I created file all well! it's ${file}"
fi
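One thing to note: Nautilus only lists scripts that are executable, so the file needs a chmod:

chmod +x "/home/alifeee/.local/share/nautilus/scripts/new file (folder script)"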

Now I can right click on a folder, click "scripts > new file" and have a new file that I can subsequently rename. Sweet.

I sure hope that in future I don't want anything slightly more complicated like creating multiple new files at once...


comparing PCs with terminal commands

2024-12-12 • tags: pc-building, scripting, git-diff • 738 'words', 221 secs @ 200wpm

I was given an old computer. I'd quite like to make a computer to use in my studio, and take my tower PC home to play video games (mainly/only local coop games like Wilmot's Warehouse, Towerfall Ascension, or Unrailed, and occasionally Gloomhaven).

It's not the best, and I'd like to know what parts I would want to replace to make it suit my needs (which are vaguely "can use a modern web browser" without being slow).

By searching the web, I found these commands to collect hardware information for a computer:

uname -a # vague computer information
lscpu # cpu information
df -h # hard drive information
sudo dmidecode -t bios # bios information
free -h # memory (RAM) info
lspci -v | grep VGA -A11 # GPU info (1)
sudo lshw -numeric -C display # GPU info (2)

I also found these commands to benchmark some things:

sudo apt install sysbench glmark2
# benchmark CPU
sysbench --test=cpu run
# benchmark memory
sysbench --test=memory run
# benchmark graphics
glmark2

I put the output of all of these commands into text files for each computer, into a directory that looks like:

├── ./current
│   ├── ./current/benchmarks
│   │   ├── ./current/benchmarks/cpu
│   │   ├── ./current/benchmarks/gpu
│   │   └── ./current/benchmarks/memory
│   ├── ./current/bios
│   ├── ./current/cpu
│   ├── ./current/disks
│   ├── ./current/gpu
│   ├── ./current/memory
│   └── ./current/uname
└── ./new
    ├── ./new/benchmarks
    │   ├── ./new/benchmarks/cpu
    │   ├── ./new/benchmarks/gpu
    │   └── ./new/benchmarks/memory
    ├── ./new/bios
    ├── ./new/cpu
    ├── ./new/disks
    ├── ./new/gpu
    ├── ./new/memory
    └── ./new/uname
4 directories, 19 files
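Filling those files is just redirecting each of the commands above somewhere; roughly, per machine, something like this sketch:

# run on each computer, swapping "current" for "new" as appropriate
mkdir -p current/benchmarks
uname -a > current/uname
lscpu > current/cpu
df -h > current/disks
sudo dmidecode -t bios > current/bios
free -h > current/memory
{ lspci -v | grep VGA -A11; sudo lshw -numeric -C display; } > current/gpu
sysbench --test=cpu run > current/benchmarks/cpu
sysbench --test=memory run > current/benchmarks/memory
glmark2 > current/benchmarks/gpu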

Then, I ran this command to generate a diff file to look at:

echo "<html><head><style>html {background: black;color: white;}del {text-decoration: none;color: red;}ins {color: green;text-decoration: none;}</style></head><body>" > compare.html
while read file; do
  f=$(echo "${file}" | sed 's/current\///')
  git diff --no-index --word-diff "current/${f}" "new/${f}" \
    | sed 's/\[\-/<del>/g' | sed 's/-\]/<\/del>/g' \
    | sed -E 's/\{\+/<ins>/g' | sed -E 's/\+\}/<\/ins>/g' \
    | sed '1s/^/<pre>/' | sed '$a</pre>'
done <<< $(find current/ -type f) >> compare.html
echo "</body></html>" >> compare.html 

Then, I could open that HTML file and very easily look at the differences between the computers. Here is a snippet of the file as an example:

CPU(s):                   126
  On-line CPU(s) list:    0-110-5
Vendor ID:                AuthenticAMDGenuineIntel
  Model name:             AMD Ryzen 5 1600 Six-Core ProcessorIntel(R) Core(TM) i5-9400F CPU @ 2.90GHz
    CPU family:           236
    Model:                1158
    Thread(s) per core:   21
    Core(s) per socket:   6
    Socket(s):            1
Latency (ms):
         min:                                    0.550.71
         avg:                                    0.570.73
         max:                                    1.621.77
         95th percentile:                        0.630.74
         sum:                                 9997.519998.07
    glmark2 2021.02
=======================================================
    OpenGL Information
    GL_VENDOR:     AMDMesa
    GL_RENDERER:   AMD Radeon RX 580 Series (radeonsi, polaris10, LLVM 15.0.7, DRM 3.57, 6.9.3-76060903-generic)NV106
    GL_VERSION:    4.64.3 (Compatibility Profile) Mesa 24.0.3-1pop1~1711635559~22.04~7a9f319
...
[loop] fragment-loop=false:fragment-steps=5:vertex-steps=5: FPS: 9303213 FrameTime: 0.1074.695 ms
[loop] fragment-steps=5:fragment-uniform=false:vertex-steps=5: FPS: 8108144 FrameTime: 0.1236.944 ms
[loop] fragment-steps=5:fragment-uniform=true:vertex-steps=5: FPS: 7987240 FrameTime: 0.1254.167 ms
=======================================================
                                  glmark2 Score: 7736203

It seems like the big limiting factor is the GPU. Everything else seems reasonable to leave in there.

As ever, I find git diff --no-index an invaluable tool.
