I often want to get my current WiFi name (SSID) and password.
How to get name/password manually
Sometimes, it's for a microcontroller. Sometimes, to share it. This time, it's for setting up an info-beamer device with WiFi.
Before today, I would usually open my phone, go to "share" under the WiFi settings, and manually copy the password and then the SSID.
It's finally time to write a way to do it with bash!
How to get name/password with bash
After some web-searching, these commands do what I want:
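(a sketch assuming NetworkManager/nmcli is in use; the exact aliases may differ slightly, and showing the secret may prompt for authentication)

# current SSID
alias wifiname='nmcli -t -f active,ssid dev wifi | grep "^yes:" | cut -d: -f2-'
# password of the currently-active connection (assumes the connection name matches the SSID)
alias wifipass='nmcli -s -g 802-11-wireless-security.psk connection show "$(nmcli -t -f active,ssid dev wifi | grep "^yes:" | cut -d: -f2-)"'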
Now, they will automatically be enabled on all my computers that use Atuin. This is actually not… amazingly helpful as my other computers all use ethernet, not WiFi, but… it's mainly about having the aliases all in the same place (and "backed up", if you will).
Once again, I start by downloading the JSON files, so that (in theory) I can make only one request to each SpaceAPI endpoint, and then work with the data locally (instead of requesting the JSON from the web every time I interact with it).
This script is modified from last time I did it, adding some better feedback about why some endpoints fail.
# download spaces
tot=0;got=0
echo "code,url" > failed.txt
RED='\033[0;31m';GREEN='\033[0;32m';YELLOW='\033[0;33m';NC='\033[0m'
while read double; do
  tot=$(($tot+1))
  name=$(echo "${double}" | awk -F';' '{print $1}');
  url=$(echo "${double}" | awk -F';' '{print $2}');
  fn=$(echo "${name}" | sed 's+/+-+g')
  echo "saving '${name}' - <${url}> to ./spaces/${fn}.json";
  # skip unless manually deleted
  if [ -f "./spaces/${fn}.json" ]; then
    echo -e "  ${YELLOW}already saved${NC} this URL!" >> /dev/stderr
    got=$(($got+1))
    continue
  fi
  # get, skipping if HTTP status >= 400
  code=$(curl -L -s --fail --max-time 5 -o "./spaces/${fn}.json" --write-out "%{http_code}" "${url}")
  if [[ "${?}" -ne 0 ]] || [[ "${code}" -ne 200 ]]; then
    echo "${code},${url}" >> failed.txt
    echo -e "  ${RED}bad${NC} status code (${code}) for this url!" >> /dev/stderr
    continue
  fi
  echo -e "  ${GREEN}fetched${NC}! maybe it's bad :S" >> /dev/stderr
  got=$(($got+1))
done <<< $(cat directory.json | jq -r 'to_entries | .[] | (.key + ";" + .value)')
echo "done, got ${got} of ${tot} files, $(($tot-$got)) failed with HTTP status >= 400"
echo "codes from failed.txt:"
cat failed.txt | awk -F',' 'NR>1{a[$1]+=1} END{printf "  "; for (i in a) {printf "%s (%i) ", i, a[i]}; printf "\n"}'
# some JSON files are malformed (i.e., not JSON) - just remove them
rem=0
for file in spaces/*.json; do
  cat "${file}" | jq > /dev/null
  if [[ "${?}" -ne 0 ]]; then
    echo "=== ${file} does not parse as JSON... removing it... ==="
    rm -v "${file}"
    rem=$(( $rem + 1 ))
  fi
done
echo "removed ${rem} malformed json files"
Extracting contact information
This is basically copied from last time I did it, changing membership_plans? to contact?, and changing the jq format afterwards.
We can filter this file to only the "mastodon:" lines, and then extract the server with a funky regex, and get a list of which instances are most common.
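A hedged sketch of that extraction (the .contact handling mirrors the script above; the regex here is illustrative, not the exact "funky" one):

# dump every space's .contact object as "key: value" lines
for file in spaces/*.json; do
  contact=$(cat "${file}" | jq '.contact?')
  [[ "${contact}" == "null" ]] && continue
  echo "${contact}" | jq -r 'to_entries | .[] | (.key + ": " + (.value|tostring))'
done > contacts.txt

# keep only the mastodon lines, extract the instance, and count which are most common
cat contacts.txt \
  | grep '^mastodon: ' \
  | sed -E 's#^mastodon: https?://([^/]+).*#\1#; s#^mastodon: .*@([^@/ ]+)$#\1#' \
  | sort | uniq -c | sort -rn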
Sheffield city council publishes a list of HMO (House in Multiple Occupation) licences on their HMO page, along with other information about HMOs (in brief, an HMO is a shared house/flat with more than 3 non-family members, and it must be licenced if this number is 5 or more).
How accessible is the data on HMO licences
They provide a list of licences as an Excel spreadsheet (.xlsx). I've asked them before if they could (also) provide a CSV, but they told me that was technically impossible. I also asked if they had historical data (i.e., previous spreadsheets), but they said they deleted it every time they uploaded a new one.
Therefore, as I'm interested in private renting in Sheffield, I've been archiving the data in a GitHub repository, as CSVs. I also add additional data like lat/long coordinates (via geocoding), and parse the data into geographical formats like .geojson, .gpx, and .kml (which can be viewed on a map!).
Calculating statistics from the data
What I hadn't done yet was any statistics on the data (I'd only been interested in visualising it on a map) so that's what I've done now.
I spent the afternoon writing some scripts to parse CSV data and calculate things like mean occupants, most common postcodes, number of expiring licences by date, et cetera.
General Statistics
I find shell scripting interesting, but I'm not so sure everyone else does (the script for the interested). So I'm not going to put the scripts here, but I will say that I used these command line tools (CLI tools) this many times:
cat 7 times, tail 8 times, wc 1 time, csvtool 7 times, awk 4 times, sort 7 times, head 3 times, echo 7 times, uniq 2 times, sed 3 times.
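(I said I'm not putting the scripts here, but for flavour, one line in the same spirit; the column number is made up:)

# mean number of occupants (pretending occupants live in column 5; skip the header row)
cat hmos_2025-03-03.csv | csvtool col 5 - | tail -n +2 \
  | awk '{sum+=$1; n+=1} END {printf "mean occupants: %.2f\n", sum/n}'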
Anyway, here are the statistics from the script (in text form, as is most shareable):
the mean number of occupants may be rising, or the change may be statistically insignificant
either a lot of houses are becoming "not HMOs" (overall, and on roads like Crookesmoor Road), or a lot of HMOs are going unlicensed
Statistics on issuing and expiry dates
I also did some statistics on the licence issue and expiry dates with a second stats script, which – as it parses nearly 5,000 dates – takes longer than "almost instantly" to run. As above, this used:
date 10 times, while 6 times, cat 3 times, csvtool 2 times, tail 5 times, wc 3 times, echo 23 times, sort 11 times, uniq 4 times, sed 4 times, awk 9 times, head 2 times
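(again, just a flavour sketch, not the real script; the column number is made up:)

# count licence issue dates by weekday (pretending issue dates live in column 3)
cat hmos_2025-03-03.csv | csvtool col 3 - | tail -n +2 \
  | while read d; do date -d "${d}" "+%A"; done \
  | sort | uniq -c | sort -rn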
The script outputs:
hmos_2024-09-09.csv
1745 dates in 1745 lines (627 unique issuing dates)
637 expired
1108 active
Licence Issue Dates:
Sun 06 Jan 2019, Sun 06 Jan 2019, … … … Wed 12 Jun 2024, Tue 09 Jul 2024,
Monday (275), Tuesday (440), Wednesday (405), Thursday (352), Friday (256), Saturday (5), Sunday (12),
2019 (84), 2020 (311), 2021 (588), 2022 (422), 2023 (183), 2024 (157),
Licence Expiry Dates:
Mon 09 Sep 2024, Mon 09 Sep 2024, … … … Mon 11 Jun 2029, Sun 08 Jul 2029,
2024 (159), 2025 (824), 2026 (263), 2027 (225), 2028 (185), 2029 (89),
hmos_2025-01-28.csv
1459 dates in 1459 lines (561 unique issuing dates)
334 expired
1125 active
Licence Issue Dates:
Mon 28 Oct 2019, Mon 04 Nov 2019, … … … Mon 06 Jan 2025, Tue 14 Jan 2025,
Monday (243), Tuesday (380), Wednesday (338), Thursday (272), Friday (211), Saturday (6), Sunday (9),
2019 (2), 2020 (130), 2021 (567), 2022 (406), 2023 (181), 2024 (170), 2025 (3),
Licence Expiry Dates:
Thu 30 Jan 2025, Fri 31 Jan 2025, … … … Mon 22 Oct 2029, Wed 28 Nov 2029,
2025 (681), 2026 (264), 2027 (225), 2028 (184), 2029 (105),
hmos_2025-03-03.csv
1315 dates in 1315 lines (523 unique issuing dates)
189 expired
1126 active
Licence Issue Dates:
Mon 28 Oct 2019, Mon 04 Nov 2019, … … … Wed 05 Mar 2025, Wed 05 Mar 2025,
Monday (217), Tuesday (339), Wednesday (314), Thursday (244), Friday (189), Saturday (4), Sunday (8),
2019 (2), 2020 (64), 2021 (494), 2022 (399), 2023 (177), 2024 (170), 2025 (9),
Licence Expiry Dates:
Fri 07 Mar 2025, Fri 07 Mar 2025, … … … Mon 22 Oct 2029, Wed 28 Nov 2029,
2025 (533), 2026 (262), 2027 (225), 2028 (184), 2029 (111),
Potential conclusions on dates
Again, draw your own conclusions (homework!), but some could be:
Sheffield council's most productive day of the week is Tuesday
only 9 licences are marked as issued in 2025, so either something is slowing down the process, the data uploaded in March wasn't up to date, or the distribution of licences isn't even across the year
Why is this interesting
I started collecting HMO data originally because I wanted to visualise the licences on a map. Over a short time, I have created my own archive of licence history (as the council do not provide one).
Since I had multiple months of data, I could make some comparisons, so I made these statistics. I don't find them incredibly useful, but there could be people who do.
Perhaps as time goes on, the long-term comparison (over years) could be interesting. I think the above data might not be greatly useful as it seems that Sheffield council are experiencing delays over licensing at the moment, so the decline in licences probably doesn't reflect general housing trends.
However, further down, it lists how to access it, which says:
You can search for and view open records on our partner site Findmypast.co.uk (charges apply). A version of the 1939 Register is also available at Ancestry.co.uk (charges apply), and transcriptions without images are on MyHeritage.com (charges apply). It is free to search for these records, but there is a charge to view full transcriptions and download images of documents. Please note that you can view these records online free of charge in the reading rooms at The National Archives in Kew.
So… charges apply.
Anyway, for a while in April 2025 (until May 8th), FindMyPast is giving free access to the 1939 data.
Of course, family history is hard, and what's much easier is "who lived in my house in 1939". For that you can use:
I'm interested in easy ways to see change. Trying to compare the old and new webpages by eye is hard, which leads me to text-diffing. I can copy the contents of the website to a file and compare them that way. Let's. I did a similar thing a while ago with computer benchmarks.
I manually create two files by copying the interesting bits of the webpage, called 1 and 2 (because who has time for .txt extensions). Then, I can run:
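Something like this (a sketch reconstructed from the description below, using a unified diff; the exact sed expressions may have differed):

# eyeball the changes
diff -u 1 2
# turn the diff into HTML-ish lines
diff -u 1 2 \
  | sed -E 's#^[+](.*)#<ins>\1</ins>#; s#^[-](.*)#<del>\1</del>#; s#^ ##' \
  > diff.html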
The latter command turns each diff line into HTML by turning + lines into <ins> ("insert"), - lines into <del> ("delete"), and removing leading spaces on other lines. Then, I can whack the output into a simple HTML template:
<!DOCTYPE html>
<html>
<head>
  <style>
    body { background: black; color: white; }
    pre { padding: 1rem; }
    del { text-decoration: none; color: red; }
    ins { text-decoration: none; color: green; }
  </style>
</head>
<body>
<pre>
diff goes here...
<del>del lines will be red</del>
<ins>ins lines will be green</ins>
</pre>
</body>
</html>
The final output is something like this (personal information removed; don't doxx me):
Energy rating
D
Valid until
05 February 2025
05 February 2035
Property type
Mid-terrace house
Total floor area
130 square metres
123 square metres
This property’s energy rating is D. It has the potential to be C.
This property’s energy rating is D. It has the potential to be B.
Features in this property
Window Fully double glazed Good
Roof Pitched, no insulation (assumed) Very poor
Roof Roof room(s), no insulation (assumed) Very poor
Roof Roof room(s), insulated (assumed) Good
Lighting Low energy lighting in 64% of fixed outlets Good
Lighting Low energy lighting in all fixed outlets Very good
Secondary heating None N/A
Primary energy use
The primary energy use for this property per year is 303 kilowatt hours per square metre (kWh/m2).
The primary energy use for this property per year is 252 kilowatt hours per square metre (kWh/m2).
Good job on us for having 100% low energy lighting fixtures, I guess...
Really, this is a complicated way to simplify something. I like simple things, so I like this.
I wanted to know which hackspaces published their membership prices using SpaceAPI, and what those rates were. Here are a few bash scripts to do just that:
# get the directory of SpaceAPIs
mkdir -p ~/temp/spaceapi/spaces
cd ~/temp/spaceapi
curl "https://directory.spaceapi.io/" | jq > directory.json

# save (as many as possible of) SpaceAPIs to local computer
tot=0;got=0
while read double; do
  tot=$(($tot+1))
  name=$(echo "${double}" | awk -F';' '{print $1}');
  url=$(echo "${double}" | awk -F';' '{print $2}');
  fn=$(echo "${name}" | sed 's+/+-+g')
  echo "saving '${name}' - <${url}> to ./spaces/${fn}.json";
  # skip unless manually deleted
  if [ -f "./spaces/${fn}.json" ]; then
    echo "already saved!"
    got=$(($got+1))
    continue
  fi
  # get, skipping if HTTP status >= 400
  curl -L -s --fail --max-time 5 "${url}" -o "./spaces/${fn}.json" || continue
  echo "fetched! maybe it's bad :S"
  got=$(($got+1))
done <<< $(cat directory.json | jq -r 'to_entries | .[] | (.key + ";" + .value)')
echo "done, got ${got} of ${tot} files, $(($tot-$got)) failed with HTTP status >= 400"

# some JSON files are malformed (i.e., not JSON) - just remove them
for file in spaces/*.json; do
  cat "${file}" | jq > /dev/null
  if [[ "${?}" -ne 0 ]]; then
    echo "${file} does not parse as JSON... removing it..."
    rm "${file}"
  fi
done

# loop every JSON file, and nicely output any that have a .membership_plans object
for file in spaces/*.json; do
  plans=$(cat "${file}" | jq '.membership_plans?')
  [[ "${plans}" == "null" ]] && continue
  echo "${file}"
  # echo "${plans}" | jq -c
  echo "${plans}" | jq -r '.[] | (.currency_symbol + (.value|tostring) + " " + .currency + " " + .billing_interval + " for " + .name + " (" + .description + ")")'
  echo ""
done
The output of this final loop looks like:
...
spaces/CCC Basel.json
20 CHF monthly for Minimal ()
40 CHF monthly for Recommended ()
60 CHF monthly for Root ()
...
spaces/RevSpace.json
32 EUR monthly for regular ()
20 EUR monthly for junior ()
19.84 EUR monthly for multi2 ()
13.37 EUR monthly for multi3 ()
...
spaces/Sheffield Hackspace.json
£6 GBP monthly for normal membership (regularly attend any of the several open evenings a week)
£21 GBP monthly for keyholder membership (come and go as you please)
...
see full output
spaces/CCC Basel.json
20 CHF monthly for Minimal ()
40 CHF monthly for Recommended ()
60 CHF monthly for Root ()
spaces/ChaosStuff.json
120 EUR yearly for Regular Membership (For people with a regular income)
40 EUR yearly for Student Membership (For pupils and students)
40 EUR yearly for Supporting Membership (For people who want to use the space to work on projects, but don't want to have voting rights an a general assembly.)
1 EUR yearly for Starving Hacker (For people, who cannot afford the membership. Please get in touch with us, before applying.)
spaces/dezentrale.json
16 EUR monthly for Reduced membership ()
32 EUR monthly for Regular membership ()
42 EUR monthly for Nerd membership ()
64 EUR monthly for Nerd membership ()
128 EUR monthly for Nerd membership ()
spaces/Entropia.json
25 EUR yearly for Regular Members (Normale Mitglieder gem. https://entropia.de/Satzung_des_Vereins_Entropia_e.V.#Beitragsordnung)
19 EUR yearly for Members of CCC e.V. (Mitglieder des CCC e.V. gem. https://entropia.de/Satzung_des_Vereins_Entropia_e.V.#Beitragsordnung)
15 EUR yearly for Reduced Fee Members (Schüler, Studenten, Auszubildende und Menschen mit geringem Einkommen gem. https://entropia.de/Satzung_des_Vereins_Entropia_e.V.#Beitragsordnung)
6 EUR yearly for Sustaining Membership (Fördermitglieder gem. https://entropia.de/Satzung_des_Vereins_Entropia_e.V.#Beitragsordnung)
spaces/Hacker Embassy.json
100 USD monthly for Membership ()
spaces/Hackerspace.Gent.json
25 EUR monthly for regular (discount rates and yearly invoice also available)
spaces/Hack Manhattan.json
110 USD monthly for Normal Membership (Membership dues go directly to rent, utilities, and the occasional equipment purchase.)
55 USD monthly for Starving Hacker Membership (Membership dues go directly to rent, utilities, and the occasional equipment purchase. This plan is intended for student/unemployed hackers.)
spaces/Hal9k.json
450 DKK other for Normal membership (Billing is once per quarter)
225 DKK other for Student membership (Billing is once per quarter)
spaces/Leigh Hackspace.json
24 GBP monthly for Member (Our standard membership that allows usage of the hackspace facilities.)
30 GBP monthly for Member+ (Standard membership with an additional donation.)
18 GBP monthly for Concession (A subsidised membership for pensioners, students, and low income earners.)
40 GBP monthly for Family (A discounted family membership for two adults and two children.)
5 GBP daily for Day Pass (Access to the hackspace's facilities for a day.)
5 GBP monthly for Patron (Support the hackspace without being a member.)
spaces/LeineLab.json
120 EUR yearly for Ordentliche Mitgliedschaft ()
30 EUR yearly for Ermäßigte Mitgliedschaft ()
336 EUR yearly for Ordentliche Mitgliedschaft + Werkstatt ()
120 EUR yearly for Ermäßigte Mitgliedschaft + Werkstatt ()
spaces/<name>space Gera.json
spaces/Nerdberg.json
35 EUR monthly for Vollmitgliedschaft (Normal fee, if it is to much for you, contact the leading board, we'll find a solution.)
15 EUR monthly for Fördermitgliedschaft ()
spaces/NYC Resistor.json
115 USD monthly for standard ()
75 USD monthly for teaching ()
spaces/Odenwilusenz.json
0 CHF yearly for Besucher ()
120 CHF yearly for Mitglied ()
480 CHF yearly for Superuser ()
1200 CHF yearly for Co-Worker ()
spaces/RevSpace.json
32 EUR monthly for regular ()
20 EUR monthly for junior ()
19.84 EUR monthly for multi2 ()
13.37 EUR monthly for multi3 ()
spaces/Sheffield Hackspace.json
£6 GBP monthly for normal membership (regularly attend any of the several open evenings a week)
£21 GBP monthly for keyholder membership (come and go as you please)
spaces/TkkrLab.json
30 EUR monthly for Normal member (Member of TkkrLab (https://tkkrlab.nl/deelnemer-worden/))
15 EUR monthly for Student member (Member of TkkrLab, discount for students (https://tkkrlab.nl/deelnemer-worden/))
15 EUR monthly for Student member (Junior member of TkkrLab, discount for people aged 16 or 17 (https://tkkrlab.nl/deelnemer-worden/))
spaces/-usr-space.json
I think a couple have weird names like <name>space or /dev/tal which screw with my script. Oh well, it's for you to improve.
Overall, not that many spaces have published their prices to SpaceAPI. Also, the ones in the US look really expensive. As ever, a good price probably depends on context (size/city/location/etc).
Perhaps I can convince some other spaces to put their membership prices in their SpaceAPI...
I write these notes in Obsidian. To upload them, I could visit https://github.com/alifeee/blog/tree/main/notes, click "add file", and copy and paste the file contents. I probably should do that.
But, instead, I wrote a shell script to upload them. Now, I can press "CTRL+P" to open the Obsidian command palette, type "lint" (to lint the note), then open it again, type "upload", and upload the note. At this point, I could walk away and assume everything went fine, but what I normally do is open the GitHub Actions tab to check that it worked properly.
The process the script undertakes is (see the curl sketch after this list):
check user inputs are good (all variables exist, file is declared)
check whether the file already exists in GitHub, via a curl request
generate a JSON payload for the upload request, including:
commit message
commit author & email
file contents as a base64 encoded string
(if file exists already) sha1 hash of existing file
make a curl request to upload/update the file!
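In curl terms, the create/update call is roughly this (a hedged sketch, not the script verbatim; the variable names are illustrative):

# PUT the file to the GitHub contents API (creates or updates the file)
# GITHUB_TOKEN, REPO ("user/repo"), FILE, DEST_PATH, EXISTING_SHA are illustrative names
content=$(base64 -w 0 "${FILE}")
payload=$(jq -n \
  --arg msg "upload ${FILE}" \
  --arg content "${content}" \
  --arg sha "${EXISTING_SHA}" \
  '{message: $msg, content: $content}
   + (if $sha != "" then {sha: $sha} else {} end)')
# (commit author/committer fields left out here for brevity)
curl -s -X PUT \
  -H "Authorization: Bearer ${GITHUB_TOKEN}" \
  -H "Accept: application/vnd.github+json" \
  -d "${payload}" \
  "https://api.github.com/repos/${REPO}/contents/${DEST_PATH}"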
As I use it from inside Obsidian, I use an extension called Obsidian shellcommands, which lets you specify several commands. For this, I specify:
…and when run with a file open, it will upload/update that file to my notes folder on GitHub.
This is maybe a strange way of doing it, as the "source of truth" is now "my Obsidian", and the GitHub is really just a place for the files to live. However, I enjoy it.
I've made the script quite generic as you have to supply most information via environment variables. You can use it to upload an arbitrary file to a specific folder in a specific GitHub repository. Or… you can modify it and do what you want with it!
I'm writing a blog about hitchhiking, which involves a load of .geojson files, which look a bit like this:
The .geojson files are generated from .gpx traces that I exported from OSRM's (Open Source Routing Machine) demo (which, at time of writing, seems to be offline, but I believe it's on https://map.project-osrm.org/), one of the routing engines on OpenStreetMap.
I put in a start and end point, exported the .gpx trace, and then converted it to .geojson with, e.g., ogr2ogr "2.1 Tamworth -> Tibshelf Northbound.geojson" "2.1 Tamworth -> Tibshelf Northbound.gpx" tracks, where ogr2ogr is a command-line tool from sudo apt install gdal-bin which converts geographic data between many formats (I like it a lot, it feels nicer than searching the web for "errr, kml to gpx converter?"). I also then semi-manually added some properties (see how).
Originally, I was combining them into one .geojson file using https://github.com/mapbox/geojson-merge, which is a small tool for merging .geojson files, but I decided to use jq because I wanted to do something a bit more complex, which was to create a structure like
FeatureCollection
Features:
FeatureCollection
Features (1.1 Tamworth -> Woodall Northbound, 1.2 Woodall Northbound -> Hull)
FeatureCollection
Features (2.1 Tamworth -> Tibshelf Northbound, 2.2 Tibshelf Northbound -> Leeds)
FeatureCollection
Features (3.1 Frankley Northbound -> Hilton Northbound, 3.2 Hilton Northbound -> Keele Northbound, 3.3 Keele Northbound -> Liverpool)
I spent a while making a quite-complicated jq query, using variables (an "advanced feature"!) and a reduce statement, but when I completed it, I found out that the above structure is not valid .geojson, so I went back to just having a single flat FeatureCollection containing all the features.
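For reference, a flat merge like that is pleasantly short in jq (a sketch, assuming each input file is a single FeatureCollection; not my exact query):

# slurp all the files and concatenate their features into one FeatureCollection
jq -s '{type: "FeatureCollection", features: (map(.features) | add)}' \
  *.geojson > combined.geojson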
I moved to Linux [time ago]. One thing I miss from the Windows file explorer is how easy it was to create text files.
With Nautilus (Pop!_OS' default file browser), you can create templates which appear when you right click in an empty folder (I don't remember where the templates file is and I can't find an obvious way to find out, so... search it yourself), but this doesn't work if you're using nested folders.
i.e., I use this view a lot in Nautilus, which is a tree view that lets you expand folders instead of opening them (similar to most code editors).
But in this view, you can't "right click on empty space inside a folder" to create a new template file, you can only "right click the folder" (or if it's empty, "right click a strange fake-file called (Empty)").
So, I created a script in /home/alifeee/.local/share/nautilus/scripts called new file (folder script) with this content:
#!/bin/bash
# create new file within folder (only works if used on folder)
# notify-send requires libnotify-bin -> `sudo apt install libnotify-bin`
if [ -z "${1}" ]; then
  notify-send "did not get folder name. use script on folder!"
  exit 1
fi
file="${1}/new_file"
i=0
while [ -f "${file}" ]; do
  i=$(($i+1))
  file="${1}/new_file${i}"
done
touch "${file}"
if [ ! -f "${file}" ]; then
  notify-send "tried to create a new file but it doesn't seem to exist"
else
  notify-send "I think I created file all well! it's ${file}"
fi
Now I can right click on a folder, click "scripts > new file" and have a new file that I can subsequently rename. Sweet.
I sure hope that in future I don't want anything slightly more complicated like creating multiple new files at once...
I was given an old computer. I'd quite like to make a computer to use in my studio, and take my tower PC home to play video games (mainly/only local coop games like Wilmot's Warehouse, Towerfall Ascension, or Unrailed, and occasionally Gloomhaven).
It's not the best, and I'd like to know what parts I would want to replace to make it suit my needs (which are vaguely "can use a modern web browser" without being slow).
By searching the web, I found these commands to collect hardware information for a computer:
uname -a                       # vague computer information
lscpu                          # cpu information
df -h                          # hard drive information
sudo dmidecode -t bios         # bios information
free -h                        # memory (RAM) info
lspci -v | grep VGA -A11       # GPU info (1)
sudo lshw -numeric -C display  # GPU info (2)
I also found these commands to benchmark some things:
sudo apt install sysbench glmark2
# benchmark CPU
sysbench --test=cpu run
# benchmark memory
sysbench --test=memory run
# benchmark graphics
glmark2
I put the output of all of these commands into text files for each computer, into a directory that looks like: