Given a username, e.g., @thentrythis@thentrythis.org, find the format of the "webfinger request" (which allows you to request data about a user); that format should be discoverable via /.well-known/host-meta. The key here is that the original site (thentrythis.org) redirects to the "social site" (social.thentrythis.org).
It's always nice to know that I could use Mastodon by reaaaallllyyy slowly issuing my own curl requests (or, what this really means, building my own client).
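Roughly, the two requests go something like this (a sketch of how I remember it going; the host-meta response is what tells you where the webfinger endpoint actually lives):

# ask the user's domain where its webfinger endpoint is
curl -L "https://thentrythis.org/.well-known/host-meta"
# the XRD it returns has an "lrdd" template pointing at the social site;
# then ask webfinger about the account itself
curl -L "https://social.thentrythis.org/.well-known/webfinger?resource=acct:thentrythis@thentrythis.org"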
a very broad-strokes definition of the word "hacking" I spurted out in a text conversation.
when people say hacking they mean one of several things
the (positive) sense is that used by hackspaces, to hack is to make something do something beyond its initial purposes
technologically, a lot of the time, that means taking apart an old TV and reusing parts of it to make a lightning rod, or replacing a phone battery by yourself (the phone companies do not desire this), or adding a circuitboard to your cat flap that uses the chip inside the cat to detect if it's your cat and if not lock the flap
more "software based", it can be like scraping a government website to collect documents into a more readable format, turning trains back on via software that were disabled by their manufacturer as a money-grabbing gambit, getting access to academic papers that are unreasonably locked behind expensive paywalls
If someone says 'my facebook got hacked' what does that mean
usually what they mean is that someone has logged into it without their permission
and most (all) of the time, that person has guessed their password because they said it out loud, they watched them put it in, they guessed it randomly (probs rare), or (rarest) they found the password in a password leak for a different website and tried it on Facebook (because the person uses the same password on multiple accounts)
I'd call that a second thing people say hacking for
and a third is the money extorting hackers, who hack into [the British library] and lock all their documents unless they pay [a ransom]
I've thought about installing a VPN on my server for a few months. It wouldn't allow the perhaps-more-common VPN use of getting past region-locked content (as I can't change the region of my server), but as an academic exercise, and for other reasons, I gave it a try.
I accepted all the default settings (IP / UDP / port / DNS servers) apart from the username (alifeee), allowed the default port through my firewall (Uncomplicated Firewall, ufw) with sudo ufw allow 1194, and a file alifeee.ovpn was created. That file was pretty simple, basically just a few keys, and looked a bit like this:
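(what follows is a rough sketch of a typical OpenVPN client config, not my actual file; the real one has the full certificates and keys inline)

client
dev tun
proto udp
remote <server-ip> 1194
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
verb 3
<ca>
-----BEGIN CERTIFICATE-----
(certificate authority here)
-----END CERTIFICATE-----
</ca>
<cert>
(client certificate here)
</cert>
<key>
(client private key here)
</key>
<tls-crypt>
(shared TLS key here)
</tls-crypt>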
This file was small enough that I was able to copy it in only two screens through ConnectBot on my phone. To install it, I:
opened the VPN settings on Linux, where I could import a .ovpn file by default
installed the OpenVPN app on Android which let me import the file
I haven't installed it on Windows but I'm presuming it's as easy as installing some OpenVPN app
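There's also a command-line route on Linux, which I haven't needed but which should be as simple as (a sketch, untested by me):

# install the OpenVPN client and connect using the exported config
sudo apt install openvpn
sudo openvpn --config alifeee.ovpn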
Since I've installed it, it's actually been pretty useful. I've used it:
on trains or in cafés to hide my traffic (I think)
to download PlatformIO libraries as my ISP blocked the library hosting website inexplicably
to access https://sci-hub.se/, which my ISP also blocks (this post brought to you by: my ISP being super annoying)
So, if you want to get round blocks, hide your traffic, or other VPN shenanigans, you could create a VPS (Virtual Private Server) and install OpenVPN to it pretty easily. Perhaps you could even get around region locks if you picked a server location in a region you wanted.
I'm writing a blog about hitchhiking, which involves a load of .geojson files, which look a bit like this:
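(not one of the real files; a minimal GeoJSON FeatureCollection, with a made-up name and made-up coordinates, looks roughly like this)

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": { "name": "2.1 Tamworth -> Tibshelf Northbound" },
      "geometry": {
        "type": "LineString",
        "coordinates": [[-1.69, 52.63], [-1.33, 53.14]]
      }
    }
  ]
}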
The .geojson files are generated from .gpx traces that I exported from the demo of OSRM (the Open Source Routing Machine), one of the routing engines on OpenStreetMap (at the time of writing the demo seems to be offline, but I believe it's at https://map.project-osrm.org/).
I put in a start and end point, exported the .gpx trace, and then converted it to .geojson with, e.g., ogr2ogr "2.1 Tamworth -> Tibshelf Northbound.geojson" "2.1 Tamworth -> Tibshelf Northbound.gpx" tracks, where ogr2ogr is a command-line tool from sudo apt install gdal-bin which converts geographic data between many formats (I like it a lot, it feels nicer than searching the web for "errr, kml to gpx converter?"). I also then semi-manually added some properties (see how).
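If you have a folder of traces, a small loop converts them all (a sketch, assuming the .gpx files are in the current directory):

# convert every .gpx trace into a .geojson file with the same name
for f in *.gpx; do
  ogr2ogr "${f%.gpx}.geojson" "${f}" tracks
done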
Originally, I was combining them into one .geojson file using https://github.com/mapbox/geojson-merge, which is a binary to merge .geojson files, but I decided to use jq because I wanted to do something a bit more complex, which was to create a structure like
FeatureCollection
  Features:
    FeatureCollection
      Features (1.1 Tamworth -> Woodall Northbound, 1.2 Woodall Northbound -> Hull)
    FeatureCollection
      Features (2.1 Tamworth -> Tibshelf Northbound, 2.2 Tibshelf Northbound -> Leeds)
    FeatureCollection
      Features (3.1 Frankley Northbound -> Hilton Northbound, 3.2 Hilton Northbound -> Keele Northbound, 3.3 Keele Northbound -> Liverpool)
I spent a while making a quite-complicated jq query, using variables (an "advanced feature"!) and a reduce statement, but when I completed it, I found out that the above structure is not valid .geojson, so I went back to just having:
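That flat merge can be done with jq alone; a minimal sketch (not necessarily the exact command I ended up with) is:

# slurp all the .geojson files and concatenate their features into one FeatureCollection
jq -s '{type: "FeatureCollection", features: map(.features) | add}' *.geojson > combined.geojson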
I moved to Linux [time ago]. One thing I miss from the Windows file explorer is how easy it was to create text files.
With Nautilus (Pop!_OS's default file browser), you can create templates which appear when you right-click in an empty folder (I don't remember where the templates folder is and I can't find an obvious way to find out, so... search it yourself), but this doesn't work if you're using nested folders.
For instance, I use this view a lot in Nautilus: a tree view that lets you expand folders instead of opening them (similar to most code editors).
But in this view, you can't "right click on empty space inside a folder" to create a new template file, you can only "right click the folder" (or if it's empty, "right click a strange fake-file called (Empty)").
So, I created a script in /home/alifeee/.local/share/nautilus/scripts called new file (folder script) with this content:
#!/bin/bash
# create new file within folder (only works if used on folder)
# notify-send requires libnotify-bin -> `sudo apt install libnotify-bin`
if [ -z "${1}" ]; then
  notify-send "did not get folder name. use script on folder!"
  exit 1
fi
file="${1}/new_file"
i=0
while [ -f "${file}" ]; do
  i=$(($i+1))
  file="${1}/new_file${i}"
done
touch "${file}"
if [ ! -f "${file}" ]; then
  notify-send "tried to create a new file but it doesn't seem to exist"
else
  notify-send "I think I created file all well! it's ${file}"
fi
Now I can right click on a folder, click "scripts > new file" and have a new file that I can subsequently rename. Sweet.
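One caveat (from how Nautilus scripts work in general, not anything specific to this script): the file has to be executable before it shows up in the Scripts menu, so something like this, with whatever name you gave the script:

chmod +x ~/.local/share/nautilus/scripts/"new file"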
I sure hope that in future I don't want anything slightly more complicated like creating multiple new files at once...
I was given an old computer. I'd quite like to make a computer to use in my studio, and take my tower PC home to play video games (mainly/only local coop games like Wilmot's Warehouse, Towerfall Ascension, or Unrailed, and occasionally Gloomhaven).
It's not the best, and I'd like to know what parts I would want to replace to make it suit my needs (which are vaguely "can use a modern web browser" without being slow).
By searching the web, I found these commands to collect hardware information for a computer:
uname -a # vague computer information
lscpu # cpu information
df -h # hard drive information
sudo dmidecode -t bios # bios information
free -h # memory (RAM) info
lspci -v | grep VGA -A11 # GPU info (1)
sudo lshw -numeric -C display # GPU info (2)
I also found these commands to benchmark some things:
sudo apt install sysbench glmark2
# benchmark CPU
sysbench --test=cpu run
# benchmark memory
sysbench --test=memory run
# benchmark graphics
glmark2
I put the output of all of these commands into text files for each computer, into a directory that looks like:
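(the real names differ; roughly, with made-up machine names:)

computers/
  old-desktop/
    uname.txt
    lscpu.txt
    df.txt
    dmidecode-bios.txt
    free.txt
    gpu-lspci.txt
    gpu-lshw.txt
    sysbench-cpu.txt
    sysbench-memory.txt
    glmark2.txt
  current-tower/
    ...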
I wanted to make a local archive of personal websites. This is because in the past I have searched my bookmarks for things like fonts, to see how many of them mention, talk about, or link to things about fonts. When I did that, I only looked at the homepages, so I've since been wondering about a way to search a list of entire sites.
I came up with the idea of downloading the HTML files for my bookmarked sites, and using grep and...
Also, lua_search doesn't support case-insensitivity yet. search tries to be smart: if you pass in a pattern with any uppercase letters it's treated as case-sensitive, but if it's all lowercase it's treated as case-insensitive. lua_search doesn't have these smarts yet, and all patterns are currently case-sensitive.
search
#!/usr/bin/zsh
# Search a directory for files containing all of the given keywords.
DIR=`mktemp -d`
ROOT=${ROOT:-.}
# generate a list of files on stdout
echo find `eval echo $ROOT` -type f -print0 \> $DIR/1 >&2
find `eval echo $ROOT` -type f -print0 > $DIR/1
INFILE=1
for term in $*
do
  # filter file list for one term
  OUTFILE=$(($INFILE+1))
  if echo $term |grep -q '[A-Z]'
  then
    echo cat $DIR/$INFILE \|xargs -0 grep -lZ "$term" \> $DIR/$OUTFILE >&2
    cat $DIR/$INFILE |xargs -0 grep -lZ "$term" > $DIR/$OUTFILE
  else
    echo cat $DIR/$INFILE \|xargs -0 grep -ilZ "$term" \> $DIR/$OUTFILE >&2
    cat $DIR/$INFILE |xargs -0 grep -ilZ "$term" > $DIR/$OUTFILE
  fi
  INFILE=$OUTFILE
done
# get rid of nulls in the outermost call, and sort for consistency
cat $DIR/$INFILE |xargs -n 1 -0 echo |sort
#!/usr/bin/lua
local input = io.popen('find . -type f')

-- will scan each file to the end at most once
function match(filename, patterns)
  local file = io.open(filename)
  for _, pattern in ipairs(patterns) do
    if not search(file, pattern) then
      return false
    end
  end
  file:close()
  return true
end

function search(file, pattern)
  if file:seek('set') == nil then error('seek') end
  for line in file:lines() do
    if line:match(pattern) then
      return true
    end
  end
  return false
end

for filename in input:lines() do
  filename = filename:sub(3) -- drop the './'
  if match(filename, arg) then
    print(filename)
  end
end
...to search the sites.
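For example, a hypothetical invocation (with the archive in a directory called archive/, and the zsh script saved as search) that lists archived pages mentioning both words:

ROOT=archive ./search font typeface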
initial attempt
I found you can use wget to do exactly this (download an entire site), using a cacophony of arguments. I put them into a script that looks a bit like:
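(roughly; this is a sketch of the sort of invocation rather than my exact flags, assuming a made-up bookmarks.txt with one URL per line)

#!/bin/bash
# mirror each bookmarked site into its own directory (wget names it after the host)
while read -r url; do
  wget --recursive --level=inf --no-parent --page-requisites \
    --convert-links --adjust-extension --wait=1 "$url"
done < bookmarks.txt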
...and set it off. I found several things, which made me modify the script in several ways (mainly I saw these by watching one specific URL take a lot of time to scrape):
some sites had a lot of media, so I searched the web and added a file exclude
some sites just had so many pages, like thousands of fonts, or daily puzzles, or just a very detailed personal wiki, so I added a site disable so I could skip specific URLs
as I was re-running the script a lot, and didn't want to cause excess traffic (it's worth noting that wget respects robots.txt), I added -N to wget so it didn't re-download things that hadn't been modified (judged by the Last-Modified: ... header)
a lot of sites didn't have a Last-Modified: ... header, so I made the script skip sites that had a directory already created for them (which I then had to manually delete if the site had only half-downloaded)
some sites had very slow connections (not https://blinry.org/ (and others), which was blazingly fast) which made the crawl seem like it would take ages
At this point (after not much effort, to be honest), I gave up. My final script was:
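(the actual script isn't reproduced here; as a rough reconstruction of what it ended up as, with made-up site names and the same made-up bookmarks.txt list:)

#!/bin/bash
# hypothetical sketch of the final archive script, not the real one
skip="bigwiki.example.org dailypuzzles.example.net"  # sites to skip entirely
while read -r url; do
  # wget saves each site into a directory named after its host
  host=$(echo "$url" | sed -E 's|^https?://||; s|/.*$||')
  # skip disabled sites
  case " $skip " in *" $host "*) continue ;; esac
  # skip sites that already have a directory (for sites with no Last-Modified header)
  [ -d "$host" ] && continue
  wget --recursive --level=inf --no-parent --page-requisites \
    --convert-links --adjust-extension -N --wait=1 --timeout=15 \
    --reject "*.jpg,*.jpeg,*.png,*.gif,*.mp4,*.mp3,*.pdf,*.zip" \
    "$url"
done < bookmarks.txt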
If I want to continue in future (searching a personal list of sites), I may find another way to do it, perhaps something similar to Google's search syntax potato site:http://site1.org site:http://site2.org, or perhaps I can create a custom search engine filter with DuckDuckGo/Kagi/etc that lets me put a custom list of URLs in. Who's to say. Otherwise, I'll also just continue sticking search queries in the various alternative/indie search engines like those on https://proto.garden/blog/search_engines.html.
I often turn lists of coordinates into a geojson file, so they can be easily shared and viewed on a map. See several examples on https://alifeee.co.uk/maps/.
One thing I wanted to do recently was turn a list of points ("places I've been") into a list of straight lines connecting them, to show routes on a map. I made a script using jq to do this, using the same data from my note about making a geojson file from a CSV.
Effectively, I want to turn these coordinates...
latitude,longitude,description,best part
53.74402,-0.34753,Hull,smallest window in the UK
54.779764,-1.581559,Durham,great cathedral
52.47771,-1.89930,Birmingham,best board game café
53.37827,-1.46230,Sheffield,5 rivers!!!
...into lines connecting each consecutive pair of places, in a .geojson format, so I can view the routes on a map. Since this turns N items into N - 1 items, it sounds like it's time for a reduce (I like using map, filter, and reduce a lot. They're very satisfying. Some would say I should get [more] into Functional Programming).
So, the jq script to "combine" coordinates is: (hopefully you can vaguely see which bits of it do what)
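(the script itself isn't shown here; as an illustrative sketch of the idea, not my exact reduce-based version, something like this pairs up consecutive rows of the CSV, assuming it's saved as places.csv, into LineStrings)

jq -R -s '
  # split the CSV into rows, drop the header, split rows into fields
  # (assumes no quoted commas in the fields)
  [ split("\n")[1:][] | select(length > 0) | split(",") ] as $rows
  # pair each row with the next one (N rows -> N-1 pairs)
  | [ range(0; ($rows | length) - 1) | [$rows[.], $rows[. + 1]] ]
  # one LineString feature per pair (GeoJSON wants [longitude, latitude])
  | map({
      type: "Feature",
      properties: { from: .[0][2], to: .[1][2] },
      geometry: { type: "LineString", coordinates: [
        [(.[0][1] | tonumber), (.[0][0] | tonumber)],
        [(.[1][1] | tonumber), (.[1][0] | tonumber)]
      ]}
    })
  | { type: "FeatureCollection", features: . }
' places.csv > lines.geojson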
As with the previous post, making this script took a lot of reading man jq (very well-written) in my terminal, and a lot of searching "how to do X in jq".
I've gotten into a habit with map-making: my favourite format is geojson, and I've found some tools to help me screw around with it, namely https://github.com/pvernier/csv2geojson to create a .geojson file from a .csv, and https://geojson.io/ to quickly and nicely view the geojson. geojson.io can also export as KML (used to import into Google Maps).
In attempting to turn a .geojson file from a list of "Point"s to a list of "LineString"s using jq, I figured I could also generate the .geojson file myself using jq, instead of using the csv2geojson Go program above. This is my (successful) attempt:
First, create a CSV file places.csv with coordinates (latitude and longitude columns) and other information. There are many ways to find coordinates; one is to use https://www.openstreetmap.org/, zoom into somewhere, and copy them from the URL. For example, some places I have lived:
latitude,longitude,description,best part
53.74402,-0.34753,Hull,smallest window in the UK
54.779764,-1.581559,Durham,great cathedral
52.47771,-1.89930,Birmingham,best board game café
53.37827,-1.46230,Sheffield,5 rivers!!!
Then, I spent a while (maybe an hour) crafting this jq script to turn that (or a similar CSV) into a geojson file. Perhaps you can vaguely see which parts of it do what.
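(the script isn't reproduced here exactly; a sketch of the same idea, reading the CSV with jq's raw-input and slurp flags, looks like this)

jq -R -s '
  # rows -> fields, dropping the header (assumes no quoted commas)
  [ split("\n")[1:][] | select(length > 0) | split(",") ]
  # one Point feature per row ([longitude, latitude] order)
  | map({
      type: "Feature",
      properties: { description: .[2], "best part": .[3] },
      geometry: { type: "Point", coordinates: [(.[1] | tonumber), (.[0] | tonumber)] }
    })
  | { type: "FeatureCollection", features: . }
' places.csv > places.geojson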
...which I can then load into https://geojson.io/, or turn into another format with gdal (e.g., with ogr2ogr places.gpx places.geojson).
It's very satisfying for me to use jq. I will definitely be re-using this script in the future to make .geojson files, as well as re-using some of the jq techniques I learnt while making it.
Mostly for help I used man jq in my terminal, the GeoJSON specification for the .geojson structure, and a lot of searching the web for "how to do X using jq".
a website should work with the simplest use-case, and any improvements should not break the basic behaviour
e.g., someone should be able to submit a form on a website without using JavaScript. Progressive enhancement would be to make the user experience better (e.g., by not refreshing the whole page when the form is submitted, and only refreshing parts of it) while leaving the original case working.
I find this usually has benefits that you didn't think of. For example, using semantic HTML makes your website work better on slow connections, on wacky browsers, with screen readers, et cetera. People with modern powerful browsers can enjoy your weird JavaScript animations and experimental layouts, but everyone can still use the website without these.
in non-technical areas
The idea has spread into non-technical parts of life too, and I think it has similar benefits there. For example, a non-technical case that often fails progressive enhancement is shops/cafés/venues accepting card payment while no longer accepting cash (card, here, being in some way a metaphor for JavaScript).
What inspired me writing this today was seeing the holes on the top of the seats in trains, where conductors used to place tickets to designate which seat was reserved. The train had switched to using digital markers, but since the old slots were still there, had the digital system failed, there was always a fallback, which I thought was nice.
There are many, many other examples of "progressive enhancement not applied to websites". I leave it to you to find them.