import ewintr.nl
This commit is contained in:
parent f0b8a6276a
commit 211c359c86
@ -0,0 +1,16 @@
# The URL the site will be built for
base_url = "https://ewintr.nl"

# Whether to automatically compile all Sass files in the sass directory
compile_sass = true

# Whether to build a search index to be used later on by a JavaScript library
build_search_index = false

[markdown]
# Whether to do syntax highlighting
# Theme can be customised by setting the `highlight_theme` variable to a theme supported by Zola
highlight_code = true

[extra]
# Put all your custom variables here
@ -0,0 +1,58 @@
+++
title = "Basic caching headers in nginx"
date = 2020-01-05
+++

To add basic caching headers for different filetypes, add an expires directive to your nginx config file, like this:

```
# Expires map
map $sent_http_content_type $expires {
    default                off;
    text/html              epoch;
    text/css               30d;
    application/javascript 30d;
    ~image/                30d;
    ~font                  max;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    expires $expires;
    ...
}
```

- `off` means no caching headers are sent.
- `epoch` means no caching; the browser has to ask the website itself every time.
- `30d` means cache for 30 days.
- `max` means cache as long as possible.
- A `~` in the mimetype indicates a regular expression.

## Fonts

It could be that this does not work right away for fonts, as nginx defaults to the `application/octet-stream` mimetype for those filetypes. To fix this, add these lines to the `/etc/nginx/mime.types` config file:

```
font/ttf      ttf;
font/opentype otf;
font/woff     woff;
font/woff2    woff2;
```

Don't forget to add the first two to the list of gzipped mimetypes; the last two already have compression baked into the format:

```
gzip_types text/plain text/css ... font/ttf font/opentype;
```

This list lives in `/etc/nginx/nginx.conf` (on Debian).
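In context, that could look something like this (a sketch; the exact list of types will differ per setup):

```
http {
    ...
    gzip_types text/plain text/css application/javascript font/ttf font/opentype;
    ...
}
```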

## Sources

- [www.digitalocean.com](https://www.digitalocean.com/community/tutorials/how-to-implement-browser-caching-with-nginx-s-header-module-on-centos-7)
- [drawingablank.me](http://drawingablank.me/blog/font-mime-types-in-nginx.html)
@ -0,0 +1,47 @@
+++
title = "Basic pomodoro timer in bash with termdown and wmctrl"
date = 2020-03-33
+++

Like many other systems that aim to add some structure to the chaos of todo-items that we all struggle with, the Pomodoro Technique has some good parts and ideas, but one must be careful not to drive it too far. Neither by forcing every life problem into the system (if all you have is a hammer...), nor by a too rigid application of the rules of the system. The same goes for the apps and tools that come with it.

After some failed attempts to install `i3-pomodoro`, where I first had to upgrade my `i3status` to `i3blocks`, with its own configuration and all, I realized I just wanted one very simple thing from Pomodoro: a timer that lets me do some focused work for a certain period of time. Nothing more. For simple things, Bash is still the best:

```bash
#!/bin/bash
MINUTES=25
if [ "$1" == "break" ]; then
    MINUTES=5
fi
wmctrl -N "Pomodoro" -r :ACTIVE:
termdown --no-figlet --no-seconds --no-window-title ${MINUTES}m
wmctrl -b add,demands_attention -r "Pomodoro"
```

This runs a terminal timer that counts down to zero from a specified amount of minutes and then lets the window manager draw attention to it in its native way. In default i3 this means the border turns red, as does the workspace indicator in the status bar.

- `wmctrl` handles drawing the attention. The trick here is to make sure that attention is drawn to the right window. This is done by setting the window title of the active window, the one we run the script in, to something known beforehand. This way, we can find it back later at the end of the countdown:
  - `-N "Pomodoro"` sets the title of the window to "Pomodoro".
  - `-r :ACTIVE:` selects the currently active window.

Then the timer `termdown` is started:

- `--no-figlet` don't fill the terminal with large numbers that do the countdown. We want some time focused on our task, so we need as little distraction as possible.
- `--no-seconds` for the same reason, don't show the seconds.
- `--no-window-title` by default `termdown` also displays the time left in the window title. This is even more distraction, but it would also remove the title we just set. This option disables that.

After the countdown, we let `wmctrl` draw the attention:

- `-b add,demands_attention` adds the property `demands_attention` to the window.
- `-r "Pomodoro"` selects the window that has "Pomodoro" as its title.
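Putting it all together, usage could look like this (a sketch, assuming the script is saved as `pomodoro.sh`):

```bash
$ ./pomodoro.sh        # start a 25 minute work session
$ ./pomodoro.sh break  # start a 5 minute break
```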

## Sources

- [en.wikipedia.org](https://en.wikipedia.org/wiki/Pomodoro_Technique)
- [gitlab.com](https://gitlab.com/moritz-weber/i3-pomodoro)
- [github.com](https://github.com/i3/i3status)
- [github.com](https://github.com/vivien/i3blocks)
- [linux.die.net](https://linux.die.net/man/1/wmctrl)
- [github.com](https://github.com/trehn/termdown)
- [askubuntu.com](https://askubuntu.com/questions/40463/is-there-any-way-to-initiate-urgent-animation-of-an-icon-on-the-unity-launcher)
@ -0,0 +1,68 @@
+++
title = "Conversions with iota in go"
date = 2020-04-17
+++

`iota` is a helpful way to enumerate constants in Go:

```go
package main

import "fmt"

const (
	c0 = iota
	c1
	c2
)

func main() {
	fmt.Println(c0, c1, c2) // "0 1 2"
}
```

But it is more flexible than just defining a constant to be an integer. According to the specification:

> "A constant value is represented by a rune, integer, floating-point, imaginary, or string literal, an identifier denoting a constant, a constant expression, a conversion with a result that is a constant, or the result value of some built-in functions such as unsafe.Sizeof applied to any value, cap or len applied to some expressions, real and imag applied to a complex constant and complex applied to numeric constants."

So we are allowed to use expressions and conversions on the right hand side of the `=` sign in a constant declaration. That means we can use `iota` in an expression or a conversion. A common use case for expressions is creating a bitmask with the bit shift operator.
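For illustration, a minimal sketch of such a bitmask (the flag names here are made up):

```go
type Flag uint8

const (
	FlagRead  Flag = 1 << iota // 1
	FlagWrite                  // 2
	FlagExec                   // 4
)
```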

However, although less often seen, conversions can be pretty useful as well:

```go
type Priority string

const (
	_ = Priority("P" + string(iota+48))
	P1 // "P1"
	P2 // "P2"
	P3 // "P3"
	...
)
```

It is not possible to use `strconv` functions in the `const` block because, well, they're functions. According to the specification:

> "A constant value x can be converted to type T if x is representable by a value of T. As a special case, an integer constant x can be explicitly converted to a string type using the same rule as for non-constant x."

And that rule is:

> "Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD"."

The first 128 characters of UTF-8 are the same as the ASCII characters (a single byte with a leading 0 bit). In ASCII the characters 0 to 9 are encoded sequentially from 48 through 57, so `string(49)` converts to "1". The letters of the alphabet are encoded sequentially as well, so a range of constants representing grades from A to F could be declared in the same fashion:

```go
type Grade string

const (
	A = Grade(string(iota + 65))
	B
	C
)
```

## Sources

- [golang.org](https://golang.org/ref/spec#Constants)
- [golang.org](https://golang.org/doc/effective_go.html#constants)
- [golang.org](https://golang.org/ref/spec#Conversions)
- [golang.org](https://golang.org/ref/spec#Conversions_to_and_from_a_string_type)
- [en.wikipedia.org](https://en.wikipedia.org/wiki/UTF-8)
- [en.wikipedia.org](https://en.wikipedia.org/wiki/ASCII#Printable_characters)
@ -0,0 +1,40 @@
+++
title = "Job control in bash scripts"
date = 2020-01-06
+++

One should be careful when considering an option like this. Sending processes to the background in a script: are you sure this is what you want?

However, sometimes duct tape is the only thing that works. Like here, in the case of an old Mongo Docker image that had no working way of importing a seed database at the startup of the container. The `RUN` command had to start Mongo, wait for it to become ready and import the seed data. The container would abort after the command was done, so the two processes had to be managed within the script.

After some failed attempts with `bg` and `fg`, it turned out that bash does not allow job control in scripts, unless you set the `-m` option.

The working result:

```bash
#!/bin/bash
set -m

echo "Starting Mongo..."
mongod &

echo "Waiting for Mongo to accept connections..."
RESULT=1
while [ $RESULT -ne 0 ]; do
    mongo --eval "db.stats()" >/dev/null 2>&1
    RESULT=$?
    sleep .5
done

echo "Run database fixture script..."
mongo /scripts/mongo_init.js

echo "Bringing back Mongo..."
fg
```

## Sources

- [stackoverflow.com](https://stackoverflow.com/questions/690266/why-cant-i-use-job-control-in-a-bash-script)
- [stackoverflow.com](https://stackoverflow.com/questions/16542372/shell-script-check-mongod-server-is-running#16546193)
@ -0,0 +1,312 @@
+++
title = "LDAP basics for go developers"
date = 2020-04-05
+++

Recently I needed to prepare a Go application to accept LDAP as a form of authentication. Knowing absolutely nothing about LDAP, I researched the topic a bit first and I thought it would be helpful to summarize my findings so far.

- [Minimal LDAP background](#minimal-ldap-background)
- [Trying it out on the command line](#trying-it-out-on-the-command-line)
- [Authenticating with a Go program](#authenticating-with-a-go-program)

## Minimal LDAP background

If you are like me, then you know that LDAP is used for authentication and authorization, but not much more. As it turns out, you can do a lot more with it than just store users and permissions. One can put the whole company inventory in it. In fact, I think it is best to view an LDAP server as a kind of weird database. The weird part is that the data is not stored in tables, like in an SQL database, but in a tree.

And like with a database, you cannot just fire some queries at it. You have to know the schema first.

## Entries

Every entry in the tree has at least three components:

- A distinguished name (DN)
- A collection of attributes
- A collection of object classes

The distinguished name is the unique name and location of the entry in the tree. For instance:

```
cn=admin,dc=example,dc=org
```

could be the DN of the administrator of the example.org company. It is a list of comma-separated key-value pairs with the most specific pair (`cn=admin`) on the left and the most common one, the top of the tree, on the right (`dc=org`).

The `cn` and `dc` keys describe the type of the value. `cn` means 'common name', `dc` is 'domain component'. Other ones are `ou` (organisational unit) and `uid` (user id).
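For example, a hypothetical entry for a user stored in a 'people' organisational unit could have a DN like:

```
uid=john,ou=people,dc=example,dc=org
```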

The complete entry for this administrator could be:

```
dn: cn=admin,dc=example,dc=org
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator
userPassword:: e1NTSEF9ZGdKR1g1YTBKQ2twZkZLY1J5cHB0LzYwZmMwVWNReW4=
```

## Binding

LDAP does not really use the term 'authentication'. Instead one speaks of 'binding' a user. This binding is done to a BindDN, the distinguished name of a branch in the tree. Subsequent requests will be performed in the scope of that branch. That is, this user will only be able to 'see' the subbranches and leaf nodes of this BindDN.

## Trying it out on the command line

LDAP servers require some work to set up. For the purpose of just testing things out, there are free online servers that kind people have set up and maintain. But a better solution is to find a good Docker image and run things on your local machine. The public servers will not let you modify data, for obvious reasons. The `osixia/openldap` image worked for me:

```bash
docker run -p 389:389 -p 636:636 osixia/openldap
```

Port `389` is for a plain `ldap://` connection, port `636` is used for a secure `ldaps://` one.

This image has a minimal set of data in it. Let's see what it contains by running a search:

```bash
$ ldapsearch -x -H ldap://localhost -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w admin
```

- `-x` to use simple authentication and no SASL
- `-H ldap://localhost` to point the program to the URI of our server
- `-b` specifies the scope, the branch under which we want to search
- `-D` the BindDN
- `-w admin` to use 'admin' as the password

The result should be something like this:

```
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=org> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# example.org
dn: dc=example,dc=org
objectClass: top
objectClass: dcObject
objectClass: organization
o: Example Inc.
dc: example

# admin, example.org
dn: cn=admin,dc=example,dc=org
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator
userPassword:: e1NTSEF9ZGdKR1g1YTBKQ2twZkZLY1J5cHB0LzYwZmMwVWNReW4=

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2
```

We see that we have one organization and one admin user in that organization.

Let's try to add a user. First create a file with the specifics of this new user:

```
# ldapentry
dn: cn=john.doe,dc=example,dc=org
objectClass: person
cn: John Doe
sn: john.doe
description: just some guy
```

Then add it:

```bash
$ ldapadd -x -H ldap://localhost -D "cn=admin,dc=example,dc=org" -w admin -f ldapentry
adding new entry "cn=john.doe,dc=example,dc=org"
```

Now we must set the password:

```bash
ldappasswd -s welcome123 -w admin -D "cn=admin,dc=example,dc=org" -x "cn=john.doe,dc=example,dc=org"
```

`-s welcome123` sets the password to welcome123.

Check that it was added:

```
$ ldapsearch -x -H ldap://localhost -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w admin
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=org> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# example.org
dn: dc=example,dc=org
objectClass: top
objectClass: dcObject
objectClass: organization
o: Example Inc.
dc: example

# admin, example.org
dn: cn=admin,dc=example,dc=org
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator
userPassword:: e1NTSEF9dnU4ZFo0YmVpMnRQYWN6UmpBVERoK1pRMkhUaDJYc2Q=

# john.doe, example.org
dn: cn=john.doe,dc=example,dc=org
objectClass: person
cn: John Doe
cn: john.doe
sn: john.doe
description: just some guy
userPassword:: e1NTSEF9NXZuQ1dwK1RNOThzMm9oRWF0U0cxRDZiMTF5RDhhbHk=

# search result
search: 2
result: 0 Success

# numResponses: 4
# numEntries: 3
```

Now, let's see if we can authenticate as this new user and see ourselves:

```
$ ldapsearch -x -H ldap://localhost -b "cn=john.doe,dc=example,dc=org" -D "cn=john.doe,dc=example,dc=org" -w welcome123
# extended LDIF
#
# LDAPv3
# base <cn=john.doe,dc=example,dc=org> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# john.doe, example.org
dn: cn=john.doe,dc=example,dc=org
objectClass: person
cn: John Doe
cn: john.doe
sn: john.doe
description: just some guy
userPassword:: e1NTSEF9NXZuQ1dwK1RNOThzMm9oRWF0U0cxRDZiMTF5RDhhbHk=

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
```

Success!

## Authenticating with a Go program

For Go, there is a package that can do the heavy lifting for us, called `go-ldap`. The usual steps for authenticating users in a program are:

- Bind (authenticate) as an admin user
- Search for the user we want to authenticate
- Try to bind as that user with the supplied password to see if it is correct
- Do something useful with the result, such as initiating a session for the user or denying entry
- Switch back to the admin user

The package has example code in the `README.md` that follows exactly these steps. Adjusting for the values we used above, we get:

```go
// main.go
package main

import (
	"fmt"
	"log"

	"github.com/go-ldap/ldap"
)

func main() {
	username := "cn=admin,dc=example,dc=org"
	password := "admin"

	bindusername := "cn=john.doe,dc=example,dc=org"
	bindpassword := "welcome123"

	url := "ldap://localhost:389"

	fmt.Println("connect..")
	l, err := ldap.DialURL(url)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("binding binduser..")
	if err := l.Bind(username, password); err != nil {
		log.Fatal(err)
	}

	fmt.Println("searching user...")
	searchRequest := ldap.NewSearchRequest(
		bindusername,
		ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
		"(&(objectClass=person))",
		[]string{"dn"},
		nil,
	)
	sr, err := l.Search(searchRequest)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%+v\n", sr)
	if len(sr.Entries) != 1 {
		log.Fatal("User does not exist or too many entries returned")
	}

	userdn := sr.Entries[0].DN

	fmt.Println("binding user...")
	if err := l.Bind(userdn, bindpassword); err != nil {
		log.Fatal(err)
	}

	fmt.Println("switching back..")
	if err := l.Bind(username, password); err != nil {
		log.Fatal(err)
	}
}
```

Running it:

```bash
$ go run main.go
connect..
binding binduser..
searching user...
&{Entries:[0xc0001086c0] Referrals:[] Controls:[]}
binding user...
switching back..
```

Success again!

## Sources

- [ldap.com](https://ldap.com/basic-ldap-concepts/)
- [stackoverflow.com](https://stackoverflow.com/questions/18756688/what-are-cn-ou-dc-in-an-ldap-search)
- `man ldapsearch`, `man ldapadd` and `man ldappasswd`
- [www.forumsys.com](http://www.forumsys.com/tutorials/integration-how-to/ldap/online-ldap-test-server/)
- [serverfault.com](https://serverfault.com/questions/514870/how-do-i-authenticate-with-ldap-via-the-command-line)
- [github.com](https://github.com/osixia/docker-openldap)
- [www.thegeekstuff.com]()
- [pkg.go.dev](https://pkg.go.dev/github.com/go-ldap/ldap?tab=doc)
@ -0,0 +1,33 @@
+++
title = "Quick go test cycle with reflex"
date = 2020-01-04
+++

While you are working on some piece of code, it is nice to have some feedback on whether you broke or fixed something by running the relevant unit tests. To automate this I usually have a terminal window open with the following command:

```bash
$ reflex -r '\.go$' -- sh -c 'clear && go test -v ./web/model --run TestEvent'
```

- `reflex` is a small utility that watches for changes on the file system.
- `-r` indicates that it should only watch changes in files that match the following regex pattern.
- `'\.go$'` the regex tells it to only watch changes in go files.
- `--` signifies the end of the reflex options.
- `sh` the shell to interpret the command we want to run.
- `-c` tells sh to read the commands from the next argument, not from standard input.
- `clear` first clear the terminal window.
- `go test` run the tests.
- `-v` will produce more verbose output, with PASS/FAIL for each test and the output from `t.Log`.
- `./web/model` only test files in the web/model package.
- `-run TestEvent` only run tests with names that match the regex TestEvent.

Remember, `reflex` is only triggered by changes on the filesystem. After entering the command, nothing happens until you save a `.go` file.
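A variation with the same flags that runs the full test suite on every change (a sketch):

```bash
$ reflex -r '\.go$' -- sh -c 'clear && go test ./...'
```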

## Sources

- [github.com](https://github.com/cespare/reflex)
- [unix.stackexchange.com](https://unix.stackexchange.com/questions/11376/what-does-double-dash-mean)
- `man sh`
- [blog.alexellis.io](https://blog.alexellis.io/golang-writing-unit-tests/)
- [golang.org](https://golang.org/cmd/go/#hdr-Testing_flags)
@ -0,0 +1,44 @@
+++
title = "Shared environment variables for make, bash and docker"
date = 2020-01-07
+++

It is possible to define a set of variables once and share them in Make, Bash and the Docker containers that are orchestrated by `docker-compose`.

Docker-compose can use an `.env` file to substitute variables in a `docker-compose.yml` file in the same directory. In that docker-compose.yml they can be exported to the containers.

Including this `.env` file in your `Makefile` makes them available there as well, but they are not automatically exported to the Bash shells that are spawned by `make` to execute the targets. This can be changed by adding the `.EXPORT_ALL_VARIABLES:` target to your `Makefile`.

```
# .env
VAR1=this
VAR2=that
VAR3=those
```

```make
# Makefile
include .env

.EXPORT_ALL_VARIABLES:

task:
	@echo "VAR1 is ${VAR1}"
	@some_command # some_command can use $VAR1, $VAR2 and $VAR3
	@docker-compose up
```

```
# docker-compose.yml
...
app:
  image: "registry/the_app:${VAR2}"
  environment:
    - VAR3=${VAR3}
```
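Running the target should then pick up the value from `.env`. A sketch of the expected output:

```bash
$ make task
VAR1 is this
```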

## Sources

- [vsupalov.com](https://vsupalov.com/docker-arg-env-variable-guide/#the-dot-env-file-env)
- [www.gnu.org](https://www.gnu.org/software/make/manual/html_node/Special-Targets.html#Special-Targets)
@ -0,0 +1,296 @@
+++
title = "Unit test outbound HTTP requests in go"
date = 2020-07-04
+++

In general, when one wants to test the interaction of multiple services and systems, one tries to set up an integration test. This often involves spinning up some Docker containers and a docker-compose file that orchestrates the dependencies between them and starts the integration test suite. In other words, this can be a lot of work.

Sometimes that is too much for the case at hand, but you still want to check that the outbound HTTP requests of your program are ok. Does it send the right body and the right headers? Does it do authentication? In a world where the main job of a lot of services is to talk to other services, this is important.

Luckily, it is possible to test this without all that Docker work. The standard library in Golang already provides a mock server for testing purposes: `httptest.NewServer` will give you one. It is designed to give mock responses to HTTP requests, for use in your tests. You can set it to respond with valid and invalid responses, so you can check that your code is able to handle all possible variations. After all, external services are unreliable and your app must be prepared for that.

This is good. But with a bit of extra code, we can extend these mocks and test the outbound requests as well.

To demonstrate this, let's look at a simple generic client for the Foo Cloud Service (tm). We'll examine the following parts:

- [The code we want to test](#the-code-we-want-to-test)
- [Setting up the mock server](#setting-up-the-mock-server)
- [Writing the tests](#writing-the-tests)
- [Checking the outbound requests](#checking-the-outbound-requests)

_Note: If you're the type of reader that likes code better than words, skip this explanation and go directly to the test/doc folder in [this repository](https://forgejo.ewintr.nl/ewintr/go-kit) that contains a complete working example of everything discussed below._

## The Code We Want to Test

We're not particularly interested in the specific implementation of `FooClient` right now. Let's try some TDD first. This is the functionality that we want in our code:

```go
type FooClient struct {
	...
}

func NewFooClient(url, username, password string) *FooClient {
	...
}

func (fc *FooClient) DoStuff(param string) (string, error) {
	...
}
```

And furthermore we have the requirements that:

- When `DoStuff` is called, a request is sent out to the given url, extended with the path `/path`.
- The request has a JSON body with a field `param` and the value that was passed to the function.
- The request contains the corresponding JSON headers.
- The request contains an authentication header that does basic authentication with the username and password.
- A successful response will have status 200 and a JSON body with the field `result`. The value of this field is what we want to return from the method.

This is all very standard. I'm sure you've seen this type of code before.
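For concreteness, here is a minimal sketch of what an implementation could look like. This is an assumption on my part, not necessarily the version in the repository mentioned above:

```go
package httpmock

import (
	"bytes"
	"encoding/json"
	"errors"
	"net/http"
)

// ErrUpstreamFailure is returned when the external service misbehaves.
var ErrUpstreamFailure = errors.New("upstream failure")

type FooClient struct {
	url, username, password string
}

func NewFooClient(url, username, password string) *FooClient {
	return &FooClient{url: url, username: username, password: password}
}

// DoStuff posts the param to /path and returns the result field of the response.
func (fc *FooClient) DoStuff(param string) (string, error) {
	body, err := json.Marshal(map[string]string{"param": param})
	if err != nil {
		return "", err
	}
	req, err := http.NewRequest(http.MethodPost, fc.url+"/path", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	req.SetBasicAuth(fc.username, fc.password)
	req.Header.Set("Content-Type", "application/json;charset=utf-8")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", ErrUpstreamFailure
	}

	var data struct {
		Result string `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		return "", err
	}

	return data.Result, nil
}
```

Let's move on!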

## Setting up the Mock Server

So how do we set up our mock server?

To do so, first we need some types:

```go
// MockResponse represents a response for the mock server to serve
type MockResponse struct {
	StatusCode int
	Headers    http.Header
	Body       []byte
}

// MockServerProcedure ties a mock response to a url and a method
type MockServerProcedure struct {
	URI        string
	HTTPMethod string
	Response   MockResponse
}
```

These types are just a convenient way to tell the mock server what requests we want it to respond to and what to respond with.

But there is more. We would also like to store the requests that our code makes for later inspection. That is, we want to use something that can record the requests. Let's go for a `Recorder` interface with a method `Record()`:

```go
// MockRecorder provides a way to record request information from every successful request.
type MockRecorder interface {
	Record(r *http.Request)
}
```

Then we get to the actual mock server. Note that for the most part, it just builds on the mock server from the standard `httptest` package:

```go
// NewMockServer returns a mock HTTP server to test requests
func NewMockServer(rec MockRecorder, procedures ...MockServerProcedure) *httptest.Server {
	var handler http.Handler

	handler = http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {

			for _, proc := range procedures {

				if proc.URI == r.URL.RequestURI() && proc.HTTPMethod == r.Method {

					headers := w.Header()
					for hkey, hvalue := range proc.Response.Headers {
						headers[hkey] = hvalue
					}

					code := proc.Response.StatusCode
					if code == 0 {
						code = http.StatusOK
					}

					w.WriteHeader(code)
					w.Write(proc.Response.Body)

					if rec != nil {
						rec.Record(r)
					}

					return
				}
			}

			w.WriteHeader(http.StatusNotFound)
			return
		})

	return httptest.NewServer(handler)
}
```

This function returns a `*httptest.Server` with exactly one handler function. That handler function simply loops through all the given mock server procedures, checks whether the path and the HTTP method match the request and, if so, returns the specified mock response, with status code, headers and response body as configured.

On a successful match and return, it records the request that was made through our `Recorder` interface. If there was no match, a `http.StatusNotFound` is returned.

That's all.

## Writing the Tests

How would we use this mock server in a test? We can, for instance, create one like this:

```go
mockServer := NewMockServer(nil, MockServerProcedure{
	URI:        "/path",
	HTTPMethod: http.MethodGet,
	Response: MockResponse{
		StatusCode: http.StatusOK,
		Body:       []byte(`First page`),
	},
},
	// define more if needed
)
```

And use it as follows:

```go
func TestFooClientDoStuff(t *testing.T) {
	path := "/path"
	username := "username"
	password := "password"

	for _, tc := range []struct {
		name      string
		param     string
		respCode  int
		respBody  string
		expErr    error
		expResult string
	}{
		{
			name:     "upstream failure",
			respCode: http.StatusInternalServerError,
			expErr:   httpmock.ErrUpstreamFailure,
		},
		{
			name:      "valid response to bar",
			param:     "bar",
			respCode:  http.StatusOK,
			respBody:  `{"result":"ok"}`,
			expResult: "ok",
		},
		{
			name:      "valid response to baz",
			param:     "baz",
			respCode:  http.StatusOK,
			respBody:  `{"result":"also ok"}`,
			expResult: "also ok",
		},

		...

	} {
		t.Run(tc.name, func(t *testing.T) {
			mockServer := test.NewMockServer(nil, test.MockServerProcedure{
				URI:        path,
				HTTPMethod: http.MethodPost,
				Response: test.MockResponse{
					StatusCode: tc.respCode,
					Body:       []byte(tc.respBody),
				},
			})

			client := httpmock.NewFooClient(mockServer.URL, username, password)

			actResult, actErr := client.DoStuff(tc.param)

			// check result
			test.Equals(t, true, errors.Is(actErr, tc.expErr))
			test.Equals(t, tc.expResult, actResult)
		})
	}
}
```

_Note: the `test.Equals` calls are part of the small test package in [this go-kit](https://forgejo.ewintr.nl/ewintr/go-kit). The discussed http mock also belongs to that package and together they form a minimal, but sufficient set of test helpers. But if you prefer, you can of course combine this with popular libraries like [testify](https://pkg.go.dev/github.com/stretchr/testify/assert?tab=doc)._

We've set up a regular table driven test for calling `FooClient.DoStuff`. In the table we have three test cases. One pretends the external server is down and responds with an error status code. The other two mimic a working external server and test two possible inputs, with `param` "bar" and `param` "baz".

This is just the simple version. It is not shown here, but we can also check different errors with the response body. What if we set it to `[]byte("{what?")`? Would our code be able to handle that?

Also, because `NewMockServer` is a variadic function, we can pass in more mock procedures and test more complex scenarios. What if we need to log in on a separate endpoint before we can make the request for `DoStuff`? Just add a mock for the login and check that it is called. And remember that the real server might not return the things you expect it to return, so test a failing login too.

## Checking the Outbound Requests

Now we come to the interesting part: the recording of our requests. In the code above we conveniently ignored the first argument of `NewMockServer`. But it was this `Recorder` that caused us to set all this up in the first place.

The nice thing about interfaces is that you can implement them exactly the way you want for the case at hand. This is especially useful in testing, because different situations ask for different checks. However, the go-kit test package has a straightforward implementation called `MockAssertion` and it turns out that that implementation is already enough for 90% of the cases. Your mileage may vary, of course.

It would be too much to discuss all the details of `MockAssertion` here. If you want, you can inspect the code in `test/httpmock.go` in the mentioned [go-kit repository](https://forgejo.ewintr.nl/ewintr/go-kit). For now, let's keep it at these observations:

```go
// recordedRequest represents recorded structured information about each request
type recordedRequest struct {
	hits     int
	requests []*http.Request
	bodies   [][]byte
}

// MockAssertion represents a common assertion for requests
type MockAssertion struct {
	indexes map[string]int    // indexation for key
	recs    []recordedRequest // request catalog
}
```

We have a slice with all the requests that were recorded and an index to look them up. This index consists of a string that combines the request URI and the HTTP method. We can look up the requests with these methods:

```go
// Hits returns the number of hits for a uri and method
func (m *MockAssertion) Hits(uri, method string) int

// Headers returns a slice of request headers
func (m *MockAssertion) Headers(uri, method string) []http.Header

// Body returns the request bodies
func (m *MockAssertion) Body(uri, method string) [][]byte
```

And if needed, we can reset the assertion:

```go
// Reset sets all unexported properties to their zero value
func (m *MockAssertion) Reset() error
```

Armed with this, checking our outbound requests becomes a very simple task.

First, update the line that creates the mock server, so that we actually pass a recorder:

```go
...
var record test.MockAssertion
mockServer := test.NewMockServer(&record, test.MockServerProcedure{
...
```

Then, add the following statements at the end of our test function body:

```go
// check request was done
test.Equals(t, 1, record.Hits(path, http.MethodPost))

// check request body
expBody := fmt.Sprintf(`{"param":%q}`, tc.param)
actBody := string(record.Body(path, http.MethodPost)[0])
test.Equals(t, expBody, actBody)

// check request headers
expHeaders := []http.Header{{
	"Authorization": []string{"Basic dXNlcm5hbWU6cGFzc3dvcmQ="},
	"Content-Type":  []string{"application/json;charset=utf-8"},
}}
test.Equals(t, expHeaders, record.Headers(path, http.MethodPost))
```

That's it! We now have tested each and every requirement that was listed above. Congratulations.

I hope you found this useful. As mentioned above, a complete implementation of `FooClient` that passes all tests can be found in the doc folder of [this repository](https://forgejo.ewintr.nl/ewintr/go-kit).

If you have comments, please let me know.
@ -0,0 +1,70 @@
|
||||||
|
+++
|
||||||
|
title = "Why I built my own shitty static site generator"
|
||||||
|
date = 2020-11-09
|
||||||
|
+++
|
||||||
|
|
||||||
|
_Note: this post was featured on [hackernews](https://news.ycombinator.com/item?id=25227181) and [lobste.rs](https://lobste.rs/s/uacznf/why_i_built_my_own_shitty_static_site) and generated lots of discussion. Read on there for more opinions on the subject!_
|
||||||
|
|
||||||
|
On the internet, there is no shortage of good quality [static site generators](https://jamstack.org/generators/) (SSG’s) that you can download for free. [Hugo](https://gohugo.io/), [Jekyll](https://jekyllrb.com/), and hundreds of others are readily available. And they work. You can build all kinds of sites with them. I know that, because I’ve used some of them. Hugo was the driving force behind this website until very recently. Despite that, when I tried to add a new section a while ago, I got rather frustrated with it and decided to build my own generator. It turned out to be a very pleasant experience and not just because I like to program things.
|
||||||
|
|
||||||
|
While working on it, I discovered some of the deeper motivations that drove me to undertake this project. On the surface it would seem an odd thing to do, because it takes a lot of time and it appears to offer little benefit. I did not create any new or spectacular functionality. If you click around this site and think: Hey, but I could totally make this with , then you would probably be right. But that is not the point. There are certain advantages to making it all yourself and I suspect that these advantages trancend the subject of SSG’s and programming. My speculation is that exploring this direction might also be interesting for people who do not like to program, or maybe even for those who don’t like computers that much at all.
|
||||||
|
|
||||||
|
So, why choose this project out so many others that all sound so much more interesting? It is easy to summarize, but without some context it may sound a bit abstract. The real reason for all this work is that I think that a personal site should be personal and to make it personal, one should solely be guided by one’s intuition and not by the mental models of available tools and the restrictions they impose on your thoughts.
|
||||||
|
|
||||||
|
That probably sounded vague and perhaps a little far fetched. After all, as long as you can write the words you want to write, draw the lines you want to draw, you are not limited in your creativity, right? Maybe you are. To make the point more tangible, let me expand on my own situation for a bit. I wil focus on Hugo, since that is the SSG I know best. But the same principles hold for other generators. All other tools even, I believe. Metadata and Organising Thoughts
|
||||||
|
|
||||||
|
As said, the list of available static site generators is endless. But somehow they all seem to focus on [Markdown](https://en.wikipedia.org/wiki/Markdown) as markup language to write your posts in. Markdown is very easy to learn and that is probably the reason why it is so popular. Unfortunately, it is a pretty bad markup language for this use case, as it is very incomplete. It is not really a language. Markdown is better seen as a bunch of shortcuts to simplify writing a few common HTML tags. Out of the box you can only sort of markup the body of a document with it. Titles, paragraphs, lists, etc. But not more than that. As we are dealing with a website, shortcuts for HTML tags can be useful, but we need more. For instance, one also needs metadata, like tags, publishing dates, etc. You do want the latest post to be at the top of the newsfeed, right? Then we must find a way to indicate the time when a post was published.
|
||||||
|
|
||||||
|
In Hugo this is solved with the awkward concept of [frontmatter](https://gohugo.io/content-management/front-matter/). At the top of each Markdown file, one needs to add a block of text that is not Markdown, but another format. You can pick either YAML, TOML, or JSON. In that block you can specify the things I mentioned. Publishing date, author, category, etc. It is all very flexible, you can even define your own metadata. So you can definitely make a nice looking site out of this.
|
||||||
|
|
||||||
|
But the downside is that any blog or article you write is now tied to the system that you use to publish it. Frontmatter is not part of Markdown. It is part of Hugo. So other Markdown tools will not be able to extract that info. Only Hugo can. So if you, like me, like to write all kinds of things, and you want to have some of your writings to end up on your site and others not, a decision that is perhaps made even only after you’ve done the writing, then you have just created a problem. Because now you have to make two piles of writing. One pile for texts that will be published on your site, and another for text that you want to export to, say, PDF. But wait, maybe you also want to compile a selection of your stories and make it into an ebook. And maybe you are really proud of a piece and you want to publish it all three ways at once. It becomes a bit weird, and also very impractical, to store your work then. Are you going to change the document format each time you want to make an export? Are you going to make multiple copies? What happens if you find a spelling error later? This all quickly becomes a big mess.
|
||||||
|
|
||||||
|
Ideally, I want to have one folder for storing all my writing. I want to organize that folder the way I think fits best with the content. Then I want to point the site generator to that folder and it should figure out the rest by itself. What needs to be published on my site? Where should it end up? The metadata should be all that is needed to figure it out. And I want the same thing to be true for all other publishers, generators, indexers, etc. that I use, or may want to use in the future. The only solution is then to store your texts in open, tool agnostic document format that can hold all the relevant info. Preferably a plain text format too. Because that part of Markdown I do like. Using simple text editors, Git version control, yes, give me that.
|
||||||
|
|
||||||
|
Enter [Asciidoc](https://asciidoc.org/). A Markdown so structured and complete that you can write a whole book in it. Yet is has the same simple way of adding markup and it looks very similar. I wrote another post on how I used a subset of Asciidoc to make my generator. The point I want to make here is that a simple, in my opinion very reasonable requirement to not want to be forced to reorganise and duplicate my files in an illogical way, already rules out 90% of the available tools. And, conversely, that merely by adopting one of those existing tools, you have suddenly become a bit restricted in the way you can think about your creative work.
|
||||||
|
|
||||||
|
Think about it. The moment you start anything, the moment where the ideas in your head are not more than undefined glimpses of images or feelings. The moment you have to concentrate really hard to not let the fleeting, still wordless impressions slip. Blink your eyes one time too many and they will be lost, floated away. On that very moment, you get bothered by the question “Where should this end up, once it is finished? Is it a note for my diary, or a book?” That is totally backwards. That should not be the first question about your work, but the last. Not everyone is the same, but for me this upfront question is limiting. Thoughts that are not mature enough to be categorized, are forced to materialize in anyway, so you can put them in the right bucket. And then they slip away. Blank Page Instead of Puzzles and Pieces
|
||||||
|
|
||||||
|
By now, some pepole might be thinking: yes, that is all fine, but some generators, like Hugo can use Asciidoc as input and you can set an alternative content path. Surely you can work something out here and configure things the way it suits you?
|
||||||
|
|
||||||
|
Well, yes, from the face of it, it might be possible to cobble something up. But that is not going to end up well. Going that route, you will get dragged down in an endless cycle of figuring out options and options of options and in the end, you will be happy to get anything on the screen at all.
|
||||||
|
|
||||||
|
Let’s start simple. Some generators say they support Asciidoc, but they don’t do that nativly. At least the ones I’ve seen. That is, you have to install another piece of software to get the functionality. In this case [Asciidoctor](https://asciidoctor.org/). (And Asciidocter in turn requires Ruby, but I think it ends there.) Then the two pieces must be configured to work together and this must be done on overy device you want to use. This is what developers call a dependency and they are to be avoided wherever possible, as they require you to do work, just to keep things running as they are. At the time of writing you can read this in the Hugo documentation on how to configure Asciidoc: “AsciiDoc implementation EOLs in Jan 2020 and is no longer supported. AsciiDoc development is being continued under Asciidoctor. The format AsciiDoc remains of course. Please continue with the implementation Asciidoctor.”
|
||||||
|
|
||||||
|
So what this says is that the people behind Hugo saw that a piece of software they relied on, AsciiDoc, was going to be outdated so they switched to another piece of external software, Asciidoctor, to prevent things from breaking. A sensible move. But for you, the user, things are now already broken, because now you have remove the first piece of software, install the second, and configure things again to make them work together. And again, this must be done on all your devices. You get to choose between options, but the options are not stable and require work by you, the user. Sometimes the options have some dependencies themselves, repeating the problem. Not a fun way to spend an afternoon.
|
||||||
|
|
||||||
|
But enough about Asciidoc. Let’s talk about templates.
|
||||||
|
|
||||||
|
It is wellknown that Hugo has a difficult templating language. This is because Hugo is built in Go and leverages the [template functionality](https://golang.org/pkg/html/template/) of the Go standard library. That functionality is fast and elegant, but very much geared to programmers. I knew this beforehand, but since I am a Go developer, I figured this would be an advantage, not a disadvantage. I do this Go stuff all day. It has payed for the laptop I am writing on right now and the chair I am sitting in. Surely this will be easy?
|
||||||
|
|
||||||
|
Not so much. Writing Hugo templates turned out to be a moderately frustrating experience. It was an endless game of guessing variables, types and trying to combine them into something useful by chaining functions. There is documentation, every variable and function is listed, but a crucial thing is missing: the big picture.
|
||||||
|
|
||||||
|
The act of templating consists of two parts: data, and what to do with the data. Let’s say for instance we want to render a title. In the template we indicate that titles have a certain color, size and font. For each page we get a new title, the data, but we render them all the same way, like we specified. That is the job of a template. Telling how we want data to be rendered.
|
||||||
|
|
||||||
|
However, a page contains a lot more data than just the title. In fact, in Hugo you get access to everything. Everything that is content. Including all metadata and, crucially, how it all relates together in the big abstract, imagined structure that makes up a site. This mental picture is not shown in the documentation. And worse, it is a generalized mental picture of a site. Because Hugo is a universal SSG and you must have the option to build any type of site. Count the levels of unwinding you have to do before you can translate this to your actual site: generalized (one), abstract (two), mental (three) picture.
|
||||||
|
|
||||||
|
Of course, making use of a template already imposes some abstractions. They can be very helpful, I ended up using a few too. But sometimes it is better for all parties involved (i.e. you and your tools) to just hardcode some things.
|
||||||
|
|
||||||
|
These two issues, the templating and the external dependencies, have something in common. To solve them, your mind must switch to an analysing mode. There is a black box with buttons on it. If you figure out how it works inside, you can push the buttons in the right order and make it do the thing you want. If the box contains a lot of complex gears and levers, this can be a hard riddle to solve and you need to spend more effort. You will start to ask yourself questions along the lines of: What did the makers of the box think when they designed it? What was it designed for exactly? What problem does it solve and what would seem logical to them? They probably catered to the most common needs as they saw them.
|
||||||
|
|
||||||
|
If you want solve this riddle, you have to leave your own framing of the the problem aside for a moment and adopt theirs. You have to step out of your own thinking and into theirs.
|
||||||
|
|
||||||
|
At one point, if you are succesful, you’ve grasped it and then you want to get back to your own, original frame. See how you can connect the two. But more often than not this is hard, or even impossible. By making their way of doing things your own, you have overwritten your original perspective. At least in part. This is not always a bad thing, but it is important to realize that it happens. That you might not want that. Compare this with starting from scratch. No solutions to other peoples problems, just your own. This means creating a solution all by yourself, which is hard. But you can al least be sure that it fits your problem. The Spectrum of Software Tools
|
||||||
|
|
||||||
|

So, should everyone and their mother start programming everything from scratch, even if they have no interest in making software whatsoever? That would be impractical. And probably bad for their motivation. Not to mention that for a lot of people, programming feels exactly like that magical black box with buttons and complicated machinery inside, so that would be counterproductive. Nevertheless, I think there are some general lessons to draw from this.

All software tools make some kind of trade-off between flexibility and ease of use. Some make a better compromise than others, but a compromise it always will be. The easiest tool to use is the one that has only one button. Push it and you get a complete result. But in order to do that, the tool, or actually the creators of the tool, have to make all kinds of decisions for you, both big and small. If you want more control over the outcome, that is possible, but by definition that means that you have to give more input. More buttons that need to be pushed, more dials to adjust. The level of control you have will match the level of input you have to give. If you extend this far enough and add every control imaginable, you end up with the very intricate and elaborate tool that we call a programming language. In a programming language, every little detail of the end result is yours to dictate. But on the flip side, it requires a lot of input and effort to get something moving.

Site generators can be anywhere on this scale. One could argue that services like Facebook and Twitter are the ultimate "require only the push of one button" versions in this space. Thanks to them, anyone can publish without having to invest time and effort. Write your text, push the button and it is there for everyone to see. Design, structure, notifying readers, it is all magically there.

But remember, if you don't make the decisions, someone else has to do it for you. It might be a good feeling to outsource all these difficult problems. Maybe you assume that it is for the better, because you think that other person knows more about the necessary mechanics. They probably do. But on the other hand, that other person does not know what is inside your head.

If Twitter is the only publishing platform you'll ever use, then, without trying, you will naturally start to write texts that are 280 characters or less. That is just how most people work. But maybe this limitation irritates you often enough that you start to look for a way around it. You search online and you find apps like [Threadreader](https://threadreaderapp.com/), that let users string multiple tweets into one document as if they were a single text. This is a solution to the problem you had, but if you read your new posts carefully, you will notice that they don't "feel" right. The limitation of 280 characters is still there, but it is hidden. One tweet becomes one paragraph, so you are bound to very short paragraphs and as a result the flow of your text is still very... staccato. Even though your texts can now be much longer, you still can't write the way you want. Not to mention the clumsy process of composing the multiple tweets in the right order.

In a situation like this, you would have been much better off starting a [Wordpress](https://wordpress.com/) blog. One step up on the scale of tools, a little more work to do, but now you are able to write exactly the way you want. No programming required. If you want to have more control, you have to give more input. But there is a major difference between using one tool with two buttons and using two tools with one button.

So, my advice is to be aware of the restrictions and the hidden models of the tools you use as much as possible. Maybe it is not necessary to become a programmer. But imagine for a moment that you are one. Let your mind wander and see what comes up. What would you build? How would it work? And if you've thought of something, take as many steps on the scale as you're comfortable with and see if you can make it work. Trust me, it will feel liberating.

## My Shitty SSG

In the title, I mention that my generator is "shitty" and it is. It does not have many features. It is riddled with bugs and edge cases that it can't handle. But that is not important. It works for my problem. If I don't like something, I can fix it. If a bug doesn't bother me, I'll let it be. Like all creative endeavours, it is important to just start and get it out. You can always improve it later.

I put the source online [here](https://forgejo.ewintr.nl/ewintr/shitty-ssg). See here for a high-level overview. Not for people to blindly copy and run (why would you?), but to give some inspiration for people who are still on the fence. To show them that shitty does not have to be hard and that it can be good enough, as long as it is the right kind of shitty. Your kind of shitty.

@ -0,0 +1,31 @@

+++
title = "Cgit and go modules"
date = 2021-05-19
+++

Recently I started to self-host my public git repositories. There are several options to choose from if you want that sort of thing, like [Gitea](https://gitea.com/) or [Gogs](https://gogs.io/) and others. These all look like fine applications, but they all try to imitate the big hosted platforms like GitHub and GitLab. They do more than making your repositories available to the web; they also include issue trackers, users, project management, etc. That's a bit too much for me. I only want to offer the code of the personal projects that I write about, nothing more.

Fortunately I found [cgit](https://git.zx2c4.com/cgit/about/). Cgit does just that. No collaboration features, but it does its job in a fast, simple and clean way. If it's good enough for the [Linux kernel](https://git.kernel.org/pub/scm/), it's good enough for me. [This blogpost](https://blog.stefan-koch.name/2020/02/16/installing-cgit-nginx-on-debian) covers how to set it up on Debian.

If you use this for hosting Go projects, there is however an extra step you should take. The location of a remote Go module is indicated by import strings at the top of your code and in your `go.mod` file, in the form of a URL. But a string alone is not enough to make everything work smoothly in the background when you, for instance, use `go get` or `go mod tidy`. There are different types of version control systems and different protocols to transport the data, like `http`/`https` and `ssh`. Did you know that git even has its own protocol for serving repositories? (See [git-daemon](https://git-scm.com/book/en/v2/Git-on-the-Server-Git-Daemon) for that. It's even lighter than cgit, but insecure, so `go get` doesn't want to download over it without extra confirmation, which is a hassle.)

The extra ingredient necessary for making `go get` work with a remote repository is an extra `<meta>` tag in the header of the repository's website. See the documentation [here](https://golang.org/cmd/go/#hdr-Remote_import_paths) for the details.

Luckily, [someone already modified](https://mygit.katolaz.net/katolaz/cgit-70/commit/b522a302c9c4fb9fd9e1ea829ee990afc74980ca) `cgit` with an option to provide this tag: `extra-head-content`. We can pass the necessary `<meta>` tag unmodified through this setting. For instance, this is the configuration of one of the repositories on my instance:

```
repo.url=go-kit
repo.owner=Erik Winter
repo.path=/var/repo/public/go-kit/
repo.desc=a small, personal collection of useful go packages
repo.extra-head-content=<meta name="go-import" content="git.ewintr.nl/go-kit git https://git.ewintr.nl/go-kit">
```
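
Rendered, this puts the tag in the `<head>` of the repository's page, which is what `go get` inspects when resolving the import path. Roughly:

```
<head>
  ...
  <meta name="go-import" content="git.ewintr.nl/go-kit git https://git.ewintr.nl/go-kit">
</head>
```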

## Sources

- [git.zx2c4.com](https://git.zx2c4.com/cgit/about/)
- [blog.stefan-koch.name](https://blog.stefan-koch.name/2020/02/16/installing-cgit-nginx-on-debian)
- [golang.org](https://golang.org/cmd/go/#hdr-Remote_import_paths)
- [mygit.katolaz.net](https://mygit.katolaz.net/katolaz/cgit-70/commit/b522a302c9c4fb9fd9e1ea829ee990afc74980ca)

@ -0,0 +1,30 @@

+++
title = "Depend less on dependencies with the adapter pattern in go"
date = 2021-03-31
+++

The package management in Go is pretty convenient. Just run `go get` and you have another nice library installed on your computer. Just for you to use, saving you hours of developing and debugging the functionality yourself. But there is a downside to it, apart from the fact that downloading and running arbitrary code from the internet is generally not a smart thing to do, and that is that your program has now become dependent on this external library.

This is obvious and intended, you might say. The library was created to be used and you imported it to do exactly that. Let's say you implicitly trust the authors of the library not to infect your program with viruses, backdoors and other malware, although we would rather not think too much about these things when we do our daily work. Another potential problem is that the library project has a lifecycle of its own. It has its own release schedule; bugs are fixed at its maintainers' convenience. Perhaps the library develops in a direction you don't want it to, or maybe it gets abandoned and does not get developed anymore at all. Then you're stuck with it. This gets us to the most ignored risk of using external code, and that is: tight coupling.

It is all too convenient to follow the directions in the documentation and get things working in record time. But by doing so you rely on the path that the library developers have set out for you, both in the abstract model-of-the-world sense and in the concrete these-are-the-names-of-the-methods-thou-shalt-use sense. If you integrated this in your code without planning, chances are that you've now painted yourself into a corner. It is not that the provided directions are necessarily bad in themselves. But if you follow them and end up somewhere unpleasant, there should be an easy way back. Tight coupling can be a big problem then.

The solution is to implement the adapter pattern. An adapter is a piece of code that can bridge one type of interface (or API, or set of methods, or...) to another and so make the code that uses those interfaces compatible. By using it, we are stating beforehand that the library is incompatible with our code. We keep it separate and use the adapter as a bridge to make it work. If for some reason we don't like the library anymore, we can make a new adapter and use another one. As a bonus, having this explicit bridge also gives the opportunity to add a version suitable for testing. Just bridge to some mock code. This comes in handy in case the library we are using is a client for a webservice, or a database driver, that normally is hard to mock.

One interface in this story is that of the library. The other is the one your program uses. What is the interface my program uses? you might ask. The answer to that is: what an excellent question! That is something you have to come up with yourself. Start thinking about what it is that your program needs, instead of what the library provides. The major benefit of using this pattern is that you can structure your own code the way it fits best with the problem it tries to solve.

## Steps to take

How does this work? Just follow these simple steps (a sketch follows after the list):

- Create a separate package for this interface, for instance `/pkg/thing`.
- Add the types and interface you need for your functionality, for instance in `/pkg/thing/thing.go`. Remember, interfaces in Go should be as small as possible, for flexibility and composability. Favor multiple small interfaces over one big one. Ignore all the grandiose ideas in the library; only put in here what makes sense for your project.
- Add an in-memory implementation of this interface, for instance in `/pkg/thing/memory.go`, with tests for it in `/pkg/thing/memory_test.go`. If the abstractions made sense, you should be able to get full coverage without much sweat.
- Start using this in-memory implementation in the rest of the program and its tests. This will help you decide whether this interface is really the one you need, or whether it needs some tweaking. It will also make it easier to set up tests for more complicated functionality.
- When you are happy with it, make an implementation with the actual library. For instance in `/pkg/libraryname.go`. This will probably be pretty hard to unit test, as libraries often get used to communicate with other systems like databases, web services, etc. This will need coverage from integration tests. Fortunately, you have now isolated this difficulty to a small corner of the code.
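
As an illustration, here is a minimal sketch of the first three steps. All names (`thing`, `Repository`, `Memory`) are hypothetical; the point is the shape, not the details:

```go
// pkg/thing/thing.go
package thing

import "errors"

var ErrNotFound = errors.New("thing not found")

// Thing is the domain type the rest of the program works with.
type Thing struct {
	ID   string
	Name string
}

// Repository is deliberately small: only what this program needs,
// not what the library offers.
type Repository interface {
	Find(id string) (Thing, error)
	Store(t Thing) error
}

// Memory is the in-memory implementation, useful in tests.
// It would live in pkg/thing/memory.go.
type Memory struct {
	things map[string]Thing
}

func NewMemory() *Memory {
	return &Memory{things: map[string]Thing{}}
}

func (m *Memory) Find(id string) (Thing, error) {
	t, ok := m.things[id]
	if !ok {
		return Thing{}, ErrNotFound
	}
	return t, nil
}

func (m *Memory) Store(t Thing) error {
	m.things[t.ID] = t
	return nil
}
```

An adapter backed by the actual library would implement the same `Repository` interface, translating between the library's types and `Thing`.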

## Example

An example of what this looks like in practice is the `log` package in my small personal [go kit](https://forgejo.ewintr.nl/ewintr/go-kit) repository. There is an interface definition in `log.go`, together with two implementations, one for [Logrus](https://github.com/Sirupsen/logrus) and one for the [gokit.io](https://gokit.io/) log package, and an implementation suitable for use in testing.

Both libraries have their own structure and their own set of features, but changing one for the other is easy.

@ -0,0 +1,92 @@

+++
title = "Faster transfers between go services with ndjson"
date = 2021-05-27
+++

A lot of services have the job of querying other services for data and the most straightforward way to do that is to use JSON over a REST API. If the data consists of a big list of things, that means sending over a large JSON array with objects.

Marshalling and unmarshalling the data can become quite expensive if there is enough of it. A simple way to cut down on a part of those costs is to not use a big array for all those items, but to send them just one by one, each one starting on a new line.

Let's say normally the body of your response would look like this:

```
[
  {...},
  {...},
  {...}
]
```

You could change that to the following without losing information:

```
{...}
{...}
{...}
```

Separate each JSON object with a newline; that is what ndJSON means: newline delimited JSON. This avoids creating (and interpreting) the large structure that binds all the objects together. The objects are already in the same response body, so it didn't add much information anyway.

## Example

In pseudo code. Let's say we have a list of Things:

### Sending service

```go
body := []byte{}
switch c.Request.Header.Get("Accept") {
case "application/octet-stream": // simple content negotiation, send only if the receiver understands it
	for _, thing := range things {
		jsonThing, err := json.Marshal(thing)
		if err != nil {
			// do something about it
		}
		body = append(body, jsonThing...) // add the object
		body = append(body, '\n')         // and a newline after it
	}
default: // normal json marshaling for those who don't expect anything special
	var err error
	body, err = json.Marshal(things)
	if err != nil {
		// do something about it
	}
}
```

### Receiving service

At the place where you interpret the response body:

```go
defer resp.Body.Close()
jsonThings, err := getLinesFromBody(resp.Body)
if err != nil {
	// do something
}

var things = []*Thing{}
for _, jsonThing := range jsonThings {
	var thing Thing
	if err := json.Unmarshal([]byte(jsonThing), &thing); err != nil {
		// do something
	}

	things = append(things, &thing)
}
```

Use the standard `bufio.Scanner` to transform the body into a slice of strings:

```go
func getLinesFromBody(r io.Reader) ([]string, error) {
	var lines []string
	scanner := bufio.NewScanner(r)
	scanner.Buffer([]byte{}, 1024*1024) // if necessary, increase max buffer size from the default 64KB to 1MB
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		return []string{}, err
	}

	return lines, nil
}
```

@ -0,0 +1,45 @@

+++
title = "Getting Things Email: Email based task management"
date = 2021-05-11
+++

It's a rough estimate, but I think there are at least a billion apps and systems out there that can help you organize your daily tasks. All of them suck. Task management apparently is a very personal thing.

Since I too am a person with things to do, I tried a lot of them, but none of them fit me well enough to just pick one and leave it at that. I kept trying new ones and configuring them to my tastes, over and over again. Some call it "productivity porn", spending more time on the system than on actually doing stuff. There may be some truth in that, but a task management system should fit you. Just like a pair of shoes should fit your feet. If they don't, over time the agony will build up, until at some point your feet hurt so much that every pair feels wrong. When that happens, the only thing you can do is lie down on a couch for a while with your feet suspended in the air. After that, you want to get something that is tailor made for you. And since I am also a developer, the only natural next step is to be my own cobbler and build it myself.

## Requirements

- Open source/own my own data.
- Works on all my devices, with native apps for the ones I use daily.
- Does not cost a ridiculous amount of money.
- Is synced, but can handle asynchronous life.

The first two points seem kind of obvious to me, but in practice they already eliminate every existing solution out there. A lot of companies charge money and try to make things "easier" by building "special" apps and services that can work together seamlessly. However, that only works if you stick to their plan, and their plan only involves the kind of people they consider normal or mainstream. Or just the subgroup of them that wants to spend money on these kinds of things.

My devices are: a few Linux laptops and one or two phones. On the laptops I spend most of my time in terminals. My daily phone runs SailfishOS. My soon-to-be daily phone is a Pinephone and runs Mobian, so Linux again. Both can run terminals, but have quite a different form factor. An app would be nice there. This combination alone already rules out everything I encountered.

For the money part, I am not against paying for a service and in fact I have paid for some of the ones I tried for a longer time. However, I felt cheated each time. A task is basically a line of text. A bit more if you add a description and metadata, but it should occupy less than a speck of dust on our gigantic hard drives. Still, companies charge a couple of euros a month for this, which doesn't sound like a lot, unless you start to compare it to other types of services. Take Netflix, for instance, which is in the same ballpark, but is able to deliver multiple streams of non-stop HD video for that money. Try counting the bytes on that and see how many tasks you could fit in there. Or take the subject of this project: email. You can get gigabytes of email storage for free, or almost free if you don't want to be exploited. Aside from storage there is spam filtering, interoperability, a web interface, etc. Email hosting sounds a lot more complicated than hosting a few lines of text with some checkboxes you can tick. Yet most of the time, the latter costs more.

The last item, handling asynchronous life, is not something I have had many problems with so far; most existing solutions can handle it. What I mean by it is that it should be possible to be offline for a while, while working with the app, and then, when connectivity is restored, all changes get synced between the devices. All solutions rely on a central server to manage recurring tasks, conflict resolution, etc. It would be nice if the system worked without a central server. But in any case, if there is a need to synchronize and somehow this leads to conflicts, I would like to have a basic interface to resolve them.

## Email is the (almost) solution

When you think about it, in light of the above, tasks map really well to emails:

- Both are a collection of separate units with a title and a description.
- Every device on the planet has a client for it.
- Synchronization and resilience are built in.
- Lots of options for hosting; can be free or nearly free.
- Easy integration, most tools have a way of sending a mail on a trigger.
- Organizable in folders.
- Can potentially have attachments.
- Easy cooperation. Can have joint task lists with a separate account, can also send to lists of other people.

## Drawbacks

The more you think about it, the more logical it seems to store your tasks as emails. There are a couple of things to figure out, however, and of course there are drawbacks:

- User interface. How to update a task?
- Encryption

Creating tasks seems easy, just send an email to an account and it's there. Delete it after you've done the thing and it's gone. But often, especially when you follow a GTD-style approach with an inbox for "stuff", you want to separate creating tasks from editing and finalizing them. This can't be done directly. Although some email programs and webapps have a "Drafts" folder with emails that you can edit before you send them out, there is no standard way of doing this and it is not always possible on every system or protocol. The trick to getting this to work is to figure out a way to edit emails that works in every email client.

Another thing that is certainly not possible with every random email client is encryption. There is no way to encrypt an email that works with every client without a lot of error-prone user interaction and configuration. So the requirement that it works everywhere rules out the possibility of storing your tasks encrypted. This is unfortunate, but it is a trade-off that can't be worked around, I think. At least for the moment.

## First version

With the above ideas in mind, I created a very, very limited prototype and I have been using it exclusively for some time now. I am convinced that this is something that could work well, so I am going to continue developing it. Code is [here](https://forgejo.ewintr.nl/ewintr/gte). Stay tuned.

@ -0,0 +1,73 @@

+++
title = "JSON structured logging with nginx"
date = 2021-03-21
+++

After writing my [post](/simple-log-file-analysis-for-your-kubernetes-pods-on-the-command-line/) on how to do simple analysis of JSON structured logs on the command line, I realized I could apply the same solution to website statistics. If only I could make Nginx log in the same format.

Web stats have always been cumbersome for some reason. Most people resort to a JavaScript tracker, but that is a complex solution. It requires an online service that the tracker can report to and it requires JavaScript on the client side, which is not always available. Not to mention the privacy issues, performance drain, security concerns and all other forms of morally questionable misery that the advertising industry dumps on everyone's everyday internet experience.

Since most people use a cloud service to deploy their site, they don't really have another option, as they don't have access to the webserver logs. But since I do have access, I figured I could just make my own reports. I actually only care about the number of requests (as an indication of what users find interesting) and the referer (so I can chime in if there is a discussion elsewhere about a page), so the logs should be sufficient.

As it turns out, Nginx does not have a magical `json=true` option, but it does have an `escape=json` directive that you can use when defining your own log format. So the solution is to just write the JSON you want to have and use this directive to escape the variables.

In your `nginx.conf`, in the `http` block, define a new log format:

```nginx
log_format jsonformat escape=json '{'
    '"time_local":"$time_local",'
    '"remote_addr":"$remote_addr",'
    '"remote_user":"$remote_user",'
    '"request":"$request",'
    '"status": "$status",'
    ... more fields
'}';
```

And then, in the same `nginx.conf`, or in your site configuration, depending on where you configure the logging, update the format that is used:

```nginx
access_log /var/local/nginx/my_log.log jsonformat;
```
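
Every request then ends up as a single JSON object on its own line. With the fields above, a log line looks something like this (the values are made up):

```
{"time_local":"21/Mar/2021:10:12:33 +0100","remote_addr":"203.0.113.7","remote_user":"","request":"GET /index.html HTTP/1.1","status": "200"}
```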

That's it. If you're not sure what fields you want to have in your output, [this blog post](https://blog.tyk.nu/blog/structured-json-logging-in-nginx/) gives a long list of options:

```nginx
log_format jsonformat escape=json '{'
    '"time_iso8601": "$time_iso8601", ' # local time in the ISO 8601 standard format
    '"msec": "$msec", ' # request unixtime in seconds with a milliseconds resolution
    '"connection": "$connection", ' # connection serial number
    '"connection_requests": "$connection_requests", ' # number of requests made in connection
    '"request_id": "$request_id", ' # the unique request id
    '"request_length": "$request_length", ' # request length (including headers and body)
    '"request_time": "$request_time", ' # request processing time in seconds with msec resolution
    '"remote_addr": "$remote_addr", ' # client IP
    '"remote_port": "$remote_port", ' # client port
    '"remote_user": "$remote_user", ' # client HTTP username
    '"ssl_protocol": "$ssl_protocol", ' # TLS protocol
    '"ssl_cipher": "$ssl_cipher", ' # TLS cipher
    '"http_user_agent": "$http_user_agent", ' # user agent
    '"http_referer": "$http_referer", ' # HTTP referer
    '"http_host": "$http_host", ' # the request Host: header
    '"server_name": "$server_name", ' # the name of the vhost serving the request
    '"scheme": "$scheme", ' # http or https
    '"request_method": "$request_method", ' # request method
    '"request_uri": "$request_uri", ' # full path and arguments of the request
    '"server_protocol": "$server_protocol", ' # request protocol, like HTTP/1.1 or HTTP/2.0
    '"bytes_sent": "$bytes_sent", ' # the number of bytes sent to a client
    '"status": "$status", ' # response status code
    '"pipe": "$pipe", ' # "p" if request was pipelined, "." otherwise
    '"upstream": "$upstream_addr", ' # upstream backend server for proxied requests
    '"upstream_connect_time": "$upstream_connect_time", ' # upstream handshake time incl. TLS
    '"upstream_header_time": "$upstream_header_time", ' # time spent receiving upstream headers
    '"upstream_response_time": "$upstream_response_time", ' # time spent receiving upstream body
    '"upstream_cache_status": "$upstream_cache_status"' # cache HIT/MISS where applicable
'}';
```

## Sources

- [stackoverflow.com](https://stackoverflow.com/questions/25049667/how-to-generate-a-json-log-from-nginx)
- [blog.tyk.nu](https://blog.tyk.nu/blog/structured-json-logging-in-nginx/)
- [www.nginx.com](https://www.nginx.com/blog/diagnostic-logging-nginx-javascript-module/)

@ -0,0 +1,109 @@

+++
title = "Simple CI/CD with bash and git"
date = 2021-11-29
+++

A simple way to run a pipeline after an update of a project is to put the actions in a bash script and to let git trigger it.

To do so, add a `post-receive` script in the hooks folder of the repository on the server:

```bash
#!/bin/bash
TARGET="/home/webuser/deploy-folder"
GIT_DIR="/home/webuser/www.git"
BRANCH="master"

while read oldrev newrev ref
do
	# only checking out the master (or whatever branch you would like to deploy)
	if [[ $ref = refs/heads/$BRANCH ]];
	then
		echo "Ref $ref received. Deploying ${BRANCH} branch to production..."
		git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f
	else
		echo "Ref $ref received. Doing nothing: only the ${BRANCH} branch may be deployed on this server."
	fi
done
```

This will deploy straight to the target folder from the repository by doing a checkout there.
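
One detail that is easy to forget: git only runs hooks that have the executable bit set, so make sure the script is executable:

```bash
$ chmod +x /home/webuser/www.git/hooks/post-receive
```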

## With staging area

A slightly more advanced version is to have the `master` branch deploy to the production folder and all other branches to a test or staging folder that can be viewed by you, but not by the public. That way, the result of the changes can be reviewed before putting them live.

Whenever automations become more complicated, it is wise to put them in separate scripts.

The `post-receive` script:

```bash
#!/bin/bash

while read oldrev newrev ref
do
	if [[ $ref = refs/heads/master ]];
	then
		echo "deploying to production from post-receive hook..."
		/path/to/deploy-prod.sh
	else
		echo "deploying test from post-receive hook..."
		/path/to/deploy-test.sh
	fi
done
```

`deploy-prod.sh`:

```bash
#!/bin/bash
SRCDIR=/tmp/deploy
RESULTDIR=/tmp/deploy/public
TARGETDIR=/var/www/html/my-website.nl

echo "* checkout project"
# assumes a clone of the repository already exists in $SRCDIR
mkdir -p $SRCDIR && cd $SRCDIR && git checkout master && git pull

echo "* generate html"
cd $SRCDIR && [commands that put result in $RESULTDIR]

echo "* deploy to webserver"
rm -r $TARGETDIR/*
mv $RESULTDIR/* $TARGETDIR/

echo "* done"
```

`deploy-test.sh` is trickier, because it needs to figure out which branch was pushed and needs to be deployed. One could probably pass some arguments, but if there are not too many people working on it, it is also an option to just pull whatever branch was updated last:

```bash
echo "* checkout site"
cd $SRCDIR && git fetch && git checkout $(git rev-parse $(git branch -r --sort=-committerdate | head -1))
```

## Users and permissions

While we are talking about simple projects here, one should still keep privileges separated and use system accounts for trivial automations. Let's say the repository hooks run as user `git`, but the deployment script is owned by `web`.

One option is to use `sudo` in the post-receive script to let `git` impersonate `web` and to configure `sudo` so that these deploy commands, and only these, may be issued by `git` as `web` without having to enter a password.

Start `visudo` and add the following line to the config:

```
# User privileges specification
git ALL=(web) NOPASSWD:/path/to/deploy-prod.sh,/path/to/deploy-test.sh
```

After saving that, the deploy scripts can be triggered in the `post-receive` hook by:

```bash
sudo -u web /path/to/deploy-prod.sh
sudo -u web /path/to/deploy-test.sh
```

## Sources

- [stackoverflow.com](https://stackoverflow.com/questions/28106011/understanding-git-hook-post-receive-hook)
- [stackoverflow.com](https://stackoverflow.com/questions/2427288/how-to-get-back-to-the-latest-commit-after-checking-out-a-previous-commit)
- [www.baeldung.com](https://www.baeldung.com/linux/run-as-another-user)

@ -0,0 +1,302 @@

+++
title = "Simple log file analysis for your kubernetes pods on the command line"
date = 2021-01-23
+++

Recently at my current job there was a lot of complaining about how terribly slow our instance of Kibana was. We use that, as many teams do, to view and analyze the log files generated by the services that run on our Kubernetes cluster. The root cause was that the logs were simply too large, because we logged a lot of stuff that wasn't necessary. During the conversation, I commented that I never noticed any problem with Kibana, because I never used it. To me the tool feels complicated, cumbersome and, even when it functions properly, way too slow. The natural followup question was what I used instead and the answer was: simple tools that work in the shell and have existed for ages.

This led to me writing a quick guide on these tools and how to use them. I figured more people might be interested, so I put it up here too. With this background information, it is probably understandable that the tips below are focused on a specific, narrow use case. Reading further is most useful if you:

- Have seen a shell before, but are not terribly comfortable with using it.
- Have a Kubernetes cluster and want to check the logs from the services running on it.
- Have ssh access, or another way to run `kubectl` in a shell.
- Have [JSON structured logs](/my-take-on-logging-in-go/).

The information is broken down into the following sections (click on them to jump):

- [General Principles](#general-principles)
- [Environment setup (not required)](#environment-setup-not-required)
- [Working with plain text: grep](#working-with-plain-text-grep)
- [Working with JSON: jq](#working-with-json-jq)
- [Sorting, counting and grouping](#sorting-counting-and-grouping)
- [Examples](#examples)

_Note: later I discovered that the same techniques can be used for your webserver statistics too. Check [this post](/json-structured-logging-with-nginx/) to see how you can get Nginx to produce the same kind of structured logging as is discussed here._

## General Principles

### SSH

Probably every developer working at this level knows what `ssh` is and has set up proper key based authentication for it. That is, you can do the following and not get an error message:

```bash
$ ssh username@remote.server
```

If you do get an error, ask your local devops person how to fix it. If you are the local devops person, search the internet for "ssh key setup".

What might be less known, is that with `ssh`, you can run commands on a remote server transparently from your laptop. Just type the command you want to run on the remote machine after the `ssh` command and press enter. This command on your local machine:

```bash
$ ssh username@remote.server ls
```

will give you the contents of your home folder on the remote server. Together with a proper alias for the `ssh` part (see below), this gives lightning fast access for simple operations.

### Kubectl

The main command for interacting with the Kubernetes cluster is `kubectl`. Of course, interacting with a cluster is a complex subject matter that requires careful study to become proficient with, but for our purposes we can make do with a few commands, like `kubectl get pods` to get a list of pods running in an environment and `kubectl logs XXX` to get the logs of XXX.

Useful options for `kubectl logs` are:

- `-l<query>` for combining the logs of multiple pods and containers. For instance, `-lapp=user-management-app` gives logs for all running instances of the user-management-app in that environment.
- `--since=YY` gives only output from the last YY, where YY is a period like `3s`, `5m` or `24h`.

```bash
$ kubectl logs -lapp=some-service --since=5m
```

will give you all logs from all instances of the some-service service from the last five minutes.

See `kubectl help logs` for more.

### Directing input and output

Shell commands are designed to work with lines of text. That means that the output of a command is often a bunch of text lines (separated with a newline), but also that it accepts a bunch of text lines (again, separated with newline characters) as input. This means that we can easily chain multiple commands without having to deal with the intermediate results. This chaining is done with the pipe symbol: `|`.

`ls` gives the contents of a folder and `sort` can sort lines in alphabetical order. So `ls | sort` also gives the contents of a folder, but sorted. (This is actually not very useful, because `ls` output is already sorted by default. To make it more interesting, try `ls | sort -r`. The `-r` option specifies reverse sorting.)

Sometimes you do want to save the intermediate results to a file. Get the information while it's fresh, store it and do some analysis later. To save the output to a file, use `command > filename`. The file does not need to exist. If it does exist, this command will overwrite it without warning. Use `>>` instead of `>` to append the new output at the end of the file.

Use `cat` to read lines from a file. The following will give the same output as the `ls | sort -r` from above, but now this also stores the (intermediate) results in files:

```bash
$ ls > contents.txt
$ cat contents.txt | sort -r > sorted_contents.txt
$ cat sorted_contents.txt
```

### Limiting output

Log files by their nature are very long and having them scroll through your terminal window in their entirety is a sure way to get bored and confused at the same time. An easy way to test the output of your command without the risk of a data overload is with `head` and `tail`. By default, these commands limit the output to the first (`head`) or last (`tail`) 10 lines. Both have a parameter `-n` that can be used for a different number of lines:

```bash
$ cat long_file.log | tail -n 5
```

`tail` also has the parameter `-f` for following. After the last lines are shown, the program does not exit, but waits until more lines come and then prints them. This can be helpful when doing some live debugging. Set up a pipe that only prints the lines you are interested in and close it with `tail -f`. Then you can watch the log as you perform different actions in the app, without being distracted by log lines you're not interested in flying by.

### Splitting long commands over multiple lines

With all this chaining of commands it is possible that they become less readable, especially when the lines in your terminal start to wrap. There are two ways to spread your commands over multiple lines. The first one is with a `\` at the end of the line:

```bash
$ my-command-with-lots-of-args \
> --arg1=a \
> --arg2=b \
> --arg3=c
```

The `>` is shown to indicate that more input is expected.

But if we use pipes, we don't need to do that, as the pipe itself already signals bash that there is more to come:

```bash
$ command-1 | command-2 |
> command-3 |
> command-4
```

If your prompt wants more input after pressing enter, but you did not use any of the two mentioned symbols, then you probably forgot to close a quoted string somewhere.

## Environment setup (not required)

As mentioned, this is not required, but using your `.bashrc` (or the Mac equivalent, I don't know if it is called the same there) in combination with the `alias` command can be a great time saver. `alias` lets you define abbreviations; `.bashrc` is a place for things you want to run each time after you open a terminal and log in.

At the beginning of this document it was mentioned that `ssh` can be used to run commands on a remote server. But typing in the `ssh username@remote.server` over and over again to run something on the other server can become tedious. If you add this line to your `.bashrc` on your laptop you can do it much quicker:

```bash
alias servername='ssh username@remote.server'
```

From now on, you only have to use `servername` to login to that server and

```bash
$ servername ls
```

gives you the contents of your home folder on that server. If you use this often, you could shorten it to `s`.

(Note that you have to open a new shell for this to work, because `.bashrc` is only read when the shell is started. If you don't want to do that, you can do `source ~/.bashrc` to process the file right in the current shell without logging in again.)

This trick can be used on both your laptop and the remote server. Some suggestions for aliases are:

```
alias acc='command to switch to acceptance environment'
alias prod='command to switch to production environment'
alias k=kubectl
alias kn='kubectl -n "namespace"'
alias x=exit
```

## Working with plain text: grep

If the logs are stored in JSON, processing them with pure text tools does not give the best results. See the next section on how to properly deal with JSON. But often we don't need to do advanced things and some text tools can be very helpful for some off-the-cuff filtering of log files.

Most people know that using `grep` is a simple way to filter lines that contain some text:

```bash
$ echo $'one\ntwo\nTHREE' | grep "two"
two
```

(`echo` sends out text, the `$'...'` notation converts `\n` to a newline.)

But it is good to also know some options:

Ignore case with `-i`:

```bash
$ echo $'one\ntwo\nTHREE' | grep -i "three"
THREE
```

Invert the search with `-v`:

```bash
$ echo $'one\ntwo\nTHREE' | grep -v "two"
one
THREE
```

Only whole words with `-w`:

```bash
$ echo $'one\ntwo\ntwoandahalf' | grep "two"
two
twoandahalf
$ echo $'one\ntwo\ntwoandahalf' | grep -w "two"
two
```

Count the results with `-c`:

```bash
$ echo $'one\ntwo\ntwoandahalf' | grep -c "two"
2
```

Use regular expressions with `-E`:

```bash
$ echo $'one\ntwo\nthree' | grep -E "^[a-z]{3}$"
one
two
```

There are a lot more tools available for manipulating and transforming lines of text, but they don't work that well for lines of JSON, so we'll skip them here.

## Working with JSON: jq

For manipulating JSON on the command line there is only one tool you need: `jq`. It is like a Swiss army knife, it can do anything you might think of. Even just piping a piece of JSON through `jq` is useful, as it will pretty print it by default, making it readable:

```bash
$ echo '{"some":"stuff"}' | jq .
{
  "some": "stuff"
}
```

As is to be expected, this short introduction covers only a fraction of the available functionality. Most examples and tutorials on the internet will discuss `jq` in the context of large JSON strings with deeply nested structures. But in this case we are dealing with lots of lines of text with short pieces of JSON that are pretty flat. Most of the time, it is only a list of key-value pairs.

We probably want to feed the results into a next program, so what comes in on one line must go out on one line. The `-c` option keeps `jq` from spreading the structure over multiple lines.

### Selecting fields

`jq` uses a path-like syntax to select fields:

```bash
$ echo '{"one":1, "two":2, "three":[4, 5, 6]}' | jq -c .one
1
$ echo '{"one":1, "two":2, "three":[4, 5, 6]}' | jq -c .three
[4,5,6]
$ echo '{"one":1, "two":2, "three":[4, 5, 6]}' | jq -c .three[1]
5
```

### Grouping fields

`[..]` and `{..}` can be used to group multiple results into an array or an object:

```bash
$ echo '{"one":1, "two":2, "three":[4, 5, 6]}' | jq -c '[ .three[1], .two ]'
[5,2]
$ echo '{"one":1, "two":2, "three":[4, 5, 6]}' |
> jq -c '{ thing: .three[1], another: .two }'
{"thing":5,"another":2}
```

### Filtering values

We can also select lines based on the values of the fields:

```bash
$ echo $'{"thing":5, "another":2}\n{"thing":6,"another":1}' |
> jq -c 'select(.another < 2)'
{"thing":6,"another":1}
```

And at the risk of making things confusing, `jq` has its own piping mechanism that uses the same symbol as bash. For instance, we can combine the select with a grouping like this:

```bash
$ echo $'{"thing":5, "another":2}\n{"thing":6,"another":1}' |
> jq -c '{field: .another} | select(.field < 2)'
{"field":1}
```

Comparisons work on strings too. This makes it possible to filter on timestamps, as they are represented as strings with integers ranging from most to least significant:

```bash
$ echo $'{"time":"2021-01-01T00:00:00Z"}\n{"time":"2021-01-02T00:00:00Z"}\n{"time":"2021-01-03T00:00:00Z"}' |
> jq -c 'select(.time > "2021-01-01T00:00:00Z" and .time < "2021-01-03T00:00:00Z")'
{"time":"2021-01-02T00:00:00Z"}
```

This is a bit sensitive to typos though. For instance, if you accidentally forget a `-`, or replace the `T` in one string with a space but not in the other, the comparison still works and no error will be shown. But it will not do what you expect.

## Sorting, counting and grouping

As mentioned, all these tools work on a line by line basis. But there are a few programs that operate on multiple lines too. Most notably: `sort` and `uniq`. They can, as is indicated by their names, sort things and filter unique values. The caveat for `uniq` is that it only compares the current line with the previous one and thus only eliminates adjacent doubles. This is solved by sorting the lines first. The nice thing about `uniq` is that it can count how many occurrences of a line were present. To get a ranking, we can sort again:

```bash
$ echo $'A\nB\nA\nA\nC\nB' | sort | uniq -c | sort -rn
3 A
2 B
1 C
```

`-c` adds the count for `uniq`, `-rn` means sort reverse on numbers for `sort`.

Combined with `head` you can make a top ten list.
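
For instance, a top ten of the most requested paths, assuming an access log with a `path` field (both the file name and the field here are placeholders):

```bash
$ cat requests.log | jq -c .path | sort | uniq -c | sort -rn | head
```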

## Examples

Armed with all this knowledge, it is not hard to construct commands that answer the simple questions that come up a lot when dealing with logs.

Show me the latest errors for a product with this id on that service:

```bash
$ kubectl logs -lapp=THAT_SERVICE |
> grep 'UUID' |
> grep -v "err:null" |
> tail
```

How often did this message appear and for what users:

```bash
$ kubectl logs POD_NAME |
> grep "MESSAGE" |
> jq -c .userId |
> sort | uniq -c | sort -rn
```

@ -0,0 +1,63 @@

+++
title = "AsciiDoc parser"
date = 2022-04-06
+++

It started with some lines of throwaway code, held together by staples and duct tape, in my little [Shitty SSG](/why-i-built-my-own-shitty-static-site-generator/) project and it kept evolving. A long time ago I decided that I liked [AsciiDoc](https://asciidoc.org/) better than [Markdown](https://daringfireball.net/projects/markdown/) when it comes to simple markup languages.

On the surface they are very similar, both are very easy to write and read and can be used in their "raw" form. You don't need to render a page before you can read it comfortably, unlike, for instance, [HTML](https://developer.mozilla.org/en-US/docs/Web/HTML).

The difference becomes clear when you write texts that are longer than the typical short comment or snippet. AsciiDoc is much more complete. You can write a whole book in it. I never did that, but I did notice that I started using it everywhere I could. Unfortunately there wasn't a really good library for Go available.

So I started one myself. The AsciiDoc specification is big and I started the implementation with the features I use myself. It's far from complete, but [here](https://forgejo.ewintr.nl/ewintr/adoc) is the code.

## Example

[Run the snippet below on the Go Playground](https://go.dev/play/p/hF2wn_GdkBK)

```go
package main

import (
	"fmt"
	"strings"

	"ewintr.nl/adoc"
)

func main() {
	sourceDoc := `= This is the title

And this is the first paragraph. With some text. Lists are supported too:

* Item 1
* Item 2
* Item 3

And we also have things like *bold* and _italic_.`

	par := adoc.NewParser(strings.NewReader(sourceDoc))
	doc := par.Parse()

	htmlDoc := adoc.NewHTMLFormatter().Format(doc)
	fmt.Println(htmlDoc)

	// output:
	//
	// <!DOCTYPE html>
	// <html>
	// <head>
	// <title>This is the title</title>
	// </head>
	// <body>
	// <p>And this is the first paragraph. With some text. Lists are supported too:</p>
	// <ul>
	// <li>Item 1</li>
	// <li>Item 2</li>
	// <li>Item 3</li>
	// </ul>
	// <p>And we also have things like <strong>bold</strong> and <em>italic</em>.</p>
	// </html>
}
```

@ -0,0 +1,157 @@

+++
title = "Getting Things Email: Basic flow"
date = 2022-02-24
+++

In the [previous post](/gte-email-based-task-management/) I described my wish list for a task management system based on email and IMAP. After some experimenting, I came to a simple system that made it work. At minimum, it consists of the following parts:

- A central IMAP account with an email address.
- A flow to process mails that are sent to that account.
- A format for the mails that can describe a task.

The above was enough to get the ball rolling, but since then I added:

- A sync mechanism for a local SQLite database through IMAP and SMTP.
- A [taskwarrior](https://taskwarrior.org/) inspired cli app for quick manipulation of tasks.

More local apps are in the works.

## Central IMAP account

Naturally, the `INBOX` folder is the place where all emails come in and that is the place where we need some sorting to be done. This process is automated and can be triggered from multiple places. See 'Remote vs local' to get a sense of how the different applications of this project can work together for a smooth experience.

On a functional level, there are two types of mails that can come in:

- New tasks. These don't necessarily have the right format yet.
- Updates on existing tasks.

There will be another doc with details of the format of the mails, but the simple summary is that a task is a plain text mail with key-value pairs on separate lines.

In principle, we have the following folders:

```
INBOX/
├── New
├── Planned
├── Recurring
└── Unplanned
```

I say in principle, because after a while, I realized that I don't need to keep a separate mail account for this. It can also work with a regular account that is already being used for personal email, if the above is put into a subfolder and a forwarding rule is added: "send all mails to gte@example.com to the folder GTE/INBOX".

Now we have:

```
INBOX/      # Normal inbox for personal account
Archive/
Spam/
...
GTE/
└── INBOX/  # Inbox for tasks
    ├── New
    ├── Planned
    ├── Recurring
    └── Unplanned
```

So all mails arrive in `GTE/INBOX`. From there the following can happen (sketched in code below):

- If the mail is new, or not recognized as an update, it is put in `New`.
- If the mail is recognized as an update, it is put in `Planned`, `Recurring` or `Unplanned`. Existing mails with older versions of the task are removed.
- Tasks with a `recur` field go in `Recurring`, tasks with a `due` field go in `Planned` and the rest goes to `Unplanned`. `recur` takes precedence over `due`.
- Shortcut: if a new mail has a `project` specified, it skips `New` and is put into one of the other folders right away.

That last rule is because sometimes you just want to jot something down and refine it later, but other times you want to specify the whole task right away and not have to go to the trouble of updating it later. Anything in `New` will stay there until there is some manual intervention. This takes inspiration from the [GTD](https://en.wikipedia.org/wiki/Getting_Things_Done) method, where gathering "stuff" and converting it into tasks are two separate actions.
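
In code, the sorting rules above boil down to something like this. A hypothetical sketch, not the actual gte source; `Task` and `Has` are made-up names:

```go
// folderFor picks the destination folder for a processed task.
// A recur field takes precedence over a due field.
func folderFor(t Task) string {
	switch {
	case t.Has("recur"):
		return "GTE/INBOX/Recurring"
	case t.Has("due"):
		return "GTE/INBOX/Planned"
	default:
		return "GTE/INBOX/Unplanned"
	}
}
```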

## Updating a task

Everything is put together in such a way that the system even works from a normal (web)mail app. Updates are done by forwarding, or replying to, the mail with the task you want to update. The original mail is then quoted in the body and you just put the fields you want to update with the new values above this. (Yes, essentially top posting. A sin in certain parts of the internet. I planned to make this configurable, but almost all mail clients I use do it like this by default.)

Let's say we have the following task that we want to update:

```
To: todo@example.com
From: todo@example.com
Subject: 2022-01-01 (saturday) - resolutions - change system to disallow top posting

action: change system to disallow top posting
project: resolutions
due: 2022-01-01 (saturday)
id: 416aad43-eec6-4ad2-8a3c-b84482e34c3c
version: 2
```

The update: be less harsh about it and postpone it a year. To do this, forward the mail and add two lines above the quoted original one:

```
To: todo@example.com
From: your_account@example.com
Subject: FWD: 2022-01-01 (saturday) - resolutions - change system to disallow top posting

action: make quote style configurable
due: 2023-01-01

> action: change system to disallow top posting
> project: resolutions
> due: 2022-01-01 (saturday)
> id: 416aad43-eec6-4ad2-8a3c-b84482e34c3c
> version: 2
```
After processing, this will lead to the following mail in the `Planned` folder:
|
||||||
|
|
||||||
|
```
|
||||||
|
To: todo@example.com
|
||||||
|
From: todo@example.com
|
||||||
|
Subject: 2023-01-01 (sunday) - resolutions - make quote style configurable
|
||||||
|
|
||||||
|
action: make quote style configurable
|
||||||
|
project: resolutions
|
||||||
|
due: 2023-01-01 (sunday)
|
||||||
|
id: 416aad43-eec6-4ad2-8a3c-b84482e34c3c
|
||||||
|
version: 3
|
||||||
|
```
|
||||||
|
|
||||||
|
If there are no update lines, the task will stay the same. This means it is very easy to move the data when you want to switch providers. Just forward all tasks to the new mail account and you're done!
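
For illustration, here is a rough sketch in Go of how such an update body could be folded into a new set of fields. Bumping the version and rebuilding the subject are left out, and the names are illustrative, not the actual parser:

```go
package main

import (
    "bufio"
    "fmt"
    "strings"
)

// parseUpdate reads an update mail body: quoted lines carry the old
// fields, non-quoted key-value lines above them override those values.
func parseUpdate(body string) map[string]string {
    var quoted, overrides []string
    sc := bufio.NewScanner(strings.NewReader(body))
    for sc.Scan() {
        line := strings.TrimSpace(sc.Text())
        switch {
        case strings.HasPrefix(line, ">"):
            quoted = append(quoted, strings.TrimSpace(strings.TrimPrefix(line, ">")))
        case line != "":
            overrides = append(overrides, line)
        }
    }
    // Apply the quoted fields first, then the new values on top.
    fields := map[string]string{}
    for _, line := range append(quoted, overrides...) {
        if k, v, ok := strings.Cut(line, ":"); ok {
            fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
        }
    }
    return fields
}

func main() {
    body := "due: 2023-01-01\n\n> action: change system\n> due: 2022-01-01 (saturday)"
    fmt.Println(parseUpdate(body)) // map[action:change system due:2023-01-01]
}
```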

## Marking a task done

Simply delete the email.

To facilitate local clients that can only communicate by sending more mails to the address, it is also possible to add a field like this:

```
done: true
```

The central sorting process will then remove the mail for you.

## Navigating the folders

As can be seen from the examples above, part of the content is repeated in the subject line. This helps with navigating the tasks in a mail client: simply sort the folder on subject.

Tasks with a due date get:

```
yyyy-mm-dd (weekday) - project - action
```

So if the `Planned` folder is sorted on subject, tasks of the same project on the same day get grouped together.

Tasks without a due date behave similarly. They have:

```
project - action
```
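
For illustration, both subject variants could be derived with something like this (a sketch; names are illustrative):

```go
package main

import "fmt"

// subjectFor sketches how the subject line could be built from the
// task fields; names are illustrative, not the actual implementation.
func subjectFor(due, weekday, project, action string) string {
    if due != "" {
        return fmt.Sprintf("%s (%s) - %s - %s", due, weekday, project, action)
    }
    return fmt.Sprintf("%s - %s", project, action)
}

func main() {
    fmt.Println(subjectFor("2023-01-01", "sunday", "resolutions", "make quote style configurable"))
    fmt.Println(subjectFor("", "", "home", "water plants"))
}
```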

## Recurring tasks

In addition to the process that handles new incoming mails, there is a process that generates new planned tasks based on the recurring tasks.

Currently there is no relation between the task that has a recurring rule and the individual tasks that get spawned as a result. There simply hasn't been a need for it yet. Instead, a process runs daily and checks whether any of the tasks in the `Recurring` folder recur x days in the future. If so, it will create an instance for that date. Here, x is configurable; 6 days seems to work for me.
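
The recur rule syntax falls outside this document, but the daily check itself is simple. A sketch, assuming a trivial "every N days since a start date" rule (purely illustrative):

```go
package main

import (
    "fmt"
    "time"
)

// recursOn sketches the daily check described above, assuming a task
// that fires every everyDays days counted from a start date.
func recursOn(start time.Time, everyDays int, day time.Time) bool {
    diff := int(day.Sub(start).Hours() / 24)
    return diff >= 0 && diff%everyDays == 0
}

func main() {
    // Does a task that started 2024-01-01 and recurs every 7 days
    // fire 6 days from now?
    start := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
    target := time.Now().UTC().Truncate(24 * time.Hour).AddDate(0, 0, 6)
    fmt.Println(recursOn(start, 7, target))
}
```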

## Remote vs local

So far this document has been a bit fuzzy about where exactly these automated processes live. That is because it actually doesn't matter. The IMAP box is the central source of truth and will reside somewhere on a server of your email provider. To manage it, there are currently two options: a long-running daemon, or a local client, the latter perhaps triggered by a cron job.

Both of these options involve logging in to the IMAP account to perform actions, so both can be done from anywhere. They can run on a VPS, or on your laptop.

Local clients maintain copies of the tasks in the IMAP account, for speed and to be able to use the system when there is no internet connection. For sending and receiving updates, they too use IMAP and SMTP. A local client uses the same format of mails and the same process as a user would with a webmail client.

@ -0,0 +1,37 @@

+++
title = "Three simple bots for the Matrix network"
date = 2023-07-28
+++

[Matrix](https://matrix.org/) is an open source, decentralized, end-to-end encrypted communication network. It's sort of a combination of Discord and Mastodon. I created a few bots for this network to perform various small tasks.

The benefit of having a bot on a network like this is that you only need to run it once and then it is immediately available on all your devices, with a shared, searchable history. A Matrix channel can be viewed as a synchronized terminal, with clients available for virtually any device. A bot is analogous to a bash script in that terminal.

## Matrix-GPTZoo

Code is [here](https://forgejo.ewintr.nl/ewintr/matrix-gptzoo).

A ChatGPT interface with multiple configurable prompts.

Each prompt will have the bot log in with a different user, enabling the creation of a chat room full of AI assistants to answer your questions.

Bots will only answer questions specifically addressed to them, but it is also possible to configure one to answer questions that are not addressed to a specific bot. Continuing a conversation can be done by replying to an answer.

## Matrix-FeedReader

Code is [here](https://forgejo.ewintr.nl/ewintr/matrix-feedreader).

This is a bot that simply posts new entries from a [Miniflux RSS reader](https://miniflux.app/) to a Matrix room.

Miniflux already has a Matrix integration and can post the entries itself, but this bot adds two things:

- After posting, it marks the entry as read in Miniflux (a sketch of the API calls involved follows this list).
- Each entry is posted as a separate message, which makes it easier to create other bots that can interact with them.
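
For a sense of what that first point involves, here is a rough sketch in Go against the Miniflux REST API (endpoints and header from memory, so verify against the Miniflux API docs; the base URL and token are placeholders):

```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

const (
    base  = "https://miniflux.example.com" // your Miniflux instance
    token = "YOUR_API_TOKEN"
)

type entries struct {
    Entries []struct {
        ID    int64  `json:"id"`
        Title string `json:"title"`
        URL   string `json:"url"`
    } `json:"entries"`
}

func main() {
    // Fetch unread entries.
    req, _ := http.NewRequest("GET", base+"/v1/entries?status=unread", nil)
    req.Header.Set("X-Auth-Token", token)
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var es entries
    json.NewDecoder(resp.Body).Decode(&es)

    var ids []int64
    for _, e := range es.Entries {
        fmt.Printf("would post to Matrix: %s (%s)\n", e.Title, e.URL)
        ids = append(ids, e.ID)
    }
    if len(ids) == 0 {
        return
    }

    // Mark them as read so they are not posted twice.
    body, _ := json.Marshal(map[string]any{"entry_ids": ids, "status": "read"})
    req, _ = http.NewRequest("PUT", base+"/v1/entries", bytes.NewReader(body))
    req.Header.Set("X-Auth-Token", token)
    req.Header.Set("Content-Type", "application/json")
    if _, err := http.DefaultClient.Do(req); err != nil {
        panic(err)
    }
}
```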

## Matrix-KagiSum

Code is [here](https://forgejo.ewintr.nl/ewintr/matrix-kagisum).

A quick and dirty bot that summarizes web content from a link. It uses the [Kagi Universal Summarizer](https://blog.kagi.com/universal-summarizer).

To use it, just react to a message with a 🗒️ emoji and the bot will reply with a summary of the linked content.

@ -0,0 +1,261 @@

+++
title = 'An "invisible" media player with physical buttons'
date = 2024-12-14
+++

Two things I like are minimalism and physical buttons. Two other things I like are Homeassistant and the Squeezebox streaming audio player ecosystem.

I combined them to solve a problem that is not really a problem, but something I wanted anyway: playing music on my computer while I work, without an audio player window or confusing multimedia controls.

Something like this:

![Home automation - foto's in foto - 720 pixels](https://bear-images.sfo2.cdn.digitaloceanspaces.com/ewintr/home-automation-fotos-in-foto-720-pixels.webp)

When I work, I tend to play a random mix from my own collection of flacs and mp3s on my headphones. These are all songs that I like and that are very familiar to me, so they become a mix of background radio and white noise. It helps me to focus on the work, but also puts me in a good mood. But for every interruption, walking away from my desk, or a video call, I pause the music and put down my headphones. When I come back, the reverse happens. Sit down, put on headphones, press play.

This is, of course, easy to arrange on any computer that can output audio. Just start up your favorite player (or streaming service) and go. But as I said, I really like the act of physically pressing a button for it. In my mind, listening to my personal radio station is separate from the rest of the computer experience. I set out to enhance this separation.

My wishes:

- A physical button that can pause/play the music and skip to the next song.
- No audio player window on my screen.
- No confusion with multimedia keys. If I paused the music to watch a video and want to resume it afterwards, the computer should not misinterpret 'play' and start playing the video again.
- Some feedback on the name of the song that is playing.

To make it work, I used the following:

- [Squeezelite](https://github.com/ralph-irving/squeezelite) - the audio player
- [Lyrion Music Server](https://lyrion.org/) - the streaming server
- [Homeassistant](https://www.home-assistant.io/) - home automation
- [Ikea Styrbar](https://www.ikea.com/nl/en/p/styrbar-remote-control-smart-stainless-steel-10435224/) - the physical button
- [Cinnamon](https://linuxmint-developer-guide.readthedocs.io/en/latest/cinnamon.html) - the desktop environment I use

## The player

Squeezelite is a straightforward player. Just start it with two arguments:

```bash
squeezelite -n Toren -z
```

'Toren' is the name of the player here. It is Dutch for 'tower'; this is my big computer. `-z` lets it run as a background daemon.

Of course, we would like to run this command automatically at startup. It is tempting to create a separate user for it, together with a systemd service that is started at boot. This might even be done automatically for you at install. But depending on your setup, this can cause plenty of headaches. If you use PulseAudio, chances are that it is configured so that only one user can 'own' the audio output. After you have logged in, the daemon is not allowed to play audio anymore.

The simple solution is to just use whatever method your desktop environment supports to execute the command after you have logged in. In Cinnamon it is called 'Startup Applications' in 'System Settings'.

![Screenshot from 2024-11-07 10-21-28](https://bear-images.sfo2.cdn.digitaloceanspaces.com/ewintr/screenshot-from-2024-11-07-10-21-28-1.webp)

Note: the Squeezebox system really cannot handle two players using the same name. If you experience sudden pauses or delays, check that you didn't accidentally start two instances, with something like `ps aux | grep squeezelite`.

## The streaming server

I have a separate Lyrion server running to manage all things music. Homeassistant has an official add-on called [Music Assistant](https://music-assistant.io/) that is supposed to fully support the Squeezebox protocol. In theory one could drop Lyrion and let Homeassistant do everything. But I am nostalgic, and brief testing showed that Music Assistant does not understand my [Squeezebox Controller](https://lyrion.org/players-and-controllers/squeezebox-controller/), so I keep them separated and only use the [Squeezebox integration](https://www.home-assistant.io/integrations/squeezebox/).

This comes with a downside, though. Sometimes there is a lag in the communication between Homeassistant and Lyrion. Not really a problem in normal use, but if you are testing and quickly pressing buttons, the results can be confusing.

## The automations in Homeassistant

The automations in Homeassistant are about as basic as you can get. Once you have the button connected through Zigbee and the player through the Lyrion add-on, it is just a matter of linking them.

One special case needs to be added, though. If you start the player for the first time after booting the computer, the play queue is empty. Fortunately, there is a command to continuously fill it with random tracks.

We can check the current length of the queue and issue that command if it is zero. Otherwise, execute pause/play on the media player:

```yaml
alias: Pause/Play on Squeezelite Toren
description: ""
triggers:
  - device_id: xxx
    domain: zha
    type: remote_button_short_press
    subtype: turn_on
    trigger: device
conditions: []
actions:
  - choose:
      - conditions:
          - condition: template
            value_template: >-
              {{ state_attr('media_player.squeezelite_toren', 'media_duration')
              == 0 }}
        sequence:
          - data:
              command: randomplay
              parameters:
                - tracks
            action: squeezebox.call_method
            target:
              device_id: xxx
    default:
      - action: media_player.media_play_pause
        data: {}
        target:
          device_id: xxx
mode: single
```

## The Cinnamon applet

Now, the applet is the most difficult part. There should not be a player window on my desktop, but I do like to have some feedback on what is playing. This should be easy to fix. Just write a small task bar applet in JavaScript that queries the state of the player in Homeassistant and displays the band and song in a label.

Homeassistant has a REST API that provides all the information. Here is a `curl` command that fetches the information of my player:

```bash
curl -X GET -H "Authorization: Bearer ${HA_TOKEN}" https://homeassistant.local:8123/api/states/media_player.squeezelite_toren | jq .
```

Which returns something like this:

```json
{
  "entity_id": "media_player.squeezelite_toren",
  "state": "playing",
  "attributes": {
    "group_members": [],
    "volume_level": 0.98,
    "is_volume_muted": false,
    "media_content_id": // a bunch of links
    "media_content_type": "playlist",
    "media_duration": 254,
    "media_position": 39,
    "media_position_updated_at": "2024-11-19T11:19:49.056947+00:00",
    "media_title": "Devil's Dillema",
    "media_artist": "Super Preachers",
    "media_album_name": "The Underdog",
    "media_channel": "None",
    "shuffle": false,
    "repeat": "off",
    "query_result": {},
    "entity_picture": "/api/media_player_proxy/media_player.squeezelite_toren?token=3dd15241609284358c81ffb25bb19464ef2a0ed724489876c14766c389df9b51&cache=ff70377516f8f21c",
    "friendly_name": "Squeezelite Toren",
    "supported_features": 3077055
  },
  "last_changed": "2024-11-19T11:06:43.169101+00:00",
  "last_reported": "2024-11-19T11:19:49.057895+00:00",
  "last_updated": "2024-11-19T11:19:49.057895+00:00",
  "context": {
    "id": "01JD22CV212NMXMNZ3N4NGKSM1",
    "parent_id": null,
    "user_id": null
  }
}
```

As one can see, one needs to [create an API token](https://developers.home-assistant.io/docs/api/rest/) to access the REST API of Homeassistant. Our applet will use the same method.

### The actual applet

As said, creating the applet was the hard part. The first reason for that is: I don't really know JavaScript. The second is that there isn't much documentation on how to write these applets.

Fortunately, I found two posts explaining the process. The Cinnamon source also contains the first part of a tutorial, in XML. Despite the format, it is quite readable:

- [cinnamon/docs/reference/cinnamon-tutorials/write-applet.xml](https://github.com/linuxmint/cinnamon/blob/master/docs/reference/cinnamon-tutorials/write-applet.xml)
- [Writing a panel applet for Cinnamon: The basics](https://billauer.co.il/blog/2018/12/writing-cinnamon-applet/)
- [Writing a simple task Applet for Cinnamon Desktop](https://medium.com/swlh/writing-a-simple-task-applet-for-cinnamon-desktop-38cc4e499372)

Beyond that, the advice is to simply look at other applets and copy what you see. Here is mine.

First, create a folder `~/.local/share/cinnamon/applets/currentlyplaying@ewintr`.

This folder will need three files:

- `metadata.json` - overall metadata for the applet
- `settings-schema.json` - configuration
- `applet.js` - the actual applet

### metadata.json

This is the metadata that helps Cinnamon understand the applet.

```json
// metadata.json
{
  "uuid": "currentlyplaying@ewintr",
  "name": "Currently playing",
  "version": "1.0.0",
  "description": "Shows currently playing media",
  "settings-schema": "settings-schema"
}
```

### settings-schema.json

This enables a form where the user of the applet can enter their Homeassistant API key.

```json
// settings-schema.json
{
  "ha_token": {
    "type": "entry",
    "default": "",
    "description": "Home Assistant API Token"
  }
}
```

### applet.js

A JavaScript snippet that polls the Homeassistant API for the status of the media player and displays the band and song when playing.

```javascript
// applet.js
const Applet = imports.ui.applet;
const Util = imports.misc.util;
const Mainloop = imports.mainloop;
const Soup = imports.gi.Soup;
const Json = imports.gi.Json;
const Settings = imports.ui.settings;

class CurrentlyPlaying extends Applet.TextApplet {
    constructor(metadata, orientation, panelHeight, instance_id) {
        super(orientation, panelHeight, instance_id);
        this.set_applet_label("Loading...");
        this._httpSession = new Soup.Session();
        this._updateLoop();
        this.settings = new Settings.AppletSettings(this, metadata.uuid, instance_id);
        this.settings.bind("ha_token", "ha_token", this.on_settings_changed);
    }

    _updateLoop() {
        this._updateText();
        this._timeoutId = Mainloop.timeout_add(1000, () => this._updateLoop());
    }

    _updateText() {
        let url = "https://ha.ewintr.nl:8123/api/states/media_player.squeezelite_toren";
        let message = Soup.Message.new('GET', url);

        message.request_headers.append('Authorization', 'Bearer ' + this.ha_token);
        this._httpSession.queue_message(message, (session, message) => {
            if (message.status_code === Soup.KnownStatusCode.OK) {
                let jsonData = JSON.parse(message.response_body.data);
                let mediaTitle = jsonData.attributes.media_title || "";
                let mediaArtist = jsonData.attributes.media_artist || "";
                let state = jsonData.state || "paused";
                let label = `${mediaArtist} - ${mediaTitle}`;
                if (label == " - " || state == "paused" || state == "idle") {
                    label = "";
                }
                this.set_applet_label(label);
            } else {
                this.set_applet_label("Error fetching data");
            }
        });
    }

    on_applet_removed_from_panel() {
        if (this._timeoutId) {
            Mainloop.source_remove(this._timeoutId);
        }
    }
}

function main(metadata, orientation, panelHeight, instance_id) {
    return new CurrentlyPlaying(metadata, orientation, panelHeight, instance_id);
}
```

And there you have it: plenty of moving parts, but on the surface it quietly just works.

@ -0,0 +1,71 @@

+++
title = "Forwarding HTTPS on a Zyxel VMG8825-T50 and other configuration shenanigans"
date = 2024-09-23
+++

Years ago my ISP provided me with a Zyxel VMG8825-T50 modem/router to connect my home network to their ADSL service. I have been using it ever since, and I'd say it is adequate. I guess. It is stable and has the basic functionality I want, although the admin interface looks like it was built and designed 20 years ago. It also has some quirks. But I have been able to work around them all these years.

But then the moment came that I really wanted to... *drumroll* forward port 443 to a web server I run at home. To be clear, the device supports port forwarding and multiple ports have successfully been forwarded over the years. It all works fine. However, this one, 443, the port for HTTPS, never worked. You could configure it, but a browser would just never get a connection.

Digging around once more, I discovered the culprit. The device has an option to allow external management, which I had turned off. But apparently it still somehow occupied port 443. When I updated its configuration to keep it turned off, but now on port 444, the HTTPS forwarding magically worked.

![zyxel_external](https://bear-images.sfo2.cdn.digitaloceanspaces.com/ewintr/zyxel_external.png)

Success!

Or so I thought.

The next day, I wanted to look at the settings again, and I discovered that I was unable to log in at all. See, I normally use plain HTTP for logging in. This is my own network, I am the only user, who cares whether the traffic on it is encrypted? But that was the other quirk I got used to: the interface always tried to redirect the browser to the HTTPS version, which had never worked, presumably because, with the setting described above, all management access on that port was turned off.

This was never a problem, though. The redirect was so slow that I was already done by the time the page refreshed!

This time, however, the redirect happened instantly. And failed. I could see a flash of the login page and then the browser would redirect and report "connection lost".

What to do? On the page is a piece of JavaScript that is very eager to move you to "safe" grounds. Disabling JavaScript does not work, because the whole interface is built with it. I tried to block the specific redirect call with ad blockers, etc., but no luck.

And then I figured that the script would probably not be very sophisticated. It would likely just look at the protocol, not the host or the whole URL. So I asked an AI to generate an nginx configuration snippet that would let nginx function as an HTTPS proxy with a self-signed certificate:

```nginx
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name _;

    # Self-signed certificate
    ssl_certificate /etc/nginx/ssl/nginx-selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx-selfsigned.key;

    # SSL configuration (relaxed for testing)
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        set $redirect 0;

        # redirect to my home page if someone from outside somehow managed to load this
        if ($remote_addr !~ "^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.)") {
            set $redirect 1;
        }

        if ($redirect = 1) {
            return 301 https://ewintr.nl$request_uri;
        }

        # Proxy to HTTP backend, assuming 192.168.1.1 doesn't support HTTPS
        proxy_pass http://192.168.1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
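
The config assumes a self-signed certificate pair already exists in `/etc/nginx/ssl/`. One way to create such a pair is a small Go program using only the standard library (a sketch; the common name and one-year validity are arbitrary choices of mine):

```go
// gencert.go - writes nginx-selfsigned.crt and nginx-selfsigned.key
// for the proxy above. Move the files to /etc/nginx/ssl/ afterwards.
package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "log"
    "math/big"
    "os"
    "time"
)

func main() {
    // Generate a P-256 key; nginx and browsers accept EC keys.
    priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        log.Fatal(err)
    }

    tmpl := x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "router-proxy.local"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }

    // Self-signed: template and parent are the same certificate.
    der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
    if err != nil {
        log.Fatal(err)
    }

    certOut, _ := os.Create("nginx-selfsigned.crt")
    pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    certOut.Close()

    keyBytes, err := x509.MarshalECPrivateKey(priv)
    if err != nil {
        log.Fatal(err)
    }
    keyOut, _ := os.Create("nginx-selfsigned.key")
    pem.Encode(keyOut, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes})
    keyOut.Close()
}
```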

Now if I want to access the configuration of my Zyxel:

- Disable the normal default site and enable the one above.
- Point the browser to `https://[internal IP address of web server]/` (the local server that started the need for the whole forwarding configuration in the first place).
- Ignore the warning about the self-signed certificate.
- Browse the admin interface without any problems.
- Disable this site again.

Success! For real this time.

@ -0,0 +1,20 @@

+++
title = "Links #2024-28"
date = 2024-07-15
+++

Articles I found last week that I actually read until the end.

## Go

- [Locally patching dependencies in Go](https://eli.thegreenplace.net/2024/locally-patching-dependencies-in-go/) - _eli.thegreenplace.net_
- [Contextualizing the Go Context API: Program Scopes](https://matttproud.com/blog/posts/contextualizing-context-scopes.html) - _matttproud.com_

## Hacking

- [Reverse Engineering TicketMaster's Rotating Barcodes (SafeTix)](https://conduition.io/coding/ticketmaster/) - _conduition.io_
- [PySkyWiFi: completely free, unbelievably stupid wi-fi on long-haul flights](https://robertheaton.com/pyskywifi/) - _robertheaton.com_

## History

- [Iconography of the X Window System: The Boot Stipple](https://matttproud.com/blog/posts/x-window-system-boot-stipple.html) - _matttproud.com_

@ -0,0 +1,16 @@

+++
title = "Links #2024-29"
date = 2024-07-22
+++

Articles I found last week that I actually read until the end.

## Go

- [Go range iterators demystified](https://www.dolthub.com/blog/2024-07-12-golang-range-iters-demystified/) - _www.dolthub.com_

## Other Dev

- [Where is the programmer inspo?](https://avdi.codes/where-is-the-programmer-inspo/) - _avdi.codes_
- [What TeX Gets Right](https://newton.cx/~peter/2024/what-tex-gets-right/) - _newton.cx_
- [How not to use box shadows](https://dgerrells.com/blog/how-not-to-use-box-shadows) - _dgerrells.com_

@ -0,0 +1,13 @@

+++
title = "Links #2024-30"
date = 2024-07-29
+++

Articles I found last week that I actually read until the end.

- [Social Computing, before the Internet](https://netsettlement.blogspot.com/2024/07/social-computing-before-internet.html?m=1) - _netsettlement.blogspot.com_
- [Software engineers are not (and should not be) technicians](https://www.haskellforall.com/2024/07/software-engineers-are-not-and-should.html) - _www.haskellforall.com_
- [How I Use Git Worktrees](https://matklad.github.io/2024/07/25/git-worktrees.html) - _matklad.github.io_
- [The Computer Genius the Communists Couldn’t Stand](https://culture.pl/en/article/jacek-karpinski-the-computer-genius-the-communists-couldnt-stand) - _culture.pl_
- [C# (almost) has implicit interfaces](https://clipperhouse.com/c-sharp-implicit-interfaces/) - _clipperhouse.com_
- [Everlasting jobstoppers: How an AI bot-war destroyed the online job market](https://www.salon.com/2024/07/28/everlasting-jobstoppers-how-an-ai-bot-destroyed-the-online-job-market/) - _www.salon.com_

@ -0,0 +1,10 @@

+++
title = "Links #2024-31"
date = 2024-08-06
+++

Articles or videos I found last week that I actually read or watched until the end.

- [Go, a reasonable good language](https://kokada.capivaras.dev/blog/go-a-reasonable-good-language/) - _kokada.capivaras.dev_
- [Reduce allocations and comparison performance with the new unique package in Go 1.23](https://josephwoodward.co.uk/2024/08/performance-improvements-unique-package-go-1-23) - _josephwoodward.co.uk_
- [Eron Wolf Interviews Andreas Kling About the Ladybird Browser](https://www.youtube.com/watch?v=4xhaAAcKLtI) - _www.youtube.com_

@ -0,0 +1,10 @@

+++
title = "Links #2024-32"
date = 2024-08-12
+++

Articles I found last week that I actually read until the end.

- [What's the best Static Analysis tool for Golang?](https://www.dolthub.com/blog/2024-07-24-static-analysis/) - _www.dolthub.com_
- [Go structs are copied on assignment (and other things about Go I'd missed)](https://jvns.ca/blog/2024/08/06/go-structs-copied-on-assignment/) - _jvns.ca_
- [q What do I title this article?](https://two-wrongs.com/q) - _two-wrongs.com_

@ -0,0 +1,8 @@

+++
title = "Links #2024-33"
date = 2024-08-19
+++

Articles I found last week that I actually read until the end.

- [Building static binaries with Go on Linux](https://eli.thegreenplace.net/2024/building-static-binaries-with-go-on-linux/) - _eli.thegreenplace.net_

@ -0,0 +1,11 @@

+++
title = "Links #2024-34"
date = 2024-08-26
+++

Articles or videos I found last week that I actually read or watched until the end.

- [Micro-libraries need to die already](https://bvisness.me/microlibraries/) - _bvisness.me_
- [Darius Kazemi, Tiny Subversions - XOXO Festival (2014)](https://youtube.com/watch?v=l_F9jxsfGCw) - _youtube.com_
- [Why does getting a job in tech suck right now? (Is it AI?!?)](https://ryxcommar.com/2024/08/17/why-does-getting-a-job-in-tech-suck-right-now-is-it-ai/) - _ryxcommar.com_
- [Why am I writing a Rust compiler in C?](https://notgull.net/announcing-dozer/) - _notgull.net_

@ -0,0 +1,12 @@

+++
title = "Links #2024-35"
date = 2024-09-02
+++

Articles I found last week that I actually read until the end.

- [The Vindication of Bubble Sort](https://two-wrongs.com/vindication-of-bubble-sort) - _two-wrongs.com_
- [The Trouble with Procurement Departments, Resellers and Stripe](https://www.troyhunt.com/the-trouble-with-procurement-departments-resellers-and-stripe/) - _www.troyhunt.com_
- [The secret inside One Million Checkboxes](https://eieio.games/essays/the-secret-in-one-million-checkboxes/) - _eieio.games_
- [Bypassing airport security via SQL injection](https://ian.sh/tsa) - _ian.sh_

@ -0,0 +1,10 @@

+++
title = "Links #2024-36"
date = 2024-09-09
+++

Videos I found last week that I actually watched until the end.

- [How to Extract Text Contents from PDF (part 1/3)](https://youtube.com/watch?v=k34wRxaxA_c) - _youtube.com_
- [How to Extract Text Contents from PDF (part 2/3)](https://youtube.com/watch?v=_A1M4OdNsiQ) - _youtube.com_
- [How to Extract Text Contents from PDF (part 3/3)](https://youtube.com/watch?v=sfV_7cWPgZE) - _youtube.com_

@ -0,0 +1,9 @@

+++
title = "Links #2024-37"
date = 2024-09-16
+++

Articles I found last week that I actually read until the end.

- [Don't defer Close() on writable files](https://www.joeshaw.org/dont-defer-close-on-writable-files/) - _www.joeshaw.org_
- [We Spent $20 To Achieve RCE And Accidentally Became The Admins Of .MOBI](https://labs.watchtowr.com/we-spent-20-to-achieve-rce-and-accidentally-became-the-admins-of-mobi/) - _labs.watchtowr.com_

@ -0,0 +1,13 @@

+++
title = "Links #2024-38"
date = 2024-09-23
+++

Articles and videos I found last week that I actually read or watched until the end.

- [Will we be writing Hare in 2099? (with Drew DeVault)](https://youtube.com/watch?v=42y2Q9io3Xs) - _youtube.com_
- [What's in an (Alias) Name?](https://go.dev/blog/alias-names) - _go.dev_
- [Stop using SERIAL in Postgres](https://www.naiyerasif.com/post/2024/09/04/stop-using-serial-in-postgres/) - _www.naiyerasif.com_
- [I Made The Ultimate Cheating Device](https://youtube.com/watch?v=Bicjxl4EcJg) - _youtube.com_
- [Using YouTube to steal your files](https://lyra.horse/blog/2024/09/using-youtube-to-steal-your-files/) - _lyra.horse_
- [gaining access to anyones browser without them even visiting a website](https://kibty.town/blog/arc/) - _kibty.town_

@ -0,0 +1,11 @@

+++
title = "Links #2024-39"
date = 2024-09-30
+++

Articles I found last week that I actually read until the end.

- [LI + AI = GIGO](https://heatherburns.tech/2024/09/19/li-ai-gigo/) - _heatherburns.tech_
- [Improving rendering performance with CSS content-visibility](https://nolanlawson.com/2024/09/18/improving-rendering-performance-with-css-content-visibility/) - _nolanlawson.com_
- [Why I still blog after 15 years](https://www.jonashietala.se/blog/2024/09/25/why_i_still_blog_after_15_years/) - _www.jonashietala.se_
- [Hacking Kia: Remotely Controlling Cars With Just a License Plate](https://samcurry.net/hacking-kia) - _samcurry.net_

@ -0,0 +1,13 @@

+++
title = "Links #2024-40"
date = 2024-10-07
+++

Articles and videos I found last week that I actually read or watched until the end.

- [Our Android App is Frozen in Carbonite](https://ia.net/topics/our-android-app-is-frozen-in-carbonite) - _ia.net_
- [Joining errors in Go](https://tpaschalis.me/golang-multierr/) - _tpaschalis.me_
- [From opera to tech](https://jordaneldredge.com/notes/opera-to-tech/) - _jordaneldredge.com_
- [FOSDEM 2024: you too could have made curl](https://daniel.haxx.se/blog/2024/02/06/fosdem-2024-you-too-could-have-made-curl/) - _daniel.haxx.se_
- [the origin of ad: an adaptable text editor](https://sminez.github.io/ad-an-adaptable-text-editor/) - _sminez.github.io_
- [How do HTTP servers figure out Content-Length?](https://aarol.dev/posts/go-contentlength/) - _aarol.dev_

@ -0,0 +1,9 @@

+++
title = "Links #2024-42"
date = 2024-10-21
+++

Articles I found last week that I actually read until the end.

- [Why is everybody talking about sync engines?](https://fika.bar/blogs/paoramen/why-is-everybody-talking-about-syncing-engines-01JAAEZTCMZA28DSESAJR3J30J) - _fika.bar_
- [I love calculator](https://karpathy.ai/blog/calculator.html) - _karpathy.ai_

@ -0,0 +1,10 @@

+++
title = "Links #2024-43"
date = 2024-10-29
+++

Articles I found last week that I actually read until the end.

- [Debugging my wife's alarm clock](https://ntietz.com/blog/debugging-my-wifes-alarm-clock/) - _ntietz.com_
- [against /tmp](https://dotat.at/@/2024-10-22-tmp.html) - _dotat.at_
- [Debug Go core dumps with delve: export byte slices](https://michael.stapelberg.ch/posts/2024-10-22-debug-go-core-dumps-delve-export-bytes/) - _michael.stapelberg.ch_

@ -0,0 +1,10 @@

+++
title = "Links #2024-44"
date = 2024-11-04
+++

Articles I found last week that I actually read until the end.

- [Jia Tanning Go code](https://www.arp242.net/jia-tan-go.html) - _www.arp242.net_
- [One weird trick to get the whole planet to send abuse complaints to your best friend(s)](https://delroth.net/posts/spoofed-mass-scan-abuse/) - _delroth.net_
- [Nobody cares about decentralization until they do](https://kyefox.com/nobody-cares-about-decentralization-until-they-do/) - _kyefox.com_

@ -0,0 +1,11 @@

+++
title = "Links #2024-45"
date = 2024-11-11
+++

Articles and READMEs I found last week that I actually read until the end.

- [curl source code age](https://daniel.haxx.se/blog/2024/10/31/curl-source-code-age/) - _daniel.haxx.se_
- [smartcat (sc)](https://github.com/efugier/smartcat/) - _github.com_
- [Ranging over functions in Go 1.23](https://eli.thegreenplace.net/2024/ranging-over-functions-in-go-123/) - _eli.thegreenplace.net_
- [Fruit Credits: a personal accounting app based on hledger](https://dz4k.com/2024/fruit-credits/) - _dz4k.com_

@ -0,0 +1,12 @@

+++
title = "Links #2024-46"
date = 2024-11-18
+++

Articles and videos I found last week that I actually read or watched until the end.

- [curl -v https://google.com](https://www.youtube.com/watch?v=atcqMWqB3hw&t=2s) - _www.youtube.com_
- [DOOM on a 3D-printed mechanical TV](https://www.youtube.com/watch?v=R-wbfP1pmVw) - _www.youtube.com_
- [Quality software deserves your hard‑earned cash](https://stephango.com/quality-software) - _stephango.com_
- [Binary vector embeddings are so cool](https://emschwartz.me/binary-vector-embeddings-are-so-cool/) - _emschwartz.me_
- [Tutorial videos](https://aider.chat/docs/usage/tutorials.html) - _aider.chat_ (work in progress, but yes, all videos on the page)

@ -0,0 +1,12 @@

+++
title = "Links #2024-48"
date = 2024-12-02
+++

Articles I found last week that I actually read until the end.

- [ML in Go with a Python sidecar](https://eli.thegreenplace.net/2024/ml-in-go-with-a-python-sidecar/) - _eli.thegreenplace.net_
- [GoMLX: ML in Go without Python](https://eli.thegreenplace.net/2024/gomlx-ml-in-go-without-python/) - _eli.thegreenplace.net_
- [Reply on Bluesky and Decentralization](https://whtwnd.com/bnewbold.net/3lbvbtqrg5t2t) - _whtwnd.com_
- [Structured Editing and Incremental Parsing](https://tratt.net/laurie/blog/2024/structured_editing_and_incremental_parsing.html) - _tratt.net_
- [Private School Labeler on Bluesky](https://simonwillison.net/2024/Nov/22/private-school-labeler-on-bluesky/#atom-everything) - _simonwillison.net_

@ -0,0 +1,12 @@

+++
title = "Links #2024-49"
date = 2024-12-09
+++

Articles I found last week that I actually read until the end.

- [No NAT November: My Month Without IPv4](https://blog.infected.systems/posts/2024-12-01-no-nat-november/) - _blog.infected.systems_
- [Hugs of Death: How should we think about resilience in the IndieWeb?](https://blog.infected.systems/posts/2024-12-04-hugs-of-death/) - _blog.infected.systems_
- [Aider in your IDE](https://aider.chat/docs/usage/watch.html) - _aider.chat_
- [Worlds: Mutability with Control](https://jimmyhmiller.github.io/advent-of-papers/2024/dec-5-worlds) - _jimmyhmiller.github.io_
- [Intuition in Software Development](https://jimmyhmiller.github.io/advent-of-papers/2024/dec-6-intuition) - _jimmyhmiller.github.io_

@ -0,0 +1,13 @@

+++
title = "Links #2024-51"
date = 2024-12-23
+++

Articles I found last week that I actually read until the end.

- [Helix: Why (And How) I Use It](https://jonathan-frere.com/posts/helix/) - _jonathan-frere.com_
- [Re: Re: Bluesky and Decentralization](https://dustycloud.org/blog/re-re-bluesky-decentralization/) - _dustycloud.org_
- [What Knowledge Isn't](https://jimmyhmiller.github.io/advent-of-papers/2024/dec-13-knowledge) - _jimmyhmiller.github.io_
- [AI and Internet Hygiene](https://www.late-review.com/p/ai-and-internet-hygiene) - _www.late-review.com_
- [A polite disagreement bot ring is flooding Bluesky — reply guy as a (dis)service](https://pivot-to-ai.com/2024/12/07/a-polite-disagreement-bot-ring-is-flooding-bluesky-reply-guy-as-a-disservice/) - _pivot-to-ai.com_
- [My favourite colour is Chuck Norris red](https://htmhell.dev/adventcalendar/2024/20/) - _htmhell.dev_

@ -0,0 +1,12 @@

+++
title = "Links #2024-52"
date = 2024-12-30
+++

Articles and videos I found last week that I actually read or watched until the end.

- [Semantic Compression](https://caseymuratori.com/blog_0015) - _caseymuratori.com_
- [My colleague Julius](https://ploum.net/2024-12-23-julius-en.html) - _ploum.net_
- [Writing computer code by voice](https://media.ccc.de/v/emf2024-217-writing-computer-code-by-voice) - _media.ccc.de_
- [Breaking NATO Radio Encryption](https://media.ccc.de/v/38c3-breaking-nato-radio-encryption) - _media.ccc.de_
- [From fault injection to RCE: Analyzing a Bluetooth tracker](https://media.ccc.de/v/38c3-from-fault-injection-to-rce-analyzing-a-bluetooth-tracker) - _media.ccc.de_

@ -0,0 +1,27 @@

+++
title = "TIL: Change text console font and size with console-setup"
date = 2024-12-03
+++

Today I learned one can change the font and its size on the text console of a Debian system using:

```bash
sudo dpkg-reconfigure console-setup
```

Keep in mind that the available sizes depend on the chosen font; some fonts offer fewer size options than others. If your primary goal is to increase text size (like mine was), it might help to experiment by going back and forth between different fonts.

![Screenshot from 2024-12-04 14-15-38](https://bear-images.sfo2.cdn.digitaloceanspaces.com/ewintr/screenshot-from-2024-12-04-14-15-38.webp)

## Backstory

I had a headless server in my home that disappeared from the network at random moments. Walking over to the machine, I noticed that it was still running; I was just not able to access it with `ssh` any more.

Rebooting the server resolved the problem, but that is of course not a solution. I needed to figure out why it would randomly decide to ignore the network.

The only portable monitor I have is an old eink screen, so I used that to make the server temporarily not-headless. The monitor did not cope well with the tiny text on the black background, so after some searching I was fortunate to find the above-mentioned way to increase the size of the font.

~~And then I discovered that the server was fine, and the real issue was actually that I had misconfigured my router.~~

And then I discovered that the network controller on the main board suffers from a known bug, fortunately one for which a [workaround](https://forum.proxmox.com/threads/intel-nic-e1000e-hardware-unit-hang.106001/) exists.

@ -0,0 +1,14 @@

+++
title = "Links #2025-02"
date = 2025-01-06
+++

Videos I found last week that I actually watched until the end.

- [EU's Digital Identity Systems - Reality Check and Techniques for Better Privacy](https://media.ccc.de/v/38c3-eu-s-digital-identity-systems-reality-check-and-techniques-for-better-privacy) - _media.ccc.de_
- [Hacking the RP2350](https://media.ccc.de/v/38c3-hacking-the-rp2350) - _media.ccc.de_
- [Proprietary silicon ICs and dubious marketing claims? Let's fight those with a microscope!](https://media.ccc.de/v/38c3-proprietary-silicon-ics-and-dubious-marketing-claims-let-s-fight-those-with-a-microscope) - _media.ccc.de_
- [We've not been trained for this: Life after the Newag DRM disclosure](https://media.ccc.de/v/38c3-we-ve-not-been-trained-for-this-life-after-the-newag-drm-disclosure) - _media.ccc.de_
- [The Value of Source Code](https://www.youtube.com/watch?v=Y6ZHV0RH0fQ) - _www.youtube.com_
- [ACE up the sleeve: Hacking into Apple's new USB-C Controller](https://media.ccc.de/v/38c3-ace-up-the-sleeve-hacking-into-apple-s-new-usb-c-controller) - _media.ccc.de_
- [Feelings are Facts: Love, Privacy, and the Politics of Intellectual Shame](https://media.ccc.de/v/38c3-feelings-are-facts-love-privacy-and-the-politics-of-intellectual-shame) - _media.ccc.de_