# Jan-Piet Mens
# A friend in need ...
Three years ago, a friend of mine purchased a server (one of those horrendously loud things most of us mere mortals thankfully seldom or never get to hear) from a hosting company. The server held a to-him precious application served by a proprietary content management system. He abandoned the CMS at the time because, as I understand it, the monthly price of the CMS was to increase four-fold.
Stuck in a basement, the server itself gathered more dust for three years until my friend decided he wanted the data. The hosting company had told him “just connect it to the internet and you can access your data” which, obviously, is total bollocks – it begins with not owning the domain and ends with a hard-coded IP which belongs to said hosting company.
He asked me for help. I sighed. I helped.
For the first time in quite some years I really got my hands dirty, so to speak. I went to his house at 17:30 and got home again past midnight. I was, though, given a nice dinner, and the lady of the house had baked my favorite cake!
I’d brought a router/switch which we used to get this screaming beast lying on the table before us networked. I then had to remember what little I knew of Proxmox, and got as far as seeing there was a qemu container with id 100.
The *guest tools* (if I recall the name correctly) weren’t installed in the container so I couldn’t do much from outside, but I was able to `vzdump` the container into a file on the host which thankfully had sufficient disk space. After extracting that dump I had the disk image which I could `losetup` / `mount`, and a `chroot` later I was “in”. I asked for and received a small brandy.
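Roughly sketched, the extraction went like this; the VM id is from the post, but dump file names, disk names, and mount points are from memory and hence approximate:

```
# vzdump 100 --dumpdir /var/tmp          # dump the qemu guest on the Proxmox host
# vma extract /var/tmp/vzdump-qemu-100-*.vma /var/tmp/ex
# losetup -fP --show /var/tmp/ex/disk-drive-scsi0.raw
/dev/loop0
# mount /dev/loop0p1 /mnt
# chroot /mnt
```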
I copied the data onto a portable USB disk (my, can those things be slow!) and created a MySQL dump of the database. All in all some 20 GB of data were shuffled in and out.
This morning, then finally, I discovered an absolutely gorgeous little tool called mysql2sqlite with which I created a SQLite database file which I zipped up and sent to my friend with the instruction to install DB Browser for SQLite for viewing the database.
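For the curious, mysql2sqlite's documented usage is a pipeline into `sqlite3`; the database and file names here are hypothetical:

```
% mysqldump --skip-extended-insert --compact cmsdb > cms.sql
% ./mysql2sqlite cms.sql | sqlite3 cms.sqlite
```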
It’s been a very long time since I’ve done this type of messing about, but it was fun, if exhausting.
helping :: 31 Aug 2025 :: e-mail
# Blast from the past: Facit A2400 terminal
The year is (roughly) 1989, and we have a small office with some Unix computers and a handful of Facit A2400 terminals connected to them. What we love most about these terminals is they are positive terminals with black text on a white background. We “grew up” with green on black and, later at Nixdorf Computer AG, with amber on black.
Young readers might be surprised we used to get printed manuals with our terminals (and computers); don’t forget, the Web didn’t exist then, and “download the manual as PDF” hadn’t yet been invented.
I spent countless hours developing a special curses library (dubbed “Ecurses”) with special input functions etc. for customers to whom we also recommended these terminals.
When we gave up that office, I took two or three of the terminals along, even a brand new one which, stupidly, I dumped at recycling many years later. Only one of the Facits permanently in use survived; the dirty case and the old Duesseldorf zip code on the service sticker are proof.
The year is now (exactly) 2025, and Martin S. of the Linuxhotel has the idea of setting up an “old” terminal to show trainees in Unix beginner courses what our life was once like. I tell him the story of the Facits and promise to bring one along to lend to him.
Obviously I’m not going to schlepp this very heavy terminal to Essen just to determine it no longer works, so I configure a Shuttle PC with OpenBSD and set up `com0` as the console at 19200 baud; the Facit can do double that, but I thought this would be a compromise between “speed” and “seeing slowish output” for noobs.
% cat /etc/boot.conf
stty com0 19200
set tty com0
% grep tty00 /etc/ttys
tty00 "/usr/libexec/getty std.19200" vt220 on secure
The most difficult part of the whole operation was finding the correct cable, obviously. I used to be inundated in cables but got rid of most many years ago. Luckily a trip to the cellar uncovered the needed combination, so I booted up.
Astute readers will notice there’s no ESCape key on the keyboard, but it can be mapped to the compose key, and for those who want to avoid doing that, ESCape is also, and has always been, `CTRL-[` (octal 33, hex 1B, decimal 27).
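A quick way to convince oneself of that equivalence, nothing Facit-specific about it:

```shell
# CTRL-[ emits the very same byte as the ESCape key:
# octal 033, hex 1B, decimal 27.
printf '\033' | od -An -to1 | tr -d ' '    # prints: 033
```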
I have decided to make this a permanent loan to the lovely people at the Linuxhotel and very much hope younger generations will have the opportunity of experiencing what we used to work with.
retro, openbsd, unix, and historic :: 26 Aug 2025 :: e-mail
# Accessing KeePassXC password stores with an Ansible lookup plugin
I commented on what an amazing and fast job the people of media.CCC do on streaming and subsequent encoding at FrOSCon, say, and Stefan mentioned they do so mostly with open source tooling. That page and some of what it links to piqued my interest, and that’s where I found another Keepass lookup plugin for Ansible. (Note that the project mentions their use of Ansible is outdated.)
The plugin isn’t documented, so I thought I’d write up a few notes on how to use it.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - debug: msg="{{ lookup('keepass', 'Moo.password') }}"
    - debug: msg="I occasionally like eating {{ lookup('keepass', 'Moo.attr_food') }}"
Note how I can retrieve the entry’s password (as `"password"`) and any additional attribute contained in the entry, prefixing the attribute’s name with the constant `"attr_"`, hence `".attr_food"` in the second lookup.
The database contains one entry:
% keepassxc-cli export --format csv t.kdbx
Enter password to unlock t.kdbx:
"Group","Title","Username","Password","URL","Notes","TOTP","Icon","Last Modified","Created"
"Root","Moo","jpmens","supersecret","","","","0","2025-08-16T15:14:39Z","2025-08-16T15:13:56Z"
The plugin expects the path to the database and its password in appropriate environment variables, so I then run our playbook:
% export KEEPASS=t.kdbx
% export KEEPASS_PW="media.ccc"
% ansible-playbook jp.yml
TASK [debug] *********************************************************************************
ok: [localhost] => {
"msg": "supersecret"
}
TASK [debug] *********************************************************************************
ok: [localhost] => {
"msg": "I occasionally like eating Döner"
}
### Further reading
- ansible-keepass, a (distinct) Ansible lookup plugin to fetch data from a KeePass file
- Notes to self: KeePassXC
keepassxc, ansible, and secrets :: 17 Aug 2025 :: e-mail
# If only I'd known ... Debian repo signing
Unless you’re not interested in Debian Linux at all, you’ll have heard that version 13 (“trixie” – remind me to tell you why that name’s hilarious to me when we next have a beer together) is out, and as such we, that’s Christoph and I, created OwnTracks Recorder packages for it.
The first install on a naked Debian 13 machine produced a diagnostic message, but it turns out we could ignore that for now:
Policy will reject signature within a year, --audit for details
Audit: Sub-process /usr/bin/sqv returned an error code (1), error message is:
Signing key on EAB5C42B35B2139B9CD0CD14BE1675153E0A5116 is not bound:
No binding signature at time 2025-08-12T10:45:25Z because:
Policy rejected non-revocation signature
(PositiveCertification) requiring second pre-image resistance because:
SHA1 is not considered secure since 2026-02-01
The reason is, the GPG key we use (used) for signing packages had an SHA1 hash on it, and Debian is slated to deprecate that early 2026. I wanted to fix that now so as to not have our users encounter problems later.
I later determined the diagnostic is being issued by Sequoia-PGP (but why the date above is reported as 2026 is beyond me):
$ sq inspect oldkey.gpg
oldkey.gpg: OpenPGP Keyring.
OpenPGP Certificate.
Fingerprint: EAB5C42B35B2139B9CD0CD14BE1675153E0A5116
Invalid: No binding signature at time 2025-08-14T14:49:58Z: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance, because SHA1 is not considered secure since 2023-02-01T00:00:00Z
Public-key algo: RSA
Public-key size: 4096 bits
Creation time: 2016-01-28 11:08:16 UTC
I mentioned yesterday that my ignorance of most things ‘packaging’ is boundless, and I quickly proved that! :-(
I generated a new signing key, changed our reprepro configuration to use that key, and we were done. Well, almost.
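For reference, the signing key reprepro uses is set with the `SignWith` field in `conf/distributions`; the codename and architectures below are illustrative, and `NEWKEYID` stands in for the actual fingerprint:

```
Codename: trixie
Architectures: amd64 arm64 source
Components: main
SignWith: NEWKEYID
```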
The problem is that the repository can no longer be verified with the key that’s already on client machines, as that key has now changed. A Catch-22 type of situation, which I “solved” by issuing a warning that a new key had to be downloaded before the next update. This is obviously ugly, but there was nothing I thought could be done about it.
However, the topic didn’t leave me alone. There MUST be some mechanism by which a key “rollover” can be done. Several helpful people proposed solutions, but after a bit of digging, it turns out that a repository source can actually point to a keyring which contains distinct keys.
Types: deb
URIs: http://repo.example.org/debian/
Suites: trixie
Components: main
Signed-By: /usr/share/keyrings/twokeys.gpg
The secret is in the `twokeys.gpg` file, which I create as follows:
$ gpg --output twokeys.gpg --export name1 name2
Alternatively, I can specify multiple paths to keyrings in the deb822 sources file:
> It is specified as a list of absolute paths to keyring files and fingerprints of keys to select from these keyrings
Signed-By: /path1 /path2 fingerprint1 fingerprint2
To be quite sure that this would have worked, I re-signed our repository with the old key and configured a new Debian 13 machine to use it. After installing a package, I reconfigured the repo source to the URL of the repo signed with the new key, and an `apt update` later, I could install a new package from the repository without warnings or error message.
Next time I would probably also have a package containing keys only, as Ondřej suggested; this would doubtlessly ease distribution of new keys. I also like his idea of not having this package as a dependency on the actual software I want to distribute, meaning people can decide to install the keys by other means if they prefer.
Nice. Too late for our OwnTracks issue, but nice to know for the future.
Oh, and another thing I wrote yesterday:
> This is not a ‘periodic reminder’, as I’ve not got it scheduled, but I do occasionally want to say that Unix/Linux package maintainers are the unsung heroes of my IT world!!
I appreciate ideas offered by two Martins, Ondřej, Anton, and Zhenech.
debian and owntracks :: 14 Aug 2025 :: e-mail
# Git credentials helper
When I recently began using Opengist, I wanted to be able to clone gist repositories to the file system so as to update their files, commit them, and push them back. I purposely disabled SSH access to my Opengist server, leaving HTTPS as the method of choice, but how to automate credential submission?
I can use gitcredentials to provide usernames and passwords to Git.
- I create a file in which I store a “secret” in clear text
echo "secret" > ~/.gist-secret
chmod 400 ~/.gist-secret
- I configure git to use that credential for a particular URL. Git matches the URL (with scheme and optional port number)
[credential "https://example.com"]
username = jpm
helper = "!f() { test \"$1\" = get && echo \"password=$(cat $HOME/.gist-secret)\"; }; f"
- Git will now automatically use the configured credentials, and I don’t have to otherwise specify username / password for cloning, committing, or pushing.
git clone https://example.com
- There are more examples in the documentation.
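To see what Git gets back from that helper, the shell function can be exercised on its own; here a temporary file stands in for `~/.gist-secret` so nothing real is touched:

```shell
# Stand-alone sketch of the credential helper function from the
# [credential] stanza above; secretfile substitutes for ~/.gist-secret.
secretfile=$(mktemp)
printf 'secret\n' > "$secretfile"
f() { test "$1" = get && echo "password=$(cat "$secretfile")"; }
f get    # prints: password=secret
```

Git invokes the helper with `get` (and, on success or failure, `store` or `erase`); the function only answers the `get` operation, which is all that’s needed here.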
However, there was a detail missing: I want to be able to use these credentials and the configuration on a couple of machines. I have a directory with files I sync across these machines using syncthing. Would I be able to keep this configuration in such a way that I can include it from `~/.gitconfig` ?
Yes, that is trivial. In my `~/.gitconfig` (which can differ across machines) I can include a file. So I move the appropriate configuration into a separate directory in the synchronized directory, and in my `~/.gitconfig` I add:
[include]
path = ~/syncdir/git/gitconfig.include
In that directory I have the `.gist-secret` file and the include file with the `[credential]` stanza as described above.
git and gist :: 11 Aug 2025 :: e-mail
#### Other recent entries
- 20.07 tmux and gist and trying to make students happier
- 19.05 Migrating BIND9 auto-dnssec to dnssec-policy
- 04.04 Overriding GnuPG's PIN entry
- 04.04 Forwarding GnuPG agent over SSH
- 27.03 A very theoretical scenario, DNS edition
- 25.03 SSH keys from a command: sshd's AuthorizedKeysCommand directive
- 16.03 DNSSEC Policy and Key template support in NetBox DNS
- 04.03 A look at DNS hosting with deSEC
- 26.02 Notes to self: signing git commits with an SSH key
- 22.02 Notes to self on OctoDNS and its providers
- 21.02 NetBox and Configuration Contexts/Templates
- 19.02 Netbox, DNS, and a pinch of OctoDNS
- 09.02 Notes to self on NetBox and Ansible
- 08.02 Ansible one hundred
- 25.01 Create a new issue in a Github repository with Ansible
- 24.01 Geolocation in Ansible Local Facts
- 23.01 Uploading a message to an IMAP server using curl
- 21.12 Tagging POI locations with photos from OwnTracks
- 25.10 My small kit of tools
- 24.10 Notes to self: GNOME keyring and libsecret