A quick follow-up to yesterday’s post: We have control over the IRC Channels again!
Thank you Freenode support team!
Dear Void Users,
We have a problem. In the last few months people have been complaining about the lack of management capabilities in the Void Core Team. We have been aware of the problem, and it’s time to explain the situation.
The current project leader has disappeared. We have had no contact with him since the end of January, and no meaningful contact for well over a year. On its own, this would be concerning, but no threat to the project.
The problem is that we currently have no ability to manage some of Void’s central resources. In the past, they were managed exclusively by the former project leader. Namely:
We contacted Github, but they declined to help us regain access to the organisation. This is really unfortunate, as Github has grown into a tool for both source and community management. It has raised the question of whether Github is still the best option for us, but we are continuing with the platform for the foreseeable future.
We have contacted freenode support and see hope of regaining access to the VoidLinux IRC channels. IRC is an essential communication tool for the core team.
We regained partial control over voidlinux.com, but the most used domain, voidlinux.eu, is currently not under our control. It works for now, but as soon as we have to move any IP addresses, it will fail. IP addresses may need to be moved for a variety of reasons, the most obvious being an upgrade of the master build server.
Currently we are in limbo and trying to get back on track. We see no possibility of regaining access to the Github organisation, so for Github we will move to a new organisation.
We have a similar solution for the domains. We will move to a different domain and continue to support the voidlinux.eu domain as long as possible.
For the IRC Channels, we will try to get in contact with freenode and regain access. We are hoping that freenode support will be open to our request.
We have learned our lesson: in the future, no single person will have exclusive access to Void’s resources.
Furthermore, we’re in contact with a non-profit organisation that helps open source projects manage donations and other resources. We hope to announce further details in a few weeks.
For now, just be aware that the engineering work to help mitigate our problems is underway. This work is consuming the full resources of two senior contributors, and has unfortunately also led to longer PR review times.
A new XBPS stable version has been released: 0.52. This is a major release that brings a few new features and bugfixes.
Because many ponies enjoyed The Advent of Void: Day 25: ponysay post, we decided to release a new Void Linux image with all your friends on board.
Happy Easter to everypony!
Who among you ever wanted a pony for Christmas? Turns out, Void Linux already includes some. Don’t worry, they are just virtual, yet only a command away:
# xbps-install ponysay
ponysay features over 400 illustrations of My Little Pony for your terminal. Look at all of them using
% ponysay-tool --browse /usr/share/ponysay/ponies
You can even make the ponies quote themselves.
Lots of fun for everypony! Whee!
cron(8) is a nice tool, but it has some long-standing problems, among them:
A small but flexible alternative is snooze(1), which essentially just waits for a particular time and then executes a command. To get recurring jobs à la cron, we can use it together with our runit service supervision suite. If we want at(1)-style one-shot jobs instead, we can just run snooze once.
The time for snooze is given using the options
-d (for day),
-m (for month),
-w (for weekday),
-D (for day of year),
-W (for ISO week),
-H (for hour),
-M (for minute), and
-S (for second).
Each of these options can take a comma-separated list
of values, ranges (with -), or repetitions (with /).
The default is daily at midnight,
so if we wanted to run at the next full hour instead, we could run:
% snooze -n -H'*'
2017-12-24T17:00:00+0100 Sun 0d 0h 47m 33s
2017-12-24T18:00:00+0100 Sun 0d 1h 47m 33s
2017-12-24T19:00:00+0100 Sun 0d 2h 47m 33s
2017-12-24T20:00:00+0100 Sun 0d 3h 47m 33s
2017-12-24T21:00:00+0100 Sun 0d 4h 47m 33s
The -n option disables the actual execution and shows the next five
matching times instead.
To run every 15 minutes, we’d use
% snooze -n -H'*' -M/15
2017-12-24T16:15:00+0100 Sun 0d 0h 1m 31s
2017-12-24T16:30:00+0100 Sun 0d 0h 16m 31s
2017-12-24T16:45:00+0100 Sun 0d 0h 31m 31s
2017-12-24T17:00:00+0100 Sun 0d 0h 46m 31s
2017-12-24T17:15:00+0100 Sun 0d 1h 1m 31s
More complicated things are possible, for example next Friday the 13th:
% snooze -n -w5 -d13
2018-04-13T00:00:00+0200 Fri 108d 6h 45m 33s
2018-07-13T00:00:00+0200 Fri 199d 6h 45m 33s
no satisfying date found within a year.
Note that snooze bails out if it takes more than a year for the event to happen.
By default, snooze will just terminate successfully, but we can give it a command to run instead:
% snooze -H'*' -M'*' -S30 date
Sun Dec 24 16:27:30 CET 2017
When snooze receives a SIGALRM, it immediately runs the command.
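For instance, a waiting snooze can be triggered early by hand. Locating the process via pgrep is an assumption about your setup:

```shell
# Send SIGALRM to every running snooze, forcing immediate execution
# of its command. xargs -r makes this a no-op when pgrep matches
# no process at all.
pgrep -x snooze | xargs -r kill -s ALRM
```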
snooze is quite robust: every 5 minutes it checks that the time has progressed as expected, so changes to the system time (or the timezone) are noticed.
For additional robustness, you can use the timefile option,
which ensures a job is not started if it’s earlier than the
modification time of a given file. On success, your job can then touch this
file to unlock the next iteration. Together with the slack option,
this can be used for anacron-style invocations that ensure a task is
run, for example, every day at some point.
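A sketch of such an anacron-style runit service. The service path, the backup job itself, and the flag spelling (-t for the timefile, -T for the slack in seconds) are taken from my reading of snooze(1) and should be verified locally:

```shell
#!/bin/sh
# Hypothetical /etc/sv/daily-backup/run script: run the job once a
# day (midnight is the default), but skip the run if the timefile
# shows it already happened within the 12-hour slack. runit restarts
# this script after each exit, which gives the cron-like loop.
exec snooze -H0 -T 43200 -t timefile \
    sh -c 'rsync -a /home /backup && touch timefile'
```

Because the job only touches the timefile on success, a failed backup is retried on the next service restart instead of being silently skipped.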
Day 23, almost the end of the year. You may consider that it’s time to look
back and take stock of your month, or your year. An accounting, if you will.
Fortunately we have a package or two for that: ledger and hledger. The
second one is a rewrite of the first in Haskell, while the first is the one
that sets the spec.
I must apologize in advance: I am not an accountant, so I may confuse the powerful concepts on which accounting depends. Please bear with me, and feel free to let me know of any corrections.
These are accounting tools with powerful features I never need to use and web interfaces I don’t need (but maybe others in our lives would desire), yet they are easy to use with plain text files in a simple format.
To pass a file to ledger (or hledger), just call
ledger -f path/to/ledger.file, and make sure the file contains entries (or even just
one) of the format:
2017-12-25 My true love gave to me
    Equity:TrueLove    -1 partridge
    Assets              1 partridge
The notion behind the ledger format is the same as double entry accounting. What goes in must come out, or everything must come from somewhere. If you take $5 from one place, it has to go somewhere else, in that same transaction.
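That zero-sum rule can be sketched in plain shell: sum one transaction’s posting amounts and insist they cancel out. The entry text, account names, and amounts below are illustrative, not real ledger output:

```shell
# Toy check of the double-entry invariant: within one transaction,
# the posting amounts must sum to exactly zero.
entry='Equity:TrueLove -5
Assets 3
Assets:Tom 2'

# awk sums the second column (the amount) across all postings.
total=$(printf '%s\n' "$entry" | awk '{ sum += $2 } END { print sum+0 }')

if [ "$total" -eq 0 ]; then
    echo "balanced"
else
    echo "unbalanced by $total"
fi
```

ledger itself performs essentially this check (plus unit and currency handling) and rejects any transaction that fails it.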
2017-12-26 My true love gave to me and I paid back Tom
    Equity:TrueLove    -2 turtledove
    Assets              1 turtledove
    Assets:Tom          1 turtledove
For instance, the following ledger entry will throw an error because the amounts don’t cancel out!
2017-12-27 My true love gave to me
    Equity:TrueLove    -$3
    Assets              $1
    Assets:Tom          $1
(I switched to units where I could be certain the double entry mechanisms are checked. They don’t seem to be for french_hens.)
The ledger manual has a lot of information about all the things ledger supports, including inline maths and stock prices.
May your books forever be balanced!
We already covered other cleaning tools like
ncdu, and probably everybody has used some form of
du -h ... | sort -h as well, but there is another little gem I’d like you to know about.
QDirStat uses a treemap to display used disk space.
# xbps-install qdirstat
Files are represented as little boxes. The color hints at the file type, and the covered area corresponds to the file size. QDirStat then tries to group the files within a folder into one rectangle, and this is done for the whole hierarchy. The per-folder cushion shading guides your eyes and makes it easy to recognise related files.
The treemap is interactive. To find out which file belongs to a box, you can just
click on it. With the
Alt + ↑ shortcut, you can go one level up in the file
hierarchy; at the same time, a box is painted around the whole folder.
With qdirstat-cache-writer (a separate package with minimal dependencies) you can
collect file sizes on remote or headless machines.
On the remote machine install the package and scan your disk with:
# xbps-install qdirstat-cache-writer
# qdirstat-cache-writer /path/of/interest cache-file.cache.gz
You can transfer the
cache-file via ssh or any other method and just throw it at QDirStat:
# scp remote:/path/to/cache-file.cache.gz .
# qdirstat --cache cache-file.cache.gz
You can still examine the hierarchy, but you lose the ability to run the clean-up actions.
At the time of writing QDirStat had no open issues or pull requests. I’m not aware of any obvious bugs. It’s a unique and solid tool. My disks thank Stefan Hundhammer for his amazing work.
But today, I want to talk about neatvi, a reimplementation of vi from scratch with a minimal footprint (fewer than 6 kLOC); it doesn’t even need ncurses! Nevertheless, it supports UTF-8 and even bidirectional text editing, and it has good coverage of the POSIX vi feature set.
Of course, it doesn’t provide all the bells and whistles of vim and friends, but it adds a few important features on top of plain vi, such as infinite undo/redo, basic syntax highlighting, and a partial implementation of ex(1).
It’s a nice editor for limited environments such as embedded devices or recovery systems, and for people who like unbloated software.
When you have multiple machines, sometimes you’ll want to run the same
commands on all of them. There are many tools for this job, ranging
from for loops in the shell to full-fledged configuration
management systems such as Puppet or Chef.
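The for-loop end of that spectrum is easy to sketch. Here run() is a local stand-in for ssh so the snippet can be tried anywhere, and the host names are placeholders:

```shell
# Sequential fan-out: run the same command once per host.
hosts="alpha.example.org beta.example.org"

# Stand-in for: ssh "$1" "$2" -- swap back in to target real machines.
run() { sh -c "$2"; }

for h in $hosts; do
    printf '%s: ' "$h"
    run "$h" 'echo up'
done
```

This works, but it is strictly sequential and has no error handling or output collection, which is exactly the gap shmux fills.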
A good compromise is shmux(1), the shell multiplexer.
For example, we can measure the uptimes of my servers,
passing the command with -c:
% shmux -c uptime vuxu.org firstname.lastname@example.org hecate.home.vuxu.org
vuxu.org: 15:48:01 up 91 days, 6:05, 45 users, load average: 0.55, 0.47, 0.40
email@example.com: 15:48:07 up 502 days, 19:29, 1 user, load average: 0.37, 0.29, 0.29
hecate.home.vuxu.org: 15:48:03 up 225 days, 5:51, 2 users, load average: 0.06, 0.03, 0.05
3 targets processed in 2 seconds.
Summary: 3 successes
shmux is quite clever about this; e.g., if we make a mistake and the command fails, it stops and asks us what to do:
shmux -c oopstime localhost vuxu.org firstname.lastname@example.org hecate.home.vuxu.org
localhost! zsh:1: command not found: oopstime
shmux! Child for localhost exited with status 127
-- [PAUSED], 3 Pending/0 Failed/1 Done -- [1.7, 1.8]
?
>> Available commands:
>> q       - Quit gracefully
>> Q       - Quit immediately
>> <space> - Pause (e.g. Do not spawn any more children)
>> 1       - Spawn one command, and pause if unsuccessful
>> <enter> - Keep spawning commands until one fails
>> +       - Always spawn more commands, even if some fail
>> F       - Toggle failure mode to "quit"
>> S       - Show current spawn strategy
>> p       - Show pending targets
>> r       - Show running targets
>> f       - Show failed targets
>> e       - Show targets with errors
>> s       - Show successful targets
>> a       - Show status of all targets
>> k       - Kill a target
a
>>    error: localhost
>>  pending: vuxu.org
>>  pending: email@example.com
>>  pending: hecate.home.vuxu.org
Q
1 target processed (out of 4) in 89 seconds.
Summary: 3 unprocessed, 1 error
Error   : localhost
Commands can be spawned in parallel by using the -M option to set the maximum number of simultaneous children.
By default, shmux spawns the first command on its own, to check it early.
Let’s say we want to keep the outputs, so we use the -o option to write them into a directory:
% shmux -M10 -o uptimes -c uptime localhost vuxu.org firstname.lastname@example.org hecate.home.vuxu.org
...
% ls uptimes
hecate.home.vuxu.org.exit    'email@example.com'
hecate.home.vuxu.org.stderr  'firstname.lastname@example.org'
hecate.home.vuxu.org.stdout  'email@example.com'
localhost.exit               vuxu.org.exit
localhost.stderr             vuxu.org.stderr
localhost.stdout             vuxu.org.stdout
The -a and -A options can be used to define
analyzers for the outputs, to check whether everything worked fine.
% shmux -o uptimes -a regex -A up -c uptime ...
shmux is a useful tool for ad-hoc command execution, as it requires no configuration and has sensible defaults.