The 10 most recent news entries (see index)

May 02, 2018

Regained control of IRC

A quick follow-up to yesterday’s post: We have control over the IRC Channels again!

Thank you Freenode support team!

May 01, 2018

Serious Issues

Dear Void Users,

We have a problem. In the last few months people have been complaining about the lack of management capabilities in the Void Core Team. We have been aware of the problem, and it’s time to explain the situation.

The current project leader has disappeared. We have had no contact with him since the end of January, and no meaningful contact for well over a year. On its own this would be concerning, but not a threat to the project.

The problem is that we currently have no ability to manage some of Void’s central resources. In the past, they were managed exclusively by the former project leader. Namely:

  • The Void Linux Github Organisation.
  • The IRC Channels.
  • The domains.

What have we been doing?

Github

We contacted Github, but they declined to help us regain access to the organisation. This is really unfortunate, as Github has grown into a tool for both source and community management. It has raised the question of whether Github is still the best option for us, but we are continuing with the platform for the foreseeable future.

Freenode IRC

We have contacted freenode support and see hope of regaining access to the VoidLinux IRC channels. IRC is an essential communication tool for the core team.


Domains

We regained partial control over the domains, but the most used domain is currently not under our control. It works for now, but as soon as we have to move any IP addresses, it will fail. IP addresses may need to be moved for a variety of reasons, the most obvious being an upgrade of the master build server.

What’s going to happen?

Currently we are in limbo and are trying to get back on track. We see no possibility of regaining access to the Github organisation, so on Github we will move to a new organisation.

We have a similar solution for the domains. We will move to a different domain and continue to support the domain as long as possible.

For the IRC Channels, we will try to get in contact with freenode and regain access. We are hoping that freenode support will be open to our request.

How will we mitigate these issues?

We have learned our lesson. In the future, no single person will have exclusive access to Void’s resources.

Furthermore, we’re in contact with a non-profit organisation that helps open source projects manage donations and other resources. We hope to announce further details in a few weeks.

For now, just be aware that the engineering work to mitigate these problems is underway. It is consuming the full attention of two senior contributors, and has unfortunately also led to longer PR review times.

If you have any questions, you can contact us via the forum, Twitter, or IRC.

April 30, 2018

XBPS 0.52 is out

A new XBPS stable version has been released: 0.52. This is a major release that brings a few new features and bugfixes:

  • Memory leak fixes, found and fixed by Duncaen
  • Proxy authentication fixed; reported by pulux, fixed by Gottox
  • mmapping of big files fixed; reported by Leah Neukirchen, fixed by Gottox
  • xbps-create supports a new --changelog field
  • xbps-rindex now creates a staging repository index if it detects inconsistent shlibs
  • many more; see NEWS for a complete list

April 01, 2018

My Little Void

Because many ponies enjoyed The Advent of Void: Day 25: ponysay post, we decided to release a new Void Linux image with all your friends on board.

Screenshot of MyLittleVoid(1)

Happy Easter to everypony!

December 25, 2017

The Advent of Void: Day 25: ponysay

Who of you ever wanted a pony for Christmas? Turns out, Void Linux already includes some. Don’t worry, they are only virtual, yet just a command away:

# xbps-install ponysay

Screenshot of ponysay(1)

ponysay features over 400 illustrations of My Little Pony for your terminal. Look at all of them using

% ponysay-tool --browse /usr/share/ponysay/ponies

You even can make the ponies quote themselves using ponysay -q.

Lots of fun for everypony! Whee!

December 24, 2017

The Advent of Void: Day 24: snooze

cron(8) is a nice tool, but it has some long-standing problems, among them:

  • The cron/crond design requires setuid.
  • Making cronjobs not overlap requires additional work.
  • It’s not possible to trigger a cronjob to run now, instead of the next scheduled time.
  • The crontab syntax is confusing (if you think this is not true, do you know about %?).

A small but flexible alternative is snooze(1), which essentially just waits for a particular time and then executes a command. To get recurring jobs à la cron, we can combine it with our runit service supervision suite. If we want at(1)-like behaviour instead, we can just run snooze once.
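
As a sketch of that runit pairing (the service name and task path here are invented for illustration), a run script only needs to exec snooze in front of the job:

```shell
#!/bin/sh
# Hypothetical /etc/sv/nightly-task/run script.
# snooze blocks until the next 03:30, then execs the task; when the
# task exits, runit restarts this service, so the job recurs daily.
exec snooze -H3 -M30 /usr/local/bin/nightly-task
```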

The time for snooze is given using the options -d (day of month), -m (month), -w (weekday), -D (day of year), -W (ISO week), -H (hour), -M (minute), and -S (second). Each of these options can take a comma-separated list of values, ranges (with -), or repetitions (with /). The default is daily at midnight, so if we wanted to run at the next full hour instead, we could run:

% snooze -n -H'*'
2017-12-24T17:00:00+0100 Sun  0d  0h 47m 33s
2017-12-24T18:00:00+0100 Sun  0d  1h 47m 33s
2017-12-24T19:00:00+0100 Sun  0d  2h 47m 33s
2017-12-24T20:00:00+0100 Sun  0d  3h 47m 33s
2017-12-24T21:00:00+0100 Sun  0d  4h 47m 33s

The -n option disables the actual execution and shows the next five matching times instead.

To run every 15 minutes, we’d use

% snooze -n -H'*' -M/15  
2017-12-24T16:15:00+0100 Sun  0d  0h  1m 31s
2017-12-24T16:30:00+0100 Sun  0d  0h 16m 31s
2017-12-24T16:45:00+0100 Sun  0d  0h 31m 31s
2017-12-24T17:00:00+0100 Sun  0d  0h 46m 31s
2017-12-24T17:15:00+0100 Sun  0d  1h  1m 31s

More complicated things are possible, for example next Friday the 13th:

% snooze -n -w5 -d13
2018-04-13T00:00:00+0200 Fri 108d  6h 45m 33s
2018-07-13T00:00:00+0200 Fri 199d  6h 45m 33s
no satisfying date found within a year.

Note that snooze bails out if it takes more than a year for the event to happen.

By default, snooze will just terminate successfully, but we can give it a command to run instead:

% snooze -H'*' -M'*' -S30 date 
Sun Dec 24 16:27:30 CET 2017

When snooze receives a SIGALRM, it immediately runs the command.
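
To see these signal semantics without snooze itself, here is a stand-in sketch: a small shell process that, like snooze, waits but acts as soon as SIGALRM arrives:

```shell
# A stand-in for snooze: a shell that waits, but runs its command
# (here just an echo) immediately when it receives SIGALRM.
sh -c 'trap "echo triggered; exit 0" ALRM; sleep 10 & wait' &
pid=$!
sleep 1              # give the trap a moment to be installed
kill -ALRM "$pid"    # like: pkill -ALRM snooze
wait "$pid"          # prints "triggered" right away, not after 10s
```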

snooze is quite robust: it checks every five minutes that the time has progressed as expected, so if you change the system time (or the timezone changes), it notices.

For additional robustness, you can use the timefile option -t, which ensures a job is not started if its scheduled time is earlier than the modification time of a file. On success, your job can then touch this file to unlock the next iteration. Together with the slack option, this can be used for anacron-style invocations that ensure a task is run, for example, at some point every day.
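
A hedged sketch of such a run script (the paths and job name are invented; only the -t behaviour described above is assumed):

```shell
#!/bin/sh
# Hypothetical anacron-style run script: -t skips the job if its
# scheduled time is earlier than the timefile's modification time,
# and touching the timefile on success unlocks the next iteration.
exec snooze -t /var/lib/nightly/timefile \
    sh -c 'run-nightly-job && touch /var/lib/nightly/timefile'
```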

December 23, 2017

The Advent of Void: Day 23: ledger

Day 23, almost the end of the year. You may consider it time to look back and take stock of your month, or your year. An accounting, if you will. Fortunately, we have a package or two for that: ledger and hledger. The second is a rewrite of the first in Haskell, while the first is the one that sets the spec.

I must apologize in advance: I am not an accountant, so I may confuse the powerful concepts on which accounting depends. Please bear with me, and feel free to let me know of any corrections.

These are accounting tools with powerful features I never need to use and web interfaces I don’t need (though maybe others in our lives would desire them), but they are easy to use with plain text files in a simple format.

To pass a file to ledger (or hledger), just call ledger -f path/to/ledger.file, and make sure the file contains entries (or even just one) in the format:

2017-12-25	My true love gave to me
	Equity:TrueLove		-1 partridge
	Assets			1 partridge

The notion behind the ledger format is the same as double-entry accounting: what goes in must come out, or everything must come from somewhere. If you take $5 from one place, it has to go somewhere else, in that same transaction.

2017-12-26	My true love gave to me and I paid back Tom
	Equity:TrueLove		-2 turtledove
	Assets			1 turtledove
	Assets:Tom		1 turtledove
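
The balancing rule can be sketched in a few lines of shell (a toy check, not ledger’s actual implementation; the amounts are those of the turtledove transaction above):

```shell
# Toy double-entry check: the posting amounts of a single transaction
# must sum to zero, otherwise the books don't balance.
total=0
for amount in -2 1 1; do
    total=$((total + amount))
done
if [ "$total" -eq 0 ]; then
    echo balanced          # -2 + 1 + 1 = 0, so this prints "balanced"
else
    echo "off by $total"
fi
```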

For instance, the following ledger entry will throw an error because the amounts don’t balance!

2017-12-27	My true love gave to me
	Equity:TrueLove		-$3
	Assets			$1
	Assets:Tom		$1

(I switched to units where I could be certain the double-entry mechanisms are checked. They don’t seem to be for french_hens.)

The ledger manual has a lot of information about all the things ledger supports, including inline maths and stock prices.

May your books forever be balanced!

December 22, 2017

The Advent of Void: Day 22: QDirStat

We already covered other cleaning tools like ncdu, and probably everybody has used some form of du -h ... | sort -h as well, but there is another little gem I’d like you to know about.

QDirStat - Qt-based directory statistics (KDirStat without any KDE - from the original KDirStat author)

QDirStat uses a treemap to display used disk space.

# xbps-install qdirstat

Files are represented as little boxes: the color hints at the file type, and the covered area corresponds to the file size. QDirStat then groups the files within a folder into one rectangle, and does so for the whole hierarchy. The per-folder cushion shading guides your eyes and makes it easy to recognise related files.

QDirStat screenshot with treemap

The treemap is interactive. To find out which file a box belongs to, you can just click on it. With the Alt + ↑ shortcut, you can go one level up in the file hierarchy; at the same time, a box is painted around the whole folder.

qdirstat-cache-writer - Collecting remote file statistics

With qdirstat-cache-writer (separate package, minimal dependencies) you can collect file sizes on remote or headless machines.

On the remote machine install the package and scan your disk with:

# xbps-install qdirstat-cache-writer
# qdirstat-cache-writer /path/of/interest cache-file.cache.gz

You can transfer the cache file via ssh or any other method and just feed it to qdirstat locally:

# scp remote:/path/to/cache-file.cache.gz .
# qdirstat --cache cache-file.cache.gz

You can still examine the hierarchy, but you lose the ability to run the clean-up actions.

Extra bits

At the time of writing, QDirStat had no open issues or pull requests, and I’m not aware of any obvious bugs. It’s a unique and solid tool. My disks thank Stefan Hundhammer for his amazing work.

December 21, 2017

The Advent of Void: Day 21: neatvi

On Void, we have many clones of beloved vi(1) such as vim, neovim, nvi, vile, busybox vi, and of course the original ex-vi.

But today, I want to talk about neatvi, a reimplementation from scratch with a minimal footprint (fewer than 6kLOC); it doesn’t even need ncurses! Nevertheless, it supports UTF-8, even editing bidirectional text, and generally has good coverage of the POSIX vi feature set.

Of course, it doesn’t provide all the bells and whistles of vim and friends, but it adds a few important features on top of plain vi, such as infinite undo/redo, basic syntax highlighting, and a partial implementation of ex(1).

It’s a nice editor for limited environments such as embedded devices or recovery systems, or for people who like unbloated software.

December 20, 2017

The Advent of Void: Day 20: shmux

When you have multiple machines, sometimes you’ll want to run the same commands on all of them. There are many tools for this job, starting from simple for loops on the shell to full-fledged configuration management systems such as Puppet or Chef.

A good compromise is shmux(1), the shell multiplexer.

For example, we can check the uptimes of several servers, passing the command with -c:

% shmux -c uptime
    15:48:01 up 91 days,  6:05, 45 users,  load average: 0.55, 0.47, 0.40
    15:48:07 up 502 days, 19:29,  1 user,  load average: 0.37, 0.29, 0.29
    15:48:03 up 225 days,  5:51,  2 users,  load average: 0.06, 0.03, 0.05

3 targets processed in 2 seconds.
Summary: 3 successes

shmux is quite clever about this: if we make a mistake and the command fails, it stops and asks us what to do:

% shmux -c oopstime localhost
           localhost! zsh:1: command not found: oopstime
               shmux! Child for localhost exited with status 127
-- [PAUSED], 3 Pending/0 Failed/1 Done -- [1.7, 1.8]
>> Available commands:
>>       q - Quit gracefully
>>       Q - Quit immediately
>> <space> - Pause (e.g. Do not spawn any more children)
>>       1 - Spawn one command, and pause if unsuccessful
>> <enter> - Keep spawning commands until one fails
>>       + - Always spawn more commands, even if some fail
>>       F - Toggle failure mode to "quit"
>>       S - Show current spawn strategy
>>       p - Show pending targets
>>       r - Show running targets
>>       f - Show failed targets
>>       e - Show targets with errors
>>       s - Show successful targets
>>       a - Show status of all targets
>>       k - Kill a target
>>  [0]             error: localhost
>>  [1]           pending:
>>  [2]           pending:
>>  [3]           pending:

1 target processed (out of 4) in 89 seconds.
Summary: 3 unprocessed, 1 error
Error    : localhost 

Commands can be spawned in parallel when using -M max. By default, shmux spawns the first command on its own, to check it early.

Let’s say we want to keep the outputs, so we use -o:

% shmux -M10 -o uptimes -c uptime localhost
% ls uptimes    ''  ''  ''

Finally, the -a and -A options can be used to define analyzers for the outputs, to see if everything worked fine.

% shmux -o uptimes -a regex -A up  -c uptime ...

shmux is a useful tool for ad-hoc command execution, as it requires no configuration and has sensible defaults.

Copyright 2008-2018 Juan RP and contributors

Linux® is a registered trademark of Linus Torvalds (info)