Installing VirtualBox-6.0 on CentOS 7.5

This is a quick set of notes about the installation of VirtualBox-6.0 on CentOS 7.5. This OS is not explicitly listed on VirtualBox’s website, so these notes may save you some guesswork.

Reference page: .

Install the appropriate key:

sudo yum install -y wget
wget -q -O- | rpm --import -

(You might prefer curl over wget here, but wget is handy to have around anyway, so I installed it.)


Create the repository:

The right content for the repository is located at and currently it contains the following:

name=Oracle Linux / RHEL / CentOS-$releasever / $basearch - VirtualBox
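The full file content was lost from these notes, but a typical virtualbox.repo (placed under /etc/yum.repos.d/) looks roughly like the sketch below. Treat the baseurl and gpgkey values as assumptions on my part and check them against Oracle’s current instructions:

```ini
[virtualbox]
name=Oracle Linux / RHEL / CentOS-$releasever / $basearch - VirtualBox
enabled=1
gpgcheck=1
repo_gpgcheck=1
baseurl=http://download.virtualbox.org/virtualbox/rpm/el/$releasever/$basearch
gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc
```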


You can now run “yum makecache” and then search for VirtualBox with “yum search VirtualBox”:

VirtualBox-4.3.x86_64 : Oracle VM VirtualBox
VirtualBox-5.0.x86_64 : Oracle VM VirtualBox
VirtualBox-5.1.x86_64 : Oracle VM VirtualBox
VirtualBox-5.2.x86_64 : Oracle VM VirtualBox
VirtualBox-6.0.x86_64 : Oracle VM VirtualBox

Pick the version you need, in my case it’s 6.0:

sudo yum install -y VirtualBox-6.0

Accept adding Oracle’s keys to your keyring.

VirtualBox kernel module configuration:

Switch to user root, then install the appropriate tools to build VirtualBox’s kernel modules:

sudo yum install -y kernel-devel gcc make perl

And let it do its stuff. Now we can finally do:
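The command itself is missing from these notes; my assumption, based on what VirtualBox 6.0 prints after installation, is that the step that follows is rebuilding the kernel modules:

```
sudo /sbin/vboxconfig
```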



Passing Red Hat’s RHCSA certification (EX200): my experience

Just a week ago I passed the Red Hat Certified System Administrator exam, and here are some brief notes about my experience.

Why the RHCSA certification?

At first I didn’t plan on taking the exam at all. But I had been using pretty much only Debian/Xubuntu for a long, long time, and for the last two years I have been managing RHEL-based machines. And I knew nothing about that world.

After taking and passing the exam, here is why, in my opinion, the certification is worth its price:

First things first: even though it’s not explicitly stated, you are expected to know the basics of GNU/Linux to a high level of detail. An example of this, for me, was file and directory permissions. I have always known good ol’ -rwxrwxrwx and quite frankly I’ve gone reasonably far with that, but studying forced me to learn all the details of permissions: the various control bits (setuid, setgid, sticky), access control lists (ACLs) and extended attributes (getfattr/setfattr). Some things make a lot more sense to me now that I have formalized the meaning of it all.
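For instance, the special bits can be explored in a minute on a scratch directory. A quick sketch, assuming GNU coreutils:

```shell
# Create a scratch directory and play with the special permission bits.
dir=$(mktemp -d)

chmod 2770 "$dir"      # setgid on a directory: new files inherit its group
stat -c '%A' "$dir"    # prints drwxrws---

chmod +t "$dir"        # sticky bit: only a file's owner may delete it
stat -c '%A' "$dir"    # prints drwxrws--T (capital T: others have no x bit)

rmdir "$dir"
```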

SELinux was a marvel that I want to learn more about. Put simply, SELinux is the definitive way of saying “this application can do this and that, and nothing more” and have it actually enforced.

Another thing: I had to learn how to use systemd. Absolutely worth it. Quite frankly, now that I understand it, it’s not that bad. It’s quite good, actually. I like the consistency that systemd brings to many aspects of system management in the GNU/Linux world.

The last thing: the exam is performance-based. There’s no escape from this: either you have learned how to manage a RHEL system to a certain degree of detail, or you won’t pass. This is in contrast to some other certifications that are question-based (with multiple-choice answers). There are many approaches to that kind of exam, and some people will just go ahead and cheat; as some people told me, it’s feasible (and many do it). That is basically not possible at the RHCSA exam (and this is particularly true if, like me, you take the exam in Kiosk mode — more on that later).

Long story short: if you reach a passing level, you actually have those competencies, and you have enough of them to learn the missing bits (for example, some advanced SELinux features that are not needed for the exam but that may come in handy in real life).

Learning materials

I have a paid LinuxAcademy account and quite frankly, it’s worth every cent of its cost. I plan on keeping my account and renewing my subscription.

Besides that, I also got a copy of Ghori’s book about the RHEL7 RHCSA & RHCE certifications. The book is a bit meh, but since the copy I read was lent to me by a colleague, it was worth reading.

Another resource was CertDepot’s RHEL7 section. Quite frankly, I did not make much use of that content: by the time I discovered it, I was far enough along in my studies that I had no real need for it.

I also used a real, (not so) old computer. This was crucial to my passing the exam. It was a Dell Optiplex 780, nothing special: 6GB of RAM, a 160GB rotational hard disk, an Intel Core 2 Duo E8400 CPU. A slow computer that did the job. I tested pretty much all of the likely exam tasks, and then some, on this machine. Getting OpenVPN to run on it was particularly interesting, as it gave me experience with systemd units, OpenVPN being an unusual service (from the exam-preparation point of view).

Last but not least, one of the most worthwhile pieces of learning material was this post from Finnbarr P. Murphy: So you think you’re ready for the RHCSA Exam. It was eye-opening because it let me test my knowledge against somebody else’s yardstick instead of my own. The challenges were non-obvious, and I failed or found myself at a dead end on many occasions. I went back to the study materials after every task I could not do, and repeated the challenges over and over until I could do all the tasks from start to finish without any doubt.

How much did it take to prepare?

Well, first you should understand that I was studying this content while juggling a full-time job, some university courses and some social life (my SO, mostly).

That being said, I studied on-and-off for roughly six months. When I say on-and-off I mean that at one point I paused for a whole month because I had other stuff to do.

Taking out the pauses, I could probably have condensed it into three months, maybe less if I had not over-studied.

The Exam

In retrospect, LinuxAcademy’s content was enough and my approach was definitely overkill. But I passed the exam and got 300/300 (even though this does not appear on the certificate). Still satisfying, though.

Speaking about my examination experience, I have to first preface that I took the exam in Kiosk mode. That is, I was in a test center alone in a room, using only a test-center-provided laptop and there was a Red Hat proctor looking at me via two (not one) webcams.

The laptop was a ThinkPad P71 (worth noting despite being neither relevant nor useful, lol). The laptop’s webcam was pointed at me. The second, USB-connected webcam was there so that I could pick it up and show the proctor the whole room around me. I had to prove that the room was empty, that there was nothing on the walls in front of me, at my sides, or behind me. I also had to show the ceiling of the room and what was below the desk I was using. I was not allowed to keep even my watch (understandable). The proctor had video but no audio. I was instructed not to read the tasks aloud, as that might make the proctor think I was reading the questions to someone else. Understandable, and not a big deal.

Basically, I had to prove I was alone and I could only rely on my knowledge and experience to pass the exam. Super fair. I liked that.

Regarding the exam tasks… The tasks were absolutely reasonable. I cannot say more because of the NDA. But I found them to be just that: reasonable.

What next?

The next step in the Red Hat ladder is the Red Hat Certified Engineer exam. I had to give back the Ghori book but I am studying some content anyway.

I am not 100% sure I am interested in taking that exam at all.

The difference between the two exams is clear: RHCSA is about the enterprise operating system and its features, while RHCE is less about advanced operating system features and more about enterprise services (stuff like iSCSI, which is super cool and which I had only ever heard people talk about).

If I pass the RHCE, I’ll write another post 🙂

If you have questions, feel free to comment.

Learning Docker

I have been blogging very little and I’ve been thinking for a while about how to start blogging more. I have decided to publicly take some challenges, and to use this blog to write about said challenges.

Since this blog is mainly a technical blog, most of said challenges will be technical challenges.

So for an easy start, I have decided to deepen and formalize my knowledge about the Docker container runtime.

I have got from Packt Publishing a copy of “Learning Docker” and I’ll be reading all of it over the following week.


Of course, I have been using Docker for a while now, and many services I run on my home server run inside Docker. I just want to understand it better, hopefully to be able to use it at work in the future.

Stay tuned for some updates 🙂

On commenting code: external vs internal documentation

In my day job, I sometimes have to write scripts in order to automate things that either happen too often to be handled manually or need to be handled in the shortest amount of time possible. More often, though, I am tasked with modifying the behavior of an already existing script and/or adding new functionality. Needless to say, some scripts are way longer than they should be, and they do things that are not supposed to be done in a shell-scripting language (think of operating on an XML document using only the Bash shell and tools from the coreutils package). You might think that stuff like that is crazy, and yes, it is a bit crazy, but it works. Quite frankly, I got to see some rather clever solutions and approaches to unusual shell-scripting problems, and some very creative uses of more or less well-known features of Bash.

Call it masochism, but I am quite fond of reading code.

Sometimes, though, reading scripts is not so pleasant. Since shell scripting is still programming, I see a recurring pattern of poor programming in the form of poor (or non-existent) comments in the code.

I have come to think that code should really be interspersed with two kinds of documentation, and I like to call them “external” and “internal” documentation.

External documentation is supposed to answer the following questions:

  • what does this script/function do?
  • what are, if any, its side effects?
  • what parameters are needed?
  • can you give an example of how to use this script/function?
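To make this concrete, here is the kind of header comment I have in mind, attached to a made-up backup_file function (the function itself is just an illustration):

```shell
#!/bin/bash

## backup_file SRC DESTDIR
##
## What:         copies SRC into DESTDIR, appending a timestamp to the name.
## Side effects: creates DESTDIR if it does not exist.
## Parameters:   $1 - the file to back up
##               $2 - the directory that will hold the copy
## Example:      backup_file /etc/hosts /var/backups
backup_file() {
    local src=$1 dest_dir=$2
    mkdir -p "$dest_dir"
    cp "$src" "$dest_dir/$(basename "$src").$(date +%Y%m%d%H%M%S)"
}
```

A reader who has never seen the function can now call it correctly without reading its body.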

Long story short, external documentation is supposed to help the user of your code use it correctly, particularly when calling your function for the first time. Documenting parameters is especially important: in dynamic languages like Python, the name of a parameter says very little about its type, and in Bash you don’t even have to list formal parameters: whether you pass two or five of them, they are all available by position through variables like $1, $5 and so on.

Internal documentation, on the other hand, should describe the internals and the details of the implementation. We could say internal documentation answers the following questions:

  • how is the problem being approached?
  • what is the general strategy?
  • what’s the meaning/semantics of the various magic numbers and magic strings in the code?

If you think magic strings are bad, you probably haven’t had the pleasure of writing that many shell scripts. Magic strings are things like utility-specific format strings. Think of the date utility and its many available format sequences, or the ps utility (see the “STANDARD FORMAT SPECIFIERS” section in its manpage). Sometimes magic strings are simply weird, obscure bits of Bash syntax.

You don’t notice magic strings when you’re writing code, because you have just looked them up and they’re obvious to you, but that might not be the case for someone else.
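The ps format string is a good case in point; here the comment spells out what the magic string expands to (the specifiers come from ps’s STANDARD FORMAT SPECIFIERS section):

```shell
## ps: 'pid,ppid,etimes,comm' => PID, parent PID, seconds since start, command name
ps -o pid,ppid,etimes,comm -p $$
```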

As an example: when formatting a timestamp using the date command, it is a good idea to add a comment with an example of the output of said format string. Are you writing this?

date '+%D @ %T'

This is more effective:

date '+%D @ %T'  ## 09/02/17 @ 10:20:53

And this is even better:

## date: '+%D @ %T' => '09/02/17 @ 10:20:53'

No need to look it up. You can immediately see what generates what.


In the end, as text processing is a significant part of pretty much every system administration task, you can go very far using the standard basic tools from the coreutils package. Once you get to know them, they can be super effective and reasonably fast for most tasks.

But shell scripting is sometimes underappreciated, and this usually leads to many issues of poor documentation.



Today I stopped serving requests for the domain

Acso-explorer was a patched version of gcc-explorer, bundled with pre-built cross-toolchains of gcc-m68k, gcc-mips and gcc-amd64.

The last request was served on October 29, 2016, yet the node.js process was consuming 5-6% CPU with unjustified spikes to 10-12%.

I’ll be setting up a link to the container image on the Docker Hub, and that will be the end of acso-explorer, at least for now.

XDM, fingerprint readers and ecryptfs

I got myself a new toy last week: a ThinkPad X200s. I brought it home for less than 60 euros, and for 24 more I got a compatible battery. It’s a nice toy, definitely slow, but I now have something light I can carry around without worrying too much.

Of course, I installed GNU/Linux on it, and to keep things simple I chose Debian with a combination of light applications: XDM as login manager, i3 as window manager, xterm as terminal emulator, GNU Emacs as editor, claws-mail as mail client. The only fat cow on this laptop is Firefox.

As I started saving passwords and stuff on its disk, I decided to encrypt my home folder, and my choice for this “simple” setup is ecryptfs. It’s secure enough, simple to set up, and integrates very well with the rest of Debian.

This ThinkPad also has a fingerprint reader, correctly recognized by the operating system and I enrolled one of my fingerprints. It’s very comfortable, and it allows me to “sudo su” to root even with people around without fearing shoulder-surfing.

The first login, though, still requires me to type my password, as it is needed to unwrap the ecryptfs passphrase. And here is where the problem arises: first login is usually done via a display manager, namely XDM.

As far as I know, XDM, being a thing from older times, has no clue about fingerprint readers, but the underlying PAM stack does. And since XDM relies on PAM for authentication… it works, kinda. Basically, you either swipe your finger and log in without all your files (you didn’t input your password ⇒ your passphrase was not unwrapped ⇒ your private data was not mounted in your home) or wait for the fingerprint-based login to time out and for XDM to prompt you for your password.

So if you are okay with waiting 10-15 seconds before being asked for your password then you can stop reading, otherwise keep reading to see how to fix it.

Long story short, PAM on Debian (my guess is that it works pretty much the same on other distributions too) has some nicely formatted, application-specific configuration files under /etc/pam.d. One of those is /etc/pam.d/xdm, which defines how PAM should behave when interacting with XDM.

If you open it, you’ll see it does nothing particularly fancy: it just imports some common definitions and uses the same settings every other application uses; that is, try the fingerprint first, then fall back to the password if that fails or times out.

Such behaviour is defined in /etc/pam.d/common-auth and it is just fine for all other applications. For XDM, though, it’s advisable in my opinion to ask for the password right away and not ask for a fingerprint swipe at all.

My fix for this problem is then to:

  1. open /etc/pam.d/xdm
  2. replace the line importing /etc/pam.d/common-auth with the content of such file
  3. alter the newly added content to ask right away for the password
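For reference, here is a sketch of what the resulting auth section of /etc/pam.d/xdm might look like, assuming Debian’s stock common-auth layout (check your own common-auth before copying anything):

```
# was: @include common-auth
auth    [success=1 default=ignore]      pam_unix.so nullok_secure
auth    requisite                       pam_deny.so
auth    required                        pam_permit.so
```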



Now XDM is going to ask for the password straight away.

Learn how to use GNU info

Recently I’ve been digging a lot into GNU/Linux system administration and as part of this, I have finally taken some time to google about that mysterious info command that has been sitting here in my GNU/Linux systems, unused for years.

Well, I can tell you, it has been a life-changing experience.

Texinfo-based documentation is awesome.

In this article, I want to share why info documentation is cool and why you should read it if you haven’t already.

First, some terminology.

  • info: the command-line tool you use to read documents written in the Texinfo format.
  • Texinfo: a document format, and a software package providing tools to manipulate Texinfo documents.

Texinfo was invented by the Free Software Foundation to overcome the limitations of manpages, such as their “un-browseability” (man pages are really documents meant to be printed on paper but rendered to the terminal instead) and the format’s lack of hyperlinking capabilities.

GNU info was designed as a third level of documentation. If you take a typical program from the Free Software Foundation, you can expect it to have three levels of documentation:

  • the --help option: a quick, one- or two-screenful summary
  • the man page
  • the info manual

Try “ls --help”, “man ls” and “info ls”, and see the difference.

So info documents can be divided into sections, browsed one part at a time, and can link both to other pages of the same document and to other pieces of documentation entirely! Also, they can be viewed with different viewers.

How do I learn to use info ?

Well, if the right way to learn about man is “man man”, the right way to learn about info is “info info”, and indeed that command will teach you how to use the info tool.

Basically, you browse documents by moving forward and backward between nodes with n/] and p/[. Scrolling is done with Space and Backspace.

That is really the basic usage. Now go and type “info info” in your terminal.

The game-changer

As I said earlier, Texinfo documents can be viewed outside the terminal too, while retaining all of their capabilities. If you have ever read documentation on the website of the Free Software Foundation then congrats: you have been reading a Texinfo document translated to HTML.

For me, the game changer has been reading the GNU Emacs manual (a Texinfo document) using GNU Emacs itself! The keystrokes are pretty much the same as the terminal ones, but you get variable-sized fonts and different colors for, say, hyperlinks and things like that.

Being able to read the Emacs manual inside Emacs is a game-changer for me because every time I don’t know something about Emacs, I can just open the manual and look it up. Clean and fast, no browser required (my CPUs are thanking me a lot).

Writing Texinfo documents

Here is where, sadly, the story takes an ugly turn.

I didn’t dig into this topic very deeply, but from what I’ve seen, Texinfo documents are a major pain to write. The syntax looks quirky, but it seems worth it, as Texinfo documents can be exported to HTML and PDF too.
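For the curious, a minimal Texinfo source file looks roughly like this (a sketch from memory, not a complete manual):

```texinfo
\input texinfo
@settitle A Tiny Manual

@node Top
@top A Tiny Manual

A paragraph with some @code{inline code} and a menu of nodes.

@menu
* First Chapter::   The only chapter.
@end menu

@node First Chapter
@chapter First Chapter

Some text here.

@bye
```

Running makeinfo on such a file produces an info document; other backends produce HTML and PDF.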

There should be an org-mode plugin to export to Texinfo, but I couldn’t get it to work.

Again, I have to dig into this topic a bit more, but it seems quite worth it.