XDM, fingerprint readers and ecryptfs

How to set up PAM and XDM to work correctly with fingerprint login and an ecryptfs-encrypted home

I got myself a new toy last week, it's a ThinkPad X200s. I brought it home for less than 60 euros, and for 24 more I got a compatible battery. It's a nice toy, definitely slow, but I now have something light I can carry around without worrying too much.

Of course, I installed GNU/Linux on it, and to keep things simple I chose Debian with a combination of light applications: XDM as login manager, i3 as window manager, xterm as terminal emulator, GNU Emacs as editor, claws-mail as mail client. The only fat cow on this laptop is Firefox.

As I started saving passwords and stuff on its disk, I decided to encrypt my home folder, and my choice for this "simple" setup is ecryptfs. It's secure enough, simple to set up and integrates very well with the rest of Debian.

This ThinkPad also has a fingerprint reader, correctly recognized by the operating system, so I enrolled one of my fingerprints. It's very convenient, and it allows me to "sudo su" to root even with people around, without fear of shoulder surfing.

The first login, though, still requires me to type my password, as it is needed to unwrap the ecryptfs passphrase. And here is where the problem arises: the first login is usually done via the display manager, namely XDM.

As far as I know, XDM, being a thing from older times, has no clue about fingerprint readers and the like, but the underlying PAM does. And since XDM relies on PAM for authentication… it works, kind of. Basically you either swipe your finger and log in without all your files (you didn't type your password ⇒ your passphrase was not unwrapped ⇒ your private data was not mounted in your home) or wait for the fingerprint-based login to time out and XDM to prompt you for your password.

So if you are okay with waiting 10-15 seconds before being asked for your password, you can stop reading; otherwise, keep reading to see how to fix it.

Long story short, PAM in Debian (my guess is that it works pretty much the same for other distributions too) has some nicely formatted, application-specific configuration files under /etc/pam.d. One of those is /etc/pam.d/xdm, which defines how PAM should behave when it interacts with XDM.

If you open it, you'll see it actually does nothing particularly fancy: it just imports some common definitions and uses the same settings every other application uses, that is: try the fingerprint first, then fall back to the password if that fails or times out.

Such behaviour is defined in /etc/pam.d/common-auth and it is just fine for all other applications. For XDM, though, it's advisable in my opinion to ask for the password right away and not ask for a fingerprint swipe at all.

My fix for this problem is then to:

  1. open /etc/pam.d/xdm
  2. replace the line importing /etc/pam.d/common-auth with the contents of that file
  3. alter the newly added content to ask right away for the password

tl;dr:

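The end result looks something like this (the exact lines depend on what your /etc/pam.d/common-auth contains, so take this as an illustration of the idea rather than something to paste blindly): the @include common-auth line is gone, and in its place there is the content of common-auth minus the pam_fprintd.so entry.

# in /etc/pam.d/xdm, instead of "@include common-auth":
# the contents of common-auth, with the pam_fprintd.so line dropped
auth    [success=1 default=ignore]    pam_unix.so nullok_secure
auth    requisite                     pam_deny.so
auth    required                      pam_permit.so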

Now XDM is going to ask for the password straight away.

Learn how to use GNU info

Recently I've been digging a lot into GNU/Linux system administration, and as part of this I have finally taken some time to google that mysterious info command that has been sitting there in my GNU/Linux systems, unused, for years.

Well, I can tell you, it has been a life-changing experience.

Texinfo-based documentation is awesome.

In this article, I want to share why info documentation is cool and why you should read it if you haven't already.

First, some terminology.

  • info: the command-line tool you use to read documents written in the Texinfo format.
  • Texinfo: a document format, and also a software package providing tools to manipulate Texinfo documents.

Texinfo was invented by the Free Software Foundation to overcome the limitations of man pages, such as "un-browseability" (man pages are really documents meant to be printed on paper but rendered to the terminal instead) and the format's lack of hyperlinking capabilities.

GNU info was designed as a third level in documentation. If you take a typical program from the Free Software Foundation, you can expect it to have three levels of documentation:

  • the --help option: quick, one or two screenfuls of documentation
  • the man page
  • the info manual

Try "ls --help" and "man ls" and see the difference.
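
To complete the comparison with the third level, this is roughly how the three commands line up for ls (the last invocation opens the coreutils Texinfo manual directly at the ls node):

ls --help                        # a quick summary of the options
man ls                           # the traditional manual page
info coreutils 'ls invocation'   # the full Texinfo manual for ls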

So info documents can be divided into sections and browsed one part at a time, they can link to other pages of the same manual, and they can link to other pieces of documentation as well! Also, they can be viewed with different viewers.

How do I learn to use info?

Well, if the right way to learn about man is "man man", the right way to learn about info is "info info", and indeed that command will teach you how to use the info tool.

Basically, you can browse documents by moving forwards and backwards between nodes with n/] and p/[. Scrolling happens with Space and Backspace.
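
As a quick cheat sheet, these are the keys I actually end up using (all of them are explained in "info info"):

n / ]      next node                  p / [      previous node
u          up one level               l          back to the last node visited
SPC        scroll forward             DEL        scroll backward
q          quit the viewer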

That is really the basic usage. Now go and type “info info” in your terminal.

The game-changer

As I said earlier, Texinfo documents can be viewed outside the terminal too, while retaining all of their capabilities. If you have ever read some documentation on the website of the Free Software Foundation then congrats, you have been reading a Texinfo document translated to HTML.

For me, the game changer has been reading the GNU Emacs manual (a Texinfo document) using GNU Emacs itself (C-h i opens the built-in Info reader)! The keystrokes are pretty much the same as the terminal ones, but you get variable-sized fonts and different colors for, say, hyperlinks and things like that.

Being able to read the Emacs manual inside Emacs is a game-changer for me because every time I don't know something about Emacs, I can just open the manual and look it up. Clean and fast, no browser required (my CPUs are thanking me a lot).

Writing Texinfo documents

Here is where, sadly, the story takes an ugly turn.

I haven't dug into this topic very deeply, but from what I've seen, Texinfo documents are a major pain to write. The syntax looks quirky, but it seems worth it, as Texinfo documents can be exported to HTML and PDF too.
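
To give an idea of the syntax, here is a minimal example document (purely illustrative, not taken from any real manual); makeinfo turns it into an info file, and makeinfo --html into a web page:

\input texinfo
@setfilename tiny.info
@settitle A Tiny Manual

@node Top
@top A Tiny Manual

This is the top node of a tiny example manual.

@menu
* Usage::      How to use the thing.
@end menu

@node Usage
@chapter Usage

@c @command and @pxref are what give you markup and hyperlinks.
Run @command{something} from a shell (@pxref{Top}).

@bye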

There is supposed to be an Org mode exporter for Texinfo (ox-texinfo, if I'm not mistaken), but I couldn't get it to work.

Again, I have to dig into this topic a bit more, but it seems quite worth it.

Building GNU Emacs from sources

I want to look at the GNU Emacs source code because I have some ideas I want to try and implement.

If you want to write patches for an open-source project, the first thing to do is to check out the latest version from the repository and make sure it compiles and runs. At this stage you also want to make sure that all of the tests pass, so that you can start working from the verified assumption that you branched off a fully working commit.

So, in short, this post is a micro-tutorial on building GNU Emacs from source on a Debian-compatible (read: apt-based) system.

The first thing to do is to gather all the required build dependencies:

sudo apt-get build-dep emacs24

This alone will install all of the packages needed to build Emacs.
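
One caveat: apt-get build-dep needs the source package indexes, so if it complains that it cannot find a source package, you probably need a deb-src line in /etc/apt/sources.list (followed by an apt-get update), something along these lines, adjusted to your mirror and release:

deb-src http://deb.debian.org/debian stable main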

The next step is to check out the source code. You can find the URL of the repository on the Savannah page:

git clone -b master git://git.sv.gnu.org/emacs.git

At this point you might want to slow down a little and skim through the INSTALL file: it contains the build instructions.

At this point you're almost ready for compilation. You have to run the autotools to generate the configure script (the second invocation below, with the git argument, just sets up the recommended Git configuration for the repository):

./autogen.sh

./autogen.sh git

And now you can generate a Makefile with all the features you want (you can look up some of the configure options in the INSTALL file, or run ./configure --help):

./configure --with-x-toolkit=gtk --with-cairo --with-modules

And now, finally, compile:

make -j9

On my machine (ThinkPad W530, quad-core i7 and a 7200 rpm rotational hard drive), with -j9 running nine parallel jobs, it took about five minutes:

real	5m9.283s
user	21m21.412s
sys	0m48.284s

You will now find the newly built emacs binaries in the src/ directory.
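
You can try it out without installing anything; the -Q flag starts Emacs without loading your personal configuration, which is handy to check that the freshly built binary works on its own:

./src/emacs -Q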

Have fun!

I wrote a toy url shortener and people started using it

So yesterday I gave an introductory presentation on how to write web applications using Python, Flask and Dokku. In my talk I showed how to go from zero to a fully functional application, with a small but workable deployment environment based on Dokku.

The nice thing I noticed is that people started using it, and now I have some URLs shortened through my system.

The application is called musho, as in Micro Url SHOrtener. Here it is live, and you can find the source code on Gitlab.

I hope you like it 🙂

Testing Go applications with PostgreSQL and docker

I spawn and terminate docker containers automatically with Jenkins.

This allowed me to have a lighter application and to keep my testing environment closer to the live environment.

In this post I describe how I did this.
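
Just to give the flavour of it here (container name, image and credentials below are made up for the sake of the example; the details are in the full post), the idea boils down to wrapping the test run between the creation and the destruction of a throwaway PostgreSQL container:

# start a disposable PostgreSQL instance (official image, made-up credentials)
docker run -d --name ci-postgres -e POSTGRES_PASSWORD=test -p 5432:5432 postgres
# run the Go test suite against it
go test ./...
# tear the container down
docker rm -f ci-postgres

A real pipeline also needs to wait for the database to accept connections before the tests start.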

Continue reading “Testing Go applications with PostgreSQL and docker”

Correctly erasing an SSD on GNU/Linux

This is not really an article but more of a note for future reference.

So I decided to wipe an SSD, and I assumed that using the good old dd would not have been the correct choice.

I was right, and someone (thanks mingdao) from #gentoo on Freenode directed me to this page: ATA Secure Erase.

Basically, when dealing with an SSD, it's better to rely on the drive's standard secure erase feature than to overwrite it yourself.
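
If I remember the linked page correctly, the procedure boils down to a couple of hdparm commands run as root (replace /dev/sdX and the password with your own values, and double-check that the drive supports the feature and is reported as "not frozen" first: one wrong device name here and your data is gone):

hdparm -I /dev/sdX                                             # check support and the "not frozen" state
hdparm --user-master u --security-set-pass somepass /dev/sdX   # set a temporary user password
hdparm --user-master u --security-erase somepass /dev/sdX      # issue the ATA Secure Erase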

It took very little time (approximately 21 seconds).

Gin-gonic improvements

While writing simple web applications using the gin-gonic web framework, I ran into some things that, in my opinion, should be improved:

Ideas

  1. Static file serving: it should be improved to leverage browser caching, as recommended in https://developers.google.com/speed/docs/insights/LeverageBrowserCaching
  2. Default values for template rendering: there are some values that I am going to need in pretty much every template I render. It would be nice to be able to set them once and for all, so that when I render a template the values I always want present are added to the ones I pass on the fly (see the sketch after this list).
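
For the second point, this is roughly the kind of helper I have in mind. It is only a sketch, not something gin provides out of the box, and the default keys and template names are invented for the example:

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// Values I want available in every template (keys and values here are
// invented for the example).
var templateDefaults = gin.H{
	"siteName": "example",
	"version":  "0.1.0",
}

// renderWithDefaults overlays the values passed on the fly on top of the
// defaults, then hands everything to gin's HTML renderer.
func renderWithDefaults(c *gin.Context, code int, name string, values gin.H) {
	merged := gin.H{}
	for k, v := range templateDefaults {
		merged[k] = v
	}
	for k, v := range values {
		merged[k] = v
	}
	c.HTML(code, name, merged)
}

func main() {
	r := gin.Default()
	r.LoadHTMLGlob("templates/*")
	r.GET("/", func(c *gin.Context) {
		renderWithDefaults(c, http.StatusOK, "index.tmpl", gin.H{"title": "home"})
	})
	r.Run()
}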

Continue reading “Gin-gonic improvements”