Some notes on bash shell programming

I recently began working as a GNU/Linux System Engineer at a software company. My job involves answering tickets from customers and taking action in the case of blocking issues. In doing all this, I am reading a lot of bash scripts, and I am also deepening my knowledge of the Bash shell as a programming environment.

As someone who has been toying with software engineering for a while, I am both pleased by bash shell programming and a bit displeased by how scripts are often written, mostly due to a widespread failure to apply some basic software engineering principles when writing shell scripts. It is as if bash scripts were not software that, like any other software, has to last for a possibly long time and will need maintenance, possibly by different people over the years.

In this article, I want to take some notes about problems I have faced and things I have noticed. Not all of this directly relates to my current employer; a lot of what I’ll be writing about has been floating around in my brain for a while.

Common sense: avoid side effects

Some operations can easily be done by “tricks” that you know thanks to an intimate knowledge of how bash works.

Real world example: your scripts rotate logfiles, and after a logfile has been copied, you have to flush its content and make it a blank file again.

This is a good way to do it:


user@host $ truncate -s 0 "$FILENAME"

And here is a not-so-good way to do it:


> "$FILENAME"

The main difference between the two is that the former explicitly accomplishes the task, while the latter does the same thing by exploiting a side effect of the bash redirection operator. The former is only a quick look at the man page away from a clear understanding, while the latter is quite far from being easy to understand unless you already know the trick or have seen it before.

Common sense: use side effects

Despite what I have said earlier, there are some side-effects explicitly built into Bash in order to let scripters solve a particular kind of problem in a fast, concise and effective way.

Scenario: you are writing an init-script for something. You spawn a process, and then you are supposed to save its PID to a pidfile.

The bad way to do it:


user@host: ~ # ./my_service -a arg1 -b arg2

user@host: ~ # ps auxwhatever  | grep "my_service -a arg1" | awk '{print $2}' > PIDFILE

A good way to do it:


user@host: ~ # ./my_service -a arg1 -b arg2 &

user@host: ~ # echo $! > PIDFILE

The “good” way is good because:

  • It uses a bash feature explicitly designed for this situation
  • it’s easy to read, as it is a usual echo output redirection scenario
  • it’s clear that $! is a special variable, and as such it’s clear where to look it up (just search for “special parameters” in the bash manual or something like that)

On the other hand, the “bad” way:

  • combines many commands, each of them with many possible options, together
  • might be using obscure/arcane/unusual parameters you’ll have to look up in order to have a clear understanding of what the code is doing

If you expect parameters, at least check their presence

This might be obvious, but I’ve seen a number of scripts fail SILENTLY because of this. If your script or your function expects some parameters, you’d better be checking their presence (again, this is basic software engineering: validate input, always).

This is particularly true because parameters are positional and depending on how you traverse the parameter array (via $* instead of $@, for example), you might get different behaviours.
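
As a quick illustration, here is a minimal sketch with a hypothetical demo.sh called as ./demo.sh "first arg" second:


for p in "$@"; do echo "param: $p"; done    # two iterations: "first arg" and "second"
for p in $*;   do echo "param: $p"; done    # three iterations: word splitting breaks "first arg" apart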

A basic check might be just checking that the number of parameters ($#) is correct (unless you’re writing “variadic functions” in bash script… that’s cool).
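
A minimal sketch of such a check, using a hypothetical rotate_log function that expects exactly two parameters:


rotate_log() {
    # validate input: refuse to run with the wrong number of parameters
    if [ "$#" -ne 2 ]; then
        echo "usage: rotate_log <logfile> <destination>" >&2
        return 1
    fi
    cp -- "$1" "$2" && truncate -s 0 "$1"
}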

Protip: if you’re writing debug routines, don’t just print “\$variable=$variable”. It helps (a lot!) to know what values were expected.

A better debug code could be one of these lines:


echo "\$VAR1='$VAR1' [expected values: 'A', 'B' or 'C']"

echo "\$VAR1='$VAR1' [expected value x should be 0 < x <= 100]"

The rationale behind this is that a debug line telling you only the current value of a variable actually tells you very little: you will have to go read the whole script anyway in order to understand whether that value is okay or not. But when you see both the value of a variable and the range of values it is supposed to have, there’s no need to go further: half of the problem (finding the bug) is already done.

Pay attention to quoting

Quoting is good, but pay attention to single quotes vs double quotes, and pay double attention if you’re programmatically running a script on a remote machine using ssh.

I had to deal with a nasty bug of logfiles not being correctly collected and rotated because of a line like this:


ssh $USER@$HOST 'command $ARG1 $ARG2 /a/path  $ARG3 /another/path'

What’s wrong with this code? Nothing? Yup, it’s correct. Or at least, it’s syntactically correct.

What I didn’t tell you is that one of the arguments (let’s say $ARG2) was locally supplied, while the others were local variables of the remote machine.

Can you guess what went wrong?

I am sure you can, but in case you’re wondering: single quotes do not perform variable substitution, so the local argument ($ARG2) was not being supplied. On the other hand (on the other side of the ssh connection, that is, on the remote machine) all of the remaining variables were being correctly substituted, so the script was running almost correctly, except for the part that used the unbound variable.

In this case, we/I had to switch from single quotes to double quotes and escape the dollar signs of the remote variables. The script ran. (Actually the script ran into another bug, but that is a story for another post… oh, and by the way, we/I fixed the other bug too.)
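
For the record, the fixed line looked roughly like this (a sketch, assuming $ARG2 is the locally-supplied value while $ARG1 and $ARG3 live on the remote machine):


ssh "$USER@$HOST" "command \$ARG1 $ARG2 /a/path \$ARG3 /another/path"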

For Christ’s sake, comment your scripts

This is old advice, everybody says it. But shell scripts are way more neglected than scripts in other languages, for some reason, despite the fact that quite a number of smart tricks are possible in them.

In my opinion, the very first five or ten lines past the shebang should describe what the script is supposed to do and what its parameters are. Better yet: include a synopsis, i.e. an example of how to call the script with its various parameters.

Protip: past the aforementioned five or ten lines, spend a couple more lines documenting exit codes (you’re using exit codes, right?).
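
Something along these lines, with made-up script and parameter names:


#!/bin/bash
#
# rotate_logs.sh - copy the logfiles of my_service to an archive directory
#                  and truncate the originals.
#
# Synopsis:
#   rotate_logs.sh <logdir> <archive_dir>
#
# Exit codes:
#   0 - all logfiles rotated
#   1 - wrong number of parameters
#   2 - archive directory not writable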

The rationale behind commenting is that another person in the future (or possibly yourself, still in the future) should not have to read the whole script.

For Christ’s sake, use meaningful variable names

i, j, k. When I see so-named variables I am sorry I don’t live in the US and can’t get a firearm and bullets at the mall down the road.

No rant here, just a hint: if you really have to use short variable names, at least write a comment saying what those variables are going to hold, especially if you’re going to reference them outside of the function where you define and initialize them (ah, the joys of global scope).
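
Even something as small as this helps (hypothetical names, of course):


# i: index of the logfile currently being rotated; also read later by print_summary()
i=0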

Effective debug lines: some useful variables

I want to close this post with a couple of hints for writing more effective debug lines, and a tip for proper and further documentation.

First things first: useful variables.

First of all, if a workflow (a nightly cronjob or something) involves running many scripts one after another and redirecting all of their output to a particular file, and if said scripts call other scripts, it can be a good idea, when writing debug lines, to say which shell script is being executed. To do so, use the $0 variable, which holds the path the script was invoked with.
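
For example, a debug line could be prefixed with it (a minimal sketch, with a made-up debug file):


echo "[$0] starting nightly log rotation" >> /var/log/nightly_debug.log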

If you are writing a debug message for a function, it is useful to know the line that is causing problems too. In this case, at any point in the script, $LINENO holds the current line number. It can be a very good hint on where to look.
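
Combining the two, and keeping the “expected values” advice from above (a sketch with a hypothetical $RETRIES variable):


echo "[$0:$LINENO] \$RETRIES='$RETRIES' [expected value x should be 0 < x <= 5]" >&2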

In the end, I want to say that the best documentation about the bash shell, the best “book” I have stumbled upon, is the bash man page itself. It’s quite long and very exhaustive.

My suggestion is to convert it to PDF and print it (if you like to read on paper or, like me, like to use a highlighter):


man -t bash | ps2pdf - bash.pdf

 

Well… I think that’s all for now 😀

2 thoughts on “Some notes on bash shell programming”

  1. Use functions.
    Use local variables and function arguments, not global variables or ‘extern’ed environment variables.
    Use “unofficial bash strict-mode” (`set -ue` is enough for me).
    Debug with `set -x` or `set -p` or `set -v` in the right places, not just `echo =$var=`.

    Best book about bash – ABS from Mendel Cooper =)


    1. Hello ildar, and thanks for your comment!

      I didn’t know about “unofficial bash strict mode”, I’ll look that up!

      Thanks for your precious suggestions!

