Useful Bash command line tips and tricks examples – Part 1

The Bash command line provides nearly limitless power when it comes to executing almost anything you want to do. Whether it is processing a set of files, editing a set of documents, handling big data, managing a system or automating a routine, Bash can do it all. This series, of which today we present the first part, is sure to arm you with the tools and methods you need to become a much more proficient Bash user. Even advanced users will likely pick up something new and exciting. Enjoy!

In this tutorial you will learn:

  • Useful Bash command line tips, tricks and methods
  • How to interact with the Bash command line in an advanced manner
  • How to sharpen your Bash skills overall and become a more proficient Bash user

Software requirements and conventions used

Software Requirements and Linux Command Line Conventions

  • System: Linux (distribution-independent)
  • Software: Bash command line, Linux based system
  • Other: Various utilities which are either included in the Bash shell by default, or can be installed using sudo apt-get install tool-name (where tool-name represents the tool you would like to install)
  • Conventions: # – requires given linux-commands to be executed with root privileges, either directly as a root user or by use of the sudo command; $ – requires given linux-commands to be executed as a regular non-privileged user

Example 1: See what processes are accessing a certain file

Would you like to know which processes are accessing a given file? It is easy to find out using the fuser command (provided by the psmisc package on most Linux distributions):

$ fuser -a /usr/bin/gnome-calculator
/usr/bin/gnome-calculator: 619672e
$ ps -ef | grep 619672 | grep -v grep
abc       619672    3136  0 13:13 ?        00:00:01 gnome-calculator


As we can see, the file /usr/bin/gnome-calculator (a binary) is currently being used by the process with ID 619672. Checking that process ID using ps, we soon find out that user abc started the calculator and did so at 13:13.

The e behind the PID (process ID) indicates that this is an executable being run. There are various other such qualifiers; use man fuser to see them. The fuser tool can be powerful, especially when used in combination with lsof (which lists open files):

Let’s say we are debugging a remote computer for a user who is working with a standard Ubuntu desktop. The user started the calculator, and now his or her entire screen is frozen. We now want to remotely kill all processes which relate in any way to the locked screen, without rebooting the machine, in order of how significant those processes are.

# lsof | grep calculator | grep "share" | head -n1
xdg-deskt    3111                                 abc  mem       REG              253,1          3009   12327296 /usr/share/locale-langpack/en_AU/LC_MESSAGES/gnome-calculator.mo
# fuser -a /usr/share/locale-langpack/en_AU/LC_MESSAGES/gnome-calculator.mo
/usr/share/locale-langpack/en_AU/LC_MESSAGES/gnome-calculator.mo:  3111m  3136m 619672m 1577230m
# ps -ef | grep -E "3111|3136|619672|1577230" | grep -v grep
abc         3111    2779  0 Aug03 ?        00:00:11 /usr/libexec/xdg-desktop-portal-gtk
abc         3136    2779  5 Aug03 ?        03:08:03 /usr/bin/gnome-shell
abc       619672    3136  0 13:13 ?        00:00:01 gnome-calculator
abc      1577230    2779  0 Aug04 ?        00:03:15 /usr/bin/nautilus --gapplication-service

First, we located all open files in use by the calculator using lsof. To keep the output short, we listed only the top result for a single shared file. Next, we used fuser to find out which processes are using that file; this provided us with the PIDs. Finally, we used grep -E with an OR (|) pattern to find the actual process names. We can see that whereas the calculator was started at 13:13, the other processes have been running longer.
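In a script, the alternation pattern passed to grep -E can be built from a list of PIDs rather than typed by hand. A minimal sketch; the PID list is hard-coded here for illustration, whereas in practice it would come from fuser output:

```shell
# Hard-coded PID list for illustration; in a real script this could
# be captured from fuser output instead.
pids="3111 3136 619672 1577230"

# Join the space-separated PIDs with '|' to form an extended regex alternation.
pattern=$(printf '%s' "$pids" | tr ' ' '|')
echo "$pattern"

# The pattern can then be used as before, e.g.:
# ps -ef | grep -E "$pattern" | grep -v grep
```

This keeps the grep invocation identical no matter how many PIDs fuser reports.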

Next, we could issue, for example, a kill -9 619672 and check whether this resolved the issue. If not, we may have a go at process 1577230 (the Nautilus file manager sharing the same file), process 3136 (the overarching GNOME shell), or finally process 3111, though killing those would likely terminate a significant portion of the user’s desktop experience, and they may not be easy to restart.
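When scripting such a cleanup, the PID list can be extracted from captured fuser output with standard text tools. A minimal sketch, using a hard-coded sample line from the session above; in a real script the variable would be filled from fuser itself:

```shell
# Sample fuser -a output line, hard-coded here for illustration only;
# in practice something like: line=$(fuser -a /path/to/file 2>&1)
line='/usr/bin/gnome-calculator: 619672e'

# Strip the file name up to the first ':', then drop the access-type
# letters (e, m, ...), leaving only digits and spaces: the PIDs.
pids=$(printf '%s\n' "${line#*:}" | tr -cd '0-9 ')

echo "PIDs accessing the file:${pids}"
```

From there, the PIDs can be fed to ps or kill as shown above.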

Example 2: Debugging your scripts

So you wrote a great script with lots of complex code, then ran it… and saw an error in the output which, at first glance, did not make much sense. Even after debugging for a while, you’re still stuck on what happened while the script was executing.

bash -x to the rescue! bash -x allows you to execute a script, here test.sh, and see exactly what happens:

#!/bin/bash
VAR1="Hello linuxconfig.org readers!"
VAR2="------------------------------"
echo ${VAR1}
echo ${VAR2}

Execution:

$ bash -x ./test.sh
+ VAR1='Hello linuxconfig.org readers!'
+ VAR2=------------------------------
+ echo Hello linuxconfig.org 'readers!'
Hello linuxconfig.org readers!
+ echo ------------------------------
------------------------------

As you can see, the bash -x command showed us exactly what happened, step by step. You can also send the output of this command to a file by appending 2>&1 | tee my_output.log to the command:

$ bash -x ./test.sh 2>&1 | tee my_output.log
... same output ...
$ cat my_output.log
+ VAR1='Hello linuxconfig.org readers!'
+ VAR2=------------------------------
+ echo Hello linuxconfig.org 'readers!'
Hello linuxconfig.org readers!
+ echo ------------------------------
------------------------------


The 2>&1 will send stderr (standard error: any errors seen during execution) to stdout (standard output: loosely defined here as the output you usually see on the terminal), capturing all output from bash -x. The tee command will then capture all output from stdout and write it to the indicated file. If you ever want to append to a file (and not start afresh with an empty file), you can use tee -a, where the -a option ensures the file is appended to.
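As an aside, the same tracing can be switched on from inside a script with set -x (and off again with set +x), which is handy when only one section needs debugging. A small sketch; the script path /tmp/trace_demo.sh is just an example name:

```shell
# Write a small demo script; tracing is enabled only around the first echo.
cat > /tmp/trace_demo.sh <<'EOF'
#!/bin/bash
set -x                 # start tracing from this point on
echo "traced"
set +x                 # stop tracing again
echo "quiet"
EOF

# The trace lines go to stderr; the script's own output goes to stdout.
bash /tmp/trace_demo.sh
```

Because the trace goes to stderr, it can be redirected or discarded independently of the script’s normal output, just as with bash -x above.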

Example 3: A common gotcha: sh -x != bash -x

The last example showed us how to use bash -x, but could we also use sh -x? The tendency for some newer Bash users may be to run sh -x, but this is a rookie mistake; sh is a much more limited shell (on Debian-based distributions, sh is typically dash, a minimal POSIX shell). Whilst bash is based on sh, it has many more extensions. Thus, if you use sh -x to debug your bash scripts, you will see odd errors. Want to see an example?

#!/bin/bash

TEST="abc"
if [[ "${TEST}" == *"b"* ]]; then
  echo "yes, in there!"
fi

Execution:

$ ./test.sh
yes, in there!
$ bash -x ./test.sh
+ TEST=abc
+ [[ abc == *\b* ]]
+ echo 'yes, in there!'
yes, in there!
$ sh -x ./test.sh
+ TEST=abc
+ [[ abc == *b* ]]
./test.sh: 4: [[: not found

Here you can see a small test script, test.sh, which when executed checks whether a certain letter (b) appears in a certain input string (as defined by the TEST variable). The script works great, and when we use bash -x we can see that the information presented, including the output, looks correct.

Next, using sh -x things go significantly wrong; the sh shell cannot interpret [[ and fails both in the sh -x output as well as in the script execution itself. This is because the advanced if syntax implemented in bash does not exist in sh.
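If a script has to run under plain sh as well, the same substring check can be written with the POSIX case construct, which both shells understand. A minimal sketch of the equivalent test:

```shell
#!/bin/sh
# POSIX-compatible substring check: works in sh, dash and bash alike.
TEST="abc"
case "${TEST}" in
  *b*) echo "yes, in there!" ;;
  *)   echo "no b found" ;;
esac
```

This avoids the bash-only [[ ]] syntax entirely, so sh -x traces it without errors.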

Example 4: uniq or not unique – that’s the question!

Have you ever wanted to sort a file and list only the unique entries? At first glance this would seem to be an easy exercise using the uniq command (part of the GNU coreutils package):

$ cat input.txt 
1
2
2
3
3
3
$ cat input.txt | uniq
1
2
3

However, if we modify our input file a little, we run into uniqueness issues:

$ cat input.txt 
3
1
2
3
2
3
3
3
$ cat input.txt | uniq
3
1
2
3
2
3


This is because uniq will, by default, "filter adjacent matching lines, with matching lines being merged to the first occurrence", as the uniq manual clarifies. In other words, only lines which are exactly the same as the previous one are removed.

In the example, this can be seen by the last three 3 lines being condensed into a single ‘unique’ 3. This behavior is likely only usable in a limited number of specific use cases.

We can however tweak uniq a bit further to give us only truly unique entries by using the -u parameter:

$ cat input.txt  # Note that the '#' symbols were added after execution, to clarify something (read below)
3  #
1  #
2  #
3  #
2  #
3
3
3
$ cat input.txt | uniq -u 
3
1
2
3
2

Still looks a little confusing, right? Look closely at the input and output and you can see how only lines which are individually unique (as marked by # in the example above after execution) are output.

The last three 3 lines are not output as they are not unique as such. This method of uniqueness again has limited applicability in real-world scenarios, though there may be a few instances where it comes in handy.
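When you want to see how often each line occurs rather than just deduplicate, sort combined with uniq -c counts the occurrences. A quick sketch using the same numbers as the input above:

```shell
# Sort first so duplicates become adjacent, then count each distinct line.
printf '3\n1\n2\n3\n2\n3\n3\n3\n' | sort | uniq -c
# 1 occurs once, 2 occurs twice, 3 occurs five times.
```

Sorting first is essential here for the same reason as before: uniq only considers adjacent lines.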

We can get a more suitable solution for uniqueness by using a slightly different tool: sort.

$ cat input.txt 
1
2
2
3
3
3
$ cat input.txt | sort -u
1
2
3
DID YOU KNOW?
You can omit the cat command in the above examples and pass the file to uniq or sort to read from directly? For example: sort -u input.txt

Great! This is usable in many scripts where we would like a true list of unique entries. The added benefit is that the list is nicely sorted at the same time (though we may prefer to also pass the -n option to sort, in order to sort numerically according to the string numerical value).
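To illustrate why -n can matter: without it, sort orders lines as strings, so 10 sorts before 2. A small sketch:

```shell
# Lexicographic (string) sort: '10' comes before '2'.
printf '10\n9\n2\n10\n' | sort -u

# Numeric sort with -n gives the order you would expect for numbers.
printf '10\n9\n2\n10\n' | sort -nu
```

Both variants deduplicate thanks to -u; only the ordering differs.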

Conclusion

There is much joy in using Bash as your preferred Linux command line. In this tutorial, we explored a number of useful Bash command line tips and tricks. This is the kickoff of a series full of Bash command line examples which, if you follow along, will help you to become much more advanced at and with the Bash command line and shell!

Let us know your thoughts and share some of your own cool bash command line tips, tricks and gotchas below!


