If you are in a subdirectory such as /PROJECTS/P1/A/A1/A11, what single command would you use to return to your home directory from the current working directory?
The easiest, but not the only, way to return to your home directory from any directory within the filesystem is to use the cd command without any options or arguments.
$ cd
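For the record, these variants are equivalent; any of them returns you to your home directory (a quick sketch):

```shell
cd          # with no arguments, cd changes to $HOME
cd ~        # the tilde expands to your home directory
cd "$HOME"  # using the environment variable explicitly
```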

How do I ping a specific port of a remote server? I need to find out whether a port on the remote server is open.
The ping utility does not allow you to ping a specific port on your remote server. To see whether a specific port is open on a remote server, you can use a port scanner such as nmap, or simply try to connect to a socket ( IP-address:port ) using telnet. In the example below we test whether TCP port 80 is open on a host:
# nmap -p 80 -sT

Hi everyone! I am very new to Linux, so sorry for a very basic question. I would like to find out the IP address of my computer using the Linux operating system. Can someone help?
The easiest way to find your IP address on Linux is with the ifconfig or ip command. The manual process of finding your internal IP address is as follows. Start by opening your terminal and type:
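A minimal sketch with the ip command (interface names and addresses will differ on your system):

```shell
# Show all interfaces and their addresses:
ip addr show
# A line such as "inet 192.168.1.5/24 ..." carries an IPv4 address;
# awk can pull out just that field:
ip -4 addr show | awk '/inet /{print $2}'
```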

Hi, what is the Linux ipconfig equivalent? I use the command-line ipconfig command in Windows; however, I cannot find a Linux ipconfig command on my Linux system.
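As a sketch, the closest Linux equivalents are ifconfig (from the net-tools package, and not installed by default on some distributions) and ip (from iproute2):

```shell
# Closest to Windows "ipconfig /all", if net-tools is installed:
if command -v ifconfig >/dev/null; then ifconfig -a; fi
# The modern iproute2 equivalent:
ip addr show
```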


LaTeX is a typesetting system and markup language for creating documents. LaTeX is heavily used by the academic and scientific community. LaTeX produces beautiful type and is written in a language that is fairly intuitive. This article will cover a brief history, introductory usage examples, front-ends, and further reading.

About LaTeX

From its website: LaTeX is a high-quality typesetting system; it includes features designed for the production of technical and scientific documentation. LaTeX is the de-facto standard for the communication and publication of scientific documents, and it is available as free software. LaTeX was first released in 1985 by Leslie Lamport as an extension of TeX, which was developed by Donald E. Knuth and first released in 1978. As mentioned earlier, LaTeX is used in academic environments for book and article publication. Not to go off-topic, but LaTeX is also used to create the formulas displayed on Wikimedia applications such as Wikipedia! Beyond its ability to display formulas and beautifully typeset pages, LaTeX can do much more, but that goes beyond the scope of this article. See LaTeX's homepage for further documentation.
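To make this concrete, here is a minimal "hello world" LaTeX document (a sketch; compile it with, e.g., pdflatex):

```latex
\documentclass{article}
\begin{document}
Hello, \LaTeX! Formulas are typeset inline like $E = mc^2$
or displayed on their own line:
\[ \int_0^1 x^2 \, dx = \frac{1}{3} \]
\end{document}
```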


If you have ever tried to work with the Linux command line, the ls command was surely one of the first commands you executed. In fact, ls is used so frequently that its name is often considered the best choice for naming a Trojan horse. Even if you use the ls command on a daily basis, its vast number of options always makes you reach for its manual page, and you learn something new every time you open it. This guide will try to do the same. The ls command belongs to the group of core utilities on your Linux system. GNU ls was written by Richard Stallman and David MacKenzie based on the original AT&T Unix code.

Let's get started; no previous Linux skills are required. First, we will cover ls's frequently used options, and then we will introduce some more advanced features.

Frequently used options

  • -l
    This is a very common option of the ls command. By default, ls displays only the name of a file or directory. -l, also known as long listing format, instructs ls to display more information for any given output.
  • -a, --all
    Also display hidden files. In the shell, the names of hidden files start with a ".". The -a option ensures that these files are not omitted from ls output.
  • -t
    Sort output by modification time, newest first (the oldest modification date is listed last).
  • -r, --reverse
    This option simply reverses any of ls's output.
  • -h, --human-readable
    In combination with the -l option, this prints sizes in a human-readable format (e.g., 3K, 12M or 1G).

Long listing format

This is a very common and often used ls option. Not only does it display additional information about a file or directory, it is also required in combination with some other ls options. The first thing we are going to do is execute the ls command without any options or arguments. You cannot get more basic with ls than that:

$ ls
dir1  dir3  dir5       file2.txt  file4.txt
dir2  dir4  file1.txt  file3.txt  file5.txt
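With -l, each entry in such a directory would additionally show its permissions, link count, owner, group, size and modification time. A sketch with hypothetical names:

```shell
mkdir -p /tmp/ls_long && cd /tmp/ls_long
mkdir dir1 && touch file1.txt
ls -l
# Each output line starts with a type/permission string: 'd' marks dir1
# as a directory, '-' marks file1.txt as a regular file.
```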


Learning and understanding Regular Expressions may not be as straightforward as learning the ls command. However, learning Regular Expressions and effectively implementing them in your daily work will doubtlessly reward your effort with greater efficiency and time savings. Regular Expressions is a topic that can easily fill an entire 1000-page book. In this article, we only try to explain the basics of Regular Expressions in a concise, non-geeky and example-driven manner. Therefore, if you ever wanted to learn Regular Expression basics, now you have a viable chance.

The intention of this tutorial is to cover the fundamental core of Basic Regular Expressions and Extended Regular Expressions. For this, we will use a single tool: the GNU grep command. The GNU/Linux operating system and its grep command recognize three different types of Regular Expressions:

  • Basic Regular Expressions (BRE)
  • Extended Regular Expressions (ERE)
  • Perl-Compatible Regular Expressions (PCRE)
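A quick sketch of the difference between the first two with grep (the sample words are arbitrary; in ERE, operators like |, + and ? work without backslashes):

```shell
# BRE: alternation needs a backslash (\| is a GNU grep extension):
printf 'cat\ncar\ncan\n' | grep 'ca\(t\|r\)'
# ERE (-E): the same match with cleaner syntax; prints "cat" and "car":
printf 'cat\ncar\ncan\n' | grep -E 'ca(t|r)'
```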


Welcome to the second part of our series, a part that will focus on sed, the GNU version. As you will see, there are several variants of sed, which is available for quite a few platforms, but we will focus on GNU sed versions 4.x. Many of you have already heard about sed and already used it, mainly as a substitution tool. But that is just a fraction of what sed can do, and we will do our best to show you as much as possible of what you can do with it. The name stands for Stream EDitor, and here "stream" can be a file, a pipe or simply stdin. We expect you to have basic Linux knowledge, and if you have already worked with regular expressions, or at least know what a regexp is, so much the better. We don't have the space for a full tutorial on regular expressions, so we will only give you a basic idea and lots of sed examples. There are lots of documents that deal with the subject, and we'll even have some recommendations, as you will see in a minute.


There's not much to tell here, because chances are you already have sed installed: it's used in various system scripts and is an invaluable tool in the life of a Linux user who wants to be efficient. You can check which version you have by typing

 $ sed --version

On my system, this command tells me I have GNU sed 4.2.1 installed, plus links to the home page and other useful information. The package is named simply 'sed' regardless of the distribution, and if even Gentoo ships sed as part of its base system, you can rest assured it will be there.
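As a first taste of the substitution role mentioned above (a sketch; the text and pattern are arbitrary):

```shell
# s/PATTERN/REPLACEMENT/ substitutes the first match on each input line;
# this prints "Hello sed":
echo "Hello world" | sed 's/world/sed/'
```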


One of the major differences between Linux distributions is package management. Many times, this is the reason somebody steers away from one distribution to another: because he/she doesn't like the way software is installed, or because software that is needed isn't available in the distro's repositories. If you are a beginner in the Linux world and are wondering about the differences between distributions, this will be a good start. If you've only used one or two distributions for some time and you want to see what's on the other side of the fence, this article might also be for you. Finally, if you need a good comparison and/or a reminder about the major PM systems, you'll find something interesting too. You will learn the most important things a user expects from a PM system, like install/uninstall, search and other advanced operations. We don't expect any special knowledge on your part, just some general Linux concepts.

The approach

We chose as subjects for the comparison some popular systems from popular distributions, namely dpkg/apt*, rpm/yum, pacman and Portage. The first is used in Debian-based systems; rpm is used in Fedora, OpenSUSE or Mandriva, but yum is Fedora/Red Hat only, so we will focus on that. Since Gentoo is a source-based distribution, you will be able to see how things are done in both binary and source distributions, for a more complete comparison. Bear in mind that we will talk about the higher-level interfaces to package management, e.g. yum instead of rpm or apt* instead of dpkg, and we will not cover graphical tools like Synaptic, because we feel that the CLI tools are more powerful and usable in any environment, be it graphical or console-only.


It's a well-known fact that nobody likes to write documentation. Heck, nobody likes to read it either. But there are times when we have to read it in order to, say, finish a project on time or, especially when working in software development, even write it. If you only have to read it, we have always encouraged you to do so; but if you have to write manual pages and need a kickstart, here's the article for you. If you have worked with HTML before, your life will be easier, but if not, it's alright. Writing manual pages for Linux is not that hard, despite the look of the pages when read in plain text. So basically you'll need some Linux knowledge and the ability to use a text editor. You will learn (with examples, of course) the main concepts of text formatting as applied to man pages, and how to write a simple manual page. Since we used yest as an example in our C development tutorial, we will use snippets from its manual page to illustrate the concepts in this article.

A little bit of history

The first manual pages are said to have been written by Dennis Ritchie and Ken Thompson in 1971. The formatting software used was troff, and that format continues to be used to this day, although the tools may differ. The text formatting tool on Linux systems is now groff, with the leading 'g' coming from GNU. groff's existence is owed to the fact that when troff was written, terminals meant something different in terms of capabilities than they do today. Another strong incentive for the GNU project to create groff was troff's proprietary license. troff still lives on in other Unix systems, like OpenSolaris or Plan9, although under open source licenses.
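To give a first feel for the format, here is a minimal manual page sketch for a hypothetical utility called "demo", written with the classic man macros (.TH, .SH, .B):

```shell
cat > /tmp/demo.1 <<'EOF'
.TH DEMO 1 "April 2016" "demo 1.0" "User Commands"
.SH NAME
demo \- a minimal example manual page
.SH SYNOPSIS
.B demo
.SH DESCRIPTION
This page only illustrates the basic troff/man macros.
EOF
# Render it (when groff is installed): groff -man -Tascii /tmp/demo.1
```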


If you find yourself interacting with a database system such as MySQL, PostgreSQL, MS SQL, Oracle, or even SQLite, sometimes you find that some of the tasks you perform are more conveniently executed using a GUI rather than using the default management utility (usually run from a CLI) provided by the database system itself. Some of you may already use other tools such as phpMyAdmin or phpPgAdmin. This article will discuss another web-based database management tool known as Adminer. Adminer allows for the management of all the database systems mentioned above. This article covers Debian (& Ubuntu), Fedora, and ArchLinux.

From its website: Adminer (formerly phpMinAdmin) is a full-featured database management tool written in PHP. Conversely to phpMyAdmin, it consists of a single file ready to deploy to the target server. Adminer is available for MySQL, PostgreSQL, SQLite, MS SQL and Oracle.

Adminer has an entire page dedicated to a comparison between itself and phpMyAdmin. Some notable features in Adminer that are either absent or incomplete in phpMyAdmin include: full support for views; full support for triggers, events, functions and routines; and the ability to group data and apply functions to selected data (to name a few). This article will cover its installation, configuration, customization, and some usage examples for MySQL and PostgreSQL.


  • Have some knowledge in web administration and development (HTML, CSS, PHP, and Apache)
  • This article assumes you have Apache, PHP, and your database system of choice configured.
  • I'll be running Adminer on a local development LAMP stack I run on my netbook


rsnapshot is a backup tool written in Perl that utilizes rsync as its back-end. rsnapshot allows users to create customized incremental backup solutions. This article will discuss the following: the benefits of an incremental backup solution, rsnapshot's installation, its configuration, and usage examples.

Back-it up!

I was recently discussing with a colleague the benefits of backing up your data. My colleague was telling me how one of her customers had recently lost a rather lengthy article that they had been working on. I decided that this may be a good chance to experiment with my netbook and rsnapshot. For this tutorial, I'll assume you have 2 pieces of hardware: your host computer and your destination equipment. I'll be using an external hard drive for the majority of this post. However, I will briefly cover usage for backing up files over a LAN.

Whether to back up your data should not be the question; the question is rather: how should I back up my stuff? What's the best way? There are many different backup pathways you can take, including block level (dd, partimage), partition level (RAID and all its variations), and file level (rsync and its child applications). I'll discuss two types of backups in the context of file-based backups.

Normal backups, or full backups, are self-explanatory: they back up ALL your files every time you perform a backup. One issue with a scheme of multiple normal backups is that a normal backup takes up a considerable amount of space. For example, if you perform a full backup of a 250GB hard drive at 20% capacity every day for just one week (assuming that the amount of data does not fluctuate), you will already have used 350GB for only one week's worth of backups. As you can see, that is not feasible in the long run. The method that I prefer is the incremental backup method. An incremental backup consists of one full backup followed by additional backups, and these additional backups only include files that have changed since the last backup. Instead of backing up your entire hard drive, only the specific files that have changed since the last backup are backed up. As you can probably imagine, this is a much more efficient process. One tool that does this on *nix is rsnapshot.
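As a sketch, an rsnapshot.conf excerpt implementing such an incremental scheme might look like this (fields must be separated by tabs, not spaces; the paths and retention counts here are hypothetical):

```
snapshot_root	/mnt/backup/snapshots/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
```

With this configuration, running rsnapshot daily keeps seven rotating daily snapshots of /home, with unchanged files hard-linked between snapshots rather than copied again.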

April 20, 2016
by Rares Aioanei


If you're new to server administration and command-line, perhaps you haven't heard of terminal multiplexers or what they do. You want to learn how to be a good Linux sysadmin and how to use the tools of the trade. Or perhaps you're a seasoned administrator already and administer quite a few machines, and want to make your life a little easier. Or maybe you're somewhere in between.

Either way, this article will explain what terminal multiplexers are, what they do, and most importantly, how you can benefit from using them. A terminal multiplexer is nothing more than a program that allows its user to multiplex one or more virtual sessions, so the user can have several sessions inside a single terminal. One of the most useful features of such programs is that users can attach and detach sessions; how this is useful will become clear shortly.

Use cases

Persistent sessions

Let's say you have to administer a remote server via ssh/command line, but your connection is not very stable. That means you have to reconnect often and don't want to start your work all over again. Terminal multiplexers offer the feature of saving your sessions between connections, so you can continue right where you left off. Please note that such sessions are not persistent across reboots (in our case above, reboots of the server you're connecting to), so it's best to know this in order not to expect such a feature. The reason for this is that the multiplexer runs shell sessions, from which you may be running a text editor, a monitoring tool and whatnot; since all those processes would no longer exist after a reboot, there is no reason for this feature to be implemented, as it would have no real use.

We spoke in our introduction about attaching and detaching: this is exactly what this feature does. Continuing with our use case, where you have an unstable connection: once you get disconnected, you can just ssh into the server again and reattach to the running session (or choose between sessions to reattach to), and you'll be right where you left off.
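A sketch of that workflow with tmux, one common multiplexer (screen offers the same workflow with different key bindings; the session name "work" is arbitrary):

```shell
tmux new-session -d -s work   # start a detached session named "work"
tmux ls                       # list running sessions
# tmux attach -t work         # reattach after a dropped connection
#                             # (detach again with Ctrl-b d)
tmux kill-session -t work     # done with the session
```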


OpenSSL is a powerful cryptography toolkit. Many of us have already used OpenSSL for creating RSA Private Keys or CSR (Certificate Signing Request). However, did you know that you can use OpenSSL to benchmark your computer speed or that you can also encrypt files or messages? This article will provide you with some simple to follow tips on how to encrypt messages and files using OpenSSL.

Encrypt and Decrypt Messages

First, we can start by encoding simple messages. The following Linux command will encode the message "Welcome to" using Base64 (strictly speaking, Base64 is an encoding, not encryption, since anyone can decode it):

$ echo "Welcome to" | openssl enc -base64

The output of the above command is a Base64-encoded string containing the message "Welcome to". To decode the string back to its original message, we need to reverse the order and add the -d option:
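The decode step would look like this (the encoded string below is what the previous command produces for "Welcome to"):

```shell
# -d reverses the Base64 encoding; this prints "Welcome to":
echo "V2VsY29tZSB0bwo=" | openssl enc -base64 -d
```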


In this case, the title might be a little misleading, because awk is more than a command: it's a programming language in its own right. You can write awk scripts for complex operations or you can use awk from the command line. The name stands for Aho, Weinberger and Kernighan (yes, Brian Kernighan), the authors of the language, which was started in 1977; hence it shares the same Unix spirit as the other classic *nix utilities. If you are used to C programming or know it already, you will see some familiar concepts in awk, especially since the 'k' in awk stands for the same person as the 'k' in K&R, the C programming bible. You will need some command-line knowledge in Linux and possibly some scripting basics, but the last part is optional, as we will try to offer something for everybody. Many thanks to Arnold Robbins for all his work on awk.

What is it that awk does?

awk is a utility/language designed for data extraction. If the word "extraction" rings a bell, it should, because awk was one of Larry Wall's inspirations when he created Perl. awk is often used together with sed to perform useful and practical text manipulation chores, and whether you should use awk or Perl depends on the task, but also on personal preference. Just like sed, awk reads one line at a time, performs some action depending on the condition you give it, and outputs the result. One of the simplest and most popular uses of awk is selecting a column from a text file or another command's output. One thing I used to do with awk, when I installed Debian on my second workstation, was get a list of the installed software from my primary box and then feed it to aptitude. For that, I did something like this:

  $ dpkg -l | awk ' {print $2} ' > installed
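Another common sketch in the same vein, this time using -F to change the field separator (the file and its colon-separated layout are standard on Linux):

```shell
# Print the login name (field 1) from /etc/passwd, splitting fields on ':':
awk -F: '{print $1}' /etc/passwd
```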
