How to Set Up an Nginx Reverse Proxy

In this guide, you will learn how to set up an Nginx reverse proxy with step-by-step instructions. We will also explain how a reverse proxy server works and what its advantages are. In addition, we go over various configuration options that Linux administrators commonly employ on their reverse proxy servers.

In this tutorial you will learn:

  • How does a reverse proxy work
  • What are the benefits of a reverse proxy
  • How to set up an Nginx reverse proxy
  • How to pass headers
  • How to configure load balancing
  • How to test the Nginx configuration


Software Requirements and Conventions Used

Software Requirements and Linux Command Line Conventions

  • System – Distribution-independent
  • Software – Nginx
  • Other – Privileged access to your Linux system as root or via the sudo command
  • Conventions – # requires given Linux commands to be executed with root privileges, either directly as the root user or by use of the sudo command; $ requires given Linux commands to be executed as a regular non-privileged user

How does a reverse proxy work?

A system that sits between a client and a web server (or servers) can be configured as a reverse proxy. The proxy service acts as a frontend and works by handling all incoming client requests and distributing them to the backend web, database, and/or other server(s).



Benefits of a reverse proxy

Configuring an Nginx reverse proxy means that all incoming requests are handled at a single point, which provides several advantages:

  • Load balancing – The reverse proxy distributes incoming connections to backend servers, and can even do so according to the current load that each server is under. This ensures that none of the backend servers get overloaded with requests. It also prevents downtime, since the reverse proxy can reroute traffic if a backend server happens to go offline.
  • Central logging – Rather than having multiple servers generate log files, the reverse proxy can log all relevant information in a single location. This makes the administrator’s job immensely easier, since problems can be isolated much more quickly and there is no need to parse log files from multiple locations when troubleshooting issues.
  • Improved security – A reverse proxy will obfuscate information about the backend servers, as well as act as a first line of defense against incoming attacks. Since the reverse proxy is filtering out traffic prior to forwarding it to the backend, only innocuous traffic is passed along to the other servers.
  • Better performance – A reverse proxy server can make smart decisions about how to distribute the load across backend servers, which results in speedier response times. Other common server tasks such as caching and compression can also be offloaded to the reverse proxy server, freeing up resources for the backend servers.
DID YOU KNOW?
A reverse proxy server is not a necessary component in every web hosting scenario. The advantages of a reverse proxy become most apparent under high traffic conditions or situations where multiple backend servers are deployed and need some form of load balancing.

Why Nginx?

Now that we’ve outlined the advantages of a reverse proxy, you may be wondering why you should configure one with Nginx, specifically. The scalability of Nginx and its proven ability to handle an extremely high volume of connections means it’s perfect for deployment as a reverse proxy and load balancer.

A common application is to place Nginx between clients and a web server, where it can operate as an endpoint for SSL encryption and web accelerator. Operations that would normally increase load on a web server, such as encryption, compression, and caching can all be done more efficiently through an Nginx reverse proxy.
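As a sketch of what SSL termination can look like, the configuration below lets Nginx handle HTTPS and compression while forwarding plain HTTP to the backend. The certificate paths and backend address are placeholders, not part of any real setup:

```nginx
# Hypothetical example: Nginx terminates SSL and proxies plain HTTP
# to a local backend. Certificate paths and addresses are placeholders.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # Compression is handled here instead of on the backend server
    gzip on;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

With this in place, the backend server never has to perform encryption or compression itself.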

How to setup Nginx reverse proxy step by step instructions

Since we’ve explained how a reverse proxy works and what the advantages are to using one, in this section we’ll go over the steps required to set up an Nginx reverse proxy.

  1. Install Nginx. You can install Nginx with your system’s package manager. On Ubuntu and Debian distributions, the command is:
    $ sudo apt-get install nginx
    

    On CentOS and Red Hat distributions:

    # yum install nginx
    
  2. Disable the default virtual host. Note that the sites-available and sites-enabled directories are a Debian and Ubuntu convention; on other distributions you can instead place your configuration in /etc/nginx/conf.d.
    # unlink /etc/nginx/sites-enabled/default
    


  3. Create a reverse proxy configuration file. All of the settings for the reverse proxy will go inside of a configuration file, and this file needs to be placed inside the sites-available directory. Start by navigating to the following directory:
    # cd /etc/nginx/sites-available
    

    Then use vi or your preferred text editor to create the configuration file:

    # vi reverse-proxy.conf
    

    Paste the following configuration template into this newly created file:

    server {
        listen 80;
        location /some/path/ {
            proxy_pass http://example.com;
        }
    }
    

    Replace example.com with the IP address or hostname of the server you are forwarding to. You may also specify a port along with the hostname, for example 127.0.0.1:8080. Save your changes and then exit the text editor.

    Note that this will work for HTTP servers, but Nginx also supports other protocols. We will cover those options in the next section.

  4. Enable the proxy. With your settings saved, enable the new configuration by creating a symbolic link to the sites-enabled directory:
    # ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/reverse-proxy.conf
    

Non-HTTP servers

The example above shows how to pass requests to an HTTP server, but it’s also possible for Nginx to act as a reverse proxy for FastCGI, uwsgi, SCGI, and memcached. Rather than using the proxy_pass directive shown above, replace it with the appropriate type:

  • proxy_pass (HTTP server – as seen above)
  • fastcgi_pass (FastCGI server)
  • uwsgi_pass (uwsgi server)
  • scgi_pass (SCGI server)
  • memcached_pass (memcached server)
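For instance, a FastCGI backend such as PHP-FPM might be proxied as follows. This is only a sketch, and the socket path is an assumption that varies by distribution and PHP version:

```nginx
# Hypothetical example: forward requests for PHP files to a PHP-FPM
# FastCGI server listening on a local Unix socket (path is a placeholder).
server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```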

How to pass headers

To configure what headers the reverse proxy server passes to the other server(s), we can define them in the configuration file we made earlier. Use the proxy_set_header directive to adjust the headers.
They can be configured in the server, location, or http block. For example:

location /some/path/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://example.com;
}

The example above defines three types of headers and sets them to the respective variables. There are many different options for passing headers, but this example showcases three that are very common.

The Host header contains information about which host is being requested. The X-Forwarded-Proto header specifies whether the request is HTTP or HTTPS. And the X-Real-IP header contains the IP address of the requesting client.
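If the backend also happens to run Nginx, its realip module can consume the X-Real-IP header set above so that backend logs show the original client address rather than the proxy’s. The following is a sketch, assuming the proxy’s address is 192.168.1.10:

```nginx
# Hypothetical backend configuration: trust the X-Real-IP header
# only when it arrives from the reverse proxy (address is a placeholder).
server {
    listen 8080;

    set_real_ip_from 192.168.1.10;
    real_ip_header X-Real-IP;
}
```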

How to configure load balancing

Load balancing is one of the primary justifications for configuring a reverse proxy server. We can get started by adding a few extra lines to the configuration file we created earlier. Take a look at an example:

upstream backend_servers {
    server host1.example.com;
    server host2.example.com;
    server host3.example.com;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_servers;
    }
}

In this example, we’ve added a context called backend_servers. Within there, each server’s hostname/IP is specified on a separate line.

In the proxy_pass directive, where we’d normally enter a hostname or IP address, instead we’ve specified the name of the upstream context defined above: backend_servers.

This configuration will forward incoming requests for example.com to the three hosts specified in our upstream block. By default, Nginx forwards these requests in round-robin fashion, meaning that each host takes a turn fielding a request.
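Round-robin distribution can also be weighted when some backend servers are more powerful than others. The weight value below is illustrative, not a recommendation:

```nginx
# Hypothetical example: host1 receives roughly three requests for
# every one sent to host2 or host3.
upstream backend_servers {
    server host1.example.com weight=3;
    server host2.example.com;
    server host3.example.com;
}
```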



Configure load balancing algorithms

As mentioned, round robin is the default algorithm that Nginx will use to rotate the requests in the upstream. There are a few other algorithms available, which fit certain situations better:

  • least_conn – Distributes the incoming connections to the backend servers based on their current number of active connections. A server will only receive a request if it has the fewest connections at that moment. This is particularly helpful in applications that require long-lasting connections to the client.
  • ip_hash – Distributes the incoming connections based on the IP address of the client. This is helpful if you need to create session consistency.
  • hash – Distributes the incoming connections based on a hash key. This is helpful with memcached hosts, particularly.

Specify a load balancing method at the top of the upstream context, like so:

upstream backend_servers {
    least_conn;
    server host1.example.com;
    server host2.example.com;
    server host3.example.com;
}
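Individual servers in the upstream block also accept parameters that control failover behavior. The values below are purely illustrative:

```nginx
# Hypothetical example: take a server out of rotation for 30 seconds
# after 3 failed attempts, and keep one host in reserve as a backup.
upstream backend_servers {
    least_conn;
    server host1.example.com max_fails=3 fail_timeout=30s;
    server host2.example.com max_fails=3 fail_timeout=30s;
    server host3.example.com backup;
}
```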

How to test the Nginx configuration

You should always test your configuration for errors immediately after editing the .conf file, and then restart Nginx. On systemd based systems:

# nginx -t
# systemctl restart nginx

On older systems, the same can be done with the service command:

# service nginx configtest
# service nginx restart

Conclusion

In this article we saw how to set up a reverse proxy server with Nginx. We also learned how a reverse proxy server works, and what the advantages are to using one. We covered load balancing and the various options an administrator needs in order to configure it on their own reverse proxy.

After following the steps in this guide, hopefully you will see a significant performance increase in your web environment, and find it easier to manage now that incoming connections are being sent to a single point.


